Math Neurosurgery: Isolating Language Models' Math Reasoning Abilities Using Only Forward Passes
Abstract
Math reasoning is a highly active area of Large Language Model (LLM) research because it is a hallmark of artificial intelligence. However, few works have explored how math reasoning is encoded within LLM parameters and whether it is a skill that can be isolated within a model. Doing so could allow targeted intervention to improve math performance without altering non-math behavior and foster understanding of how models encode math reasoning. We introduce Math Neurosurgery (MathNeuro), a method for isolating math-specific parameters in LLMs using only forward passes. MathNeuro builds on existing work by using weights and activations to calculate parameter importance, but isolates math-specific parameters by removing those important for general language tasks. Pruning the parameters MathNeuro identifies deletes an LLM's math reasoning ability without destroying its general language ability. Scaling these parameters by a small constant improves a pretrained or instruction-tuned LLM's performance by 4-17% on GSM8K while leaving non-math behavior unaltered. MathNeuro is also data efficient: most of its effectiveness holds when identifying math-specific parameters using a single sample. MathNeuro highlights the potential for future work to intervene on math-specific parameters.
Community
We identify math-specific parameters in LLMs using only forward passes with a method we call MathNeuro, and we intervene on these parameters to either (1) delete math performance or (2) increase it without further training, both without catastrophic forgetting on non-math tasks. MathNeuro highlights the potential for other methods to intervene on math-specific parameters to improve performance without catastrophic forgetting.
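To make the selection step concrete, here is a minimal sketch (not the authors' implementation; all function names and the toy data are hypothetical). It approximates each parameter's importance for a task as |weight x activation| from forward passes alone, keeps the top-k parameters important for math, removes any that are also top-k important for general language, and then scales the survivors by a constant (a scale of 0 prunes them; a small scale above 1 boosts them).

```python
# Hypothetical sketch of MathNeuro-style parameter selection, assuming a
# flattened list of weights and per-task mean activations. Not the paper's code.

def importance(weights, activations):
    """Per-parameter importance: |w_i * a_i|, computed from forward passes only."""
    return [abs(w * a) for w, a in zip(weights, activations)]

def top_k_indices(scores, k):
    """Indices of the k largest importance scores."""
    return set(sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k])

def math_specific_params(weights, math_acts, lang_acts, k):
    """Top-k math-important parameters minus those also top-k for general language."""
    math_top = top_k_indices(importance(weights, math_acts), k)
    lang_top = top_k_indices(importance(weights, lang_acts), k)
    return math_top - lang_top

def intervene(weights, indices, scale):
    """Scale the selected parameters (scale=0.0 prunes; scale slightly >1 boosts)."""
    return [w * scale if i in indices else w for i, w in enumerate(weights)]

# Toy example with 6 parameters.
weights   = [0.5, -1.2, 0.3, 0.8, -0.1, 2.0]
math_acts = [0.9,  0.1, 0.8, 0.7,  0.0, 0.1]  # mean activations on math inputs
lang_acts = [0.1,  0.9, 0.1, 0.1,  0.8, 0.9]  # mean activations on language inputs

idx = math_specific_params(weights, math_acts, lang_acts, k=3)
boosted = intervene(weights, idx, scale=1.1)  # small constant scaling
pruned  = intervene(weights, idx, scale=0.0)  # deletes the "math" parameters
```

In this toy case, parameter 3 is important for both math and language, so it is excluded from the math-specific set; only parameters important for math but not for general language are touched, which is what preserves non-math behavior.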
Project repo: https://github.com/bryanchrist/MathNeuro
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code (2024)
- Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis (2024)
- Interpreting and Improving Large Language Models in Arithmetic Calculation (2024)
- Small Language Models are Equation Reasoners (2024)
- ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement (2024)