If You Can't Use Them, Recycle Them: Optimizing Merging at Scale Mitigates Performance Tradeoffs
Abstract
Model merging has shown great promise at combining expert models, but the benefit of merging is unclear when merging "generalist" models trained on many tasks. We explore merging in the context of large (~100B-parameter) models, by recycling checkpoints that exhibit tradeoffs among different tasks. Such checkpoints are often created in the process of developing a frontier model, and many suboptimal ones are usually discarded. Given a pool of model checkpoints obtained from different training runs (e.g., different stages, objectives, hyperparameters, and data mixtures), which naturally show tradeoffs across different language capabilities (e.g., instruction following vs. code generation), we investigate whether merging can recycle such suboptimal models into a Pareto-optimal one. Our optimization algorithm tunes the weight of each checkpoint in a linear combination, resulting in a Pareto-optimal model that outperforms both individual models and merge-based baselines. Further analysis shows that good merges tend to include almost all checkpoints with non-zero weights, indicating that even seemingly bad initial checkpoints can contribute to good final merges.
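The core operation described above, linearly combining checkpoints with tunable per-checkpoint weights, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: checkpoints are represented as plain dicts of float lists rather than real model state, and `sample_simplex` stands in for whatever weight-search strategy the authors' optimizer uses.

```python
import random

def merge_checkpoints(checkpoints, weights):
    """Linear merge: merged[p] = sum_i weights[i] * checkpoints[i][p].

    Each checkpoint is assumed to be a dict mapping parameter names to
    lists of floats, all sharing the same architecture (same keys/shapes).
    """
    merged = {}
    for name in checkpoints[0]:
        merged[name] = [
            sum(w * ckpt[name][j] for w, ckpt in zip(weights, checkpoints))
            for j in range(len(checkpoints[0][name]))
        ]
    return merged

def sample_simplex(n, rng):
    """Draw n nonnegative weights summing to 1 (uniform on the simplex),
    e.g. as candidate merge coefficients in a random-search loop."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    points = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(points, points[1:])]

# Toy usage: merge two "checkpoints" with equal weight.
ckpts = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
print(merge_checkpoints(ckpts, [0.5, 0.5]))  # {'w': [2.0, 3.0]}
```

In an actual search, one would repeatedly sample (or gradient-free optimize) weight vectors, evaluate each merged model on the benchmark suite, and keep the candidates on the Pareto frontier across tasks.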
Community
We extend merging to a setup with many multi-task LLM checkpoints that show performance tradeoffs. We show that optimizing linear merging yields a Pareto-optimal model even in severe tradeoff cases.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MoD: A Distribution-Based Approach for Merging Large Language Models (2024)
- ATM: Improving Model Merging by Alternating Tuning and Merging (2024)
- The Non-Local Model Merging Problem: Permutation Symmetries and Variance Collapse (2024)
- Agent Skill Acquisition for Large Language Models via CycleQD (2024)
- Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging (2024)
- Unconstrained Model Merging for Enhanced LLM Reasoning (2024)
- Model merging with SVD to tie the Knots (2024)