DELIFT: Data Efficient Language Model Instruction Fine-Tuning
Abstract
Fine-tuning large language models (LLMs) is essential for enhancing their performance on specific tasks but is often resource-intensive due to redundant or uninformative data. To address this inefficiency, we introduce DELIFT (Data Efficient Language model Instruction Fine-Tuning), a novel algorithm that systematically optimizes data selection across the three key stages of fine-tuning: (1) instruction tuning, (2) task-specific fine-tuning (e.g., reasoning, question-answering), and (3) continual fine-tuning (e.g., incorporating new data versions). Unlike existing methods that focus on single-stage optimization or rely on computationally intensive gradient calculations, DELIFT operates efficiently across all stages. Central to our approach is a pairwise utility metric that quantifies how beneficial a data sample is for improving the model's responses to other samples, effectively measuring the informational value relative to the model's current capabilities. By leveraging different submodular functions applied to this metric, DELIFT selects diverse and optimal subsets that are useful across all stages of fine-tuning. Experiments across various tasks and model scales demonstrate that DELIFT can reduce the fine-tuning data size by up to 70% without compromising performance, offering significant computational savings and outperforming existing methods in both efficiency and efficacy.
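The subset selection described in the abstract can be illustrated with a small sketch. The names and the toy utility matrix below are illustrative assumptions, not the paper's implementation: it assumes the pairwise utilities (roughly, the model's likelihood gain on sample j when sample i is provided in-context) have already been computed, and applies a greedy facility-location-style submodular maximization, one common choice for selecting a diverse, high-coverage subset.

```python
import numpy as np

def facility_location_greedy(utility: np.ndarray, budget: int) -> list[int]:
    """Greedily pick `budget` samples maximizing facility-location coverage.

    utility[i, j] = benefit of sample i for the model's response to sample j
    (assumed precomputed; in DELIFT's spirit, an in-context likelihood gain).
    """
    n = utility.shape[0]
    selected: list[int] = []
    # coverage[j] = best utility any already-selected sample provides to j
    coverage = np.zeros(n)
    for _ in range(budget):
        # marginal gain of adding candidate i: new total coverage minus current
        gains = np.maximum(utility, coverage).sum(axis=1) - coverage.sum()
        gains[selected] = -np.inf  # never re-pick a selected sample
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, utility[best])
    return selected

# Toy example: sample 0 helps samples 0-1, sample 2 helps sample 2,
# sample 1 is largely redundant with sample 0.
U = np.array([[0.9, 0.8, 0.1],
              [0.2, 0.9, 0.1],
              [0.1, 0.1, 0.9]])
print(facility_location_greedy(U, budget=2))  # → [0, 2]
```

The greedy rule is the standard (1 - 1/e)-approximation for monotone submodular objectives, which is why a small selected subset can cover most of the informational value of the full dataset.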
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- IterSelectTune: An Iterative Training Framework for Efficient Instruction-Tuning Data Selection (2024)
- Diversify and Conquer: Diversity-Centric Data Selection with Iterative Refinement (2024)
- Adapt-$\infty$: Scalable Lifelong Multimodal Instruction Tuning via Dynamic Data Selection (2024)
- Parameter-Efficient Fine-Tuning of Large Language Models using Semantic Knowledge Tuning (2024)
- SFTMix: Elevating Language Model Instruction Tuning with Mixup Recipe (2024)
Dear Ishika,
I greatly enjoyed reading your impressive work.🥳 Your research on improving LLM performance in data-efficient settings is both timely and inspiring. Given our shared research interests, I would like to humbly share our recent exploration in this direction: SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation. We would be honored if you find it relevant to your research and would be deeply grateful for any discussion of our work in your future revisions.
Thank you for your consideration. Congratulations on your excellent contribution to the field!
Best regards,
Junyu