Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning
Abstract
Recent studies have identified knowledge inconsistency between pre-training and fine-tuning as an aggravating factor of LLM hallucinations: unfamiliar fine-tuning data mislead the LLM into fabricating plausible but incorrect outputs. In this paper, we propose a novel fine-tuning strategy called Prereq-Tune that addresses this knowledge inconsistency and reduces hallucinations. Fundamentally, Prereq-Tune disentangles the learning of skills from the learning of knowledge, so that the model learns only task skills without being affected by the knowledge inconsistency. To achieve this, Prereq-Tune introduces an additional prerequisite learning stage that acquires the knowledge needed for supervised fine-tuning (SFT), allowing the subsequent SFT to focus only on task skills. Prereq-Tune can also be combined with fictitious synthetic data to strengthen the grounding of LLM outputs in their internal knowledge. Experiments show that Prereq-Tune outperforms existing baselines in improving LLM factuality on both short-form QA and long-form generation tasks, and it opens new possibilities for knowledge-controlled generation. Our code is available at https://github.com/UCSB-NLP-Chang/Prereq_tune.git.
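As a rough illustration of the disentanglement idea, here is a minimal, self-contained sketch that uses two low-rank (LoRA) adapters on a single frozen linear layer: a "knowledge" adapter trained in the prerequisite stage and a "skill" adapter trained during SFT. This is not the authors' implementation (see the repository above for the actual code); `TwoLoraLinear`, `train_stage`, and `make_batches` are hypothetical stand-ins chosen purely for illustration.

```python
# Toy sketch of Prereq-Tune's two-stage idea: NOT the authors' code.
# `TwoLoraLinear`, `train_stage`, and `make_batches` are hypothetical
# placeholders used only to illustrate the concept.
import torch
import torch.nn as nn

class TwoLoraLinear(nn.Module):
    """A frozen 'pre-trained' linear layer plus two low-rank (LoRA) adapters."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)  # pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        # "Knowledge" adapter: trained in the prerequisite learning stage.
        self.knowledge_a = nn.Parameter(torch.randn(rank, dim) * 0.01)
        self.knowledge_b = nn.Parameter(torch.zeros(dim, rank))
        # "Skill" adapter: trained during SFT on top of the frozen knowledge.
        self.skill_a = nn.Parameter(torch.randn(rank, dim) * 0.01)
        self.skill_b = nn.Parameter(torch.zeros(dim, rank))
        self.use_knowledge = True  # the knowledge adapter can be switched off

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x) + x @ self.skill_a.T @ self.skill_b.T
        if self.use_knowledge:
            out = out + x @ self.knowledge_a.T @ self.knowledge_b.T
        return out

def make_batches(dim: int = 64):
    """Dummy random data as a stand-in for real training batches."""
    def data():
        return torch.randn(8, dim), torch.randn(8, dim)
    return data

def train_stage(layer: nn.Module, params, data, steps: int = 100):
    """Placeholder loop that updates only the parameters passed in `params`."""
    opt = torch.optim.AdamW(params, lr=1e-4)
    for _ in range(steps):
        x, y = data()
        loss = nn.functional.mse_loss(layer(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

layer = TwoLoraLinear(dim=64)
knowledge_batches = make_batches()  # stand-in for (possibly fictitious) documents
sft_batches = make_batches()        # stand-in for SFT task data

# Stage 1 (prerequisite learning): store the knowledge that the SFT data
# will rely on in the knowledge adapter, before any skill training happens.
train_stage(layer, [layer.knowledge_a, layer.knowledge_b], knowledge_batches)

# Stage 2 (SFT): only the skill adapter is optimized, so gradients teach the
# task format rather than new facts, avoiding knowledge inconsistency.
train_stage(layer, [layer.skill_a, layer.skill_b], sft_batches)

# At inference, a fictitious-knowledge adapter can be dropped so that outputs
# are grounded in the model's own (real) internal knowledge.
layer.use_knowledge = False
```

In this sketch, dropping the knowledge adapter at the end mirrors how fictitious synthetic knowledge can be discarded at test time, leaving a skill adapter that operates over the base model's genuine internal knowledge.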
Community
We introduce Prereq-Tune, a fine-tuning strategy that reduces LLM hallucinations by resolving the knowledge inconsistency between pre-training and fine-tuning and by grounding LLM outputs more firmly in their internal knowledge.
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Fine-Tuning Large Language Models to Appropriately Abstain with Semantic Entropy (2024)
- Selective Self-Rehearsal: A Fine-Tuning Approach to Improve Generalization in Large Language Models (2024)
- RAC: Efficient LLM Factuality Correction with Retrieval Augmentation (2024)
- LoGU: Long-form Generation with Uncertainty Expressions (2024)
- Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models (2024)