DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
Abstract
Despite their impressive capabilities, large language models (LLMs) are prone to hallucinations, i.e., generating content that deviates from facts seen during pretraining. We propose a simple decoding strategy for reducing hallucinations with pretrained LLMs that does not require conditioning on retrieved external knowledge or additional fine-tuning. Our approach obtains the next-token distribution by contrasting the logits obtained from projecting later layers versus earlier layers to the vocabulary space, exploiting the fact that factual knowledge in LLMs has generally been shown to be localized to particular transformer layers. We find that this Decoding by Contrasting Layers (DoLa) approach better surfaces factual knowledge and reduces the generation of incorrect facts. DoLa consistently improves truthfulness across multiple-choice and open-ended generation tasks, for example improving the performance of LLaMA family models on TruthfulQA by 12-17 absolute percentage points, demonstrating its potential for making LLMs reliably generate truthful facts.
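Below is a minimal sketch of this layer contrast for a single greedy decoding step, assuming a LLaMA-style Hugging Face checkpoint (`huggyllama/llama-7b` is used only for illustration). The fixed premature layer index, the `alpha` threshold, and the application of the final norm to the intermediate hidden state are illustrative stand-ins; the paper selects the premature layer dynamically by maximizing the Jensen-Shannon divergence against the final layer.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # illustrative LLaMA-style checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tok("The capital of the state of Washington is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Mature distribution: the model's ordinary next-token logits (final layer).
mature = F.log_softmax(out.logits[:, -1, :], dim=-1)

# Premature distribution: "early exit" from an intermediate layer, projected
# through the final norm and the same output head. A fixed layer is used here;
# the paper chooses it dynamically per step.
premature_layer = 16  # illustrative choice
h = out.hidden_states[premature_layer][:, -1, :]
premature = F.log_softmax(model.lm_head(model.model.norm(h)), dim=-1)

# Adaptive plausibility constraint: keep only tokens whose mature probability
# is at least alpha times that of the most likely token.
alpha = 0.1
keep = mature >= mature.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(alpha))

# DoLa score: log p_mature - log p_premature, restricted to the plausible set.
contrast = torch.where(keep, mature - premature, torch.full_like(mature, float("-inf")))
next_token = contrast.argmax(dim=-1)
print(tok.decode([next_token[0].item()]))
```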
Community
Transformers implementation being added here: https://github.com/huggingface/transformers/pull/29619
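A usage sketch, assuming the PR lands with the `dola_layers` argument it proposes (taking `"low"`, `"high"`, or an explicit list of layer indices); check the merged Transformers documentation for the final API and recommended settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # illustrative checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tok("What happens if you eat watermelon seeds?", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,
    dola_layers="high",       # contrast the final layer against higher (later) layers
    repetition_penalty=1.2,   # assumed recommendation to curb repetition with DoLa
)
print(tok.decode(out[0], skip_special_tokens=True))
```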
This paper (https://arxiv.org/html/2402.06925v1) claims beam search is the most performant decoding strategy on the FactScore dataset while DoLa is the weakest. What is your opinion on this? I don't see beam search compared against DoLa in your paper.
@vlisaia there seems to be something weird going on with DoLa in that paper (see Appendix F and Table 32). Not sure whether it is a property of DoLa or a bug in the implementation -- if it is the latter, then the results for DoLa are likely underestimated :)
Thanks, just checked out those sections. Is there any chance of seeing a DoLa vs. beam search comparison from your team then?