Analysing the Residual Stream of Language Models Under Knowledge Conflicts
Abstract
Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context. Such conflicts can lead to undesirable model behaviour, such as reliance on outdated or incorrect information. In this work, we investigate whether LLMs can identify knowledge conflicts and whether it is possible to tell which source of knowledge the model will rely on by analysing the residual stream of the LLM. Through probing tasks, we find that LLMs internally register the signal of knowledge conflict in the residual stream, and that this signal can be accurately detected by probing intermediate model activations. This allows us to detect conflicts before an answer is generated, without modifying the input or the model parameters. Moreover, we find that the residual stream shows significantly different patterns when the model relies on contextual knowledge versus parametric knowledge to resolve conflicts. This pattern can be used to estimate the model's behaviour when a conflict occurs and to prevent unexpected answers before they are produced. Our analysis offers insights into how LLMs internally manage knowledge conflicts and provides a foundation for developing methods to control their knowledge selection processes.
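The probing setup described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual code: the model choice ("gpt2"), the probed layer, and the toy conflict/no-conflict prompts are all assumptions standing in for the paper's own models, data, and probing protocol.

```python
# Minimal sketch (not the authors' code) of probing residual-stream activations
# for a knowledge-conflict signal. The model ("gpt2"), the probed layer, and the
# toy conflict / no-conflict prompts are illustrative assumptions.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True).eval()

def residual_at_layer(prompt: str, layer: int) -> torch.Tensor:
    """Residual-stream activation of the final prompt token at a given layer."""
    with torch.no_grad():
        out = lm(**tok(prompt, return_tensors="pt"))
    # hidden_states[layer] has shape (batch, seq_len, hidden_dim)
    return out.hidden_states[layer][0, -1]

# Hypothetical labelled prompts: 1 = the context contradicts parametric memory.
data = [
    ("The Eiffel Tower is in Rome. Where is the Eiffel Tower?", 1),
    ("The Eiffel Tower is in Paris. Where is the Eiffel Tower?", 0),
    ("Mount Fuji is in Canada. In which country is Mount Fuji?", 1),
    ("Mount Fuji is in Japan. In which country is Mount Fuji?", 0),
]

layer = 8  # assumption: one mid-depth layer; the paper probes across layers
X = torch.stack([residual_at_layer(p, layer) for p, _ in data]).numpy()
y = [label for _, label in data]
probe = LogisticRegression(max_iter=1000).fit(X, y)  # linear probe on activations
print(probe.predict(X))  # conflict predictions before any answer is generated
```

On realistic data one would train and evaluate such a probe on held-out conflict/no-conflict pairs at each layer; the point of the sketch is only that detection operates on intermediate activations, before decoding an answer.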
Community
This paper analyses the residual stream of LLMs under context-memory knowledge conflict; the research questions include:
- in which layers can we detect knowledge conflict?
- what are the characteristics of the residual stream when LLMs use different sources of knowledge? (see the sketch below)
This paper is a preliminary study for Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering.
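The second question can be made concrete with a small sketch. It compares per-layer residual-stream activations for context-reliant versus memory-reliant cases under stated assumptions: the "gpt2" stand-in model and a hypothetical grouping of prompts by knowledge source, which in practice would come from first inspecting the model's answers.

```python
# Minimal sketch (same assumptions as above: "gpt2" stand-in, toy prompts) of
# comparing residual-stream patterns when the model follows the context versus
# its parametric memory. The grouping of prompts by knowledge source is assumed
# here purely for illustration.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True).eval()

def all_layer_states(prompt: str) -> torch.Tensor:
    """Residual-stream activations of the final prompt token at every layer."""
    with torch.no_grad():
        out = lm(**tok(prompt, return_tensors="pt"))
    return torch.stack([h[0, -1] for h in out.hidden_states])  # (n_layers+1, dim)

# Hypothetical grouping of conflict prompts by the knowledge source the model used.
context_reliant = ["The capital of Australia is Sydney. The capital of Australia is"]
memory_reliant = ["The capital of Australia is Sydney. Nevertheless, the capital of Australia is"]

ctx_mean = torch.stack([all_layer_states(p) for p in context_reliant]).mean(0)
mem_mean = torch.stack([all_layer_states(p) for p in memory_reliant]).mean(0)

# Layers where the two mean activations diverge are candidates for where the
# context-vs-memory decision becomes readable from the residual stream.
for layer, sim in enumerate(F.cosine_similarity(ctx_mean, mem_mean, dim=-1)):
    print(f"layer {layer:2d}: cosine similarity {sim.item():.3f}")
```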
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering (2024)
- Probing Language Models on Their Knowledge Source (2024)
- Deciphering the Interplay of Parametric and Non-parametric Memory in Retrieval-augmented Language Models (2024)
- Understanding the Interplay between Parametric and Contextual Knowledge for Large Language Models (2024)
- Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance (2024)