Reflection-Bench: probing AI intelligence with reflection
Abstract
Reflection, the ability to adapt beliefs or behaviors in response to unexpected outcomes, is fundamental to how intelligent systems interact with the world. From a cognitive science perspective, it serves as a core principle of intelligence applicable to both humans and AI systems. To address the debate on the intelligence of large language models (LLMs), we propose Reflection-Bench, a comprehensive benchmark comprising 7 tasks that span core cognitive functions crucial for reflection: perception, memory, belief updating, decision-making, prediction, counterfactual thinking, and meta-reflection. We evaluate 13 prominent LLMs, including OpenAI o1, GPT-4, and Claude 3.5 Sonnet. The results indicate that current LLMs still lack satisfactory reflection ability. We discuss the underlying causes of these results and suggest potential avenues for future research. In conclusion, Reflection-Bench offers both evaluation tools and inspiration for developing AI capable of reliably interacting with its environment. Our data and code are available at https://github.com/YabYum/ReflectionBench.
Community
We introduce reflection as a criterion for AI intelligence: the fundamental ability to learn from unexpected outcomes and adapt one's beliefs accordingly.
Therefore, we propose Reflection-Bench to probe AI intelligence. Inspired by cognitive science, Reflection-Bench tests 7 core cognitive components crucial for reflection (an illustrative probe sketch follows the list):
• Perception
• Memory
• Belief Updating
• Decision-Making
• Prediction
• Counterfactual Thinking
• Meta-Reflection
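To make the idea concrete, below is a minimal Python sketch of a reversal-style probe for belief updating, in the spirit of the bandit paradigms used in cognitive science. The harness, the `ask_model` stand-in, the reward probabilities, and the scoring rule are illustrative assumptions for exposition only; they are not Reflection-Bench's actual tasks or protocol (see the repository for the real implementation).

```python
import random
from typing import Callable

# Hypothetical belief-updating probe (illustrative, not Reflection-Bench's protocol):
# a two-armed bandit whose better arm switches halfway through the run.
# A reflective agent should shift its choices after the reversal.

def run_reversal_probe(ask_model: Callable[[str], str], n_trials: int = 20) -> float:
    history = []                 # (choice, reward) pairs shown back to the model
    correct_after_reversal = 0
    for t in range(n_trials):
        good_arm = "A" if t < n_trials // 2 else "B"  # contingency reverses midway
        prompt = (
            "You repeatedly choose arm A or B; one arm pays off more often.\n"
            f"History so far: {history}\n"
            "Reply with a single letter, A or B."
        )
        choice = ask_model(prompt).strip().upper()[:1]
        p_reward = 0.9 if choice == good_arm else 0.1
        reward = 1 if random.random() < p_reward else 0
        history.append((choice, reward))
        if t >= n_trials // 2 and choice == good_arm:
            correct_after_reversal += 1
    # Fraction of post-reversal trials on which the agent tracked the new contingency.
    return correct_after_reversal / (n_trials - n_trials // 2)

if __name__ == "__main__":
    # A random baseline stands in for a real model call (e.g. an API client).
    score = run_reversal_probe(lambda prompt: random.choice("AB"))
    print(f"post-reversal accuracy: {score:.2f}")
```

In a probe of this kind, an agent that learns from outcome feedback should converge on the new best arm shortly after the reversal, while an agent that ignores feedback stays near chance.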
Our assessment of 13 LLMs (including OpenAI o1) reveals:
• Significant limitations in human-like reflection (and thus intelligence)
• A universal lack of meta-reflection abilities
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- SimpleToM: Exposing the Gap between Explicit ToM Inference and Implicit ToM Application in LLMs (2024)
- Entering Real Social World! Benchmarking the Theory of Mind and Socialization Capabilities of LLMs from a First-person Perspective (2024)
- CLR-Bench: Evaluating Large Language Models in College-level Reasoning (2024)
- Enhancing LLM Problem Solving with REAP: Reflection, Explicit Problem Deconstruction, and Advanced Prompting (2024)
- Diversity of Thought Elicits Stronger Reasoning Capabilities in Multi-Agent Debate Frameworks (2024)