arxiv:2410.16270

Reflection-Bench: probing AI intelligence with reflection

Published on Oct 21 · Submitted by LingyuLi on Oct 28
Abstract

Reflection, the ability to adapt beliefs or behaviors in response to unexpected outcomes, is fundamental to how intelligent systems interact with the world. From a cognitive science perspective, it serves as a core principle of intelligence applicable to both human and AI systems. To address the debate on the intelligence of large language models (LLMs), we propose Reflection-Bench, a comprehensive benchmark comprising 7 tasks spanning core cognitive functions crucial for reflection: perception, memory, belief updating, decision-making, prediction, counterfactual thinking, and meta-reflection. We evaluate the performance of 13 prominent LLMs, including OpenAI o1, GPT-4, and Claude 3.5 Sonnet. The results indicate that current LLMs still lack satisfactory reflection ability. We discuss the underlying causes of these results and suggest potential avenues for future research. In conclusion, Reflection-Bench offers both evaluation tools and inspiration for developing AI capable of reliably interacting with the environment. Our data and code are available at https://github.com/YabYum/ReflectionBench.

Community

Paper author and submitter

We introduce reflection as a criterion for AI intelligence: the fundamental ability to learn from unexpected outcomes and adapt one's beliefs accordingly.

Therefore, we propose Reflection-Bench to probe AI intelligence. Inspired by cognitive science, Reflection-Bench tests 7 core cognitive components crucial for reflection (a toy sketch of one such probe follows the list):
• Perception
• Memory
• Belief Updating
• Decision-Making
• Prediction
• Counterfactual Thinking
• Meta-Reflection
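As a rough illustration of what probing one of these components could look like, here is a minimal sketch of a belief-updating probe: a two-armed bandit whose reward contingencies reverse midway, scored by how quickly choices adapt after the reversal. This is not the actual Reflection-Bench task or harness; the task design, function names, and parameters below are assumptions for illustration, and the stub policy stands in for the LLM under test.

```python
# Hypothetical sketch (not the Reflection-Bench harness): a two-armed bandit
# whose reward probabilities reverse at the midpoint, probing belief updating.
# `win_stay_lose_shift` is a trivial stand-in policy so the script runs end to end;
# a real evaluation would prompt an LLM with the trial history instead.
import random

def run_reversal_probe(choose, n_trials=40, p_good=0.8, seed=0):
    """Run the probe and return per-trial records; arm payoffs swap at the midpoint."""
    rng = random.Random(seed)
    history, records = [], []
    for t in range(n_trials):
        good_arm = "A" if t < n_trials // 2 else "B"   # reversal at the midpoint
        choice = choose(history)                        # "A" or "B"
        p = p_good if choice == good_arm else 1 - p_good
        reward = 1 if rng.random() < p else 0
        history.append((choice, reward))
        records.append({"trial": t, "choice": choice, "reward": reward, "good_arm": good_arm})
    return records

def win_stay_lose_shift(history):
    """Stand-in 'model': repeat a rewarded choice, switch arms after a loss."""
    if not history:
        return "A"
    last_choice, last_reward = history[-1]
    return last_choice if last_reward else ("B" if last_choice == "A" else "A")

if __name__ == "__main__":
    records = run_reversal_probe(win_stay_lose_shift)
    post = [r for r in records if r["trial"] >= len(records) // 2]
    accuracy_after_reversal = sum(r["choice"] == r["good_arm"] for r in post) / len(post)
    print(f"post-reversal accuracy: {accuracy_after_reversal:.2f}")
```

In a real evaluation, the stub policy would be replaced by a function that prompts the LLM with the trial history and parses its choice; post-reversal accuracy then reflects how readily the model revises its beliefs when the environment changes.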

Our assessment of 13 LLMs (including OpenAI o1) reveals:
• Significant limitations in human-like reflection (and thus intelligence)
• Universal lack of meta-reflection abilities

Code: https://github.com/YabYum/ReflectionBench

