Papers
arxiv:2406.10900

AUTOHALLUSION: Automatic Generation of Hallucination Benchmarks for Vision-Language Models

Published on Jun 16
· Submitted by akhaliq on Jun 28

Abstract

Large vision-language models (LVLMs) hallucinate: certain context cues in an image may trigger the language module's overconfident and incorrect reasoning about abnormal or hypothetical objects. Though a few benchmarks have been developed to investigate LVLM hallucinations, they mainly rely on hand-crafted corner cases whose failure patterns may hardly generalize, and finetuning on them could undermine their validity. These issues motivate us to develop the first automatic benchmark generation approach, AUTOHALLUSION, which harnesses a few principal strategies to create diverse hallucination examples. It probes the language modules in LVLMs for context cues and uses them to synthesize images by: (1) adding objects abnormal to the context cues; (2) for two co-occurring objects, keeping one and excluding the other; or (3) removing objects closely tied to the context cues. It then generates image-based questions whose ground-truth answers contradict the language module's prior. A model has to overcome contextual biases and distractions to reach correct answers, while incorrect or inconsistent answers indicate hallucinations. AUTOHALLUSION enables us to create new benchmarks at minimal cost and thus overcomes the fragility of hand-crafted benchmarks. It also reveals common failure patterns and reasons, providing key insights to detect, avoid, or control hallucinations. Comprehensive evaluations of top-tier LVLMs, e.g., GPT-4V(ision), Gemini Pro Vision, Claude 3, and LLaVA-1.5, show a 97.7% and 98.7% success rate of hallucination induction on synthetic and real-world datasets of AUTOHALLUSION, paving the way for a long battle against hallucinations.
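The abstract describes a concrete pipeline: probe the language module for context cues, edit a scene according to one of three strategies (insert an abnormal object, drop one of a co-occurring pair, or remove a context-correlated object), then ask an image-grounded question whose ground truth contradicts the language prior. The sketch below is a minimal, hypothetical rendering of that loop based only on the abstract; `edit_scene`, `language_module`, and `lvlm` are assumed placeholder interfaces (e.g., an inpainting tool and two model wrappers), not the authors' released code.

```python
"""Minimal sketch of an AUTOHALLUSION-style generation loop, based only on the
abstract. Every helper here (edit_scene, language_module, lvlm) is a
hypothetical placeholder interface, not the authors' actual code or API."""
from dataclasses import dataclass
from typing import Callable, List

# Interfaces assumed for illustration:
LanguageModule = Callable[[str], str]            # text prompt -> text answer
VisionLanguageModel = Callable[[str, str], str]  # (image path, question) -> answer
SceneEditor = Callable[[str, str, str], str]     # (image, strategy, object) -> edited image path


@dataclass
class HallucinationProbe:
    image_path: str    # synthesized or edited image
    question: str      # question whose ground truth contradicts the language prior
    ground_truth: str  # correct answer given the edited image


def build_probes(scene_image: str,
                 language_module: LanguageModule,
                 edit_scene: SceneEditor) -> List[HallucinationProbe]:
    """Apply the abstract's editing strategies to one scene image."""
    # Probe the language module for context cues: objects it strongly
    # associates with this kind of scene.
    cue_prompt = "List, comma-separated, objects you expect in this scene context."
    expected = [o.strip() for o in language_module(cue_prompt).split(",") if o.strip()]

    probes: List[HallucinationProbe] = []
    for obj in expected:
        # Strategy (1): insert an object that is abnormal for the context.
        # The prior says the object should be absent; the edited image contains it.
        abnormal = edit_scene(scene_image, "insert_abnormal", obj)
        probes.append(HallucinationProbe(
            image_path=abnormal,
            question=f"Is there a {obj} in this image? Answer yes or no.",
            ground_truth="yes",
        ))

        # Strategies (2)/(3): remove one of a co-occurring pair, or an object
        # closely tied to the context. The prior says "yes"; the image says "no".
        removed = edit_scene(scene_image, "remove_correlated", obj)
        probes.append(HallucinationProbe(
            image_path=removed,
            question=f"Is there a {obj} in this image? Answer yes or no.",
            ground_truth="no",
        ))
    return probes


def hallucination_rate(probes: List[HallucinationProbe],
                       lvlm: VisionLanguageModel) -> float:
    """Count an answer as a hallucination when it contradicts the edited image,
    i.e. the model followed its contextual prior instead of what it sees."""
    wrong = sum(
        1 for p in probes
        if not lvlm(p.image_path, p.question).strip().lower().startswith(p.ground_truth)
    )
    return wrong / max(len(probes), 1)
```

An answer that agrees with the contextual prior rather than the edited image counts toward the hallucination rate, mirroring the induction criterion stated in the abstract.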

Community

AI doesn't have hallucinations. It hasn't been told what is real and what is not in the training data. It doesn't know which data is fact, opinion, or fiction. All the data comes from the same bucket. AI thinks everything it says is relative to the universe we live in because no one labeled the data to say it wasn't.

If you ask AI to embellish on ideas, it has no idea where it is supposed to start or stop, because in the AI multiverse anything is possible.
It doesn't require a logical path to come to a conclusion. It just needs a collection of words that resembles what would be required for a logical conclusion.

Models citing this paper 0

No model linking this paper

Datasets citing this paper 0

No dataset linking this paper

Spaces citing this paper 0

No Space linking this paper

Collections including this paper 1