---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: CHIRP Benchmark
size_categories:
- n<1K
---
# CHIRP Benchmark: An Open-Ended, Free-Form Multimodal Benchmark
CHIRP is a new multimodal evaluation benchmark with 104 open-ended questions. Free-form questions are questions for which the model must generate a response that is open-ended and creative, with no single "correct" answer. The benchmark covers 8 distinct categories of questions. Each category requires understanding the image and offers the opportunity for analysis and a thorough response.
The categories are:
- Descriptive Analysis: This category involves questions that test the model's ability to identify and describe the physical elements in an image, including their color, position, and interactions, as well as to recognize specific details.
- Inferential Reasoning: This category examines the model's ability to draw inferences from the image, including predicting possible subsequent events, making assumptions about the preceding context, and hypothesizing alternative scenarios that contradict the one depicted.
- Contextual Understanding: This category tests the model's awareness of the importance of context in image comprehension. This might involve understanding geographical or temporal aspects that bear upon the image.
- Emotional and Psychological Understanding: This category measures the model's ability to gauge emotions and psychological states from an image. It includes interpreting the visible emotional expressions of people in the image and hypothesizing about their mental states.
- Ethical Evaluations: Questions in this category probe how the model handles the ethical implications of images: can it recognize potential ethical concerns and judge whether an image is acceptable for public display under generally accepted ethical guidelines?
- Abstract Understanding: These questions gauge the model's capacity for abstract thought — can it identify underlying themes or messages in the image that aren't immediately visible? Can it engage in philosophical interpretation?
- Creative and Subjective Analysis: This category gauges the model's creativity and its ability to express subjective views on the image. It includes crafting extended narratives based on the scene in the image and presenting a personal point of view on the image.
- Visual Aesthetics Evaluation: This category examines the model's ability to evaluate the visual aesthetics of an image, including aspects such as balance, symmetry, color composition, and lighting.
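As a rough sketch of how the benchmark might be consumed, the snippet below loads the dataset with the Hugging Face `datasets` library and groups questions by category. The repository id `your-org/chirp-benchmark` and the column names `question`, `image`, and `category` are assumptions for illustration only; check the actual dataset schema before use.

```python
from collections import Counter

from datasets import load_dataset

# NOTE: the repository id and column names below are assumptions for
# illustration only; replace them with the actual CHIRP dataset schema.
ds = load_dataset("your-org/chirp-benchmark", split="test")

# Count how many of the 104 questions fall into each of the 8 categories.
category_counts = Counter(example["category"] for example in ds)
for category, count in category_counts.most_common():
    print(f"{category}: {count} questions")

# Each example pairs an image with an open-ended, free-form question.
example = ds[0]
print(example["question"])
example["image"].show()  # assuming an Image feature column (PIL image)
```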
Evaluations are conducted through pairwise comparisons of model responses, in which either a stronger Visual Language Model (VLM) or a human evaluator chooses the preferred response. Candidate prompts for these evaluations can be found in our paper.
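The exact judge prompts are given in the paper; the sketch below only illustrates the shape of a pairwise evaluation loop. The `judge_prefers_first` callable is a hypothetical stand-in for a call to a stronger VLM (or a human annotation interface), and randomizing the order in which the two responses are shown is an added assumption to reduce position bias, not something specified above.

```python
import random
from typing import Callable

def pairwise_win_rate(
    questions: list[dict],   # each dict holds e.g. the question text and image
    responses_a: list[str],  # model A's answers, aligned with questions
    responses_b: list[str],  # model B's answers, aligned with questions
    judge_prefers_first: Callable[[dict, str, str], bool],  # hypothetical judge
    seed: int = 0,
) -> float:
    """Return the fraction of pairwise comparisons won by model A."""
    rng = random.Random(seed)
    wins_a = 0
    for item, resp_a, resp_b in zip(questions, responses_a, responses_b):
        # Randomize presentation order so the judge cannot favor a fixed slot.
        if rng.random() < 0.5:
            first, second, a_is_first = resp_a, resp_b, True
        else:
            first, second, a_is_first = resp_b, resp_a, False
        if judge_prefers_first(item, first, second) == a_is_first:
            wins_a += 1
    return wins_a / len(questions)
```

A VLM judge would implement `judge_prefers_first` by rendering the question, the image, and both candidate responses into one of the evaluation prompts from the paper and parsing the verdict; a human study would replace it with an annotation interface that records the preferred response.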