---
license: apache-2.0
paperswithcode_id: marvel
pretty_name: MARVEL (Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning)
task_categories:
- visual-question-answering
- question-answering
- multiple-choice
- image-classification
task_ids:
- multiple-choice-qa
- closed-domain-qa
- open-domain-qa
- visual-question-answering
tags:
- multi-modal-qa
- geometry-qa
- abstract-reasoning
- geometry-reasoning
- visual-puzzle
- non-verbal-reasoning
- abstract-shapes
language:
- en
size_categories:
- n<1K
configs:
- config_name: default
  data_files: marvel.parquet
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: int64
  - name: pattern
    dtype: string
  - name: task_configuration
    dtype: string
  - name: avr_question
    dtype: string
  - name: explanation
    dtype: string
  - name: answer
    dtype: int64
  - name: f_perception_question
    dtype: string
  - name: f_perception_answer
    dtype: string
  - name: f_perception_distractor
    dtype: string
  - name: c_perception_question_tuple
    sequence: string
  - name: c_perception_answer_tuple
    sequence: string
  - name: file
    dtype: string
  - name: image
    dtype: image
---
## Dataset Details

### Dataset Description

MARVEL is a comprehensive benchmark that evaluates the abstract reasoning abilities of multi-modal large language models (MLLMs) across six patterns and five different task configurations, revealing significant performance gaps between humans and state-of-the-art MLLMs.

![image](./marvel_illustration.jpeg)

### Dataset Sources

- **Repository:** https://github.com/1171-jpg/MARVEL_AVR
- **Paper:** https://arxiv.org/abs/2404.13591
- **Demo:** https://marvel770.github.io/
## Uses

Evaluation of multi-modal large language models' abstract reasoning abilities.
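A minimal loading sketch, assuming the `marvel.parquet` file referenced in the configs above is available locally (use the dataset's Hub ID instead if you load it from the Hub):

```python
# Minimal sketch: load the single parquet split referenced in the configs above.
# Assumes `marvel.parquet` is available locally; adjust the path as needed.
from datasets import load_dataset

ds = load_dataset("parquet", data_files="marvel.parquet", split="train")

print(ds)                      # feature schema and number of rows
print(ds[0]["avr_question"])   # text of the first AVR question
print(ds[0]["answer"])         # gold answer index for that question
```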
## Dataset Structure

The directory **images** contains all puzzle images, and the file **marvel_labels.jsonl** provides annotations and explanations for all questions.
### Fields

- **id** is the ID of the question
- **pattern** is the high-level pattern category of the question
- **task_configuration** is the task configuration of the question
- **avr_question** is the text of the AVR question
- **answer** is the answer to the AVR question
- **explanation** is the textual reasoning process used to answer the question
- **f_perception_question** is the fine-grained perception question
- **f_perception_answer** is the answer to the fine-grained perception question
- **f_perception_distractor** is the distractor of the fine-grained perception question
- **c_perception_question_tuple** is a list of coarse-grained perception questions
- **c_perception_answer_tuple** is a list of answers to the coarse-grained perception questions
- **file** is the path to the image of the question
- **image** is the puzzle image itself
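These fields support a simple multiple-choice evaluation loop. The sketch below is illustrative only: `my_model_answer` is a hypothetical placeholder, not part of this dataset or any particular library, and should be replaced with your own MLLM inference call.

```python
# Sketch of an accuracy computation over the MARVEL fields described above.
# `my_model_answer` is a hypothetical stand-in for an MLLM call that takes the
# puzzle image and question text and returns a predicted choice index.
from datasets import load_dataset

def my_model_answer(image, question: str) -> int:
    # Placeholder prediction: always pick the first option.
    return 1

ds = load_dataset("parquet", data_files="marvel.parquet", split="train")

correct = 0
for example in ds:
    prediction = my_model_answer(example["image"], example["avr_question"])
    correct += int(prediction == example["answer"])

print(f"AVR accuracy: {correct / len(ds):.3f}")
```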
## Citation

**BibTeX:**

```
@article{jiang2024marvel,
  title={MARVEL: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning},
  author={Jiang, Yifan and Zhang, Jiarui and Sun, Kexuan and Sourati, Zhivar and Ahrabian, Kian and Ma, Kaixin and Ilievski, Filip and Pujara, Jay},
  journal={arXiv preprint arXiv:2404.13591},
  year={2024}
}
```