---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: instance_id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    list:
      dtype: string
  - name: A
    dtype: string
  - name: B
    dtype: string
  - name: C
    dtype: string
  - name: D
    dtype: string
  - name: category
    dtype: string
  - name: img
    dtype: image
configs:
- config_name: 1_correct
  data_files:
  - split: validation
    path: 1_correct/validation/0000.parquet
  - split: test
    path: 1_correct/test/0000.parquet
- config_name: 1_correct_var
  data_files:
  - split: validation
    path: 1_correct_var/validation/0000.parquet
  - split: test
    path: 1_correct_var/test/0000.parquet
- config_name: n_correct
  data_files:
  - split: validation
    path: n_correct/validation/0000.parquet
  - split: test
    path: n_correct/test/0000.parquet
---
# DARE
DARE (Diverse Visual Question Answering with Robustness Evaluation) is a carefully created and curated multiple-choice VQA benchmark. It evaluates vision-language model (VLM) performance on five diverse categories and includes four robustness-oriented evaluations based on variations of:
- prompts
- the subsets of answer options
- the output format
- the number of correct answers
The validation split of the dataset contains images, questions, answer options, and correct answers. We do not publish the correct answers for the test split, to prevent contamination.
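
The three configurations declared in the header above (`1_correct`, `1_correct_var`, `n_correct`) each ship a `validation` and a `test` split. As a quick orientation, here is a minimal sketch using the standard `datasets` utilities to list them; the outputs shown in comments simply reflect the configuration block above:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

# Discover the available configurations of DARE
print(get_dataset_config_names("cambridgeltl/DARE"))
# ['1_correct', '1_correct_var', 'n_correct']

# Each configuration provides a validation and a test split
print(get_dataset_split_names("cambridgeltl/DARE", "1_correct"))
# ['validation', 'test']
```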
## Load the Dataset
To load the dataset, use the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load the dataset
subset = "1_correct"  # Change to the subset that you want to use
dataset = load_dataset("cambridgeltl/DARE", subset)
```
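
Once loaded, each example exposes the fields declared in the schema above (`question`, options `A`-`D`, `answer`, `category`, `img`). The following is a minimal sketch for inspecting a few validation examples; the field access follows the features list, while the printed values naturally depend on the data:

```python
# Inspect the first few validation examples
for example in dataset["validation"].select(range(3)):
    print(example["question"])
    for option in ("A", "B", "C", "D"):
        print(f"  {option}: {example[option]}")
    print("  category:", example["category"])
    print("  answer(s):", example["answer"])  # list of strings
    # example["img"] is decoded as a PIL image, e.g. example["img"].size
```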
## Citation
If you use this dataset, please cite our paper:
```bibtex
@article{sterz2024dare,
  title={DARE: Diverse Visual Question Answering with Robustness Evaluation},
  author={Sterz, Hannah and Pfeiffer, Jonas and Vuli{\'c}, Ivan},
  journal={arXiv preprint arXiv:2409.18023},
  year={2024}
}
```