---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1K
---

```bibtex
@misc{kamoi2024visonlyqa,
    title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
    author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
    year={2024},
    journal={arXiv preprint arXiv:2412.00947}
}
```

## Dataset

VisOnlyQA is provided in two formats: VLMEvalKit and Hugging Face Dataset. You can use either of them to evaluate your models and report the results in your papers. However, when you report results, please state explicitly which version of the dataset you used, because the two versions are different.

### VLMEvalKit

[VLMEvalKit](https://github.com/open-compass/VLMEvalKit) provides one-command evaluation. However, VLMEvalKit is not designed to reproduce the results in the paper. We welcome using it to report results on VisOnlyQA in your papers, but please explicitly mention that you used VLMEvalKit. The major differences are:

* VisOnlyQA on VLMEvalKit does not include the `chemistry__shape_multi` split
* VLMEvalKit uses different prompts and postprocessing

Refer to [this document](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Quickstart.md) for the installation and setup of VLMEvalKit. After setting up the environment, you can evaluate any supported model on VisOnlyQA with the following command (this example is for InternVL2-26B):

```bash
python run.py --data VisOnlyQA-VLMEvalKit --model InternVL2-26B
```

### Hugging Face Dataset

The original VisOnlyQA dataset is provided as a Hugging Face Dataset. If you want to reproduce the results in our paper, please use this version together with the code in the GitHub repository.

* Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
  * 500 instances for questions on figures in existing datasets (e.g., MathVista, MMMU, and CharXiv)
* Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
  * 700 instances for questions on synthetic figures
* Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
  * 70,000 instances for training (synthetic figures)

The [dataset](https://github.com/psunlpgroup/VisOnlyQA/tree/main/dataset) folder of the GitHub repository includes the identical datasets, except for the training data.

```python
from datasets import load_dataset

real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real")
synthetic_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Synthetic")

# Splits
print(real_eval.keys())
# dict_keys(['geometry__triangle', 'geometry__quadrilateral', 'geometry__length', 'geometry__angle',
#            'geometry__area', 'geometry__diameter_radius', 'chemistry__shape_single',
#            'chemistry__shape_multi', 'charts__extraction', 'charts__intersection'])
print(synthetic_eval.keys())
# dict_keys(['syntheticgeometry__triangle', 'syntheticgeometry__quadrilateral', 'syntheticgeometry__length',
#            'syntheticgeometry__angle', 'syntheticgeometry__area', '3d__size', '3d__angle'])

# Prompt
print(real_eval['geometry__triangle'][0]['prompt_no_reasoning'])
# There is no triangle ADP in the figure. True or False?
#
# A triangle is a polygon with three edges and three vertices, which are explicitly connected in the figure.
#
# Your response should only include the final answer (True, False). Do not include any reasoning or explanation in your response.

# Image
print(real_eval['geometry__triangle'][0]['decoded_image'])
# (a PIL image object)

# Answer
print(real_eval['geometry__triangle'][0]['answer'])
# False
```
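Since each subset is a `DatasetDict` keyed by the split names listed above, you can also load a single split directly via the standard `split` argument of `datasets.load_dataset`. A minimal sketch (the file name `geometry_triangle_0.png` is arbitrary):

```python
from datasets import load_dataset

# Load a single split instead of the full DatasetDict.
triangle = load_dataset("ryokamoi/VisOnlyQA_Eval_Real", split="geometry__triangle")

print(len(triangle))          # number of instances in this split
print(triangle[0]["answer"])  # e.g., False

# decoded_image is a PIL image, so it can be saved or displayed directly.
triangle[0]["decoded_image"].save("geometry_triangle_0.png")
```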
### Data Format

Each instance of the VisOnlyQA dataset has the following attributes:

#### Features

* `decoded_image`: [PIL.Image] Input image
* `question`: [string] Question (without instruction)
* `prompt_reasoning`: [string] Prompt with instruction to use chain-of-thought
* `prompt_no_reasoning`: [string] Prompt with instruction **not** to use chain-of-thought
* `answer`: [string] Correct answer (e.g., `True`, `a`)

#### Metadata

* `image_path`: [string] Path to the image file
* `image_category`: [string] Category of the image (e.g., `geometry`, `chemistry`)
* `question_type`: [string] `single_answer` or `multiple_answers`
* `task_category`: [string] Category of the task (e.g., `triangle`)
* `response_options`: [List[string]] Multiple choice options (e.g., `['True', 'False']`, `['a', 'b', 'c', 'd', 'e']`)
* `source`: [string] Source dataset
* `id`: [string] Unique ID
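These fields are enough to run a simple evaluation loop outside the official code. The sketch below is illustrative only (to reproduce the paper's results, use the code in the GitHub repository): `my_model` is a hypothetical stand-in for your VLM's inference call, and exact-match scoring is a simplification suited to `single_answer` questions.

```python
from datasets import load_dataset

def my_model(prompt, image):
    """Hypothetical placeholder: replace with your VLM's inference call."""
    return "True"

data = load_dataset("ryokamoi/VisOnlyQA_Eval_Real", split="geometry__triangle")

correct = 0
for example in data:
    # Use the prompt that instructs the model to answer without chain-of-thought.
    response = my_model(example["prompt_no_reasoning"], example["decoded_image"]).strip()
    # Count a prediction as correct only if it is a valid option and matches the answer.
    if response in example["response_options"] and response == example["answer"]:
        correct += 1

print(f"geometry__triangle accuracy: {correct / len(data):.3f}")
```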

## License

Please refer to [LICENSE.md](./LICENSE.md).

## Contact

If you have any questions, feel free to open an issue or reach out directly to [Ryo Kamoi](https://ryokamoi.github.io/) (ryokamoi@psu.edu).