---
license: mit
language:
- zh
pretty_name: MULTI-Benchmark
viewer: False
---
# 🖼️ MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images
<div align="center">
![MULTI](./docs/static/images/overview.png)
🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
[็ฎไฝไธญๆ](./README_zh.md) | English
</div>
## 🔥 News
- **[2024.3.4]** We have released the [evaluation page](https://OpenDFM.github.io/MULTI-Benchmark/static/pages/submit.html).
- **[2024.2.19]** We have released the [HuggingFace Page](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/).
- **[2024.2.6]** We have published our [paper](https://arxiv.org/abs/2402.03173/) on arXiv.
- **[2023.12.7]** We have released the [code](https://github.com/OpenDFM/MULTI-Benchmark/tree/main/eval) of our benchmark evaluation.
- **[2023.12.5]** We have released the [GitHub Page](https://OpenDFM.github.io/MULTI-Benchmark/).
## 📖 Overview
Rapid progress in multimodal large language models (MLLMs) highlights the need to introduce challenging yet realistic benchmarks to the academic community, while existing benchmarks primarily focus on understanding simple natural images and short context. In this paper, we present ***MULTI***, a cutting-edge benchmark for evaluating MLLMs on understanding complex tables and images, and reasoning with long context. **MULTI** provides multimodal inputs and requires responses that are either precise or open-ended, reflecting real-life examination styles. **MULTI** includes over 18,000 questions and challenges MLLMs with a variety of tasks, ranging from formula derivation to image detail analysis and cross-modality reasoning. We also introduce ***MULTI-Elite***, a hard subset of 500 selected questions, and ***MULTI-Extend***, a collection of more than 4,500 external knowledge context pieces. Our evaluation indicates significant potential for MLLM advancement, with GPT-4V achieving a **63.7%** accuracy rate on **MULTI**, in contrast to other MLLMs scoring between **28.5%** and **55.3%**. **MULTI** serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
## 🏆 Leaderboard
| Modality | Model | Version | Overall | MULTI-Elite |
|:--------:|:-------------:| -------------------------- |:-------:|:-----------:|
| 🖼️ | GPT-4V | gpt-4-vision-preview | 63.7 | 14.0 |
| 🖼️ | Yi-VL | Yi-34B-Chat | 55.3 | 26.2 |
| 🖼️ | Gemini Vision | gemini-pro-vision | 53.7 | 12.4 |
| 📃 | Gemini | gemini-pro | 52.2 | 10.5 |
| 📃 | GPT-4 | gpt-4-1106-preview | 50.2 | 5.8 |
| 📃 | DFM-2.0 | dfm-2.0-70b-preview | 49.7 | 18.0 |
| 🖼️ | InternVL | InternVL-Chat-Chinese-V1.1 | 44.9 | 20.7 |
| 🖼️ | Qwen-VL | Qwen-VL-Chat | 39.0 | 10.5 |
| 📃 | ChatGPT | gpt-3.5-turbo-1106 | 35.9 | 4.7 |
| 🖼️ | VisCPM | VisCPM-Chat | 33.4 | 13.0 |
| 📃 | MOSS | moss-moon-003-sft | 32.6 | 13.1 |
| 🖼️ | VisualGLM | visualglm-6b | 31.1 | 12.8 |
| 🖼️ | Chinese-LLaVA | Chinese-LLaVA-Cllama2 | 28.5 | 12.3 |
## ⏬ Download
You can download the data using the following command:
```shell
cd eval
python download_data.py
```
The structure of `./data` should be something like:
```
./data
├── images # folder containing images
├── problem_v1.2.2_20240212_release.json # MULTI
├── knowledge_v1.2.2_20240212_release.json # MULTI-Extend
├── hard_list_v1.2.1_20240206.json # MULTI-Elite
└── captions_v1.2.0_20231217.csv # image captions generated by BLIP-6.7b
```
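To sanity-check the download, a small Python snippet like the one below can load these files and print their sizes. This is only a sketch: it assumes the files are valid JSON/CSV and makes no assumption about their internal schema.
```python
import csv
import json

# Paths are relative to ./eval, matching the commands used elsewhere in this README.
with open("../data/problem_v1.2.2_20240212_release.json", encoding="utf-8") as f:
    problems = json.load(f)    # MULTI questions
with open("../data/knowledge_v1.2.2_20240212_release.json", encoding="utf-8") as f:
    knowledge = json.load(f)   # MULTI-Extend knowledge pieces
with open("../data/hard_list_v1.2.1_20240206.json", encoding="utf-8") as f:
    hard_list = json.load(f)   # MULTI-Elite question list
with open("../data/captions_v1.2.0_20231217.csv", encoding="utf-8") as f:
    captions = list(csv.reader(f))  # rows of (image path, generated caption)

print(len(problems), "problems,", len(knowledge), "knowledge pieces,",
      len(hard_list), "MULTI-Elite entries,", len(captions), "caption rows")
```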
## 📝 How to Evaluate
We provide a unified evaluation framework in `eval`. Each file in `eval/models` contains an evaluator dedicated to one (M)LLM and implements a `generate_answer` method that takes a question as input and returns the model's answer.
```shell
cd eval
python eval.py -h # to list all supported arguments
python eval.py -l # to list all supported models
```
### Environment Preparation Before Usage
Each evaluator requires its own environment setup, and a single universal environment may not work for all of them. **Just follow the official guide of each model.** If the corresponding model runs well on its own, it should also work within our framework.
You only need to install two additional packages to run the evaluation code:
```shell
pip install tiktoken tqdm
```
If you only want to generate the input data for a specific setting (using the `--debug` argument), the line above is all you need.
### Running Evaluation
For a quick start, see these examples:
Test the GPT-4V model on the whole MULTI benchmark with multimodal input, using MULTI-Extend as external knowledge:
```shell
python eval.py \
--problem_file ../data/problem_v1.2.2_20240212_release.json \
--knowledge_file ../data/knowledge_v1.2.2_20240212_release.json \
--questions_type 0,1,2,3 \
--image_type 0,1,2 \
--input_type 2 \
--model gpt-4v \
--model_version gpt-4-vision-preview \
--api_key sk-************************************************
```
Test the Qwen-VL model on MULTI-Elite with image-caption input, skipping all questions that do not contain images, evaluating only multiple-choice questions, and setting the CUDA device automatically:
```shell
python eval.py \
--problem_file ../data/problem_v1.2.2_20240212_release.json \
--subset ../data/hard_list_v1.2.1_20240206.json \
--caption_file ../data/captions_v1.2.0_20231217.csv \
--questions_type 0,1 \
--image_type 1,2 \
--input_type 1 \
--model qwen-vl \
--model_dir ../models/Qwen-VL-Chat
```
The evaluation script will generate a folder named `results` under the root directory, and the result will be saved in `../results/EXPERIMENT_NAME`. During the evaluation, the script saves checkpoints in `../results/EXPERIMENT_NAME/checkpoints`; you can delete them after the evaluation is done. If the evaluation is interrupted, you can continue from the last checkpoint:
```shell
python eval.py \
--checkpoint_dir ../results/EXPERIMENT_NAME
```
Most of the arguments are saved in `../results/EXPERIMENT_NAME/args.json`, so you can continue the evaluation without specifying all the arguments again. Please note that `--api_key` is not saved in `args.json` for security reasons, so you need to specify it again.
```shell
python eval.py \
--checkpoint_dir ../results/EXPERIMENT_NAME \
--api_key sk-************************************************
```
For more details of arguments, please use `python eval.py -h`, and refer to `args.py` and `eval.py`.
### Add Support for Your Models
It is recommended to read the code of the existing evaluators in `eval/models` before implementing your own.
Create a `class YourModelEvaluator` and implement `generate_answer(self, question: dict)` to match the interface expected by `eval.py` and `eval.sh`; this should make the coding process much easier (see the sketch below).
**Do not forget to register your evaluator in `args.py` so it can be selected from the command line.**
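As a rough illustration only (not the official template; the constructor argument and the `question` fields used here are assumptions for the sketch), a new evaluator might look like this:
```python
class YourModelEvaluator:
    """Minimal sketch of a custom evaluator; see eval/models for the real interface."""

    def __init__(self, model_dir: str = ""):
        # Load your model, tokenizer, and processor here; this sketch only stores the path.
        self.model_dir = model_dir

    def generate_answer(self, question: dict) -> str:
        # The field names below are illustrative; check existing evaluators for the exact ones.
        prompt = question.get("prompt", "")
        images = question.get("image_list", [])
        # Replace this dummy return with a real inference call to your model.
        return f"[dummy answer for {len(images)} image(s): {prompt[:30]}]"


if __name__ == "__main__":
    evaluator = YourModelEvaluator("../models/your-model")
    print(evaluator.generate_answer({"prompt": "1 + 1 = ?", "image_list": []}))
```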
You can execute `model_tester.py` in the `eval` folder to check the correctness of your implementation. Various problems, including implementation errors, small bugs in the code, and even wrong environment settings, may cause the evaluation to fail. The examples provided in the file cover most kinds of cases presented in our benchmark. Feel free to change the code in it to debug your implementation 😊
```shell
python model_tester.py <args> # args are similar to the default settings above
```
### Create Captions and OCR Data for Images
Generate captions or OCR data for images and save them in a CSV file with the format below:
```
../data/images/czls/502_1.png,a cartoon drawing of a man standing in front of a large block
../data/images/czls/525_1.png,a chinese newspaper with the headline, china's new year
...
```
We provide two example scripts to generate captions (`image_caption.py`) and OCR data (`image_ocr.py`) for images.
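If you prefer to roll your own, the following sketch shows one way to produce a CSV in that format; `generate_caption` and the output file name are placeholders for whatever captioning/OCR model and path you actually use.
```python
import csv
from pathlib import Path

def generate_caption(image_path: str) -> str:
    # Placeholder: call your captioning or OCR model here (e.g. a BLIP pipeline).
    return "a placeholder caption"

# Walk all images under ../data/images and write one "path,caption" row per image.
image_dir = Path("../data/images")
with open("../data/captions_custom.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for image_path in sorted(image_dir.rglob("*.png")):
        writer.writerow([str(image_path), generate_caption(str(image_path))])
```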
## 📮 How to Submit
You need to first prepare a UTF-8 encoded JSON file with the following format:
```
{
"czsx_0_0": {
"question_id": "czsx_0_0",
"question_image_number": 1,
"image_list": [...], # optional
"input_message": ..., # optional
"prediction": "C"
},
...
}
```
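If you produced predictions with your own pipeline, a minimal sketch for assembling such a file could look like the following; the single entry shown is only an example of the structure above.
```python
import json

# Map each question id to its prediction record, following the format shown above.
submission = {
    "czsx_0_0": {
        "question_id": "czsx_0_0",
        "question_image_number": 1,
        "prediction": "C",  # your model's answer for this question
    },
    # ... one entry per evaluated question
}

with open("prediction.json", "w", encoding="utf-8") as f:
    json.dump(submission, f, ensure_ascii=False, indent=2)
```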
If you evaluate the model with our official code, you can simply zip the prediction file `prediction.json` and the configuration file `args.json` from the experiment results folder `../results/EXPERIMENT_NAME` into a `.zip` archive.
Then, you can submit your result to our [evaluation page](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html).
You are also welcome to open a pull request and contribute your code to our evaluation framework. We will be very grateful for your contribution!
**[Notice]** Thank you for your interest in the **MULTI** dataset! If you want your model added to our leaderboard, please fill in [this questionnaire](https://wj.sjtu.edu.cn/q/89UmRAJn). Your information will be kept strictly confidential, so please feel free to fill it out. 🤗
## 📑 Citation
If you find our work useful, please cite us!
```
@misc{zhu2024multi,
title={{MULTI}: Multimodal Understanding Leaderboard with Text and Images},
author={Zichen Zhu and Yang Xu and Lu Chen and Jingkai Yang and Yichuan Ma and Yiming Sun and Hailin Wen and Jiaqi Liu and Jinyu Cai and Yingzi Ma and Situo Zhang and Zihan Zhao and Liangtai Sun and Kai Yu},
year={2024},
eprint={2402.03173},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## 🧐 Contact Us
If you have any questions, please feel free to contact us via email at `[email protected]` or `[email protected]`.