小臣子吃大橙子 committed b6d2702 (1 parent: 776fc7a)

update README

- README.md +11 -14
- README_zh.md +7 -17
README.md
CHANGED
@@ -10,9 +10,9 @@ viewer: False
 <div align="center">
 
-![MULTI](./overview.png)
+![MULTI](./docs/static/images/overview.png)
 
-🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) |
+🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
 
 [简体中文](./README_zh.md) | English
@@ -20,11 +20,11 @@ viewer: False
 ## 🔥 News
 
-- **[
-- **[2024.2.19]** We
-- **[2024.2.6]** We
-- **[2023.12.7]** We
-- **[2023.12.5]** We
+- **[2024.3.4]** We have released the [evaluation page](https://OpenDFM.github.io/MULTI-Benchmark/static/pages/submit.html).
+- **[2024.2.19]** We have released the [HuggingFace Page](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/).
+- **[2024.2.6]** We have published our [paper](https://arxiv.org/abs/2402.03173/) on arXiv.
+- **[2023.12.7]** We have released the [code](https://github.com/OpenDFM/MULTI-Benchmark/tree/main/eval) of our benchmark evaluation.
+- **[2023.12.5]** We have released the [GitHub Page](https://OpenDFM.github.io/MULTI-Benchmark/).
 
 ## 📖 Overview
@@ -48,8 +48,6 @@ Rapid progress in multimodal large language models (MLLMs) highlights the need t
 | 🖼️ | VisualGLM | visualglm-6b | 31.1 | 12.8 |
 | 🖼️ | Chinese-LLaVA | Chinese-LLaVA-Cllama2 | 28.5 | 12.3 |
 
-For more details, please visit our [leaderboard]() (Coming Soon).
-
 ## ⏬ Download
 
 You can simply download data using the following command:
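The download command itself falls outside this hunk's context. As a minimal sketch, the dataset can also be fetched programmatically from the Hugging Face Hub linked above; the `snapshot_download` call and the `./data` target directory are assumptions, not the command the README actually ships:

```python
# Minimal sketch: fetch the MULTI-Benchmark dataset from the Hugging Face Hub.
# Assumes `pip install huggingface_hub`; the repo id comes from the dataset
# link above, and local_dir="./data" mirrors the layout the README describes.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="OpenDFM/MULTI-Benchmark",
    repo_type="dataset",
    local_dir="./data",
)
```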
@@ -70,7 +68,6 @@ The structure of `./data` should be something like:
 └── captions_v1.2.0_20231217.csv # image captions generated by BLIP-6.7b
 ```
 
-
 ## 📝 How to Evaluate
 
 We provide a unified evaluation framework in `eval`. Each file in `eval/models` contains an evaluator specific to one M/LLM and implements a `generate_answer` method that takes a question as input and returns the answer.
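To make that interface concrete, here is a hypothetical evaluator in the style described above; only the `generate_answer` method name comes from the README, and everything else is illustrative rather than the repo's actual code:

```python
# Hypothetical sketch of an evaluator: one class per M/LLM, exposing
# generate_answer(question) -> answer. Only the method name generate_answer
# is taken from the README; class and attribute names are assumptions.
class EchoEvaluator:
    """Toy evaluator that returns a canned answer for any question."""

    def __init__(self, model_name: str):
        self.model_name = model_name  # e.g. "visualglm-6b"

    def generate_answer(self, question: dict) -> str:
        # A real evaluator would build the model's prompt (text and images)
        # from `question` here and return the model's prediction.
        prompt = question.get("question", "")
        return f"[{self.model_name}] answer to: {prompt}"
```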
@@ -185,13 +182,13 @@ You need to first prepare a UTF-8 encoded JSON file with the following format:
 }
 ```
 
-If you evaluate the model with our official code, you can simply zip the experiment
+If you evaluate the model with our official code, you can simply zip the prediction file `prediction.json` and the configuration file `args.json` from the experiment results folder `./results/EXPERIMENT_NAME` into a `.zip` file.
 
-Then, you can submit your result to our [evaluation
+Then, you can submit your result to our [evaluation page](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html).
 
-You are also
+You are also welcome to open a pull request and contribute your code to our evaluation framework. We will be very grateful for your contribution!
 
-**[Notice]** Thank you for being so interested in the **MULTI** dataset!
+**[Notice]** Thank you for being so interested in the **MULTI** dataset! If you want to add your model to our leaderboard, please fill in [this questionnaire](https://wj.sjtu.edu.cn/q/89UmRAJn); your information will be kept strictly confidential, so please feel free to fill it out. 🤗
 
 
 ## 📑 Citation
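For the packaging step described in the hunk above, a minimal sketch using Python's standard `zipfile` module; `EXPERIMENT_NAME` stays a placeholder, exactly as in the README:

```python
# Minimal sketch: bundle the two files named above into a .zip for submission.
# EXPERIMENT_NAME is a placeholder, as in the README text.
import zipfile
from pathlib import Path

results = Path("./results/EXPERIMENT_NAME")
with zipfile.ZipFile(results / "submission.zip", "w") as zf:
    for name in ("prediction.json", "args.json"):
        zf.write(results / name, arcname=name)
```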
README_zh.md
CHANGED
@@ -1,18 +1,10 @@
----
-license: mit
-language:
-- zh
-pretty_name: MULTI-Benchmark
-viewer: False
----
-
 # 🖼️ MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images
 
 <div align="center">
 
-![MULTI](./overview.png)
+![MULTI](./docs/static/images/overview.png)
 
-🌐 [网站](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [论文](https://arxiv.org/abs/2402.03173/) | 🤗 [数据](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) |
+🌐 [网站](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [论文](https://arxiv.org/abs/2402.03173/) | 🤗 [数据](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) | 📮 [提交](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
 
 简体中文 | [English](./README.md)
@@ -20,7 +12,7 @@ viewer: False
 ## 🔥 新闻
 
-- **[
+- **[2024.3.4]** 我们发布了[评测页面](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)。
 - **[2024.2.19]** 我们发布了[HuggingFace页面](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/)。
 - **[2024.2.6]** 我们在arXiv上发布了我们的[论文](https://arxiv.org/abs/2402.03173/)。
 - **[2023.12.7]** 我们发布了我们的基准评测[代码](https://github.com/OpenDFM/MULTI-Benchmark/tree/main/eval)。
@@ -48,8 +40,6 @@ viewer: False
 | 🖼️ | VisualGLM | visualglm-6b | 31.1 | 12.8 |
 | 🖼️ | Chinese-LLaVA | Chinese-LLaVA-Cllama2 | 28.5 | 12.3 |
 
-更多详情,请访问我们的[排行榜]()(即将推出)。
-
 ## ⏬ 下载
 
 您只需使用以下命令即可下载数据:
@@ -183,13 +173,13 @@ python model_tester.py <args> # args 类似于上面的默认设置
 ...
 }
 ```
-如果您使用我们的官方代码评测模型,可以直接压缩实验结果文件夹`./results/EXPERIMENT_NAME
+如果您使用我们的官方代码评测模型,可以直接将实验结果文件夹`./results/EXPERIMENT_NAME`中的预测文件`prediction.json`和配置文件`args.json`压缩为`.zip`格式。
 
-然后,您可以将你的结果提交到我们的[
+然后,您可以将您的结果提交到我们的[评测页面](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)。
 
 欢迎拉取请求(Pull Request)并贡献您的代码到我们的评测代码中。我们感激不尽!
 
-**[提示]** 感谢您对 MULTI
+**[提示]** 感谢您对 MULTI 数据集的关注!如果您希望将您的模型结果添加至榜单,请填写[此问卷](https://wj.sjtu.edu.cn/q/89UmRAJn),您的个人信息将被严格保密,请放心填写。🤗
 
 ## 📑 引用
@@ -208,4 +198,4 @@ python model_tester.py <args> # args 类似于上面的默认设置
 ## 📧 联系我们
 
-
+如果您有任何问题,请随时通过电子邮件与我们联系: `[email protected]` 和 `[email protected]`