Update README.md
README.md CHANGED
@@ -741,9 +741,42 @@ configs:
   data_files:
   - split: test
     path: Wiki-SS-NQ/test-*
+license: mit
+language:
+- en
+tags:
+- ranking
+pretty_name: MMEB
+size_categories:
+- 10K<n<100K
 ---
 
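A minimal sketch of loading one subset with the `datasets` library, for reference. The Hub repo id below is a placeholder (the dataset's id is not shown in this diff), and the config name `Wiki-SS-NQ` is assumed from the `data_files` path in the front matter above:

```python
# Minimal loading sketch. REPO_ID is a placeholder for this dataset's Hub id,
# and the config name "Wiki-SS-NQ" is assumed from the front matter above.
from datasets import load_dataset

REPO_ID = "<hub-org>/<this-dataset>"  # placeholder, not the real repo id

wiki_ss_nq = load_dataset(REPO_ID, "Wiki-SS-NQ", split="test")
print(wiki_ss_nq)      # features and row count for the test split
print(wiki_ss_nq[0])   # one example: a query plus its candidate targets
```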
+# Massive Multimodal Embedding Benchmark
 
+We compile a large set of evaluation tasks to understand the capabilities of multimodal embedding models. This benchmark covers 4 meta-tasks and 36 datasets, meticulously selected for evaluation.
 
-
+The dataset is introduced in our paper [VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks](https://arxiv.org/abs/2410.05160).
 
+## Dataset Usage
+For each dataset, we provide 1,000 examples for evaluation. Each example contains a query and a list of candidate targets. Both the query and the targets can be any combination of image and text. The first entry in the candidate list is the ground-truth target.
+
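Given the convention that the first candidate is the ground truth, here is a hedged sketch of a precision@1 evaluation loop. The field names (`qry_text`, `qry_image`, `tgt_text`, `tgt_image`) and the `embed` callable are illustrative assumptions, not the dataset's documented schema; check `dataset.features` for the actual columns:

```python
# Illustrative evaluation sketch. Field names and the `embed` callable are
# assumptions; only the scoring rule (index 0 is the ground truth) comes
# from the README text above.
import numpy as np

def precision_at_1(dataset, embed):
    """`embed(text, image)` is a hypothetical function returning a 1-D vector."""
    hits = 0
    for ex in dataset:
        query_vec = embed(ex["qry_text"], ex["qry_image"])  # assumed fields
        cand_vecs = [embed(t, i) for t, i in zip(ex["tgt_text"], ex["tgt_image"])]
        sims = [
            float(np.dot(c, query_vec) / (np.linalg.norm(c) * np.linalg.norm(query_vec)))
            for c in cand_vecs
        ]
        hits += int(int(np.argmax(sims)) == 0)  # index 0 = ground-truth target
    return hits / len(dataset)
```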
+## Statistics
+We show the statistics of all the datasets below:
+<img width="900" alt="dataset statistics" src="statistics.png">
+
+## Per-dataset Results
+We list the performance of different embedding models below:
+<img width="900" alt="leaderboard" src="leaderboard.png">
+
+## Submission
+We will set up a formal leaderboard soon. If you would like your results added to the leaderboard, please email us at [email protected].
+
+## Cite Us
+```
+@article{jiang2024vlm2vec,
+  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
+  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
+  journal={arXiv preprint arXiv:2410.05160},
+  year={2024}
+}
+```