Update README.md
README.md (CHANGED)
@@ -50,6 +50,14 @@ VCR is designed to measure vision-language models' capability to accurately restore partially obscured texts using pixel-level hints within images.

We found that OCR and text-based processing become ineffective in VCR, as accurate text restoration depends on the combined information from the provided images, the context, and subtle cues from the tiny exposed areas of masked texts. We develop a pipeline to generate synthetic images for the VCR task using image-caption pairs, with adjustable caption visibility to control the task difficulty. While this task is generally easy for native speakers of the corresponding language, initial results indicate that current vision-language models fall short of human performance.
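
The generation pipeline itself is not included on this card; purely as a minimal sketch of the idea described above (render the caption beneath its image, then mask most of each word so only a thin sliver of pixels stays visible, with a visibility knob for difficulty), one could write something like the following with Pillow. The function name, parameters, and masking layout are illustrative assumptions, not the dataset's actual code.

```python
# Minimal sketch only -- NOT the dataset's actual generation pipeline.
# It appends a white caption strip below an image, draws the caption, and then
# covers the upper part of every word with a white box so that only a thin
# slice of pixels remains visible. `visibility` is an assumed, illustrative
# parameter controlling how much of each word stays exposed.
from PIL import Image, ImageDraw, ImageFont


def make_vcr_example(image_path: str, caption: str, visibility: float = 0.3) -> Image.Image:
    img = Image.open(image_path).convert("RGB")
    font = ImageFont.load_default()

    # Append a white strip below the image to hold the caption text.
    strip_height = 40
    canvas = Image.new("RGB", (img.width, img.height + strip_height), "white")
    canvas.paste(img, (0, 0))
    draw = ImageDraw.Draw(canvas)

    x, y = 5, img.height + 12
    for word in caption.split():  # no line wrapping -- sketch only
        left, top, right, bottom = draw.textbbox((x, y), word, font=font)
        draw.text((x, y), word, fill="black", font=font)
        # Mask the top portion of the word; lower visibility -> harder example.
        mask_height = int((bottom - top) * (1.0 - visibility))
        draw.rectangle([left, top, right, top + mask_height], fill="white")
        x = right + 6  # advance to the next word
    return canvas
```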

## Dataset Description

- **Homepage:** [WIT homepage](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning](https://arxiv.org/abs/2103.01913)
- **Leaderboard:** [WIT leaderboard](https://paperswithcode.com/sota/text-image-retrieval-on-wit) and [WIT Kaggle competition](https://www.kaggle.com/competitions/wikipedia-image-caption/leaderboard)
- **Point of Contact:** [Miriam Redi](mailto:[email protected])

## Evaluation

We recommend evaluating your model with [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval). Before evaluating, please refer to the `lmms-eval` documentation.
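
For orientation only, an evaluation run typically looks something like the command below. The model, checkpoint, and `<vcr_task_name>` placeholder are illustrative; replace them with the model you want to evaluate and the VCR task identifiers listed in the `lmms-eval` task documentation.

```bash
# Hypothetical invocation, for illustration only -- substitute the VCR task name
# from the lmms-eval task list and your own model/--model_args values.
python3 -m accelerate.commands.launch --num_processes=1 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks <vcr_task_name> \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```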