Update README.md
README.md (CHANGED)
@@ -17,13 +17,140 @@ dataset_info:
    sequence: string
  splits:
  - name: test
    num_bytes: 100452047
    num_examples: 500
  download_size: 100332998
  dataset_size: 100452047
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-sa-4.0
source_datasets:
- wikimedia/wit_base
task_categories:
- visual-question-answering
language:
- zh
pretty_name: VCR
arxiv: 2406.06462
size_categories:
- n<1K
---
# The VCR-Wiki Dataset for Visual Caption Restoration (VCR)

[Paper](https://arxiv.org/abs/2406.06462) | [GitHub](https://github.com/tianyu-z/vcr) | [Huggingface Datasets](https://huggingface.co/vcr-org) | [Evaluation with lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval)

This is the official Hugging Face dataset for VCR-Wiki, a dataset for the [Visual Caption Restoration (VCR)](https://arxiv.org/abs/2406.06462) task.

VCR is designed to measure vision-language models' capability to accurately restore partially obscured text using pixel-level hints within images.

![image/jpg](https://raw.githubusercontent.com/tianyu-z/VCR/main/assets/main_pic_en_easy.jpg)

We find that OCR and text-based processing become ineffective in VCR, as accurate text restoration depends on the combined information from the provided image, the surrounding context, and subtle cues from the tiny exposed areas of the masked text. We develop a pipeline that generates synthetic images for the VCR task from image-caption pairs, with adjustable caption visibility to control task difficulty. Although the task is generally easy for native speakers of the corresponding language, initial results indicate that current vision-language models fall short of human performance.

## Dataset Description

- **GitHub:** [VCR GitHub](https://github.com/tianyu-z/vcr)
- **Paper:** [VCR: Visual Caption Restoration](https://arxiv.org/abs/2406.06462)
- **Point of Contact:** [Tianyu Zhang](mailto:[email protected])

## Evaluation

We recommend evaluating your model with [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval). Before evaluating, please refer to the `lmms-eval` documentation.

```console
pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git

# We use MiniCPM-Llama3-V-2_5 and vcr_wiki_en_easy as an example
python3 -m accelerate.commands.launch \
    --num_processes=8 \
    -m lmms_eval \
    --model minicpm_v \
    --model_args pretrained="openbmb/MiniCPM-Llama3-V-2_5" \
    --tasks vcr_wiki_en_easy \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix MiniCPM-Llama3-V-2_5_vcr_wiki_en_easy \
    --output_path ./logs/
```

`lmms-eval` supports the following VCR `--tasks` settings:

* English
  * Easy
    * `vcr_wiki_en_easy` (full test set, 5000 instances)
    * `vcr_wiki_en_easy_500` (first 500 instances in the vcr_wiki_en_easy setting)
    * `vcr_wiki_en_easy_100` (first 100 instances in the vcr_wiki_en_easy setting)
  * Hard
    * `vcr_wiki_en_hard` (full test set, 5000 instances)
    * `vcr_wiki_en_hard_500` (first 500 instances in the vcr_wiki_en_hard setting)
    * `vcr_wiki_en_hard_100` (first 100 instances in the vcr_wiki_en_hard setting)
* Chinese
  * Easy
    * `vcr_wiki_zh_easy` (full test set, 5000 instances)
    * `vcr_wiki_zh_easy_500` (first 500 instances in the vcr_wiki_zh_easy setting)
    * `vcr_wiki_zh_easy_100` (first 100 instances in the vcr_wiki_zh_easy setting)
  * Hard
    * `vcr_wiki_zh_hard` (full test set, 5000 instances)
    * `vcr_wiki_zh_hard_500` (first 500 instances in the vcr_wiki_zh_hard setting)
    * `vcr_wiki_zh_hard_100` (first 100 instances in the vcr_wiki_zh_hard setting)
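
For instance, several of the smaller subsets can usually be evaluated in one launch by passing a comma-separated list to `--tasks`. This is a hedged sketch based on the example above; confirm the comma-separated task syntax and flags against the version of `lmms-eval` you have installed.

```console
# Assumed syntax: comma-separated task names, mirroring the earlier example
python3 -m accelerate.commands.launch \
    --num_processes=8 \
    -m lmms_eval \
    --model minicpm_v \
    --model_args pretrained="openbmb/MiniCPM-Llama3-V-2_5" \
    --tasks vcr_wiki_zh_easy_500,vcr_wiki_zh_hard_500 \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix MiniCPM-Llama3-V-2_5_vcr_wiki_zh_500 \
    --output_path ./logs/
```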

## Dataset Statistics

We show the statistics of the original VCR-Wiki dataset below:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62bb1e0f3ff437e49a3088e5/CBS35FnFi9p0hFY9iJ0ba.png)

## Dataset Construction

![image/png](https://raw.githubusercontent.com/tianyu-z/VCR/main/assets/vcr_pipeline.png)

* **Data Collection and Initial Filtering**: The original data is collected from [wikimedia/wit_base](https://huggingface.co/datasets/wikimedia/wit_base). Before constructing the dataset, we first filter out instances with sensitive content, including NSFW and crime-related terms, to mitigate AI risks and biases.

* **N-gram Selection**: We first truncate the description of each entry to fewer than 5 lines under our predefined font and size settings. We then tokenize the description of each entry with spaCy and randomly mask out 5-grams, such that the masked 5-grams contain no numbers, person names, religious or political groups, facilities, organizations, locations, dates, or times as labeled by spaCy, and the total number of masked tokens does not exceed 50% of the tokens in the caption (a sketch of this step is given after this list).

* **Create Text Embedded in Images**: We create a text-embedded-in-image (TEI) rendering of the description, resize its width to 300 pixels, and mask out the selected 5-grams with white rectangles. The size of the rectangles reflects the difficulty of the task: (1) in the easy version, the task is easy for native speakers but open-source OCR models almost always fail, and (2) in the hard version, the revealed part consists of only one to two pixels for the majority of letters or characters, yet the restoration task remains feasible for native speakers of the language (see the second sketch after this list).

* **Concatenate Images**: We concatenate the TEI with the main visual image (VI) to obtain the stacked image.

* **Second-round Filtering**: We filter out all entries that have no masked n-grams or whose stacked image exceeds 900 pixels in height.
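
To make the n-gram selection step concrete, here is a minimal sketch. It assumes spaCy's English pipeline (`en_core_web_sm`) and its OntoNotes entity labels; the function name `select_mask_ngrams`, the banned-entity set, and the masking-budget logic are illustrative choices, not the released pipeline code.

```python
# Illustrative sketch of 5-gram selection; not the official VCR construction code.
import random
import spacy

# Entity types that the description above excludes from masking (spaCy/OntoNotes labels).
BANNED_ENTS = {"PERSON", "NORP", "FAC", "ORG", "GPE", "LOC", "DATE", "TIME"}

def select_mask_ngrams(caption: str, n: int = 5, max_mask_ratio: float = 0.5, seed: int = 0):
    """Return disjoint (start, end) token spans to mask in `caption`."""
    nlp = spacy.load("en_core_web_sm")  # assumption: English model is installed
    doc = nlp(caption)
    rng = random.Random(seed)

    # Candidate n-grams: no numbers and no banned entity types anywhere in the span.
    candidates = []
    for i in range(len(doc) - n + 1):
        span = doc[i : i + n]
        if any(t.like_num or t.ent_type_ in BANNED_ENTS for t in span):
            continue
        candidates.append((i, i + n))

    rng.shuffle(candidates)
    selected, masked = [], set()
    budget = int(max_mask_ratio * len(doc))  # mask at most 50% of the tokens
    for start, end in candidates:
        span_tokens = set(range(start, end))
        if span_tokens & masked or len(masked) + n > budget:
            continue  # keep spans disjoint and stay under the budget
        selected.append((start, end))
        masked |= span_tokens
    return selected
```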
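
The TEI rendering, masking, and concatenation steps can likewise be pictured with a short Pillow sketch. The 300-pixel target width comes from the description above; the helper names, the default font, and the `reveal_px` knob used to approximate the easy/hard difficulty are assumptions for illustration only.

```python
# Illustrative sketch of TEI creation, masking, and stacking; not the official pipeline.
from PIL import Image, ImageDraw, ImageFont

def render_tei(caption_lines, width=600, line_height=28):
    """Render the caption lines as a text-embedded-in-image (TEI) block."""
    font = ImageFont.load_default()  # assumption: the real pipeline fixes a font and size
    tei = Image.new("RGB", (width, line_height * len(caption_lines) + 10), "white")
    draw = ImageDraw.Draw(tei)
    for i, line in enumerate(caption_lines):
        draw.text((5, 5 + i * line_height), line, fill="black", font=font)
    return tei

def mask_boxes(tei, boxes, reveal_px=2):
    """Cover each (x0, y0, x1, y1) text box with a white rectangle, leaving only
    `reveal_px` rows visible at the top and bottom; larger values mimic the easy setting."""
    draw = ImageDraw.Draw(tei)
    for x0, y0, x1, y1 in boxes:
        draw.rectangle([x0, y0 + reveal_px, x1, y1 - reveal_px], fill="white")
    return tei

def stack_images(vi, tei, target_width=300):
    """Resize both images to a common width and stack the VI on top of the TEI."""
    def fit(img):
        return img.resize((target_width, round(img.height * target_width / img.width)))
    vi, tei = fit(vi), fit(tei)
    out = Image.new("RGB", (target_width, vi.height + tei.height), "white")
    out.paste(vi, (0, 0))
    out.paste(tei, (0, vi.height))
    return out
```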

## Data Fields

* `question_id`: `int64`, the instance id in the current split.
* `image`: `PIL.Image.Image`, the original visual image (VI).
* `stacked_image`: `PIL.Image.Image`, the stacked VI+TEI image containing both the original visual image and the masked text embedded in image (TEI).
* `only_id_image`: `PIL.Image.Image`, the masked TEI image.
* `caption`: `str`, the unmasked original text presented in the TEI image.
* `crossed_text`: `List[str]`, the masked n-grams in the current instance.
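
To see these fields in practice, the snippet below loads one instance with the `datasets` library. The repository id is a placeholder; substitute whichever VCR-Wiki test subset from the [vcr-org](https://huggingface.co/vcr-org) collection you intend to use.

```python
# Minimal loading sketch; "vcr-org/VCR-wiki-zh-easy-test-500" is a hypothetical repo id.
from datasets import load_dataset

ds = load_dataset("vcr-org/VCR-wiki-zh-easy-test-500", split="test")

example = ds[0]
print(example["question_id"])    # int64 instance id within the split
print(example["caption"])        # unmasked text shown in the TEI image
print(example["crossed_text"])   # list of masked n-grams
example["stacked_image"].save("stacked_example.png")  # PIL image: VI + masked TEI
```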

## Disclaimer for the VCR-Wiki Dataset and Its Subsets

The VCR-Wiki dataset and/or its subsets are provided under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. This dataset is intended solely for research and educational purposes in the field of visual caption restoration and related vision-language tasks.

Important Considerations:

1. **Accuracy and Reliability**: While the VCR-Wiki dataset has undergone filtering to exclude sensitive content, it may still contain inaccuracies or unintended biases. Users are encouraged to critically evaluate the dataset's content and its applicability to their specific research objectives.

2. **Ethical Use**: Users must ensure that their use of the VCR-Wiki dataset aligns with ethical guidelines and standards, in particular by avoiding harm, not perpetuating biases, and not misusing the data in ways that could negatively impact individuals or groups.

3. **Modifications and Derivatives**: Any modifications or derivative works based on the VCR-Wiki dataset must be shared under the same license (CC BY-SA 4.0).

4. **Commercial Use**: Commercial use of the VCR-Wiki dataset is permitted under the CC BY-SA 4.0 license, provided that proper attribution is given and any derivative works are shared under the same license.

By using the VCR-Wiki dataset and/or its subsets, you agree to the terms and conditions outlined in this disclaimer and the associated license. The creators of the dataset are not liable for any direct or indirect damages resulting from its use.

## Citation

If you find VCR useful for your research and applications, please cite using this BibTeX:

```bibtex
@article{zhang2024vcr,
  title   = {VCR: Visual Caption Restoration},
  author  = {Tianyu Zhang and Suyuchen Wang and Lu Li and Ge Zhang and Perouz Taslakian and Sai Rajeswar and Jie Fu and Bang Liu and Yoshua Bengio},
  year    = {2024},
  journal = {arXiv preprint arXiv:2406.06462}
}
```