language:
- en
pretty_name: VCR
arxiv: 2406.06462
size_categories:
- n<1K
---

# The VCR-Wiki Dataset for Visual Caption Restoration (VCR)

🏠 [Paper](https://arxiv.org/abs/2406.06462) | 👩🏻‍💻 [GitHub](https://github.com/tianyu-z/vcr) | 🤗 [Huggingface Datasets](https://huggingface.co/vcr-org) | 📝 [Evaluation with lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval)

This is the official Hugging Face dataset for VCR-Wiki, a dataset for the [Visual Caption Restoration (VCR)](https://arxiv.org/abs/2406.06462) task.

VCR is designed to measure vision-language models' capability to accurately restore partially obscured text using pixel-level hints within images.

![image/jpg](https://raw.githubusercontent.com/tianyu-z/VCR/main/assets/main_pic_en_easy.jpg)

We found that OCR and purely text-based processing become ineffective in VCR, because accurate restoration depends on the combined information from the provided image, the surrounding context, and subtle cues from the tiny exposed areas of the masked text. We developed a pipeline that generates synthetic images for the VCR task from image-caption pairs, with adjustable caption visibility to control task difficulty. While the task is generally easy for native speakers of the corresponding language, initial results indicate that current vision-language models fall well short of human performance.

## Evaluation

We recommend evaluating your model with [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval). Before evaluating, please refer to the `lmms-eval` documentation.

```console
pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git

# We use MiniCPM-Llama3-V-2_5 and vcr_wiki_en_easy as an example
python3 -m accelerate.commands.launch \
    --num_processes=8 \
    -m lmms_eval \
    --model minicpm_v \
    --model_args pretrained="openbmb/MiniCPM-Llama3-V-2_5" \
    --tasks vcr_wiki_en_easy \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix MiniCPM-Llama3-V-2_5_vcr_wiki_en_easy \
    --output_path ./logs/
```

`lmms-eval` supports the following VCR `--tasks` settings:

* English
  * Easy
    * `vcr_wiki_en_easy` (full test set, 5000 instances)
    * `vcr_wiki_en_easy_500` (first 500 instances of the `vcr_wiki_en_easy` setting)
    * `vcr_wiki_en_easy_100` (first 100 instances of the `vcr_wiki_en_easy` setting)
  * Hard
    * `vcr_wiki_en_hard` (full test set, 5000 instances)
    * `vcr_wiki_en_hard_500` (first 500 instances of the `vcr_wiki_en_hard` setting)
    * `vcr_wiki_en_hard_100` (first 100 instances of the `vcr_wiki_en_hard` setting)
* Chinese
  * Easy
    * `vcr_wiki_zh_easy` (full test set, 5000 instances)
    * `vcr_wiki_zh_easy_500` (first 500 instances of the `vcr_wiki_zh_easy` setting)
    * `vcr_wiki_zh_easy_100` (first 100 instances of the `vcr_wiki_zh_easy` setting)
  * Hard
    * `vcr_wiki_zh_hard` (full test set, 5000 instances)
    * `vcr_wiki_zh_hard_500` (first 500 instances of the `vcr_wiki_zh_hard` setting)
    * `vcr_wiki_zh_hard_100` (first 100 instances of the `vcr_wiki_zh_hard` setting)
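
The task names follow a regular pattern: `vcr_wiki_{lang}_{difficulty}`, optionally suffixed with `_500` or `_100` for the subsets. A small helper to enumerate all twelve names, which can be handy when scripting batch evaluations (the helper itself is illustrative, not part of `lmms-eval`):

```python
def vcr_task_names():
    """Enumerate every VCR --tasks setting listed above:
    2 languages x 2 difficulties x (full set + 500/100 subsets) = 12 names."""
    names = []
    for lang in ("en", "zh"):
        for difficulty in ("easy", "hard"):
            base = f"vcr_wiki_{lang}_{difficulty}"
            names.append(base)  # full 5000-instance test set
            names += [f"{base}_{n}" for n in (500, 100)]  # first-N subsets
    return names
```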

## Dataset Statistics

We show the statistics of the original VCR-Wiki dataset below:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62bb1e0f3ff437e49a3088e5/CBS35FnFi9p0hFY9iJ0ba.png)

## Dataset Construction

![image/png](https://raw.githubusercontent.com/tianyu-z/VCR/main/assets/vcr_pipeline.png)

* **Data Collection and Initial Filtering**: The original data is collected from [wikimedia/wit_base](https://huggingface.co/datasets/wikimedia/wit_base). Before constructing the dataset, we filter out instances with sensitive content, including NSFW and crime-related terms, to mitigate AI risks and biases.

* **N-gram Selection**: We first truncate the description of each entry to fewer than 5 lines under our predefined font and size settings. We then tokenize each description with spaCy and randomly mask out 5-grams, such that the masked 5-grams contain no numbers, person names, religious or political groups, facilities, organizations, locations, dates, or times (as labeled by spaCy), and the total number of masked tokens does not exceed 50% of the tokens in the caption.

* **Creating Text Embedded in Images**: We render the description as text embedded in an image (TEI), resize its width to 300 pixels, and mask out the selected 5-grams with white rectangles. The size of the rectangles controls the difficulty of the task: (1) in the easy version, the task is easy for native speakers but open-source OCR models almost always fail; (2) in the hard version, the revealed part consists of only one to two pixels for the majority of letters or characters, yet the restoration task remains feasible for native speakers of the language.

* **Concatenating Images**: We concatenate the TEI with the main visual image (VI) to obtain the stacked image.

* **Second-round Filtering**: We filter out all entries that have no masked n-grams or whose stacked image height exceeds 900 pixels.
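
The n-gram selection step above can be sketched in a few lines. This is a minimal illustration, not the official pipeline code: it uses a pre-tokenized caption and a caller-supplied `is_excluded` predicate standing in for spaCy's number detection and named-entity labels, and it enforces the non-overlap and 50% masking-budget constraints described above:

```python
import random

def select_mask_ngrams(tokens, is_excluded, n=5, max_ratio=0.5, seed=0):
    """Randomly pick non-overlapping n-grams to mask, skipping any n-gram
    that contains an excluded token (numbers, named entities, ...), and
    never letting masked tokens exceed max_ratio of the caption."""
    rng = random.Random(seed)
    starts = list(range(len(tokens) - n + 1))
    rng.shuffle(starts)  # random masking order
    masked = set()       # indices of already-masked tokens
    chosen = []
    budget = int(len(tokens) * max_ratio)
    for s in starts:
        span = range(s, s + n)
        if any(i in masked for i in span):
            continue  # overlaps a previously chosen n-gram
        if any(is_excluded(tokens[i]) for i in span):
            continue  # contains a number / entity token
        if len(masked) + n > budget:
            continue  # would exceed the masking budget
        masked.update(span)
        chosen.append(" ".join(tokens[s:s + n]))
    return chosen
```

In the real pipeline, `is_excluded` would consult spaCy token attributes (e.g. entity labels such as PERSON, NORP, FAC, ORG, GPE, LOC, DATE, TIME) rather than a simple predicate.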

## Data Fields

* `question_id`: `int64`, the instance id in the current split.
* `image`: `PIL.Image.Image`, the original visual image (VI).
* `stacked_image`: `PIL.Image.Image`, the stacked VI+TEI image containing both the original visual image and the masked text embedded in the image.
* `only_id_image`: `PIL.Image.Image`, the masked TEI image.
* `caption`: `str`, the unmasked original text presented in the TEI image.
* `crossed_text`: `List[str]`, the masked n-grams in the current instance.
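
A consequence of the field spec above is that every n-gram in `crossed_text` is a substring of the unmasked `caption`. A hedged sanity-check sketch, using an illustrative toy instance (the image fields are omitted here; in the real dataset they are PIL images):

```python
def check_instance(inst):
    """Sanity-check one VCR-Wiki instance against the field spec:
    each masked n-gram in `crossed_text` must occur in `caption`."""
    assert isinstance(inst["question_id"], int)
    assert isinstance(inst["caption"], str)
    for ngram in inst["crossed_text"]:
        assert ngram in inst["caption"], f"{ngram!r} not found in caption"
    return True

# Illustrative toy instance, not real data.
toy = {
    "question_id": 0,
    "caption": "A stone bridge crossing a quiet river at dawn",
    "crossed_text": ["bridge crossing a quiet river"],
}
```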

## Disclaimer for the VCR-Wiki Dataset and Its Subsets

The VCR-Wiki dataset and/or its subsets are provided under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. This dataset is intended solely for research and educational purposes in the field of visual caption restoration and related vision-language tasks.

Important considerations:

1. **Accuracy and Reliability**: While the VCR-Wiki dataset has undergone filtering to exclude sensitive content, it may still contain inaccuracies or unintended biases. Users are encouraged to critically evaluate the dataset's content and its applicability to their specific research objectives.

2. **Ethical Use**: Users must ensure that their use of the VCR-Wiki dataset aligns with ethical guidelines and standards, particularly by avoiding harm, not perpetuating biases, and not misusing the data in ways that could negatively impact individuals or groups.

3. **Modifications and Derivatives**: Any modifications or derivative works based on the VCR-Wiki dataset must be shared under the same license (CC BY-SA 4.0).

4. **Commercial Use**: Commercial use of the VCR-Wiki dataset is permitted under the CC BY-SA 4.0 license, provided that proper attribution is given and any derivative works are shared under the same license.

By using the VCR-Wiki dataset and/or its subsets, you agree to the terms and conditions outlined in this disclaimer and the associated license. The creators of the dataset are not liable for any direct or indirect damages resulting from its use.

## Citation

If you find VCR useful for your research and applications, please cite using this BibTeX:

```bibtex
@article{zhang2024vcr,
  title   = {VCR: Visual Caption Restoration},
  author  = {Tianyu Zhang and Suyuchen Wang and Lu Li and Ge Zhang and Perouz Taslakian and Sai Rajeswar and Jie Fu and Bang Liu and Yoshua Bengio},
  year    = {2024},
  journal = {arXiv preprint arXiv:2406.06462}
}
```