tianyu-z committed 2e01363 (1 parent: 326c95d)
Files changed (1): README.md (+26 -0)
README.md CHANGED
@@ -56,6 +56,32 @@ We found that OCR and text-based processing become ineffective in VCR as accurat
 
- **Paper:** [VCR: Visual Caption Restoration](https://arxiv.org/abs/2406.06462)
- **Point of Contact:** [Tianyu Zhang](mailto:[email protected])

+ # Benchmark
+ EM denotes "Exact Match" and Jaccard denotes "Jaccard Similarity"; a minimal sketch of both metrics follows the diff. The best results among closed-source and open-source models are highlighted in **bold**. Closed-source models are evaluated on 500 test samples, while open-source models are evaluated on 5000 test samples.
+ | Model | Size (unknown for closed source) | En Easy EM | En Easy Jaccard | En Hard EM | En Hard Jaccard | Zh Easy EM | Zh Easy Jaccard | Zh Hard EM | Zh Hard Jaccard |
+ |---|---|---|---|---|---|---|---|---|---|
+ | Claude 3 Opus | - | 62.0 | 77.67 | 37.8 | 57.68 | 0.9 | 11.5 | 0.3 | 9.22 |
+ | Claude 3.5 Sonnet | - | 63.85 | 74.65 | 41.74 | 56.15 | 1.0 | 7.54 | 0.2 | 4.0 |
+ | GPT-4 Turbo | - | 78.74 | 88.54 | 45.15 | 65.72 | 0.2 | 8.42 | 0.0 | 8.58 |
+ | GPT-4V | - | 52.04 | 65.36 | 25.83 | 44.63 | - | - | - | - |
+ | GPT-4o | - | **91.55** | **96.44** | **73.2** | **86.17** | **14.87** | **39.05** | **2.2** | **22.72** |
+ | Gemini 1.5 Pro | - | 62.73 | 77.71 | 28.07 | 51.9 | 1.1 | 11.1 | 0.7 | 11.82 |
+ | Qwen-VL-Max | - | 76.8 | 85.71 | 41.65 | 61.18 | 6.34 | 13.45 | 0.89 | 5.4 |
+ | Reka Core | - | 66.46 | 84.23 | 6.71 | 25.84 | 0.0 | 3.43 | 0.0 | 3.35 |
+ | CogVLM2 | 19B | **83.25** | **89.75** | **37.98** | **59.99** | - | - | - | - |
+ | CogVLM2-Chinese | 19B | - | - | - | - | **33.24** | **57.57** | **1.34** | **17.35** |
+ | DeepSeek-VL | 1.3B | 23.04 | 46.84 | 0.16 | 11.89 | 0.0 | 6.56 | 0.0 | 6.46 |
+ | DeepSeek-VL | 7B | 38.01 | 60.02 | 1.0 | 15.9 | 0.0 | 4.08 | 0.0 | 5.11 |
+ | DocOwl-1.5-Omni | 8B | 0.84 | 13.34 | 0.04 | 7.76 | 0.0 | 1.14 | 0.0 | 1.37 |
+ | Idefics2 | 8B | 15.75 | 31.97 | 0.65 | 9.93 | - | - | - | - |
+ | InternLM-XComposer2-VL | 7B | 46.64 | 70.99 | 0.7 | 12.51 | 0.27 | 12.32 | 0.07 | 8.97 |
+ | InternVL-V1.5 | 25.5B | 14.65 | 51.42 | 1.99 | 16.73 | 4.78 | 26.43 | 0.03 | 8.46 |
+ | MiniCPM-V2.5 | 8B | 31.81 | 53.24 | 1.41 | 11.94 | 4.1 | 18.03 | 0.09 | 7.39 |
+ | Monkey | 7B | 50.66 | 67.6 | 1.96 | 14.02 | 0.62 | 8.34 | 0.12 | 6.36 |
+ | Qwen-VL | 7B | 49.71 | 69.94 | 2.0 | 15.04 | 0.04 | 1.5 | 0.01 | 1.17 |
+ | Yi-VL | 34B | 0.82 | 5.59 | 0.07 | 4.31 | 0.0 | 4.44 | 0.0 | 4.12 |
+ | Yi-VL | 6B | 0.75 | 5.54 | 0.06 | 4.46 | 0.0 | 4.37 | 0.0 | 4.0 |
+
# Model Evaluation

  ## Method 1 (recommended): use the evaluation script
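
For reference, below is a minimal sketch of how the two metrics in the benchmark table could be computed. The tokenization (whitespace for English, per-character for Chinese) and the helper names `exact_match` and `jaccard_similarity` are illustrative assumptions; the evaluation script referenced in Method 1 above is the authoritative implementation.

```python
def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 iff the restored text equals the ground truth exactly."""
    return float(prediction.strip() == reference.strip())


def jaccard_similarity(prediction: str, reference: str, by_char: bool = False) -> float:
    """Jaccard similarity |P & R| / |P | R| over token sets.

    by_char=True treats each character as a token (a plausible choice for
    Chinese); by_char=False splits on whitespace (for English). This
    tokenization is an assumption, not necessarily the official one.
    """
    def tokens(s: str) -> set:
        s = s.strip()
        return set(s) if by_char else set(s.split())

    p, r = tokens(prediction), tokens(reference)
    return len(p & r) / len(p | r) if (p or r) else 1.0


# Table entries would then be averages over the test split, as percentages:
preds = ["the quick brown fox", "jumps over the dog"]
refs = ["the quick brown fox", "jumps over the lazy dog"]
em = 100 * sum(exact_match(p, r) for p, r in zip(preds, refs)) / len(preds)
jac = 100 * sum(jaccard_similarity(p, r) for p, r in zip(preds, refs)) / len(preds)
print(f"EM: {em:.2f}  Jaccard: {jac:.2f}")  # EM: 50.00  Jaccard: 90.00
```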