Tasks: Visual Question Answering
Formats: parquet
Languages: Chinese
Size: 100K - 1M
tianyu-z committed · commit 3d98269 · parent: b53b2cc

README.md CHANGED
@@ -82,9 +82,10 @@ We support open-source model_id:
 "THUDM/cogvlm2-llama3-chat-19B",
 "echo840/Monkey-Chat",]
 ```
-For models not on the list, which are not integrated with huggingface, please refer to their github repo to create the evaluation pipeline.
+For models not on the list, which are not integrated with huggingface, please refer to their github repo to create the evaluation pipeline. Examples of the inference logic are in `src/evaluation/inference.py`.
 
 ```bash
+pip install -r requirements.txt
 # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
 # Inference from the VLMs and save the results to {model_id}_{difficulty}_{language}.json
 cd src/evaluation
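For orientation, here is a minimal sketch of such a hand-written pipeline, assuming a `test` split, an `image` column, and a generic restoration prompt; it is not the repository's `src/evaluation/inference.py`, so check that file for the real schema and prompt.

```python
# Hypothetical sketch, not the repository's inference.py.
# Assumed: a "test" split, an "image" column, and a generic prompt.
import json
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id, difficulty, language = "HuggingFaceM4/idefics2-8b", "easy", "en"
dataset = load_dataset(f"vcr-org/VCR-wiki-{language}-{difficulty}-test", split="test")
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

results = {}
for idx, example in enumerate(dataset):
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Restore the covered text in the image."},
    ]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=prompt, images=[example["image"]],
                       return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    # Keep only the newly generated tokens, not the echoed prompt.
    new_tokens = output[:, inputs["input_ids"].shape[1]:]
    results[str(idx)] = processor.batch_decode(new_tokens, skip_special_tokens=True)[0]

# File name follows the {model_id}_{difficulty}_{language}.json convention above.
with open(f"{model_id.split('/')[-1]}_{difficulty}_{language}.json", "w") as f:
    json.dump(results, f, ensure_ascii=False, indent=2)
```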
@@ -98,19 +99,23 @@ python3 gather_results.py --jsons_path .
 ```
 
 ### Close-source evaluation
-We provide the evaluation script for the close-source
+We provide the evaluation script for the close-source models in `src/evaluation/closed_source_eval.py`.
 
 You need an API key, a pre-saved testing dataset, and to specify the path where the data is saved
 ```bash
+pip install -r requirements.txt
 cd src/evaluation
-# save the testing dataset to the path
+# [download images for local inference, option 1] save the testing dataset to the path using the script from huggingface
 python3 save_image_from_dataset.py --output_path .
+# [download images for local inference, option 2] save the testing dataset to the path using the github repo
+# use en-easy-test-500 as an example
+git clone https://github.com/tianyu-z/VCR-wiki-en-easy-test-500.git
 
-#
-python3
+# specify --image_path "path_to_image" to run inference on locally stored images; otherwise the script will stream the images from the github repo
+python3 closed_source_eval.py --model_id gpt4o --dataset_handler "VCR-wiki-en-easy-test-500" --api_key "Your_API_Key"
 
 # Evaluate the results and save the evaluation metrics to {model_id}_{difficulty}_{language}_evaluation_result.json
-python3 evaluation_metrics.py --model_id gpt4o --output_path . --json_filename "gpt4o_en_easy.json" --dataset_handler "vcr-org/VCR-wiki-en-easy-test"
+python3 evaluation_metrics.py --model_id gpt4o --output_path . --json_filename "gpt4o_en_easy.json" --dataset_handler "vcr-org/VCR-wiki-en-easy-test-500"
 
 # To get the mean score of all the `{model_id}_{difficulty}_{language}_evaluation_result.json` in `jsons_path` (and the std, confidence interval if `--bootstrap`) of the evaluation metrics
 python3 gather_results.py --jsons_path .
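A minimal sketch of what a single closed-source query involves, assuming the OpenAI Python SDK and a locally saved image; this is not the repository's `closed_source_eval.py`, whose actual flags (`--image_path`, streaming from the github repo) and prompt live in the script itself, and the image path and prompt below are placeholders.

```python
# Hypothetical sketch of one closed-source query; not the repository's
# closed_source_eval.py. Assumes a locally saved image (option 1 or 2 above).
import base64
import json
from openai import OpenAI

client = OpenAI(api_key="Your_API_Key")  # or read from the OPENAI_API_KEY env var

def query_one(image_path: str, prompt: str = "Restore the covered text in the image.") -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=256,
    )
    return response.choices[0].message.content

# Accumulate answers keyed by example id, then dump to the JSON file that
# evaluation_metrics.py reads (file name per the convention above).
answers = {"0": query_one("VCR-wiki-en-easy-test-500/images/0.jpg")}  # path is illustrative
with open("gpt4o_en_easy.json", "w") as f:
    json.dump(answers, f, ensure_ascii=False, indent=2)
```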
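For the aggregation step, a minimal sketch of a mean / std / percentile-bootstrap confidence interval over per-example scores, assuming each `*_evaluation_result.json` is a flat mapping from example id to score; the real logic, including the `--bootstrap` flag, is in `src/evaluation/gather_results.py`.

```python
# Hypothetical sketch of the aggregation in gather_results.py.
# The JSON layout (example id -> per-example score) is an assumption.
import glob
import json
import random
import statistics

def bootstrap_ci(scores, n_resamples=1000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean score."""
    means = []
    for _ in range(n_resamples):
        sample = [random.choice(scores) for _ in scores]
        means.append(statistics.fmean(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

for path in glob.glob("*_evaluation_result.json"):
    with open(path) as f:
        per_example = json.load(f)
    scores = [float(v) for v in per_example.values()]
    mean = statistics.fmean(scores)
    std = statistics.stdev(scores) if len(scores) > 1 else 0.0
    lo, hi = bootstrap_ci(scores)
    print(f"{path}: mean={mean:.4f} std={std:.4f} 95% CI=({lo:.4f}, {hi:.4f})")
```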
@@ -123,7 +128,6 @@ pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git
 # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
 python3 -m accelerate.commands.launch --num_processes=8 -m lmms_eval --model idefics2 --model_args pretrained="HuggingFaceM4/idefics2-8b" --tasks vcr_wiki_en_easy --batch_size 1 --log_samples --log_samples_suffix HuggingFaceM4_idefics2-8b_vcr_wiki_en_easy --output_path ./logs/
 ```
-
 `lmms-eval` supports the following VCR `--tasks` settings:
 
 * English