# Image2Struct - Music Sheet

[Paper](TODO) | [Website](https://crfm.stanford.edu/helm/image2structure/latest/) | Datasets ([Webpages](https://huggingface.co/datasets/stanford-crfm/i2s-webpage), [Latex](https://huggingface.co/datasets/stanford-crfm/i2s-latex), [Music sheets](https://huggingface.co/datasets/stanford-crfm/i2s-musicsheet)) | [Leaderboard](https://crfm.stanford.edu/helm/image2structure/latest/#/leaderboard) | [HELM repo](https://github.com/stanford-crfm/helm) | [Image2Struct repo](https://github.com/stanford-crfm/image2structure)

**License:** [Apache License](http://www.apache.org/licenses/) Version 2.0, January 2004

## Dataset description

Image2Struct is a benchmark for evaluating vision-language models on the practical task of extracting structured information from images.

This subdataset focuses on music sheets. The model is given an image of the expected output along with the prompt:

> This music sheet was created by me, and I would like to recreate it using Lilypond.

The data was collected from IMSLP and has no ground truth: while we prompt models to output [Lilypond](https://lilypond.org/) code that recreates the image of the music sheet, we do not have access to any Lilypond code that reproduces the image and could serve as a "ground truth".

There is no **wild** subset, as this dataset already consists of instances without ground truths.
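
Without reference code, a model's Lilypond output can only be judged by compiling it and comparing the rendered page against the input image. The sketch below illustrates that idea; it assumes a local `lilypond` binary on the PATH, and both helper names and the crude pixel score are illustrative assumptions, not the metrics reported on the HELM leaderboard.

```python
import subprocess
from pathlib import Path

import numpy as np
from PIL import Image


def render_lilypond(code: str, workdir: str = "/tmp/i2s-music") -> Image.Image:
    """Compile model-generated Lilypond code to a PNG (hypothetical helper)."""
    src = Path(workdir) / "candidate.ly"
    src.parent.mkdir(parents=True, exist_ok=True)
    src.write_text(code)
    # `lilypond --png -o BASENAME` writes BASENAME.png on success.
    subprocess.run(
        ["lilypond", "--png", "-o", str(src.with_suffix("")), str(src)],
        check=True,
    )
    return Image.open(src.with_suffix(".png"))


def pixel_similarity(a: Image.Image, b: Image.Image, size=(512, 512)) -> float:
    """Crude grayscale agreement in [0, 1]; a placeholder for a real image metric."""
    x = np.asarray(a.convert("L").resize(size), dtype=np.float32) / 255.0
    y = np.asarray(b.convert("L").resize(size), dtype=np.float32) / 255.0
    return float(1.0 - np.abs(x - y).mean())
```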

## Uses

To load the `music` subset of the dataset (the instances sent to the model under evaluation) in Python:

```python
import datasets

# Load the validation split of the "music" subset
dataset = datasets.load_dataset("stanford-crfm/i2s-musicsheet", "music", split="validation")
```
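
Column names are easiest to confirm from the loaded split itself rather than assumed from this card; continuing from the snippet above:

```python
# Inspect the schema and peek at one instance; `dataset` comes from the
# loading snippet above.
print(dataset.features)
print({k: type(v).__name__ for k, v in dataset[0].items()})
```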

To evaluate a model on Image2Musicsheet using [HELM](https://github.com/stanford-crfm/helm/), run the following commands:

```sh
pip install crfm-helm
helm-run --run-entries image2musicsheet,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```

You can also restrict the evaluation to a specific `difficulty`:
```sh
helm-run --run-entries image2musicsheet:difficulty=hard,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```
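
Once a run finishes, the usual HELM workflow aggregates the raw outputs and serves a local leaderboard; the commands below follow the standard `crfm-helm` CLI, but see the HELM documentation for the authoritative workflow:

```sh
# Aggregate raw run outputs into leaderboard-style summaries
helm-summarize --suite my-suite-i2s
# Browse the results in a local web UI
helm-server
```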

For more information on running Image2Struct using [HELM](https://github.com/stanford-crfm/helm/), refer to the [HELM documentation](https://crfm-helm.readthedocs.io/) and the article on [reproducing leaderboards](https://crfm-helm.readthedocs.io/en/latest/reproducing_leaderboards/).

## Citation

**BibTeX:**

```tex
@misc{roberts2024image2struct,
  title={Image2Struct: A Benchmark for Evaluating Vision-Language Models in Extracting Structured Information from Images},
  author={Josselin Somerville Roberts and Tony Lee and Chi Heem Wong and Michihiro Yasunaga and Yifan Mai and Percy Liang},
  year={2024},
  eprint={TBD},
  archivePrefix={arXiv},
  primaryClass={TBD}
}
```