Commit a87d175 (parent: 93a0e62) by shizue: update README

Files changed (1): README.md (+61 −19)
README.md CHANGED
@@ -13,19 +13,20 @@ tags:
  <p align="center">
  📃 <a href="https://arxiv.org/abs/2402.17205" target="_blank">[Paper]</a> • 💻 <a href="https://github.com/stemdataset/STEM" target="_blank">[Github]</a> • 🤗 <a href="https://huggingface.co/datasets/stemdataset/STEM" target="_blank">[Dataset]</a> • 🏆 <a href="https://huggingface.co/spaces/stemdataset/stem-leaderboard" target="_blank">[Leaderboard]</a> • 📽 <a href="https://github.com/stemdataset/STEM/blob/main/assets/STEM-Slides.pdf" target="_blank">[Slides]</a> • 📋 <a href="https://github.com/stemdataset/STEM/blob/main/assets/poster.pdf" target="_blank">[Poster]</a>
  </p>

  This dataset is proposed in the ICLR 2024 paper: [Measuring Vision-Language STEM Skills of Neural Models](https://arxiv.org/abs/2402.17205). Real-world problems often require solutions that combine knowledge from STEM (science, technology, engineering, and math). Unlike existing datasets, ours requires understanding multimodal vision-language information about STEM. It is one of the largest and most comprehensive datasets for this challenge, covering 448 skills and 1,073,146 questions spanning all STEM subjects. Whereas existing datasets often examine expert-level ability, our dataset includes fundamental skills and questions designed based on the K-12 curriculum. We also add state-of-the-art foundation models such as CLIP and GPT-3.5-Turbo to our benchmark. Results show that recent model advances only help master a very limited number of lower grade-level skills (2.5% in the third grade) in our dataset. In fact, these models are still well below the performance of elementary students (averaging 54.7%), not to mention near expert-level performance. To understand and improve performance on our dataset, we train the models on its training split. Although we observe improved performance, it remains low relative to that of average elementary students. Solving STEM problems will require novel algorithmic innovations from the community.

  ## Authors
  Jianhao Shen*, Ye Yuan*, Srbuhi Mirzoyan, Ming Zhang, Chenguang Wang

- ## Dataset Sources
- - **Repository:** https://github.com/stemdataset/STEM
  - **Paper:** https://arxiv.org/abs/2402.17205
  - **Leaderboard:** https://huggingface.co/spaces/stemdataset/stem-leaderboard

- ## Dataset Structure

- The dataset is split into train, valid, and test sets. The choice labels of the test set are not released, and everyone can submit test-set predictions to the [leaderboard](https://huggingface.co/spaces/stemdataset/stem-leaderboard). The basic statistics of the dataset are as follows:

  | Subject | #Skills | #Questions | Avg. #A | #Train | #Valid | #Test |
  |-------------|---------|------------|------------|----------|----------|----------|
@@ -52,29 +53,29 @@ DatasetDict({
  })
  })
  ```
- And the detailed description of the features is as follows:
  - `subject`: `str`
  - The subject of the question, one of `science`, `technology`, `engineer`, `math`.
  - `grade`: `str`
- - The grade of the question.
  - `skill`: `str`
- - The skill of the question.
  - `pic_choice`: `bool`
  - Whether the choices are images.
  - `pic_prob`: `bool`
- - Whether the problem has an image.
  - `problem`: `str`
- - The problem description.
  - `problem_pic`: `bytes`
- - The problem image.
  - `choices`: `Optional[List[str]]`
  - The choices of the question. If `pic_choice` is `True`, the choices are images saved in `choices_pic`, and `choices` will be set to `None`.
  - `choices_pic`: `Optional[List[bytes]]`
  - The choice images. If `pic_choice` is `False`, the choices are strings saved in `choices`, and `choices_pic` will be set to `None`.
  - `answer_idx`: `int`
- - The index of the correct answer in `choices` or `choices_pic`. If the split is `test`, `answer_idx` will be set to `-1`.

- The bytes can be read by the following code:
  ```python
  from PIL import Image
  def bytes_to_image(img_bytes: bytes) -> Image:
@@ -82,18 +83,39 @@ def bytes_to_image(img_bytes: bytes) -> Image:
      return img
  ```

- ## Dataset Example
- ### Problem picture example
  ***Question***: *What is the domain of this function?*

- ***Picture***:
  ![problem_pic](assets/example_problem_pic.png)

  ***Choices***: *["{x | x <= -6}", "all real numbers", "{x | x > 3}", "{x | x >= 0}"]*

  ***Answer***: *1*

- ### Choices picture example
  ***Question***: *The three scatter plots below show the same data set. Choose the scatter plot in which the outlier is highlighted.*

  ***Choices***:
@@ -105,15 +127,35 @@ def bytes_to_image(img_bytes: bytes) -> Image:

  ***Answer***: *1*

- ## How to use
  Please refer to our [code](https://github.com/stemdataset/STEM) for how to run evaluation on the dataset.

  ## Citation
  ```bibtex
- @article{shen2024measuring,
  title={Measuring Vision-Language STEM Skills of Neural Models},
  author={Shen, Jianhao and Yuan, Ye and Mirzoyan, Srbuhi and Zhang, Ming and Wang, Chenguang},
- journal={ICLR 2024},
  year={2024}
  }
  ```
 
  <p align="center">
  📃 <a href="https://arxiv.org/abs/2402.17205" target="_blank">[Paper]</a> • 💻 <a href="https://github.com/stemdataset/STEM" target="_blank">[Github]</a> • 🤗 <a href="https://huggingface.co/datasets/stemdataset/STEM" target="_blank">[Dataset]</a> • 🏆 <a href="https://huggingface.co/spaces/stemdataset/stem-leaderboard" target="_blank">[Leaderboard]</a> • 📽 <a href="https://github.com/stemdataset/STEM/blob/main/assets/STEM-Slides.pdf" target="_blank">[Slides]</a> • 📋 <a href="https://github.com/stemdataset/STEM/blob/main/assets/poster.pdf" target="_blank">[Poster]</a>
  </p>
+
  This dataset is proposed in the ICLR 2024 paper: [Measuring Vision-Language STEM Skills of Neural Models](https://arxiv.org/abs/2402.17205). Real-world problems often require solutions that combine knowledge from STEM (science, technology, engineering, and math). Unlike existing datasets, ours requires understanding multimodal vision-language information about STEM. It is one of the largest and most comprehensive datasets for this challenge, covering 448 skills and 1,073,146 questions spanning all STEM subjects. Whereas existing datasets often examine expert-level ability, our dataset includes fundamental skills and questions designed based on the K-12 curriculum. We also add state-of-the-art foundation models such as CLIP and GPT-3.5-Turbo to our benchmark. Results show that recent model advances only help master a very limited number of lower grade-level skills (2.5% in the third grade) in our dataset. In fact, these models are still well below the performance of elementary students (averaging 54.7%), not to mention near expert-level performance. To understand and improve performance on our dataset, we train the models on its training split. Although we observe improved performance, it remains low relative to that of average elementary students. Solving STEM problems will require novel algorithmic innovations from the community.

  ## Authors
  Jianhao Shen*, Ye Yuan*, Srbuhi Mirzoyan, Ming Zhang, Chenguang Wang

+ ## Resources
+ - **Code:** https://github.com/stemdataset/STEM
  - **Paper:** https://arxiv.org/abs/2402.17205
  - **Leaderboard:** https://huggingface.co/spaces/stemdataset/stem-leaderboard

+ ## Dataset

+ The dataset consists of multimodal multiple-choice questions and is split into train, valid, and test sets. The ground-truth answers for the test set are not released; everyone can submit test-set predictions to the [leaderboard](https://huggingface.co/spaces/stemdataset/stem-leaderboard). The basic statistics of the dataset are as follows:

  | Subject | #Skills | #Questions | Avg. #A | #Train | #Valid | #Test |
  |-------------|---------|------------|------------|----------|----------|----------|

  })
  })
  ```
+ The detailed descriptions of the features are as follows:
  - `subject`: `str`
  - The subject of the question, one of `science`, `technology`, `engineer`, `math`.
  - `grade`: `str`
+ - The grade-level information of the question, e.g., `grade-1`.
  - `skill`: `str`
+ - The skill tested by the question.
  - `pic_choice`: `bool`
  - Whether the choices are images.
  - `pic_prob`: `bool`
+ - Whether the question has an image.
  - `problem`: `str`
+ - The question description.
  - `problem_pic`: `bytes`
+ - The image of the question.
  - `choices`: `Optional[List[str]]`
  - The choices of the question. If `pic_choice` is `True`, the choices are images saved in `choices_pic`, and `choices` will be set to `None`.
  - `choices_pic`: `Optional[List[bytes]]`
  - The choice images. If `pic_choice` is `False`, the choices are strings saved in `choices`, and `choices_pic` will be set to `None`.
  - `answer_idx`: `int`
+ - The index of the correct answer in `choices` or `choices_pic`. If the split is `test`, `answer_idx` is `-1`.
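To make this schema concrete, here is a minimal loading sketch; it assumes the 🤗 `datasets` library and the repo id `stemdataset/STEM` linked above:

```python
from datasets import load_dataset

# Splits: train / valid / test (test samples have answer_idx == -1).
ds = load_dataset("stemdataset/STEM")
sample = ds["train"][0]

print(sample["subject"], sample["grade"], sample["skill"])

# Exactly one of `choices` / `choices_pic` is populated, per `pic_choice`.
options = sample["choices_pic"] if sample["pic_choice"] else sample["choices"]
print(len(options), "options; correct index:", sample["answer_idx"])
```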

+ The image bytes can be read with the following helper:
  ```python
  from PIL import Image
  def bytes_to_image(img_bytes: bytes) -> Image:

      return img
  ```
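The hunk above elides the function body. A complete version would plausibly read as follows; the `io.BytesIO` wrapping is an assumption (it is the standard way to hand in-memory bytes to PIL), not necessarily the exact line from the repo:

```python
import io

from PIL import Image

def bytes_to_image(img_bytes: bytes) -> Image.Image:
    # PIL reads from file-like objects, so wrap the raw PNG bytes in a buffer.
    img = Image.open(io.BytesIO(img_bytes))
    return img
```

With this helper, `bytes_to_image(sample["problem_pic"])` yields a `PIL.Image.Image` that can be inspected or saved.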

+ ## Example Questions
+ ### Questions containing images
  ***Question***: *What is the domain of this function?*

+ ***Image***:
  ![problem_pic](assets/example_problem_pic.png)

  ***Choices***: *["{x | x <= -6}", "all real numbers", "{x | x > 3}", "{x | x >= 0}"]*

  ***Answer***: *1*

+ ***Metadata***:
+ ```json
+ {
+   "subject": "math",
+   "grade": "algebra-1",
+   "skill": "domain-and-range-of-absolute-value-functions-graphs",
+   "pic_choice": false,
+   "pic_prob": true,
+   "problem": "What is the domain of this function?",
+   "problem_pic": "b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x02\\xd8'...",
+   "choices": [
+     "$\\{x \\mid x \\leq -6\\}$",
+     "all real numbers",
+     "$\\{x \\mid x > 3\\}$",
+     "$\\{x \\mid x \\geq 0\\}$"
+   ],
+   "choices_pic": null,
+   "answer_idx": 1
+ }
+ ```
+
+ ### Choices containing images
  ***Question***: *The three scatter plots below show the same data set. Choose the scatter plot in which the outlier is highlighted.*

  ***Choices***:
 

  ***Answer***: *1*

+ ***Metadata***:
+ ```json
+ {
+   "subject": "math",
+   "grade": "precalculus",
+   "skill": "outliers-in-scatter-plots",
+   "pic_choice": true,
+   "pic_prob": false,
+   "problem": "The three scatter plots below show the same data set. Choose the scatter plot in which the outlier is highlighted.",
+   "problem_pic": null,
+   "choices": null,
+   "choices_pic": [
+     "b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x01N'...",
+     "b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x01N'...",
+     "b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x01N'..."
+   ],
+   "answer_idx": 1
+ }
+ ```
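Taken together, the two records show the two layouts a consumer must handle. A small, hedged sketch for displaying any sample (assuming PNG bytes as in the metadata above):

```python
import io

from PIL import Image

def show_question(sample: dict) -> None:
    # Print the question text and open any attached images.
    print(sample["problem"])
    if sample["pic_prob"]:
        Image.open(io.BytesIO(sample["problem_pic"])).show()
    if sample["pic_choice"]:
        for i, img_bytes in enumerate(sample["choices_pic"]):
            print(f"choice {i}: <image>")
            Image.open(io.BytesIO(img_bytes)).show()
    else:
        for i, choice in enumerate(sample["choices"]):
            print(f"choice {i}: {choice}")
```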
+
+ ## How to Use
  Please refer to our [code](https://github.com/stemdataset/STEM) for how to run evaluation on the dataset.
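For a rough sense of the evaluation loop (the authoritative scripts live in the repo; `predict` below is a placeholder model and the accuracy computation is only a sketch):

```python
from datasets import load_dataset

def predict(sample: dict) -> int:
    # Placeholder: a real model would score the text or image choices.
    return 0

valid = load_dataset("stemdataset/STEM", split="valid")
correct = sum(predict(s) == s["answer_idx"] for s in valid)
print(f"valid accuracy: {correct / len(valid):.4f}")
```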

  ## Citation
  ```bibtex
+ @inproceedings{shen2024measuring,
  title={Measuring Vision-Language STEM Skills of Neural Models},
  author={Shen, Jianhao and Yuan, Ye and Mirzoyan, Srbuhi and Zhang, Ming and Wang, Chenguang},
+ booktitle={ICLR},
  year={2024}
  }
  ```