---
license: apache-2.0
language:
- en
pretty_name: STEM
size_categories:
- 1M<n<10M
tags:
- stem
- benchmark
---
# STEM Dataset
<p align="center">
  πŸ“ƒ <a href="https://arxiv.org/abs/2402.17205" target="_blank">[Paper]</a> β€’ πŸ’» <a href="https://github.com/stemdataset/STEM" target="_blank">[Github]</a> β€’ πŸ€— <a href="https://huggingface.co/datasets/stemdataset/STEM" target="_blank">[Dataset]</a> β€’ πŸ† <a href="https://huggingface.co/spaces/stemdataset/stem-leaderboard" target="_blank">[Leaderboard]</a> β€’ πŸ“½ <a href="https://github.com/stemdataset/STEM/blob/main/assets/STEM-Slides.pdf" target="_blank">[Slides]</a> β€’ πŸ“‹ <a href="https://github.com/stemdataset/STEM/blob/main/assets/poster.pdf" target="_blank">[Poster]</a>
</p>

This dataset is proposed in the ICLR 2024 paper: [Measuring Vision-Language STEM Skills of Neural Models](https://arxiv.org/abs/2402.17205). We introduce a new challenge to test the STEM skills of neural models. Real-world problems often require solutions that combine knowledge from STEM (science, technology, engineering, and math). Unlike existing datasets, ours requires understanding multimodal vision-language information. It is one of the largest and most comprehensive datasets for this challenge, covering 448 skills and 1,073,146 questions spanning all STEM subjects. Whereas existing datasets often focus on examining expert-level ability, our dataset includes fundamental skills and questions designed based on the K-12 curriculum. We also add state-of-the-art foundation models such as CLIP and GPT-3.5-Turbo to our benchmark. Results show that recent model advances only help master a very limited number of lower grade-level skills (2.5% in the third grade). In fact, these models are still well below (averaging 54.7%) the performance of elementary students, not to mention near expert-level performance. To understand and improve performance on our dataset, we also train the models on its training split. Although this improves their performance, the models remain well below the level of average elementary students. Solving STEM problems will require novel algorithmic innovations from the community.

## Authors
Jianhao Shen*, Ye Yuan*, Srbuhi Mirzoyan, Ming Zhang, Chenguang Wang

## Resources
- **Code:** https://github.com/stemdataset/STEM
- **Paper:** https://arxiv.org/abs/2402.17205
- **Dataset:** https://huggingface.co/datasets/stemdataset/STEM
- **Leaderboard:** https://huggingface.co/spaces/stemdataset/stem-leaderboard

## Dataset

The dataset consists of multimodal multiple-choice questions and is split into train, valid, and test sets. The ground-truth answers of the test set are not released; anyone can submit test predictions to the [leaderboard](https://huggingface.co/spaces/stemdataset/stem-leaderboard). The basic statistics of the dataset are as follows:

| Subject     | #Skills | #Questions | Avg. #A    | #Train   | #Valid   | #Test    |
|-------------|---------|------------|------------|----------|----------|----------|
| Science     | 82      | 186,740    | 2.8        | 112,120  | 37,343   | 37,277   |
| Technology  | 9       | 8,566      | 4.0        | 5,140    | 1,713    | 1,713    |
| Engineering | 6       | 18,981     | 2.5        | 12,055   | 3,440    | 3,486    |
| Math        | 351     | 858,859    | 2.8        | 515,482  | 171,776  | 171,601  |
| Total       | 448     | 1,073,146  | 2.8        | 644,797  | 214,272  | 214,077  |
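As a quick sanity check on the table above, the train/valid/test counts for each subject sum to its question total:

```python
# Per-subject counts from the statistics table: (total, train, valid, test).
splits = {
    "Science":     (186_740, 112_120, 37_343,  37_277),
    "Technology":  (8_566,   5_140,   1_713,   1_713),
    "Engineering": (18_981,  12_055,  3_440,   3_486),
    "Math":        (858_859, 515_482, 171_776, 171_601),
}
for subject, (total, train, valid, test) in splits.items():
    assert train + valid + test == total, subject
print(sum(total for total, *_ in splits.values()))  # 1073146
```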

The dataset is in the following format:
```python
DatasetDict({
    train: Dataset({
        features: ['subject', 'grade', 'skill', 'pic_choice', 'pic_prob', 'problem', 'problem_pic', 'choices', 'choices_pic', 'answer_idx'],
        num_rows: 644797
    })
    valid: Dataset({
        features: ['subject', 'grade', 'skill', 'pic_choice', 'pic_prob', 'problem', 'problem_pic', 'choices', 'choices_pic', 'answer_idx'],
        num_rows: 214272
    })
    test: Dataset({
        features: ['subject', 'grade', 'skill', 'pic_choice', 'pic_prob', 'problem', 'problem_pic', 'choices', 'choices_pic', 'answer_idx'],
        num_rows: 214077
    })
})
```
The fields are described as follows:
- `subject`: `str`
  - The subject of the question, one of `science`, `technology`, `engineer`, `math`.
- `grade`: `str`
  - The grade level information of the question, e.g., `grade-1`.
- `skill`: `str`
  - The skill level information of the question.
- `pic_choice`: `bool`
  - Whether the choices are images.
- `pic_prob`: `bool`
  - Whether the question has an image.
- `problem`: `str`
  - The question description.
- `problem_pic`: `bytes`
  - The image of the question.
- `choices`: `Optional[List[str]]`
  - The choices of the question. If `pic_choice` is `True`, the choices are images saved in `choices_pic`, and `choices` will be set to `None`.
- `choices_pic`: `Optional[List[bytes]]`
  - The choice images. If `pic_choice` is `False`, the choices are strings saved in `choices`, and `choices_pic` will be set to `None`.
- `answer_idx`: `int`
  - The index of the correct answer in the `choices` or `choices_pic`. If the split is `test`, the `answer_idx` is `-1`.
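To make these field semantics concrete, here is a small hypothetical helper (`get_answer` is not part of the dataset or its tooling) that resolves a row's correct option, honoring `pic_choice` and the withheld test answers:

```python
from typing import Optional, Union

def get_answer(row: dict) -> Optional[Union[str, bytes]]:
    """Return the correct option for a row, or None if withheld (test split)."""
    if row["answer_idx"] == -1:
        return None
    # Image choices live in `choices_pic`; text choices live in `choices`.
    options = row["choices_pic"] if row["pic_choice"] else row["choices"]
    return options[row["answer_idx"]]

# Text-choice row (pic_choice=False): the answer comes from `choices`.
row = {"pic_choice": False, "choices": ["2", "3", "4"],
       "choices_pic": None, "answer_idx": 1}
print(get_answer(row))  # 3
```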

The image bytes can be decoded with the following code:
```python
import io

from PIL import Image

def bytes_to_image(img_bytes: bytes) -> Image.Image:
    return Image.open(io.BytesIO(img_bytes))
```
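For example, this round-trips a tiny in-memory PNG the same way the dataset's image bytes can be decoded (a minimal sketch, assuming Pillow is installed):

```python
import io

from PIL import Image

# Encode a 2x2 white image to PNG bytes, then decode those bytes back,
# mimicking how `problem_pic` / `choices_pic` bytes are read.
buf = io.BytesIO()
Image.new("RGB", (2, 2), "white").save(buf, format="PNG")
img = Image.open(io.BytesIO(buf.getvalue()))
print(img.size)  # (2, 2)
```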

## Example Questions
### Questions containing images
***Question***: *What is the domain of this function?*

***Image***:
![problem_pic](assets/example_problem_pic.png)

***Choices***: *["{x | x <= -6}", "all real numbers", "{x | x > 3}", "{x | x >= 0}"]*

***Answer***: *1*

***Metadata***:
```json
{
  "subject": "math",
  "grade": "algebra-1",
  "skill": "domain-and-range-of-absolute-value-functions-graphs",
  "pic_choice": false,
  "pic_prob": true,
  "problem": "What is the domain of this function?",
  "problem_pic": "b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x02\\xd8'...",
  "choices": [
    "$\\{x \\mid x \\leq -6\\}$",
    "all real numbers",
    "$\\{x \\mid x > 3\\}$",
    "$\\{x \\mid x \\geq 0\\}$"
  ],
  "choices_pic": null,
  "answer_idx": 1
}
```

### Choices containing images
***Question***: *The three scatter plots below show the same data set. Choose the scatter plot in which the outlier is highlighted.*

***Choices***:
<div style="display: flex; justify-content: space-between;">
  <img src="assets/example_choice_pic_0.png" style="width: 30%" />
  <img src="assets/example_choice_pic_1.png" style="width: 30%" /> 
  <img src="assets/example_choice_pic_2.png" style="width: 30%" />
</div>

***Answer***: *1*

***Metadata***:
```json
{
  "subject": "math",
  "grade": "precalculus",
  "skill": "outliers-in-scatter-plots",
  "pic_choice": true,
  "pic_prob": false,
  "problem": "The three scatter plots below show the same data set. Choose the scatter plot in which the outlier is highlighted.",
  "problem_pic": null,
  "choices": null,
  "choices_pic": [
    "b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x01N'...",
    "b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x01N'...",
    "b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x01N'..."
  ],
  "answer_idx": 1
}
```

## How to Use
Please refer to our [code](https://github.com/stemdataset/STEM) for how to run evaluation on the dataset.

## Citation
```bibtex
@inproceedings{shen2024measuring,
  title={Measuring Vision-Language STEM Skills of Neural Models},
  author={Shen, Jianhao and Yuan, Ye and Mirzoyan, Srbuhi and Zhang, Ming and Wang, Chenguang},
  booktitle={ICLR},
  year={2024}
}
```

## Dataset Card Contact
[email protected]