s-conia committed
Commit 486a12e • 1 Parent(s): 6fe55e3

Update README.md

Files changed (1)
  1. README.md +88 -15
README.md CHANGED
@@ -28,11 +28,30 @@ pretty_name: Test di Medicina
28
 
29
  ## Is your LLM able to pass a National Entrance Exam for the Italian Medical School?
30
 
31
- This a Huggingface dataset designed for evaluating Language Model (LLM) on a broad range of questions from the **national entrance exams for the Italian medical school**.
32
  The dataset includes multiple-choice questions from various subjects such as biology, chemistry, physics, mathematics, world knowledge, and more.
33
- Each question is accompanied by five answer choices, with one correct answer.
34
 
35
  ## Features
 
36
 
37
  - **Multiple topics**: Questions cover a wide range of subjects, including biology, chemistry, physics, mathematics, world knowledge (with a focus on Italian culture), and more.
38
 
@@ -44,9 +63,6 @@ Each question is accompanied by five answer choices, with one correct answer.
44
 
45
  - **Large-scale**: The dataset contains over 3K high-quality questions, making it suitable for the evaluation of LLMs.
46
 
47
- - **Italian (and English coming soon)**: The dataset is currently available in Italian, with an English version coming soon.
48
-
49
-
50
  ## Evaluation
51
 
52
  The dataset is designed to evaluate LLMs on a wide range of questions from medical school entrance exams. The evaluation metrics are based on the model's ability to select the correct answer when presented with the question and answer choices (multiple-choice format) or generate the correct answer when presented with the question only (cloze-style format).
@@ -75,37 +91,81 @@ This is the same scoring system used for the official Italian medical school ent
75
 
76
  ### Evaluation Script
77
 
78
  > [!NOTE]
79
- > We will release the evaluation script soon. Stay tuned!
80
 
81
- The evaluation is based on the `lm-evaluation-harness` library, which provides a simple and flexible way to evaluate LLMs on a wide range of tasks and datasets. The tasks are defined in `tasks/medschool-entrance-exams`.
82
 
83
  To run the evaluation, you can use the following command:
84
 
85
  ```bash
 
86
  MODEL_ARGS="pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,dtype=bfloat16"
87
 
88
- lm_eval \
89
  --model hf \
90
  --model_args $MODEL_ARGS \
91
- --tasks medschool_entrance_exams_it_mc,medschool_entrance_exams_it_cloze \
92
  --batch_size auto \
93
  --log_samples \
94
- --output_path outputs/ \
95
- --include tasks/medschool-entrance-exams/
96
 
97
  ```
98
 
99
  This command evaluates the model `meta-llama/Meta-Llama-3.1-8B-Instruct` on the Italian version of the dataset in both multiple-choice and cloze-style formats. The evaluation results are saved in the `outputs/` directory.
100
 
101
- Please, refer to the [lm-evaluation-harness](https://github.com/eleutherai/lm-evaluation-harness) repository for more details on how to use the library.
102
 
103
 
104
  ## Data
105
 
106
  ### Source
107
 
108
- The dataset is collected from the official Italian website of the Ministry of Education, University and Research ([MIUR](https://www.miur.gov.it/)), which hosts a large collection of past entrance exams for medical school in Italy. The dataset includes questions from various subjects, such as biology, chemistry, physics, mathematics, world knowledge, and more. You can find the original dataset [here](https://domande-ap.mur.gov.it/domande).
109
 
110
  ### Composition
111
 
@@ -260,10 +320,23 @@ where:
260
  ```
261
 
262
 
263
 
264
- ## Contributing
265
 
266
- Contributions to this dataset are welcome! If you have additional tasks or domains that you would like to include, please submit a pull request.
267
 
268
 
269
  ## License
 
28
 
29
  ## Is your LLM able to pass a National Entrance Exam for the Italian Medical School?
30
 
31
+ This is the GitHub repo for our Hugging Face dataset, designed for evaluating Large Language Models (LLMs) on a broad range of questions from the **national entrance exams for the Italian medical school** ([ORIGINAL WEBSITE](https://domande-ap.mur.gov.it/domande)).
32
  The dataset includes multiple-choice questions from various subjects such as biology, chemistry, physics, mathematics, world knowledge, and more.
33
+ Each question is accompanied by five answer choices, with one correct answer. The following is an example:
34
+
35
+ ```json
36
+ {
37
+ "id": 1691,
38
+ "topic": "biologia",
39
+ "text": "Come sono definite le cellule staminali che sono in grado di differenziarsi in tutti i tipi di cellule presenti nel corpo umano, ma non possono dare origine ad un organismo completo?",
40
+ "answers": [
41
+ "Cellule Staminali Multipotenti",
42
+ "Cellule Staminali Pluripotenti",
43
+ "Cellule Staminali Totipotenti",
44
+ "Cellule Staminali Unipotenti",
45
+ "Cellule Staminali Oligopotenti"
46
+ ],
47
+ "label": 1
48
+ }
49
+ ```
50
+
51
+ **The dataset is available on Hugging Face 🤗!** 👉👉 [LINK](https://huggingface.co/datasets/room-b007/test-medicina) 👈👈.
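+
+ If you just want to inspect the data, a minimal sketch with the 🤗 `datasets` library could look like the following (the split name `test` is an assumption; check the dataset card for the actual splits):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset from the Hugging Face Hub
+ # (split name "test" is an assumption; adjust it to the splits listed on the dataset card)
+ dataset = load_dataset("room-b007/test-medicina", split="test")
+
+ # Each example has an id, a topic, the question text, five answer choices,
+ # and the index of the correct answer (label)
+ example = dataset[0]
+ print(example["topic"], example["text"])
+ print(example["answers"][example["label"]])
+ ```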
52
 
53
  ## Features
54
+ - **Italian (and English coming soon)**: The dataset is currently available in Italian, with an English version coming soon.
55
 
56
  - **Multiple topics**: Questions cover a wide range of subjects, including biology, chemistry, physics, mathematics, world knowledge (with a focus on Italian culture), and more.
57
 
 
63
 
64
  - **Large-scale**: The dataset contains over 3K high-quality questions, making it suitable for the evaluation of LLMs.
65
 
 
 
 
66
  ## Evaluation
67
 
68
  The dataset is designed to evaluate LLMs on a wide range of questions from medical school entrance exams. The evaluation metrics are based on the model's ability to select the correct answer when presented with the question and answer choices (multiple-choice format) or generate the correct answer when presented with the question only (cloze-style format).
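+
+ To make the two formats concrete, here is a rough illustration (not the exact prompt templates used by the task configurations) of how a question from the dataset can be rendered:
+
+ ```python
+ # The example question shown at the top of this README
+ question = {
+     "text": "Come sono definite le cellule staminali che sono in grado di differenziarsi in tutti i tipi di cellule presenti nel corpo umano, ma non possono dare origine ad un organismo completo?",
+     "answers": [
+         "Cellule Staminali Multipotenti",
+         "Cellule Staminali Pluripotenti",
+         "Cellule Staminali Totipotenti",
+         "Cellule Staminali Unipotenti",
+         "Cellule Staminali Oligopotenti",
+     ],
+     "label": 1,
+ }
+
+ # Multiple-choice format: the model sees the question together with the lettered
+ # answer choices and has to pick the correct letter.
+ letters = "ABCDE"
+ mc_prompt = question["text"] + "\n" + "\n".join(
+     f"{letters[i]}. {answer}" for i, answer in enumerate(question["answers"])
+ )
+
+ # Cloze-style format: the model sees the question only, and the answer choices are
+ # scored as possible continuations (the most likely one is taken as the prediction).
+ cloze_prompt = question["text"]
+ correct_answer = question["answers"][question["label"]]
+ ```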
 
91
 
92
  ### Evaluation Script
93
 
94
+ The evaluation script is available on our GitHub repo! [Check it out](https://github.com/room-b007/test-medicina)!
95
+
96
+ The evaluation is based on the `lm-evaluation-harness` library, which provides a simple and flexible way to evaluate LLMs on a wide range of tasks and datasets. The tasks are defined in `tasks/medschool-test`.
97
+
98
+ #### Requirements and Installation
99
+ We recommend using Conda to create a new environment and install the required libraries. You can create a new Conda environment using the following command:
100
+
101
+ ```bash
102
+ conda create -n medschool-test python=3.10
103
+ conda activate medschool-test
104
+ ```
105
+
106
  > [!NOTE]
107
+ > Using Conda is optional but highly recommended to avoid conflicts with existing libraries.
108
+
109
+ To run the evaluation, you need to install the `lm-evaluation-harness` library and the `transformers` library. You can install them using the following command:
110
+
111
+ ```bash
112
+ pip install --upgrade -r requirements.txt
113
+ ```
114
 
115
+ #### Running the Evaluation Script
116
 
117
  To run the evaluation, you can use the following command:
118
 
119
  ```bash
120
+ # Model to evaluate: meta-llama/Meta-Llama-3.1-8B-Instruct (in bfloat16)
121
  MODEL_ARGS="pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,dtype=bfloat16"
122
 
123
+ # Evaluate the model on both multiple-choice and cloze-style formats
124
+ TASKS="medschool_test_it_mc,medschool_test_it_cloze"
125
+
126
+ # Create the output directory if it does not exist
127
+ OUTPUT_DIR="outputs/"
128
+ mkdir -p $OUTPUT_DIR
129
+
130
+ # Run the evaluation with the lm-evaluation-harness library
131
+ accelerate launch -m lm_eval \
132
  --model hf \
133
  --model_args $MODEL_ARGS \
134
+ --tasks $TASKS \
135
  --batch_size auto \
136
  --log_samples \
137
+ --output_path $OUTPUT_DIR \
138
+ --include tasks/medschool-test/
139
 
140
  ```
141
 
142
  This command evaluates the model `meta-llama/Meta-Llama-3.1-8B-Instruct` on the Italian version of the dataset in both multiple-choice and cloze-style formats. The evaluation results are saved in the `outputs/` directory.
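+
+ To inspect the results afterwards, a minimal sketch like the one below can be used (the exact file layout under `outputs/` depends on the `lm-evaluation-harness` version, so it simply scans for the JSON result files):
+
+ ```python
+ import glob
+ import json
+
+ # lm-evaluation-harness writes its result files as JSON under the output directory;
+ # the exact layout depends on the library version, so we just scan for them.
+ for path in glob.glob("outputs/**/*.json", recursive=True):
+     with open(path, encoding="utf-8") as f:
+         data = json.load(f)
+     if not isinstance(data, dict):
+         continue
+     # Result files contain a "results" dictionary keyed by task name.
+     for task, metrics in data.get("results", {}).items():
+         print(path, task, metrics)
+ ```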
143
 
144
+ Please refer to the examples in `examples/evaluation` or to the [lm-evaluation-harness](https://github.com/eleutherai/lm-evaluation-harness) repository for more details on how to use the library.
145
+
146
+ > [!NOTE]
147
+ > The evaluation script provides five subject scores, one per subject, each ranging from -0.4 to 1.5. Each subject score is the average score over all the questions of that subject. To get the final score, compute the weighted average of the subject scores, where the weight of each subject is described in the "Scoring" section above.
148
+ >
149
+ > For example, if the model obtains the following subject-scores:
150
+ > - Biology: 1.0571
151
+ > - Chemistry: 0.7598
152
+ > - Knowledge: 1.0518
153
+ > - Reasoning: 0.2005
154
+ > - Math & Physics: 0.4302
155
+ >
156
+ > The final score is calculated as follows:
157
+ > ```
158
+ > average_score_per_question = (1.0571 * 0.3833) + (0.7598 * 0.25) + (1.0518 * 0.0667) + (0.2005 * 0.0833) + (0.4302 * 0.2167) = 0.7752
159
+ > overall_score = average_score_per_question * 60 = 46.51
160
+ > ```
161
+ > The <u>maximum possible score is 90 = 60 * 1.5</u>, when all the answers are correct, while the <u>minimum score is -24 = 60 * -0.4</u>, when all the answers are incorrect.
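+
+ A small sketch of this weighted aggregation follows (the subject weights are taken from the worked example above; double-check them against the "Scoring" section):
+
+ ```python
+ # Subject weights as used in the worked example above
+ WEIGHTS = {
+     "biology": 0.3833,
+     "chemistry": 0.25,
+     "knowledge": 0.0667,
+     "reasoning": 0.0833,
+     "math_physics": 0.2167,
+ }
+
+ def final_score(subject_scores: dict[str, float], num_questions: int = 60) -> float:
+     """Weighted average of the per-subject scores, scaled to the 60-question exam."""
+     average_score_per_question = sum(
+         WEIGHTS[subject] * score for subject, score in subject_scores.items()
+     )
+     return average_score_per_question * num_questions
+
+ # Reproduces the example above: ~46.51
+ print(final_score({
+     "biology": 1.0571,
+     "chemistry": 0.7598,
+     "knowledge": 1.0518,
+     "reasoning": 0.2005,
+     "math_physics": 0.4302,
+ }))
+ ```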
162
 
163
 
164
  ## Data
165
 
166
  ### Source
167
 
168
+ The dataset is collected from the official Italian website of the Ministry of Education, University and Research ([MIUR](https://www.miur.gov.it/)), which hosts a large collection of past entrance exams for medical school in Italy. The dataset includes questions from various subjects, such as biology, chemistry, physics, mathematics, world knowledge, and more. You can find the original dataset [HERE](https://domande-ap.mur.gov.it/domande).
169
 
170
  ### Composition
171
 
 
320
  ```
321
 
322
 
323
+ ### Reproducibility
324
+ We provide the code to reproduce our dataset in the `src/data/collection` directory. The code is written in Python and uses the `beautifulsoup4` library to scrape the questions from the official MIUR website. You can run the code to collect the latest questions from the website and generate the dataset in JSONL format.
325
+
326
+ You can run the code using the following command:
327
 
328
+ ```bash
329
+ python src/data/collection/collect_questions.py \
330
+ --output_path data/questions/medical_school_questions.jsonl \
331
+ --last_page_index 174
332
+ ```
333
+
334
+ This command collects the questions from the official MIUR website and saves them in the `data/questions/medical_school_questions.jsonl` file. You can specify the number of pages to scrape using the `--last_page_index` argument (where each page contains 20 questions and the last page is 174).
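+
+ To sanity-check the scraped file, a short sketch like the following can be used (it assumes each JSONL line has the same fields as the example question shown at the top of this README):
+
+ ```python
+ import json
+ from collections import Counter
+
+ # Count how many questions were collected per topic,
+ # assuming each line is a JSON object with id, topic, text, answers, and label.
+ topics = Counter()
+ with open("data/questions/medical_school_questions.jsonl", encoding="utf-8") as f:
+     for line in f:
+         question = json.loads(line)
+         topics[question["topic"]] += 1
+
+ print(sum(topics.values()), "questions")
+ print(topics.most_common())
+ ```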
335
 
336
+
337
+ ## Citation
338
+ > [!NOTE]
339
+ > We are currently writing a report on the dataset and will provide the citation information soon.
340
 
341
 
342
  ## License