Update README.md

README.md CHANGED
@@ -17,7 +17,7 @@ tags:
 - biology
 - math
 - reasoning
-pretty_name:
+pretty_name: Test di Medicina
 ---
 
 # Medschool-Test, or *"Test di Medicina"*
@@ -51,6 +51,10 @@ Each question is accompanied by five answer choices, with one correct answer.
 
 The dataset is designed to evaluate LLMs on a wide range of questions from medical school entrance exams. The evaluation metrics are based on the model's ability to select the correct answer when presented with the question and answer choices (multiple-choice format) or generate the correct answer when presented with the question only (cloze-style format).
 
+### Leaderboard
+> [!NOTE]
+> Coming soon!
+
 ### Scoring
 
 For each question, the model obtains a score ranging from -0.4 to 1.5, based on the following criteria:
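The paragraph in the diff describes two evaluation formats. Below is a minimal sketch of how prompts for each could be built, assuming the dataset loads via the `datasets` library and exposes `question` and `choices` fields; the repo id, split name, and field names are assumptions, not confirmed by this diff.

```python
from datasets import load_dataset

# Hypothetical repo id and split -- adjust to the actual dataset card.
ds = load_dataset("medschool-test", split="test")

def multiple_choice_prompt(example: dict) -> str:
    """Question plus the five answer choices (multiple-choice format)."""
    options = "\n".join(
        f"{letter}. {choice}"
        for letter, choice in zip("ABCDE", example["choices"])  # assumed field
    )
    return f"{example['question']}\n{options}\nAnswer:"

def cloze_prompt(example: dict) -> str:
    """Question only; the model must generate the answer (cloze-style format)."""
    return f"{example['question']}\nAnswer:"
```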
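The scoring line gives only the range; the criteria it refers to fall outside this excerpt. A sketch under the assumption that the dataset mirrors the standard Italian entrance-exam scheme (+1.5 for a correct answer, -0.4 for a wrong answer, 0 for no answer), which is consistent with the stated -0.4 to 1.5 range but not confirmed here:

```python
from typing import Optional

def score_answer(predicted: Optional[str], correct: str) -> float:
    """Assumed per-question scoring, not confirmed by this excerpt:
    +1.5 for a correct answer, -0.4 for a wrong one, 0.0 if unanswered."""
    if predicted is None:  # the model declined to answer
        return 0.0
    return 1.5 if predicted == correct else -0.4
```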