---
dataset_info:
- config_name: data_mining
  features:
  - name: wikipedia_passage_concept_A
    dtype: string
  - name: concept_A
    dtype: string
  - name: wikipedia_passage_concept_B
    dtype: string
  - name: concept_B
    dtype: string
  - name: target
    dtype: int64
  splits:
  - name: train
    num_bytes: 2356292
    num_examples: 218
  - name: test
    num_bytes: 906558
    num_examples: 99
  download_size: 564203
  dataset_size: 3262850
- config_name: geometry
  features:
  - name: wikipedia_passage_concept_A
    dtype: string
  - name: concept_A
    dtype: string
  - name: wikipedia_passage_concept_B
    dtype: string
  - name: concept_B
    dtype: string
  - name: target
    dtype: int64
  splits:
  - name: train
    num_bytes: 6705697
    num_examples: 664
  - name: test
    num_bytes: 2178281
    num_examples: 200
  download_size: 601925
  dataset_size: 8883978
- config_name: physics
  features:
  - name: wikipedia_passage_concept_A
    dtype: string
  - name: concept_A
    dtype: string
  - name: wikipedia_passage_concept_B
    dtype: string
  - name: concept_B
    dtype: string
  - name: target
    dtype: int64
  splits:
  - name: train
    num_bytes: 14566247
    num_examples: 630
  - name: test
    num_bytes: 4882943
    num_examples: 200
  download_size: 1965578
  dataset_size: 19449190
- config_name: precalculus
  features:
  - name: wikipedia_passage_concept_A
    dtype: string
  - name: concept_A
    dtype: string
  - name: wikipedia_passage_concept_B
    dtype: string
  - name: concept_B
    dtype: string
  - name: target
    dtype: int64
  splits:
  - name: train
    num_bytes: 12491149
    num_examples: 816
  - name: test
    num_bytes: 3261896
    num_examples: 200
  download_size: 1513563
  dataset_size: 15753045
configs:
- config_name: data_mining
  data_files:
  - split: train
    path: data_mining/train-*
  - split: test
    path: data_mining/test-*
- config_name: geometry
  data_files:
  - split: train
    path: geometry/train-*
  - split: test
    path: geometry/test-*
- config_name: physics
  data_files:
  - split: train
    path: physics/train-*
  - split: test
    path: physics/test-*
- config_name: precalculus
  data_files:
  - split: train
    path: precalculus/train-*
  - split: test
    path: precalculus/test-*
---
# Prerequisite RElation LEARNing (PRELEARN)
Original Paper: https://ceur-ws.org/Vol-2765/paper164.pdf
This dataset contains a collection of binary-labelled concept pairs (A,B) extracted from textbooks on four domains: **data mining**, **geometry**, **physics** and **precalculus**.
Domain experts then manually annotated whether each pair of concepts shows a prerequisite relation, so the dataset contains both positive and negative concept pairs.
We obtained the data from the original repository and made a single modification: we undersampled the training data to obtain a balanced set. A balanced label distribution is essential for sampling few-shot examples when evaluating generative models with in-context learning. The undersampling was carried out randomly and separately for each domain.
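For convenience, here is a minimal sketch of how the per-domain configurations could be loaded with the 🤗 `datasets` library; note that `your-org/prelearn` is a placeholder repository id and must be replaced with the actual id of this dataset on the Hub.
```python
from datasets import load_dataset

DOMAINS = ["data_mining", "geometry", "physics", "precalculus"]

# NOTE: "your-org/prelearn" is a placeholder repository id, not the real one.
for domain in DOMAINS:
    ds = load_dataset("your-org/prelearn", domain)
    print(domain, ds["train"].num_rows, ds["test"].num_rows)
```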
## Example
Here is the structure of a single sample in the dataset (a short access example follows the schema).
```json
{
  "concept_A": string,                    # name of concept A
  "wikipedia_passage_concept_A": string,  # Wikipedia passage corresponding to concept A
  "concept_B": string,                    # name of concept B
  "wikipedia_passage_concept_B": string,  # Wikipedia passage corresponding to concept B
  "target": int                           # 0: B is not a prerequisite of A, 1: B is a prerequisite of A
}
```
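Assuming the dataset has been loaded as in the sketch above (placeholder repository id), a single sample can be accessed as a plain dictionary:
```python
sample = ds["train"][0]  # ds loaded with the placeholder repository id above
print(sample["concept_A"], "->", sample["concept_B"], "| target:", sample["target"])
```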
## Statistics
| PRELEARN Data Mining | Label 0 | Label 1 |
| :--------: | :----: | :----: |
| Training | 109 | 109 |
| Test | 50 | 49 |

| PRELEARN Physics | Label 0 | Label 1 |
| :--------: | :----: | :----: |
| Training | 315 | 315 |
| Test | 100 | 100 |

| PRELEARN Geometry | Label 0 | Label 1 |
| :--------: | :----: | :----: |
| Training | 332 | 332 |
| Test | 100 | 100 |

| PRELEARN Precalculus | Label 0 | Label 1 |
| :--------: | :----: | :----: |
| Training | 408 | 408 |
| Test | 100 | 100 |
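As a sanity check, the label balance reported above can be recomputed directly from the loaded splits; a minimal sketch, again using the placeholder repository id from the loading example:
```python
from collections import Counter

from datasets import load_dataset

for domain in ["data_mining", "geometry", "physics", "precalculus"]:
    ds = load_dataset("your-org/prelearn", domain)  # placeholder repository id
    for split in ("train", "test"):
        counts = Counter(ds[split]["target"])
        print(f"{domain:12s} {split:5s} label 0: {counts[0]:4d}  label 1: {counts[1]:4d}")
```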
## Proposed Prompts
Below we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer we choose the prompt with the lower perplexity.
Moreover, for each subtask we define a task description that is prepended to the prompts, so that the model understands the task.
Description of the task: "Dati due concetti A e B, indica se il primo concetto è un prerequisito per il secondo.\nIl concetto A è prerequisito per il concetto B, se per comprendere B devi prima aver compreso A.\nI seguenti concetti appartengono al dominio: {{domain}}.\n\n"
(English translation: "Given two concepts A and B, indicate whether the first concept is a prerequisite for the second.\nConcept A is a prerequisite for concept B if, in order to understand B, you must first have understood A.\nThe following concepts belong to the domain: {{domain}}.\n\n")
### Cloze Style:
Label 0 (**B is not a prerequisite of A**): "{{concept_B}} non è un prerequisito per {{concept_A}}"
Label 1 (**B is a prerequisite of A**): "{{concept_B}} è un prerequisito per {{concept_A}}"
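Here is a minimal sketch of how the two cloze verbalisations could be scored with a causal LM, keeping the one with the lower perplexity. The model name is illustrative (one of the models in the results table below), and this is not the exact evaluation code used for those results.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; the results below cover several different LLMs.
MODEL_ID = "meta-llama/Meta-Llama-3-8B"
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return torch.exp(loss).item()

def predict(description: str, concept_a: str, concept_b: str) -> int:
    # Build the two cloze verbalisations defined above, prepending the task description.
    positive = f"{description}{concept_b} è un prerequisito per {concept_a}"
    negative = f"{description}{concept_b} non è un prerequisito per {concept_a}"
    # The answer is the verbalisation with the lower perplexity.
    return 1 if perplexity(positive) < perplexity(negative) else 0
```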
### MCQA Style:
```
Domanda: il concetto \"{{concept_B}}\" è un prerequisito per la comprensione del concetto \"{{concept_A}}\"? Rispondi sì o no:
```
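The question above asks whether concept B is a prerequisite for understanding concept A, to be answered with "sì" (yes) or "no". Below is a sketch of how a few-shot MCQA prompt could be assembled from the balanced training split; the helper names and the random sampling strategy are illustrative, not the exact setup used for the results below.
```python
import random

def mcqa_question(ex: dict) -> str:
    # Question template shown above, filled with the concepts of one example.
    return (f'Domanda: il concetto "{ex["concept_B"]}" è un prerequisito per la '
            f'comprensione del concetto "{ex["concept_A"]}"? Rispondi sì o no:')

def build_few_shot_prompt(description: str, shots: list, query: dict) -> str:
    """Task description, then k solved demonstrations, then the query question."""
    parts = [description.rstrip()]
    for ex in shots:
        answer = "sì" if ex["target"] == 1 else "no"
        parts.append(f"{mcqa_question(ex)} {answer}")
    parts.append(mcqa_question(query))
    return "\n\n".join(parts)

# Example usage with a 15-shot setting, sampling demonstrations at random
# from the (balanced) training split loaded earlier:
# shots = random.sample(list(ds["train"]), 15)
# prompt = build_few_shot_prompt(description, shots, ds["test"][0])
```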
## Results
The following results were obtained with Cloze-style prompting on several English and Italian-adapted LLMs.
| PRELEARN (AVG) | ACCURACY (15-shot) |
| :-----: | :--: |
| Gemma-2B | 60.12 |
| QWEN2-1.5B | 57.00 |
| Mistral-7B | 64.50 |
| ZEFIRO | 64.76 |
| Llama-3-8B | 60.63 |
| Llama-3-8B-IT | 63.76 |
| ANITA | 63.77 |
## Acknowledgements
We would like to thank the authors of this resource for publicly releasing such an intriguing benchmark.
Additionally, we extend our gratitude to the students of the [MNLP-2024 course](https://naviglinlp.blogspot.com/), whose first homework explored various interesting prompting strategies.
The original dataset is freely available for download at this [link](https://live.european-language-grid.eu/catalogue/corpus/8084).
## License
The data are released under the [Creative Commons Attribution Non Commercial Share Alike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.