dataset_info:
  features:
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 619513
    num_examples: 384
  - name: test
    num_bytes: 2301030
    num_examples: 1416
  download_size: 1491635
  dataset_size: 2920543
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
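The metadata above describes the standard layout: a default config with a `train` and a `test` split. A minimal loading sketch with the `datasets` library follows; the repository ID is a placeholder, not the actual path of this dataset on the Hub.

```python
from datasets import load_dataset

# Placeholder repository ID: replace with this dataset's actual Hub path.
dataset = load_dataset("your-org/quandho-multichoice")

print(dataset)              # DatasetDict with 'train' (384 examples) and 'test' (1416 examples)
print(dataset["train"][0])  # {'question': ..., 'choices': [...], 'label': ...}
```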
QUANDHO: QUestion ANswering Data for italian HistOry
Original Paper: https://aclanthology.org/L16-1069.pdf
QUANDHO (QUestion ANswering Data for italian HistOry) is an Italian question answering dataset covering the history of Italy in the first half of the 20th century.
Starting from QUANDHO, we defined a multiple-choice QA dataset in which each question is paired with the correct answer and three distractors.
Data and Distractors Generation
We relied on the original data to create this dataset. Each question-answer pair judged as correct in QUANDHO becomes one sample. For each sample, we gather three distractors from the incorrect question-answer pairs that share the sample's question.
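A minimal sketch of this construction, assuming the original QUANDHO annotations give, for each question, one correct paragraph and a pool of paragraphs judged not to answer it; the function name, field names, and sampling strategy are illustrative, not necessarily the exact script used to build the released files.

```python
import random

def build_sample(question, correct_paragraph, incorrect_paragraphs,
                 n_distractors=3, seed=0):
    """Pair the correct paragraph with distractors drawn from paragraphs
    judged not to answer the same question, then shuffle the choices."""
    rng = random.Random(seed)
    distractors = rng.sample(incorrect_paragraphs, n_distractors)
    choices = distractors + [correct_paragraph]
    rng.shuffle(choices)
    return {
        "question": question,
        "choices": choices,
        "label": choices.index(correct_paragraph),  # index of the correct paragraph
    }
```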
Example
Here you can see the structure of a single sample in this dataset.
{
    "question": string, # text of the question
    "choices": list,    # list of possible answers: the correct one plus three distractors
    "label": int,       # index of the correct answer in choices
}
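For instance, assuming the dataset has been loaded as in the snippet above, the gold paragraph and its corresponding letter can be recovered as follows (a small illustrative example):

```python
# Recover the gold paragraph and its letter (A-D) for one sample.
# Assumes `dataset` was loaded with `load_dataset` as sketched earlier.
sample = dataset["test"][0]
gold_paragraph = sample["choices"][sample["label"]]
gold_letter = "ABCD"[sample["label"]]
print(gold_letter, gold_paragraph[:80])
```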
Statistics
Training: 384
Test: 1416
Proposed Prompts
Here we describe the prompt given to the model, over which we compute a perplexity score; as the model's answer, we choose the candidate whose prompt has the lowest perplexity. Moreover, for each subtask, we define a description that is prepended to the prompts, so that the model understands the task.
Description of the task:
Ti saranno poste domande di storia italiana.\nIdentifica quali paragrafi contengono la risposta alle domande date.\n\n
Prompt:
Data la domanda: \"{{question}}\"\nQuale tra i seguenti paragrafi risponde alla domanda?\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nRisposta:
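A minimal sketch of this evaluation loop with a Hugging Face causal language model is given below. The model identifier, the choice of appending the candidate letter after "Risposta:", and the zero-shot setup (the results below use 2-shot prompts, which would simply be prepended) are assumptions for illustration rather than the exact harness used.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DESCRIPTION = (
    "Ti saranno poste domande di storia italiana.\n"
    "Identifica quali paragrafi contengono la risposta alle domande date.\n\n"
)
PROMPT = (
    'Data la domanda: "{question}"\n'
    "Quale tra i seguenti paragrafi risponde alla domanda?\n"
    "A. {c0}\nB. {c1}\nC. {c2}\nD. {c3}\nRisposta:"
)

# Illustrative model choice; any causal LM from the results table could be used.
MODEL_ID = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (exp of the mean token loss)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def predict(sample: dict) -> int:
    """Return the index (0-3) of the lowest-perplexity candidate answer."""
    prompt = DESCRIPTION + PROMPT.format(
        question=sample["question"],
        c0=sample["choices"][0], c1=sample["choices"][1],
        c2=sample["choices"][2], c3=sample["choices"][3],
    )
    # One scored prompt per candidate letter; pick the lowest perplexity.
    scores = [perplexity(f"{prompt} {letter}") for letter in "ABCD"]
    return scores.index(min(scores))
```

The predicted index can then be compared directly against the sample's `label` field to compute accuracy.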
Results
| Model | Accuracy (2-shot) |
|---|---|
| Gemma-2B | 43.99 |
| QWEN2-1.5B | 56.43 |
| Mistral-7B | 72.66 |
| ZEFIRO | 70.12 |
| Llama-3-8B | 70.26 |
| Llama-3-8B-IT | 81.07 |
| ANITA | 74.29 |
Acknowledgment
The original data can be downloaded from the following link
We want to thank the dataset's creators for releasing such an interesting resource publicly.
License
The original dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.