---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: catalanqa
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
Table of Contents
- Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://github.com/projecte-aina
- Point of Contact: Carlos Rodríguez-Penagos and Carme Armentano-Oller
Dataset Summary
CatalanQA is an aggregation and balancing of two previous datasets: VilaQUAD and ViquiQUAD.
This dataset can be used to build extractive-QA systems and language models.
Splits have been balanced by question type. Unlike datasets such as SQuAD, each record contains exactly one question and one answer per context, although a context may appear in multiple records.
Supported Tasks and Leaderboards
Extractive QA and language modelling.
Languages
Catalan (ca).
Dataset Structure
Data Instances
{
"title": "Els 521 policies espanyols amb més mala nota a les oposicions seran enviats a Catalunya",
"paragraphs": [
{
"context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions. Segons que explica El País, hi havia mig miler de places vacants que s'havien de cobrir, però els agents amb més bones puntuacions han elegit destinacions diferents. En total van aprovar les oposicions 2.600 aspirants. D'aquests, en seran destinats al Principat 521 dels 560 amb més mala nota. Per l'altra banda, entre els 500 agents amb més bona nota, només 8 han triat Catalunya. Fonts de la policia espanyola que esmenta el diari ho atribueixen al procés d'independència, al Primer d'Octubre i a la 'situació social' que se'n deriva.",
"qas": [
{
"question": "Quants policies enviaran a Catalunya?",
"id": "0.5961700408283691",
"answers": [
{
"text": "521",
"answer_start": 57
}
]
}
]
}
]
}
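The answer_start field is a character offset into context, so the answer can be recovered by slicing. A minimal sketch checking this invariant on the record above (the context is shortened here for brevity; the offset of the answer is unaffected):

```python
# Sketch: verify that answer_start is a character offset into the context,
# using the example record from this card (context truncated for brevity).
record = {
    "context": (
        "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 "
        "policies espanyols que han obtingut més mala nota a les oposicions."
    ),
    "qas": [
        {
            "question": "Quants policies enviaran a Catalunya?",
            "answers": [{"text": "521", "answer_start": 57}],
        }
    ],
}

for qa in record["qas"]:
    for ans in qa["answers"]:
        start = ans["answer_start"]
        span = record["context"][start:start + len(ans["text"])]
        # The slice at answer_start must reproduce the answer text exactly.
        assert span == ans["text"]
```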
Data Fields
Follows (Rajpurkar et al., 2016) for SQuAD v1 datasets.
- id (str): Unique ID assigned to the question.
- title (str): Title of the Wikipedia article.
- context (str): Wikipedia section text.
- question (str): Question.
- answers (list): List of answers to the question, each containing:
  - text (str): Span of text answering the question.
  - answer_start (int): Starting character offset of the answer span within the context.
Data Splits
- train.json: 17135 question/answer pairs
- dev.json: 2157 question/answer pairs
- test.json: 2135 question/answer pairs
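The split files follow the nested SQuAD v1 layout shown in the data instance above, so they typically need to be flattened into one row per question/answer pair before training. A minimal sketch, assuming the standard SQuAD top-level structure (a "data" list of articles); flatten_squad is a hypothetical helper, not part of the dataset release:

```python
import json

def flatten_squad(path):
    """Yield one flat dict per question/answer pair from a SQuAD-style JSON file.

    Assumes the standard SQuAD v1 layout:
    {"data": [{"title": ..., "paragraphs": [{"context": ..., "qas": [...]}]}]}
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)["data"]
    for article in data:
        for para in article["paragraphs"]:
            for qa in para["qas"]:
                for ans in qa["answers"]:
                    yield {
                        "id": qa["id"],
                        "title": article["title"],
                        "context": para["context"],
                        "question": qa["question"],
                        "text": ans["text"],
                        "answer_start": ans["answer_start"],
                    }
```

Applied to train.json, this would yield one row per pair, matching the counts listed above.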
Dataset Creation
Methodology
Aggregation and balancing from ViquiQUAD and VilaQUAD datasets.
Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
[More Information Needed]
Annotations
Annotation process
We commissioned the creation of one to five questions for each context, following an adaptation of the guidelines from SQuAD 1.0 (Rajpurkar et al., “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” EMNLP 2016).
Who are the annotators?
Annotation was commissioned by a specialized company that hired a team of native language speakers.
Personal and Sensitive Information
No personal or sensitive information is included.
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
Carlos Rodríguez-Penagos ([email protected]) and Carme Armentano-Oller ([email protected])
Licensing Information
This work is licensed under an Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).
Citation Information
Funding
This work was funded by the Catalan Government within the framework of the AINA project.