Modalities: Text
Formats: json
Sub-tasks: extractive-qa
Languages: Catalan
Size: < 1K
Libraries: Datasets, pandas
ccasimiro committed
Commit bc479f1
1 Parent(s): 9226a44

upload dataset

Files changed (5)
  1. README.md +218 -0
  2. dev.json +0 -0
  3. test.json +0 -0
  4. train.json +0 -0
  5. viquiquad.py +124 -0
README.md ADDED
@@ -0,0 +1,218 @@
+ ---
+ languages:
+ - ca
+ ---
+ # ViquiQuAD: an extractive QA dataset for Catalan, from the Catalan Wikipedia
+
+ ## BibTeX citation
+
+ If you use any of these resources (datasets or models) in your work, please cite our latest paper:
+
+ ```bibtex
+ @inproceedings{armengol-estape-etal-2021-multilingual,
+     title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
+     author = "Armengol-Estap{\'e}, Jordi and
+         Carrino, Casimiro Pio and
+         Rodriguez-Penagos, Carlos and
+         de Gibert Bonet, Ona and
+         Armentano-Oller, Carme and
+         Gonzalez-Agirre, Aitor and
+         Melero, Maite and
+         Villegas, Marta",
+     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
+     month = aug,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.findings-acl.437",
+     doi = "10.18653/v1/2021.findings-acl.437",
+     pages = "4933--4946",
+ }
+ ```
+
+
+ ## Digital Object Identifier (DOI) and access to dataset files
+
+ https://doi.org/10.5281/zenodo.4562345
+
+
+ ## Introduction
+
+ This dataset contains 3111 contexts extracted from a set of 597 high-quality, original (non-translated) articles in the Catalan Wikipedia "Viquipèdia" (ca.wikipedia.org), with 1 to 5 questions and their answers for each fragment.
+
+ Viquipèdia articles are used under the [CC-BY-SA](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
+
+ This dataset can be used to fine-tune and evaluate extractive-QA models and language models. It is part of the Catalan Language Understanding Benchmark (CLUB), as presented in:
+
+ Armengol-Estapé J., Carrino C.P., Rodriguez-Penagos C., de Gibert Bonet O., Armentano-Oller C., Gonzalez-Agirre A., Melero M. and Villegas M., "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan". Findings of ACL 2021 (ACL-IJCNLP 2021).
+
+ ### Supported Tasks and Leaderboards
+
+ Extractive-QA, Language Model
+
+ ### Languages
+
+ CA - Catalan
+
+ ### Directory structure
+
+ * README
+ * dev.json
+ * test.json
+ * train.json
+ * viquiquad.py
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ JSON files
+
+ ### Data Fields
+
+ Follows (Rajpurkar, Pranav et al., 2016) for SQuAD v1 datasets (see below for the full reference).
+
+ ### Example:
+ <pre>
+ {
+   "data": [
+     {
+       "title": "Frederick W. Mote",
+       "paragraphs": [
+         {
+           "context": "L'historiador Frederick W. Mote va escriure que l'ús del terme \"classes socials\" per a aquest sistema era enganyós i que la posició de les persones dins del sistema de quatre classes no era una indicació del seu poder social i riquesa reals, sinó que només implicava \"graus de privilegi\" als quals tenien dret institucionalment i legalment, de manera que la posició d'una persona dins de les classes no era una garantia de la seva posició, ja que hi havia xinesos rics i amb bona reputació social, però alhora hi havia menys mongols i semu rics que mongols i semu que vivien en la pobresa i eren maltractats.",
+           "qas": [
+             {
+               "answers": [
+                 {
+                   "text": "Frederick W. Mote",
+                   "answer_start": 14
+                 }
+               ],
+               "id": "5728848cff5b5019007da298",
+               "question": "Qui creia que el sistema de classes socials de Yuan no s’hauria d’anomenar classes socials?"
+             },
+             ...
+           ]
+         }
+       ]
+     },
+     ...
+   ]
+ }
+ </pre>
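+
+ As a minimal sketch (assuming one of the raw files, e.g. `train.json`, has been downloaded locally), the SQuAD-style structure above can be walked like this:
+
+ ```python
+ import json
+
+ # Load one split of the SQuAD v1-style JSON file.
+ with open("train.json", encoding="utf-8") as f:
+     data = json.load(f)["data"]
+
+ # Articles contain paragraphs (contexts); each context carries its QA pairs.
+ for article in data:
+     for paragraph in article["paragraphs"]:
+         context = paragraph["context"]
+         for qa in paragraph["qas"]:
+             answer = qa["answers"][0]
+             print(qa["question"], "->", answer["text"], "(char", answer["answer_start"], ")")
+ ```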
+
+ ### Data Splits
+
+ train (train.json), development (dev.json) and test (test.json)
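+
+ The three splits can also be loaded through the Hugging Face `datasets` library with the accompanying loading script. A sketch, assuming the repository ID `bsc/viquiquad` used in `viquiquad.py`:
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads train.json, dev.json and test.json and builds the three splits.
+ viquiquad = load_dataset("bsc/viquiquad")
+
+ print(viquiquad)                          # DatasetDict with train/validation/test
+ print(viquiquad["train"][0]["question"])  # first question of the training split
+ ```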
+
+ ## Content analysis
+
+ ### Number of articles, contexts and questions
+
+ * Number of articles: 597
+ * Number of contexts: 3111
+ * Number of questions: 15153
+ * Questions/context: 4.87
+ * Number of sentences in contexts: 15100
+ * Sentences/context: 4.85
+
+ ### Number of tokens
+
+ * Tokens in contexts: 469335
+ * Tokens/context: 150.86
+ * Tokens in questions: 145249
+ * Tokens/question: 9.58
+ * Tokens in answers: 63246
+ * Tokens/answer: 4.17
+
+ ### Lexical variation
+
+ After filtering (tokenization, stopword removal, punctuation and case normalization), 83.88% of the words in the questions can also be found in the corresponding context.
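+
+ A sketch of how such an overlap figure could be computed for a single question-context pair (the tokenizer and the stopword list below are illustrative placeholders, not the exact resources used for the reported number):
+
+ ```python
+ import re
+
+ # Illustrative (incomplete) Catalan stopword list.
+ STOPWORDS = {"el", "la", "els", "les", "un", "una", "de", "del", "que", "i", "a", "en", "per"}
+
+ def content_words(text):
+     """Lowercase, split into word tokens, drop stopwords and punctuation."""
+     tokens = re.findall(r"\w+", text.lower())
+     return {t for t in tokens if t not in STOPWORDS}
+
+ def question_context_overlap(question, context):
+     """Fraction of the question's content words that also occur in the context."""
+     q_words = content_words(question)
+     return len(q_words & content_words(context)) / len(q_words) if q_words else 0.0
+ ```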
+
+ ### Question type
+
+ | Question | Count | % |
+ |----------|-------|---------|
+ | què | 4220 | 27.85 % |
+ | qui | 2239 | 14.78 % |
+ | com | 1964 | 12.96 % |
+ | quan | 1133 | 7.48 % |
+ | on | 1580 | 10.43 % |
+ | quant | 925 | 6.1 % |
+ | quin | 3399 | 22.43 % |
+ | no question mark | 21 | 0.14 % |
+
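+ A sketch of how this breakdown could be reproduced over the three splits (the substring matching of interrogative words is a simplified, illustrative heuristic, which is also why counts may overlap):
+
+ ```python
+ import json
+ from collections import Counter
+
+ QUESTION_WORDS = ("què", "qui", "com", "quan", "on", "quant", "quin")
+
+ counts = Counter()
+ for split in ("train.json", "dev.json", "test.json"):
+     with open(split, encoding="utf-8") as f:
+         for article in json.load(f)["data"]:
+             for paragraph in article["paragraphs"]:
+                 for qa in paragraph["qas"]:
+                     question = qa["question"].lower()
+                     matched = [w for w in QUESTION_WORDS if w in question]
+                     counts.update(matched if matched else ["other"])
+
+ total = sum(counts.values())
+ for word, n in counts.most_common():
+     print(f"{word}: {n} ({100 * n / total:.2f} %)")
+ ```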
+
+ ### Question-answer relationships
+
+ From 100 randomly selected samples:
+
+ * Lexical variation: 33.0%
+ * World knowledge: 16.0%
+ * Syntactic variation: 35.0%
+ * Multiple sentence: 17.0%
+
+ ## Dataset Creation
+
+ ### Methodology
+
+ From a set of high-quality, non-translated articles in the Catalan Wikipedia (ca.wikipedia.org), 597 were randomly chosen, and from them 3111 contexts of 5-8 sentences were extracted. We commissioned the creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. "SQuAD: 100,000+ Questions for Machine Comprehension of Text." EMNLP (2016)](http://arxiv.org/abs/1606.05250)). In total, 15153 pairs of a question and an extracted fragment containing the answer were created.
+
+ ### Curation Rationale
+
+ For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
+
+ ### Source Data
+
+ - https://ca.wikipedia.org
+
+ #### Initial Data Collection and Normalization
+
+ The source data are scraped articles from the Catalan Wikipedia site (https://ca.wikipedia.org).
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 (Rajpurkar, Pranav et al. "SQuAD: 100,000+ Questions for Machine Comprehension of Text." EMNLP (2016)), http://arxiv.org/abs/1606.05250.
+
+ #### Who are the annotators?
+
+ Native speakers of the language.
+
+ ### Dataset Curators
+
+ Carlos Rodríguez and Carme Armentano, from BSC-CNS
+
+ ### Personal and Sensitive Information
+
+ No personal or sensitive information is included.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+
+ ## Contact
+
+ Carlos Rodríguez-Penagos ([email protected]) and Carme Armentano-Oller ([email protected])
+
+
+ ## License
+
+ <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/"><img alt="Attribution-ShareAlike 4.0 International License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
dev.json ADDED
The diff for this file is too large to render. See raw diff
 
test.json ADDED
The diff for this file is too large to render. See raw diff
 
train.json ADDED
The diff for this file is too large to render. See raw diff
 
viquiquad.py ADDED
@@ -0,0 +1,124 @@
+ # Loading script for the ViquiQuAD dataset.
+ import json
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """
+ Rodriguez-Penagos, Carlos Gerardo, & Armentano-Oller, Carme. (2021).
+ ViquiQuAD: an extractive QA dataset from Catalan Wikipedia (Version ViquiQuad_v.1.0.1)
+ [Data set]. Zenodo. http://doi.org/10.5281/zenodo.4761412
+ """
+
+ _DESCRIPTION = """
+ ViquiQuAD: an extractive QA dataset from Catalan Wikipedia.
+ This dataset contains 3111 contexts extracted from a set of 597 high quality original (no translations)
+ articles in the Catalan Wikipedia "Viquipèdia" (ca.wikipedia.org), and 1 to 5 questions with their
+ answer for each fragment. Viquipedia articles are used under CC-by-sa licence.
+ This dataset can be used to build extractive-QA and Language Models.
+ Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
+ MT4ALL and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).
+ """
+
+ _HOMEPAGE = "https://zenodo.org/record/4562345#.YK41aqGxWUk"
+
+ _URL = "https://huggingface.co/datasets/bsc/viquiquad/resolve/main/"
+ _TRAINING_FILE = "train.json"
+ _DEV_FILE = "dev.json"
+ _TEST_FILE = "test.json"
+
+
+ class ViquiQuADConfig(datasets.BuilderConfig):
+     """Builder config for the ViquiQuAD dataset."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for ViquiQuAD.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(ViquiQuADConfig, self).__init__(**kwargs)
+
+
+ class ViquiQuAD(datasets.GeneratorBasedBuilder):
+     """ViquiQuAD Dataset."""
+
+     BUILDER_CONFIGS = [
+         ViquiQuADConfig(
+             name="ViquiQuAD",
+             version=datasets.Version("1.0.1"),
+             description="ViquiQuAD dataset",
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "context": datasets.Value("string"),
+                     "question": datasets.Value("string"),
+                     "answers": [
+                         {
+                             "text": datasets.Value("string"),
+                             "answer_start": datasets.Value("int32"),
+                         }
+                     ],
+                 }
+             ),
+             # No default supervised_keys (as we have to pass both question
+             # and context as input).
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         urls_to_download = {
+             "train": f"{_URL}{_TRAINING_FILE}",
+             "dev": f"{_URL}{_DEV_FILE}",
+             "test": f"{_URL}{_TEST_FILE}",
+         }
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """This function returns the examples in the raw (text) form."""
+         logger.info("generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             viquiquad = json.load(f)
+         for article in viquiquad["data"]:
+             title = article.get("title", "").strip()
+             for paragraph in article["paragraphs"]:
+                 context = paragraph["context"].strip()
+                 for qa in paragraph["qas"]:
+                     question = qa["question"].strip()
+                     id_ = qa["id"]
+
+                     # answer_starts = [answer["answer_start"] for answer in qa["answers"]]
+                     # answers = [answer["text"].strip() for answer in qa["answers"]]
+
+                     # Only the first answer is kept, as in SQuAD v1-style training data.
+                     text = qa["answers"][0]["text"]
+                     answer_start = qa["answers"][0]["answer_start"]
+
+                     # Features currently used are "context", "question", and "answers".
+                     # Others are extracted here for the ease of future expansions.
+                     yield id_, {
+                         "title": title,
+                         "context": context,
+                         "question": question,
+                         "id": id_,
+                         "answers": [{"text": text, "answer_start": answer_start}],
+                     }
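+
+
+ # --- Hypothetical usage sketch (not part of the original script) -------------
+ # Running this file directly builds all three splits through the loading
+ # script above; the JSON files are fetched from _URL by _split_generators().
+ if __name__ == "__main__":
+     dataset = datasets.load_dataset(__file__)
+     print(dataset)
+     print(dataset["train"][0])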