Datasets: SQAC
Sub-tasks: extractive-qa
Languages: Spanish

mapama247 committed on
Commit 10c3919
1 Parent(s): ddf0ff9

Update README.md

Files changed (1):
  1. README.md +113 -116
README.md CHANGED
@@ -10,8 +10,6 @@ license:
 multilinguality:
 - monolingual
 pretty_name: Spanish Question Answering Corpus (SQAC)
- size_categories:
- - unknown
 source_datasets:
 - original
 task_categories:
@@ -21,39 +19,52 @@ task_ids:

 ---

- # SQAC (Spanish Question-Answering Corpus): An extractive QA dataset for the Spanish language
-
- ## BibTeX citation
-
- ```bibtex
- @article{gutierrezfandino2022,
-   author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
-   title = {MarIA: Spanish Language Models},
-   journal = {Procesamiento del Lenguaje Natural},
-   volume = {68},
-   number = {0},
-   year = {2022},
-   issn = {1989-7553},
-   url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
-   pages = {39--60}
- }
- ```
-
- See the pre-print version of our paper for further details: https://arxiv.org/abs/2107.07253
-
- <!-- ## Digital Object Identifier (DOI) and access to dataset files -->
-
- ## Introduction
-
- This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment.

 The sources of the contexts are:
- * Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
- * News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
- * Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix from diferent newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
-
- This dataset can be used to build extractive-QA.

 ### Supported Tasks and Leaderboards
@@ -61,66 +72,45 @@ Extractive-QA

 ### Languages

- ES - Spanish
-
- ### Directory structure
-
- * README.md
- * dev.json
- * test.json
- * train.json
- * sqac.py

 ## Dataset Structure

 ### Data Instances

- JSON files

 ### Data Fields

- Follows (Rajpurkar, Pranav et al., 2016) for squad v1 datasets. (see below for full reference).
- We added a field "source" with the source of the context.
-
- ### Example
- <pre>
 {
- "data": [
-   {
-     "paragraphs": [
-       {
-         "context": "Al cogote, y fumando como una cafetera. Ah!, no era él, éramos todos nosotros. Luego llegó Billie Holiday. Bajo el epígrafe Arte, la noche temática, pasaron la vida de la única cantante del universo que no es su voz, sino su alma lo que se escucha cuando interpreta. Gata golpeada por el mundo, pateada, violada, enganchada a todos los paraísos artificiales del planeta, jamás encontró el Edén. El Edén lo encontramos nosotros cuando, al concluir la sesión de la tele, pusimos en la doméstica cadena de sonido el mítico Last Recording, su última grabación (marzo de 1959), con la orquesta de Ray Ellis y el piano de Hank Jones. Se estaba muriendo Lady Day, y no obstante, mientras moría, su alma cantaba, Baby, won't you please come home. O sea, niño, criatura, amor, vuelve, a casa por favor.",
-         "qas": [
-           {
-             "question": "¿Quién se incorporó a la reunión más adelante?",
-             "id": "c5429572-64b8-4c5d-9553-826f867b07be",
-             "answers": [
-               {
-                 "answer_start": 91,
-                 "text": "Billie Holiday"
-               }
-             ]
-           },
-           ...
-         ]
-       }
-     ],
-     "title": "P_129_20010702_&_P_154_20010102_&_P_108_20000301_c_&_P_108_20000601_d",
-     "source": "ancora"
-   },
-   ...
- ]
 }

- </pre>
-
 ### Data Splits

- - train
- - development
- - test

 ## Content analysis
@@ -129,24 +119,22 @@ We added a field "source" with the source of the context.
 * Number of articles: 3,834
 * Number of contexts: 6,247
 * Number of questions: 18,817
- * Questions/context: 3.01
 * Number of sentences: 48,026
- * Sentences/context: 7.70

 ### Number of tokens

 * Total tokens in context: 1,561,616
- * Tokens/context 250.30
 * Total tokens in questions: 203,235
- * Tokens in questions/questions: 10.80
- * Tokens in questions/tokens in context: 0.13
 * Total tokens in answers: 90,307
- * Tokens in answers/answers: 4.80
- * Tokens in answers/tokens in context: 0.06

 ### Lexical variation

- 46.38 of the words in the Question can be found in the Context.

 ### Question type
@@ -165,47 +153,36 @@ We added a field "source" with the source of the context.
 | no question mark | 43 | 0.23 % |
 | cuántas | 19 | 0.10 % |

-
 ## Dataset Creation

- ### Methodology
-
- 6,247 contexts were randomly chosen from the three corpus described below. We commisioned the creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 [Rajpurkar, Pranav et al. “SQuAD: 100, 000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250). In total, 18,817 pairs of a question and an extracted fragment that contains the answer were created.
-
 ### Curation Rationale

- For compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines. We also created another QA dataset with Wikipedia to ensure thematic and stylistic variety.

 ### Source Data

- - Spanish Wikipedia: https://es.wikipedia.org
- - Spanish Wikinews: https://es.wikinews.org/
- - AnCora corpus: http://clic.ub.edu/corpus/en
-
 #### Initial Data Collection and Normalization

- The source data are scraped articles from the Spanish Wikipedia site, Wikinews site and from AnCora corpus.

 #### Who are the source language producers?

- [More Information Needed]

 ### Annotations

 #### Annotation process

- We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 [Rajpurkar, Pranav et al. “SQuAD: 100, 000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250).
-
 #### Who are the annotators?

 Native language speakers.

- ### Dataset Curators
-
- Carlos Rodríguez and Carme Armentano, from BSC-CNS.
-
 ### Personal and Sensitive Information

 No personal or sensitive information included.
@@ -214,23 +191,43 @@ No personal or sensitive information included.

 ### Social Impact of Dataset

- [More Information Needed]

 ### Discussion of Biases

- [More Information Needed]

- ### Other Known Limitations

- [More Information Needed]

- ## Contact

- Carlos Rodríguez-Penagos or Carme Armentano-Oller ([email protected])

- ## Funding
 This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

- ## License

- <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/"><img alt="Attribution-ShareAlike 4.0 International License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
 
 multilinguality:
 - monolingual
 pretty_name: Spanish Question Answering Corpus (SQAC)
 source_datasets:
 - original
 task_categories:

 ---

+ # SQAC (Spanish Question-Answering Corpus)
+ An extractive QA dataset for the Spanish language.
+
+ ## Table of Contents
+ <details>
+ <summary>Click to expand</summary>
+
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Citation Information](#citation-information)
+   - [Contact Information](#contact-information)
+   - [Funding](#funding)
+   - [Licensing Information](#licensing-information)
+
+ </details>
+
+ ## Dataset Description
+
+ - **Paper:** [MarIA: Spanish Language Models](https://upcommons.upc.edu/bitstream/handle/2117/367156/6405-5863-1-PB%20%281%29.pdf?sequence=1)
+ - **Point of Contact:** [email protected]
+
+ ### Dataset Summary
+
+ This dataset contains 6,247 contexts and 18,817 questions with their respective answers, 1 to 5 for each fragment.

 The sources of the contexts are:
+ * Encyclopedic articles from the [Spanish Wikipedia](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
+ * News articles from [Wikinews](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
+ * Text from the [AnCora corpus](http://clic.ub.edu/corpus/en), which is a mix from several newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
 ### Supported Tasks and Leaderboards

 ### Languages

+ - Spanish (es)

 ## Dataset Structure

 ### Data Instances

+ <pre>
+ {
+   'id': '6cf3dcd6-b5a3-4516-8f9e-c5c1c6b66628',
+   'title': 'Historia de Japón',
+   'context': 'La historia de Japón (日本の歴史 o 日本史, Nihon no rekishi / Nihonshi?) es la sucesión de hechos acontecidos dentro del archipiélago japonés. Algunos de estos hechos aparecen aislados e influenciados por la naturaleza geográfica de Japón como nación insular, en tanto que otra serie de hechos, obedece a influencias foráneas como en el caso del Imperio chino, el cual definió su idioma, su escritura y, también, su cultura política. Asimismo, otra de las influencias foráneas fue la de origen occidental, lo que convirtió al país en una nación industrial, ejerciendo con ello una esfera de influencia y una expansión territorial sobre el área del Pacífico. No obstante, dicho expansionismo se detuvo tras la Segunda Guerra Mundial y el país se posicionó en un esquema de nación industrial con vínculos a su tradición cultural.',
+   'question': '¿Qué influencia convirtió Japón en una nación industrial?',
+   'answers': {
+     'text': ['la de origen occidental'],
+     'answer_start': [473]
+   }
+ }
+ </pre>
 ### Data Fields

 {
+ - `id`: str
+ - `title`: str
+ - `context`: str
+ - `question`: str
+ - `answers`: {
+   - `answer_start`: [int]
+   - `text`: [str]
+ }
 }

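The `answers` field above pairs each answer string with its character offset into `context`. As an illustrative sketch (not part of the dataset card; `validate_record` is a hypothetical helper name), a record with these fields can be checked for schema conformance and offset consistency like this:

```python
# Hypothetical validator for a SQAC/SQuAD-v1-style record.

def validate_record(rec: dict) -> bool:
    """Check field names, types, and that each answer_start points at its text."""
    for key in ("id", "title", "context", "question"):
        if not isinstance(rec.get(key), str):
            return False
    ans = rec.get("answers")
    if not isinstance(ans, dict):
        return False
    texts, starts = ans.get("text"), ans.get("answer_start")
    if not (isinstance(texts, list) and all(isinstance(t, str) for t in texts)):
        return False
    if not (isinstance(starts, list) and all(isinstance(s, int) for s in starts)):
        return False
    # The substring at each offset must reproduce the answer text exactly.
    return all(rec["context"][s:s + len(t)] == t for s, t in zip(starts, texts))

# Shortened sample record (made-up context, same field layout as the card).
sample = {
    "id": "6cf3dcd6-b5a3-4516-8f9e-c5c1c6b66628",
    "title": "Historia de Japón",
    "context": "Texto de ejemplo: la respuesta es la de origen occidental.",
    "question": "¿Qué influencia convirtió Japón en una nación industrial?",
    "answers": {"text": ["la de origen occidental"], "answer_start": [34]},
}
print(validate_record(sample))  # True
```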
 ### Data Splits

+ | Split   | Size   |
+ | ------- | ------ |
+ | `train` | 15,036 |
+ | `dev`   | 1,864  |
+ | `test`  | 1,910  |

 ## Content analysis

 * Number of articles: 3,834
 * Number of contexts: 6,247
 * Number of questions: 18,817
 * Number of sentences: 48,026
+ * Questions/Context ratio: 3.01
+ * Sentences/Context ratio: 7.70

 ### Number of tokens

 * Total tokens in context: 1,561,616
+ * Average tokens/context: 250
 * Total tokens in questions: 203,235
+ * Average tokens/question: 10.80
 * Total tokens in answers: 90,307
+ * Average tokens/answer: 4.80

 ### Lexical variation

+ 46.38 % of the words in the Question can be found in the Context.

 ### Question type

 | no question mark | 43 | 0.23 % |
 | cuántas | 19 | 0.10 % |

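The lexical-variation figure in the card (46.38 % of question words appear in the context) is a simple overlap ratio. A minimal sketch, assuming naive lowercase whitespace tokenization (the authors' exact tokenizer is not stated, so the helper name and tokenization are assumptions):

```python
# Hypothetical sketch of the lexical-variation metric: the share of question
# tokens that also occur in the paired context, using a naive lowercase
# whitespace split rather than the authors' (unspecified) tokenizer.

def question_word_overlap(question: str, context: str) -> float:
    q_words = question.lower().split()
    ctx_vocab = set(context.lower().split())
    if not q_words:
        return 0.0
    return sum(w in ctx_vocab for w in q_words) / len(q_words)

print(question_word_overlap("la cantante Billie Holiday",
                            "Luego llegó Billie Holiday , la cantante"))  # 1.0
```

Averaging this ratio over all 18,817 question–context pairs would yield the corpus-level statistic.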
 ## Dataset Creation

 ### Curation Rationale

+ For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.

 ### Source Data

 #### Initial Data Collection and Normalization

+ The source data are scraped articles from Wikinews, the Spanish Wikipedia and the AnCora corpus.
+
+ - [Spanish Wikipedia](https://es.wikipedia.org)
+ - [Spanish Wikinews](https://es.wikinews.org/)
+ - [AnCora corpus](http://clic.ub.edu/corpus/en)

 #### Who are the source language producers?

+ Contributors to the aforementioned sites.

 ### Annotations

 #### Annotation process

+ We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [Rajpurkar, Pranav et al.](http://arxiv.org/abs/1606.05250).

 #### Who are the annotators?

 Native language speakers.

 ### Personal and Sensitive Information

 No personal or sensitive information included.

 ### Social Impact of Dataset

+ This corpus contributes to the development of language models in Spanish.

 ### Discussion of Biases

+ No postprocessing steps were applied to mitigate potential social biases.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ - Carme Armentano-Oller ([email protected])
+ - Carlos Rodríguez-Penagos ([email protected])
+
+ ### Citation Information

+ ```
+ @article{sqac,
+   author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
+   title = {MarIA: Spanish Language Models},
+   journal = {Procesamiento del Lenguaje Natural},
+   volume = {68},
+   number = {0},
+   year = {2022},
+   issn = {1989-7553},
+   url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
+   pages = {39--60}
+ }
+ ```

+ ### Contact Information

+ For further information, send an email to [email protected].

+ ### Funding

 This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

+ ### Licensing Information

+ <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/"><img alt="Attribution-ShareAlike 4.0 International License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.