---
dataset_info:
  features:
  - name: original_nl_question
  # ...
  num_examples: 9961
  download_size: 7595264
  dataset_size: 16322261
task_categories:
- question-answering
- text-generation
tags:
- qa
- knowledge-graph
- sparql
language:
- en
---

# Dataset Card for SimpleQuestions-SPARQLtoText

## Table of Contents

- [Dataset Card for SimpleQuestions-SPARQLtoText](#dataset-card-for-simplequestions-sparqltotext)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
      - [JSON fields](#json-fields)
      - [Format of the SPARQL queries](#format-of-the-sparql-queries)
      - [Answerable/unanswerable](#answerableunanswerable)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Types of questions](#types-of-questions)
    - [Data splits](#data-splits)
  - [Additional information](#additional-information)
    - [Related datasets](#related-datasets)
    - [Licensing information](#licensing-information)
    - [Citation information](#citation-information)
      - [This version of the corpus (with normalized SPARQL queries)](#this-version-of-the-corpus-with-normalized-sparql-queries)
      - [Original version](#original-version)

## Dataset Description

- **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/)
- **Point of Contact:** Gwénolé Lecorvé

### Dataset Summary

A special version of [SimpleQuestions](https://github.com/askplatypus/wikidata-simplequestions) with SPARQL queries formatted for the SPARQL-to-Text task.
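
The dataset can be loaded with the `datasets` library. The snippet below is a minimal sketch: the repository ID comes from this card, but the split name (`test` here) and the available configurations are assumptions to check against the dataset viewer; the printed fields are described under "JSON fields" below.

```python
from datasets import load_dataset

# Minimal usage sketch: the split/config names are assumptions,
# check the dataset viewer for the ones actually available.
dataset = load_dataset("OrangeInnov/simplequestions-sparqltotext")

example = dataset["test"][0]
print(example["sparql_query"])             # normalized SPARQL query with Wikidata IDs
print(example["verbalized_sparql_query"])  # same query with entity/property labels
print(example["original_nl_question"])     # natural-language question (lower-cased)
```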

#### JSON fields

The original version of SimpleQuestions is a raw text file listing triples and the associated natural language questions. A JSON version has been generated and augmented with the following fields:

* `rdf_subject`, `rdf_property`, `rdf_object`: the triple in Wikidata format (IDs)
* `nl_subject`, `nl_property`, `nl_object`: the triple with labels retrieved from Wikidata. Some entities do not have labels; these are marked as `UNDEFINED_LABEL`
* `sparql_query`: SPARQL query with Wikidata IDs
* `verbalized_sparql_query`: SPARQL query with labels
* `original_nl_question`: original natural language question from SimpleQuestions. This is in **lower case**.
* `recased_nl_question`: version of `original_nl_question` in which the named entities have been automatically recased based on the entity labels
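
To make the layout concrete, a single entry has roughly the following shape (the IDs, labels, and question below are invented for illustration, not taken from the corpus):

```python
# Illustrative entry only: all values are made up to show the field layout.
example_entry = {
    "rdf_subject": "Q0000001",
    "rdf_property": "P000",
    "rdf_object": "Q0000002",
    "nl_subject": "Some Entity",
    "nl_property": "some property",
    "nl_object": "Another Entity",
    "sparql_query": "SELECT ?var_42 WHERE { wd:Q0000001 wdt:P000 ?var_42 . }",
    "verbalized_sparql_query": "SELECT ?var_42 WHERE { Some Entity some property ?var_42 . }",
    "original_nl_question": "what is the some property of some entity",
    "recased_nl_question": "what is the some property of Some Entity",
}
```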

#### Format of the SPARQL queries

The SPARQL queries are normalized in two ways (a small sketch follows the list):

* Variable names are randomized
* Delimiters are surrounded by spaces
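
The sketch below only illustrates these two rules; it is not the script that was used to build the corpus.

```python
import random
import re

def normalize_sparql(query, seed=None):
    """Illustrative only: space out delimiters and randomize variable names."""
    rng = random.Random(seed)

    # Surround delimiters with spaces, then collapse repeated whitespace.
    query = re.sub(r"([{}().,])", r" \1 ", query)
    query = re.sub(r"\s+", " ", query).strip()

    # Rename each SPARQL variable to a fresh random name, consistently.
    mapping = {}
    def rename(match):
        var = match.group(0)
        if var not in mapping:
            mapping[var] = f"?var_{rng.randint(0, 9999)}"
        return mapping[var]

    return re.sub(r"\?\w+", rename, query)

print(normalize_sparql("SELECT ?x WHERE {wd:Q42 wdt:P19 ?x.}", seed=0))
# -> something like: SELECT ?var_1234 WHERE { wd:Q42 wdt:P19 ?var_1234 . }
```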

#### Answerable/unanswerable

Some questions in SimpleQuestions cannot be answered. The dataset therefore originally comes with two versions of the train/valid/test sets: one with all entries, and another with the answerable questions only.

### Languages

- English

## Dataset Structure

### Types of questions

Comparison of question types with related datasets:

|                                        |                   | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) |
|----------------------------------------|-------------------|:---------------:|:------:|:-----------:|:----:|:---------:|
| **Number of triplets in query**        | 1                 | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | 2                 |   | ✓ | ✓ | ✓ | ✓ |
|                                        | More              |   |   | ✓ | ✓ | ✓ |
| **Logical connector between triplets** | Conjunction       | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | Disjunction       |   |   |   | ✓ | ✓ |
|                                        | Exclusion         |   |   |   | ✓ | ✓ |
| **Topology of the query graph**        | Direct            | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | Sibling           |   | ✓ | ✓ | ✓ | ✓ |
|                                        | Chain             |   | ✓ | ✓ | ✓ | ✓ |
|                                        | Mixed             |   |   | ✓ |   | ✓ |
|                                        | Other             |   | ✓ | ✓ | ✓ | ✓ |
| **Variable typing in the query**       | None              | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | Target variable   |   | ✓ | ✓ | ✓ | ✓ |
|                                        | Internal variable |   | ✓ | ✓ | ✓ | ✓ |
| **Comparison clauses**                 | None              | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | String            |   |   | ✓ |   | ✓ |
|                                        | Number            |   |   | ✓ | ✓ | ✓ |
|                                        | Date              |   |   | ✓ |   | ✓ |
| **Superlative clauses**                | No                | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | Yes               |   |   |   | ✓ |   |
| **Answer type**                        | Entity (open)     | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | Entity (closed)   |   |   |   | ✓ | ✓ |
|                                        | Number            |   |   | ✓ | ✓ | ✓ |
|                                        | Boolean           |   | ✓ | ✓ | ✓ | ✓ |
| **Answer cardinality**                 | 0 (unanswerable)  |   |   | ✓ |   | ✓ |
|                                        | 1                 | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | More              |   | ✓ | ✓ | ✓ | ✓ |
| **Number of target variables**         | 0 (→ ASK verb)    |   | ✓ | ✓ | ✓ | ✓ |
|                                        | 1                 | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | 2                 |   |   | ✓ |   | ✓ |
| **Dialogue context**                   | Self-sufficient   | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | Coreference       |   |   |   | ✓ | ✓ |
|                                        | Ellipsis          |   |   |   | ✓ | ✓ |
| **Meaning**                            | Meaningful        | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                        | Non-sense         |   |   |   |   | ✓ |


### Data splits

Text verbalization is only available for a subset of the test set, referred to as the *challenge set*. The other samples only contain dialogues in the form of follow-up SPARQL queries.

|                       | Train       | Validation | Test   |
| --------------------- | ----------- | ---------- | ------ |
| Questions             | 34,000      | 5,000      | 10,000 |
| NL question per query | 1           |            |        |
| Characters per query  | 70 (± 10)   |            |        |
| Tokens per question   | 7.4 (± 2.1) |            |        |

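Statistics such as the last three rows can be recomputed from the JSON fields. The sketch below is an approximation: the split name and whitespace tokenization are assumptions, so its output will not exactly match the figures above.

```python
from statistics import mean, stdev

from datasets import load_dataset

# Rough sketch: split name and whitespace tokenization are assumptions.
data = load_dataset("OrangeInnov/simplequestions-sparqltotext", split="train")

chars_per_query = [len(ex["sparql_query"]) for ex in data]
tokens_per_question = [len(ex["original_nl_question"].split()) for ex in data]

print(f"Characters per query: {mean(chars_per_query):.1f} (± {stdev(chars_per_query):.1f})")
print(f"Tokens per question:  {mean(tokens_per_question):.1f} (± {stdev(tokens_per_question):.1f})")
```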

## Additional information

### Related datasets

This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:

- Non-conversational datasets
  - [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) (from https://github.com/askplatypus/wikidata-simplequestions)
  - [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) (from https://github.com/barshana-banerjee/ParaQA)
  - [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) (from http://lc-quad.sda.tech/)
- Conversational datasets
  - [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) (from https://amritasaha1812.github.io/CSQA/)
  - [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) (derived from https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/release_v3.0)

### Licensing information

* Content from the original dataset: CC BY 3.0
* New content: CC BY-SA 4.0


### Citation information

#### This version of the corpus (with normalized SPARQL queries)

```bibtex
@inproceedings{lecorve2022sparql2text,
  title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications},
  author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.},
  booktitle={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)},
  year={2022}
}
```

#### Original version

```bibtex
@article{bordes2015large,
  title={Large-scale simple question answering with memory networks},
  author={Bordes, Antoine and Usunier, Nicolas and Chopra, Sumit and Weston, Jason},
  journal={arXiv preprint arXiv:1506.02075},
  year={2015}
}
```