system (HF staff) committed
Commit 832b440
0 Parent(s):

Update files from the datasets library (from 1.7.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,211 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en-US
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ - sequence-modeling
+ task_ids:
+ - open-domain-qa
+ - dialogue-modeling
+ ---
+
+ # Dataset Card for ConvQuestions
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [ConvQuestions page](https://convex.mpi-inf.mpg.de)
+ - **Repository:** [GitHub](https://github.com/PhilippChr/CONVEX)
+ - **Paper:** [Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion](https://arxiv.org/abs/1910.03262)
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Philipp Christmann](mailto:[email protected])
+
+ ### Dataset Summary
+
+ ConvQuestions is the first realistic benchmark for conversational question answering over
+ knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata.
+ They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk,
+ with conversations from five domains: Books, Movies, Soccer, Music, and TV Series.
+ The questions feature a variety of complex question phenomena like comparisons, aggregations,
+ compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable
+ fair comparison across diverse methods. The data gathering setup was kept as natural as
+ possible, with the annotators selecting entities of their choice from each of the five domains,
+ and formulating the entire conversation in one session. All questions in a conversation are
+ from the same Turker, who also provided gold answers to the questions. For suitability to knowledge
+ graphs, questions were constrained to be objective or factoid in nature, but no other restrictive
+ guidelines were set. A notable property of ConvQuestions is that several questions are not
+ answerable by Wikidata alone (as of September 2019), but the required facts can, for example,
+ be found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper
+ (https://dl.acm.org/citation.cfm?id=3358016).
+
+ ### Supported Tasks and Leaderboards
+
+ [Needs More Information]
+
+ ### Languages
+
+ en
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example of 'train' looks as follows.
+ ```
+ {
+   'domain': 'music',
+   'seed_entity': 'https://www.wikidata.org/wiki/Q223495',
+   'seed_entity_text': 'The Carpenters',
+   'questions': [
+     'When did The Carpenters sign with A&M Records?',
+     'What song was their first hit?',
+     'When did Karen die?',
+     'Karen had what eating problem?',
+     'and how did she die?'
+   ],
+   'answers': [
+     [
+       '1969'
+     ],
+     [
+       'https://www.wikidata.org/wiki/Q928282'
+     ],
+     [
+       '1983'
+     ],
+     [
+       'https://www.wikidata.org/wiki/Q131749'
+     ],
+     [
+       'https://www.wikidata.org/wiki/Q181754'
+     ]
+   ],
+   'answer_texts': [
+     '1969',
+     '(They Long to Be) Close to You',
+     '1983',
+     'anorexia nervosa',
+     'heart failure'
+   ]
+ }
+ ```
+
+ ### Data Fields
+
+ - `domain`: a `string` feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv_series']
+ - `seed_entity`: a `string` feature. Wikidata ID of the topic entity.
+ - `seed_entity_text`: a `string` feature. Surface form of the topic entity.
+ - `questions`: a `list` of `string` features. List of questions (initial question and follow-up questions).
+ - `answers`: a `list` of `lists` of `string` features. List of answers, given as Wikidata IDs or literals (e.g. timestamps or names).
+ - `answer_texts`: a `list` of `string` features. List of surface forms of the answers.
+
+ ### Data Splits
+
+ |train|validation|test|
+ |----:|---------:|---:|
+ | 6720| 2240| 2240|
+
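+ A minimal usage sketch (assuming the dataset is published on the Hugging Face Hub under the `conv_questions` name) for loading the splits above and inspecting one conversation with the `datasets` library:
+
+ ```
+ from datasets import load_dataset
+
+ # Load the train/validation/test splits listed above.
+ dataset = load_dataset("conv_questions")
+
+ # Look at the first training conversation.
+ example = dataset["train"][0]
+ print(example["domain"], example["seed_entity_text"])
+
+ # Each turn pairs a question with its gold answer(s) and the answer's surface form.
+ for question, answers, answer_text in zip(
+     example["questions"], example["answers"], example["answer_texts"]
+ ):
+     print(question, "->", answers, "/", answer_text)
+ ```
+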
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ Drawing on insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup possible: each crowdworker was asked to build a conversation by asking five sequential questions, starting from any seed entity of their choice, since this mirrors the intuitive mental model people have when satisfying real information needs via their search assistants.
+
+ #### Who are the annotators?
+
+ Local students (Saarland Informatics Campus) and AMT Master Workers.
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ The ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License.
+
+ ### Citation Information
+
+ ```
+ @InProceedings{christmann2019look,
+   title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},
+   author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},
+   booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
+   pages={729--738},
+   year={2019}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@PhilippChr](https://github.com/PhilippChr) for adding this dataset.
conv_questions.py ADDED
@@ -0,0 +1,154 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """
+ ConvQuestions is the first realistic benchmark for conversational question answering over
+ knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata.
+ They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk,
+ with conversations from five domains: Books, Movies, Soccer, Music, and TV Series.
+ The questions feature a variety of complex question phenomena like comparisons, aggregations,
+ compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable
+ fair comparison across diverse methods. The data gathering setup was kept as natural as
+ possible, with the annotators selecting entities of their choice from each of the five domains,
+ and formulating the entire conversation in one session. All questions in a conversation are
+ from the same Turker, who also provided gold answers to the questions. For suitability to knowledge
+ graphs, questions were constrained to be objective or factoid in nature, but no other restrictive
+ guidelines were set. A notable property of ConvQuestions is that several questions are not
+ answerable by Wikidata alone (as of September 2019), but the required facts can, for example,
+ be found in the open Web or in Wikipedia. For details, please refer to our CIKM 2019 full paper
+ (https://dl.acm.org/citation.cfm?id=3358016).
+ """
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ # Citation for the dataset, as found for instance on arXiv or on the dataset repo/website
+ _CITATION = """\
+ @InProceedings{christmann2019look,
+     title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},
+     author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},
+     booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
+     pages={729--738},
+     year={2019}
+ }
+ """
+
+ # Official description of the dataset
+ _DESCRIPTION = """\
+ ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs.
+ It contains 11,200 conversations which can be evaluated over Wikidata. The questions feature a variety of complex
+ question phenomena like comparisons, aggregations, compositionality, and temporal reasoning."""
+
+ _HOMEPAGE = "https://convex.mpi-inf.mpg.de"
+
+ _LICENSE = "CC BY 4.0"
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
+ _URL = "http://qa.mpi-inf.mpg.de/convex/"
+ _URLs = {
+     "train": _URL + "ConvQuestions_train.zip",
+     "dev": _URL + "ConvQuestions_dev.zip",
+     "test": _URL + "ConvQuestions_test.zip",
+ }
+
+
+ class ConvQuestions(datasets.GeneratorBasedBuilder):
+     """ConvQuestions is a realistic benchmark for conversational question answering over knowledge graphs."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         # This method specifies the datasets.DatasetInfo object, which contains the information and typings for the dataset
+         features = datasets.Features(
+             {
+                 "domain": datasets.Value("string"),
+                 "seed_entity": datasets.Value("string"),
+                 "seed_entity_text": datasets.Value("string"),
+                 "questions": datasets.features.Sequence(datasets.Value("string")),
+                 "answers": datasets.features.Sequence(datasets.features.Sequence(datasets.Value("string"))),
+                 "answer_texts": datasets.features.Sequence(datasets.Value("string")),
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,
+             # If there's a common (input, target) tuple from the features, specify them here.
+             # They'll be used if as_supervised=True in builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
+         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
+
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
+         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
+         data_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir["train"], "train_set/train_set_ALL.json"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir["dev"], "dev_set/dev_set_ALL.json"),
+                     "split": "dev",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": os.path.join(data_dir["test"], "test_set/test_set_ALL.json"), "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(
+         self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     ):
+         """Yields examples as (key, example) tuples."""
+         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
+         # The `key` is here for legacy reasons (tfds) and is not important in itself.
+         with open(filepath, encoding="utf-8") as f:
+             data = json.load(f)
+             for id_, instance in enumerate(data):
+                 yield id_, {
+                     "domain": instance["domain"],
+                     "seed_entity": instance["seed_entity"],
+                     "seed_entity_text": instance["seed_entity_text"],
+                     "questions": [turn["question"] for turn in instance["questions"]],
+                     # A turn may have several gold answers; they are ";"-separated in the raw JSON.
+                     "answers": [turn["answer"].split(";") for turn in instance["questions"]],
+                     "answer_texts": [turn["answer_text"] for turn in instance["questions"]],
+                 }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs.\nIt contains 11,200 conversations which can be evaluated over Wikidata. The questions feature a variety of complex\nquestion phenomena like comparisons, aggregations, compositionality, and temporal reasoning.", "citation": "@InProceedings{christmann2019look,\n title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},\n author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},\n booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},\n pages={729--738},\n year={2019}\n}\n", "homepage": "https://convex.mpi-inf.mpg.de", "license": "CC BY 4.0", "features": {"domain": {"dtype": "string", "id": null, "_type": "Value"}, "seed_entity": {"dtype": "string", "id": null, "_type": "Value"}, "seed_entity_text": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answers": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "answer_texts": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "conv_questions", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3589880, "num_examples": 6720, "dataset_name": "conv_questions"}, "validation": {"name": "validation", "num_bytes": 1241778, "num_examples": 2240, "dataset_name": "conv_questions"}, "test": {"name": "test", "num_bytes": 1175656, "num_examples": 2240, "dataset_name": "conv_questions"}}, "download_checksums": {"http://qa.mpi-inf.mpg.de/convex/ConvQuestions_train.zip": {"num_bytes": 2139687, "checksum": "093b7ea4106501035e5954213fda6111d0e4747011e8efa558765f2a9705d651"}, "http://qa.mpi-inf.mpg.de/convex/ConvQuestions_dev.zip": {"num_bytes": 594329, "checksum": "91faf376a5f702734c78033e2f357c507291cc3c85d9fda39e65c366f0abc7fd"}, "http://qa.mpi-inf.mpg.de/convex/ConvQuestions_test.zip": {"num_bytes": 542001, "checksum": "698e2a1761b9a0bff6490ccc735df8a1be9b85a7bbd8ed451a1b81ff5a1df28d"}}, "download_size": 3276017, "post_processing_size": null, "dataset_size": 6007314, "size_in_bytes": 9283331}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ca32fa0fd1735802f16e21ae69ba0ce29e6fab379ff0b3409fb056c7a7e725c
+ size 24754