Datasets: hfl/cmrc2018
Modalities: Text
Formats: parquet
Sub-tasks: extractive-qa
Languages: Chinese
Libraries: Datasets, pandas
License: cc-by-sa-4.0

parquet-converter committed
Commit 1bc7215
1 Parent(s): 725d634

Update parquet files
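After this commit the dataset is served from the parquet files below. A minimal loading sketch, assuming the repository id is hfl/cmrc2018 (the org comes from the header above, the name from the dataset card below):

```python
from datasets import load_dataset

# Repo id assumed: hfl/cmrc2018.
ds = load_dataset("hfl/cmrc2018")
print(ds)  # DatasetDict with train (10142), validation (3219) and test (1002) examples
```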
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
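Each deleted rule routed a glob of binary artifacts (including *.parquet) through Git LFS. For simple patterns like these, Python's fnmatch approximates the matching, so a quick illustration of which files the rules would catch (a sketch, not how git evaluates .gitattributes internally):

```python
from fnmatch import fnmatch

# A few of the deleted LFS rules, as plain globs.
lfs_globs = ["*.parquet", "*.h5", "*.zip", "*tfevents*"]

for name in ["cmrc2018-train.parquet", "README.md"]:
    via_lfs = any(fnmatch(name, g) for g in lfs_globs)
    print(f"{name}: {'LFS pointer' if via_lfs else 'regular git object'}")
```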
README.md DELETED
@@ -1,227 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - crowdsourced
- language:
- - zh
- license:
- - cc-by-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids:
- - extractive-qa
- paperswithcode_id: cmrc-2018
- pretty_name: Chinese Machine Reading Comprehension 2018
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: context
-     dtype: string
-   - name: question
-     dtype: string
-   - name: answers
-     sequence:
-     - name: text
-       dtype: string
-     - name: answer_start
-       dtype: int32
-   splits:
-   - name: train
-     num_bytes: 15508110
-     num_examples: 10142
-   - name: validation
-     num_bytes: 5183809
-     num_examples: 3219
-   - name: test
-     num_bytes: 1606931
-     num_examples: 1002
-   download_size: 11508117
-   dataset_size: 22298850
- ---
-
- # Dataset Card for "cmrc2018"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/ymcui/cmrc2018](https://github.com/ymcui/cmrc2018)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 10.97 MB
- - **Size of the generated dataset:** 21.28 MB
- - **Total amount of disk used:** 32.26 MB
-
- ### Dataset Summary
-
- A span-extraction dataset for Chinese machine reading comprehension, created to add
- language diversity in this area. The dataset consists of nearly 20,000 real questions
- annotated on Wikipedia paragraphs by human experts. A challenge set is also annotated,
- containing questions that require comprehensive understanding and multi-sentence
- inference across the context.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 10.97 MB
- - **Size of the generated dataset:** 21.28 MB
- - **Total amount of disk used:** 32.26 MB
-
- An example from the 'validation' split looks as follows (the context was cropped for display):
- ```
- {
-     "answers": {
-         "answer_start": [11, 11],
-         "text": ["光荣和ω-force", "光荣和ω-force"]
-     },
-     "context": "\"《战国无双3》()是由光荣和ω-force开发的战国无双系列的正统第三续作。本作以三大故事为主轴,分别是以武田信玄等人为主的《关东三国志》,织田信长等人为主的《战国三杰》,石田三成等人为主的《关原的年轻武者》,丰富游戏内的剧情。此部份专门介绍角色,欲知武...",
-     "id": "DEV_0_QUERY_0",
-     "question": "《战国无双3》是由哪两个公司合作开发的?"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same across all splits.
-
- #### default
- - `id`: a `string` feature.
- - `context`: a `string` feature.
- - `question`: a `string` feature.
- - `answers`: a dictionary feature containing:
-   - `text`: a `string` feature.
-   - `answer_start`: an `int32` feature.
-
- ### Data Splits
-
- | name    | train | validation | test |
- | ------- | ----: | ---------: | ---: |
- | default | 10142 |       3219 | 1002 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @inproceedings{cui-emnlp2019-cmrc2018,
-     title = "A Span-Extraction Dataset for {C}hinese Machine Reading Comprehension",
-     author = "Cui, Yiming and
-       Liu, Ting and
-       Che, Wanxiang and
-       Xiao, Li and
-       Chen, Zhipeng and
-       Ma, Wentao and
-       Wang, Shijin and
-       Hu, Guoping",
-     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
-     month = nov,
-     year = "2019",
-     address = "Hong Kong, China",
-     publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/D19-1600",
-     doi = "10.18653/v1/D19-1600",
-     pages = "5886--5891",
- }
- ```
-
- ### Contributions
-
- Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
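The `answers` field documented above pairs each answer string with a character offset into `context`. A quick sanity check of that invariant, sketched with SQuAD-style offsets assumed and the repo id hfl/cmrc2018:

```python
from datasets import load_dataset

ds = load_dataset("hfl/cmrc2018")  # repo id assumed
ex = ds["validation"][0]

# `answer_start` should be a character offset of the answer inside `context`.
start = ex["answers"]["answer_start"][0]
text = ex["answers"]["text"][0]
assert ex["context"][start:start + len(text)] == text
```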
cmrc2018.py DELETED
@@ -1,123 +0,0 @@
- """CMRC 2018: A Span-Extraction Dataset for Chinese Machine Reading Comprehension."""
-
-
- import json
-
- import datasets
- from datasets.tasks import QuestionAnsweringExtractive
-
-
- # BibTeX citation for the CMRC 2018 paper.
- _CITATION = """\
- @inproceedings{cui-emnlp2019-cmrc2018,
-     title = {A Span-Extraction Dataset for {C}hinese Machine Reading Comprehension},
-     author = {Cui, Yiming and
-       Liu, Ting and
-       Che, Wanxiang and
-       Xiao, Li and
-       Chen, Zhipeng and
-       Ma, Wentao and
-       Wang, Shijin and
-       Hu, Guoping},
-     booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
-     month = {nov},
-     year = {2019},
-     address = {Hong Kong, China},
-     publisher = {Association for Computational Linguistics},
-     url = {https://www.aclweb.org/anthology/D19-1600},
-     doi = {10.18653/v1/D19-1600},
-     pages = {5886--5891}}
- """
-
- _DESCRIPTION = """\
- A span-extraction dataset for Chinese machine reading comprehension, created to add
- language diversity in this area. The dataset consists of nearly 20,000 real questions
- annotated on Wikipedia paragraphs by human experts. A challenge set is also annotated,
- containing questions that require comprehensive understanding and multi-sentence
- inference across the context.
- """
- _URL = "https://github.com/ymcui/cmrc2018"
- _TRAIN_FILE = "https://worksheets.codalab.org/rest/bundles/0x15022f0c4d3944a599ab27256686b9ac/contents/blob/"
- _DEV_FILE = "https://worksheets.codalab.org/rest/bundles/0x72252619f67b4346a85e122049c3eabd/contents/blob/"
- _TEST_FILE = "https://worksheets.codalab.org/rest/bundles/0x182c2e71fac94fc2a45cc1a3376879f7/contents/blob/"
-
-
- class Cmrc2018(datasets.GeneratorBasedBuilder):
-     """Span-extraction machine reading comprehension over Chinese Wikipedia paragraphs."""
-
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "context": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "answers": datasets.features.Sequence(
-                         {
-                             "text": datasets.Value("string"),
-                             "answer_start": datasets.Value("int32"),
-                         }
-                     ),
-                 }
-             ),
-             # No canonical (input, target) pair, so as_supervised is not supported.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation.
-             homepage=_URL,
-             citation=_CITATION,
-             task_templates=[
-                 QuestionAnsweringExtractive(
-                     question_column="question", context_column="context", answers_column="answers"
-                 )
-             ],
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators for the train/validation/test splits."""
-         urls_to_download = {"train": _TRAIN_FILE, "dev": _DEV_FILE, "test": _TEST_FILE}
-         downloaded_files = dl_manager.download_and_extract(urls_to_download)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields (key, example) tuples from a SQuAD-style JSON file."""
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-             for example in data["data"]:
-                 for paragraph in example["paragraphs"]:
-                     context = paragraph["context"].strip()
-                     for qa in paragraph["qas"]:
-                         question = qa["question"].strip()
-                         id_ = qa["id"]
-
-                         answer_starts = [answer["answer_start"] for answer in qa["answers"]]
-                         answers = [answer["text"].strip() for answer in qa["answers"]]
-
-                         yield id_, {
-                             "context": context,
-                             "question": question,
-                             "id": id_,
-                             "answers": {
-                                 "answer_start": answer_starts,
-                                 "text": answers,
-                             },
-                         }
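This loading script is what the parquet conversion retires: after this commit, `load_dataset` reads the committed parquet files directly instead of executing repository code. A sketch of both paths (repo id assumed; `trust_remote_code` is how recent `datasets` releases gate script execution):

```python
from datasets import load_dataset

# After this commit: served from the parquet files, no script execution.
ds = load_dataset("hfl/cmrc2018")

# Before this commit (parent 725d634), the same call executed cmrc2018.py;
# recent `datasets` releases require an explicit opt-in for that:
# ds = load_dataset("hfl/cmrc2018", revision="725d634", trust_remote_code=True)
```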
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "A Span-Extraction dataset for Chinese machine reading comprehension to add language\ndiversities in this area. The dataset is composed by near 20,000 real questions annotated\non Wikipedia paragraphs by human experts. We also annotated a challenge set which\ncontains the questions that need comprehensive understanding and multi-sentence\ninference throughout the context.\n", "citation": "@inproceedings{cui-emnlp2019-cmrc2018,\n title = {A Span-Extraction Dataset for {C}hinese Machine Reading Comprehension},\n author = {Cui, Yiming and\n Liu, Ting and\n Che, Wanxiang and\n Xiao, Li and\n Chen, Zhipeng and\n Ma, Wentao and\n Wang, Shijin and\n Hu, Guoping},\n booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},\n month = {nov},\n year = {2019},\n address = {Hong Kong, China},\n publisher = {Association for Computational Linguistics},\n url = {https://www.aclweb.org/anthology/D19-1600},\n doi = {10.18653/v1/D19-1600},\n pages = {5886--5891}}\n", "homepage": "https://github.com/ymcui/cmrc2018", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "answer_start": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "question-answering-extractive", "question_column": "question", "context_column": "context", "answers_column": "answers"}], "builder_name": "cmrc2018", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 15508110, "num_examples": 10142, "dataset_name": "cmrc2018"}, "validation": {"name": "validation", "num_bytes": 5183809, "num_examples": 3219, "dataset_name": "cmrc2018"}, "test": {"name": "test", "num_bytes": 1606931, "num_examples": 1002, "dataset_name": "cmrc2018"}}, "download_checksums": {"https://worksheets.codalab.org/rest/bundles/0x15022f0c4d3944a599ab27256686b9ac/contents/blob/": {"num_bytes": 7408757, "checksum": "5497aa2f81908e31d6b0e27d99b1f90ab63a8f58fa92fffe5d17cf62eba0c212"}, "https://worksheets.codalab.org/rest/bundles/0x72252619f67b4346a85e122049c3eabd/contents/blob/": {"num_bytes": 3299139, "checksum": "e9ff74231f05c230c6fa88b84441ee334d97234cbb610991cd94b82db00c7f1f"}, "https://worksheets.codalab.org/rest/bundles/0x182c2e71fac94fc2a45cc1a3376879f7/contents/blob/": {"num_bytes": 800221, "checksum": "f3fae95b57da8e03afb2b57467dd221417060ef4d82db13bf22fc88589f3a6f3"}}, "download_size": 11508117, "post_processing_size": null, "dataset_size": 22298850, "size_in_bytes": 33806967}}
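Two consistency checks fall out of the deleted metadata: `download_size` is the sum of the three `download_checksums` byte counts, and `dataset_size` is the sum of the per-split `num_bytes`. Verified with the figures above:

```python
# Byte counts copied from the deleted dataset_infos.json.
download_bytes = [7408757, 3299139, 800221]  # train, dev, test source files
split_bytes = [15508110, 5183809, 1606931]   # train, validation, test splits

assert sum(download_bytes) == 11508117  # download_size
assert sum(split_bytes) == 22298850     # dataset_size
```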
default/cmrc2018-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:edf7ee89d29a2b916993f897d7fab19c687351c3d8de786011aa7df25c6d15f3
+ size 394652
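Each ADDED file is committed as a Git LFS pointer: three `key value` lines giving the spec version, the sha256 oid of the actual parquet content, and its size in bytes. A tiny parser sketch (the local path is hypothetical):

```python
def read_lfs_pointer(path: str) -> dict:
    """Parse a Git LFS pointer file into a {key: value} dict."""
    with open(path, encoding="utf-8") as f:
        return dict(line.strip().split(" ", 1) for line in f if line.strip())

# Hypothetical checkout path of the pointer shown above.
ptr = read_lfs_pointer("default/cmrc2018-test.parquet")
print(ptr["oid"], ptr["size"])  # sha256:edf7ee... 394652
```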
default/cmrc2018-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6344fd7b9b2f1629ed33f526f3f11121fc2a263186e1ac2733b37bbc8c08cff5
+ size 3365759
default/cmrc2018-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0bf63bc2a1548d392ad8eb0226a94b5e46e5ba7fc5a17bc52408e83ccf0ad58
+ size 1136060
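With the conversion in place, the splits can also be read without the `datasets` library: pandas resolves `hf://` paths through the Hub's fsspec integration (requires `huggingface_hub` to be installed; repo id and layout assumed from this commit):

```python
import pandas as pd

# Stream the converted train split straight from the Hub.
train = pd.read_parquet("hf://datasets/hfl/cmrc2018/default/cmrc2018-train.parquet")
print(train.shape)  # expected (10142, 4): id, context, question, answers
```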