system (HF staff) committed on

Commit b8bd3fd (0 parents)

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,181 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - id
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - summarization
+ ---
+
+ # Dataset Card for Large-scale Indonesian Summarization
+
+ ## Table of Contents
+
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [IndoLEM (Indonesian Language Evaluation Montage)](https://indolem.github.io/)
+ - **Repository:** [Liputan6: Summarization Corpus for Indonesian](https://github.com/fajri91/sum_liputan6/)
+ - **Paper:** https://arxiv.org/abs/2011.00679
+ - **Leaderboard:**
+ - **Point of Contact:** [Fajri Koto](mailto:[email protected]),
+ [Jey Han Lau](mailto:[email protected]), [Timothy Baldwin](mailto:[email protected])
+
+ ### Dataset Summary
+
+ In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com,
+ an online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop
+ benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual
+ BERT-based models. We include a thorough error analysis by examining machine-generated summaries that have
+ low ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive
+ summarization models.
+
+ The dataset has two variants: "canonical" and "xtreme". The "xtreme" variant discards development and test
+ document-summary pairs where the summary has fewer than 90% novel 4-grams (the training data remains the same
+ as the canonical variant).
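+
+ To make the criterion concrete, here is a minimal sketch of a novel 4-gram ratio check (an illustration of
+ the idea described above, not the authors' exact filtering script; tokenization details are assumed):
+
+ ```
+ # Hypothetical helper illustrating the "xtreme" novelty criterion.
+ def novel_ngram_ratio(summary_tokens, article_tokens, n=4):
+     def ngrams(tokens):
+         return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
+     summary_ngrams = ngrams(summary_tokens)
+     if not summary_ngrams:
+         return 0.0
+     # 4-grams of the summary that never appear in the article are "novel".
+     return len(summary_ngrams - ngrams(article_tokens)) / len(summary_ngrams)
+
+ # A dev/test pair would be kept in "xtreme" only if the ratio is >= 0.9.
+ ```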
+
+ You need to manually request the Liputan6 dataset using the form at https://github.com/fajri91/sum_liputan6/
+ and uncompress it. The dataset can then be loaded with
+ `datasets.load_dataset("id_liputan6", 'canonical', data_dir="<path/to/uncompressed_folder>")` or
+ `datasets.load_dataset("id_liputan6", 'xtreme', data_dir="<path/to/uncompressed_folder>")`.
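+
+ For example, a minimal loading sketch (the `data_dir` value is a placeholder for wherever you
+ uncompressed the archive):
+
+ ```
+ from datasets import load_dataset
+
+ # Requires the manually downloaded data; see the instructions above.
+ liputan6 = load_dataset("id_liputan6", "canonical", data_dir="<path/to/uncompressed_folder>")
+ print(liputan6["train"][0]["clean_summary"])
+ ```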
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Indonesian
+
+ ## Dataset Structure
+
+ ```
+ {
+   'id': 'string',
+   'url': 'string',
+   'clean_article': 'string',
+   'clean_summary': 'string',
+   'extractive_summary': 'string'
+ }
+ ```
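+
+ These fields correspond to the `datasets.Features` declaration in the loading script (all plain strings):
+
+ ```
+ import datasets
+
+ features = datasets.Features(
+     {
+         "id": datasets.Value("string"),
+         "url": datasets.Value("string"),
+         "clean_article": datasets.Value("string"),
+         "clean_summary": datasets.Value("string"),
+         "extractive_summary": datasets.Value("string"),
+     }
+ )
+ ```
+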
+ ### Data Instances
+
+ An example of the dataset:
+
+ ```
+ {
+   'clean_article': 'Liputan6.com, Ambon: Partai Bulan Bintang wilayah Maluku bertekad membantu pemerintah menyelesaikan konflik di provinsi tersebut. Syaratnya, penanganan penyelesaian konflik Maluku harus dimulai dari awal kerusuhan, yakni 19 Januari 1999. Demikian hasil Musyawarah Wilayah I PBB Maluku yang dimulai Sabtu pekan silam dan berakhir Senin (31/12) di Ambon. Menurut seorang fungsionaris PBB Ridwan Hasan, persoalan di Maluku bisa selesai asalkan pemerintah dan aparat keamanan serius menangani setiap persoalan di Maluku secara komprehensif dan bijaksana. Itulah sebabnya, PBB wilayah Maluku akan menjadikan penyelesaian konflik sebagai agenda utama partai. PBB Maluku juga akan mendukung penegakan hukum secara terpadu dan tanpa pandang bulu. Siapa saja yang melanggar hukum harus ditindak. Ridwan berharap, Ketua PBB Maluku yang baru, Ali Fauzi, dapat menindak lanjuti agenda politik partai yang telah diamanatkan dan mau mendukung penegakan hukum di Maluku. (ULF/Sahlan Heluth).',
+   'clean_summary': 'Konflik Ambon telah berlangsung selama tiga tahun. Partai Bulan Bintang wilayah Maluku siap membantu pemerintah menyelesaikan kasus di provinsi tersebut.',
+   'extractive_summary': 'Liputan6.com, Ambon: Partai Bulan Bintang wilayah Maluku bertekad membantu pemerintah menyelesaikan konflik di provinsi tersebut. Siapa saja yang melanggar hukum harus ditindak.',
+   'id': '26408',
+   'url': 'https://www.liputan6.com/news/read/26408/pbb-siap-membantu-penyelesaian-konflik-ambon'
+ }
+ ```
+
+ ### Data Fields
+
+ - `id`: the id of the sample
+ - `url`: the URL of the original article
+ - `clean_article`: the full text of the original article
+ - `clean_summary`: the abstractive summary
+ - `extractive_summary`: the extractive summary
+
+ ### Data Splits
+
+ The dataset is split into train, validation, and test sets. Per `dataset_infos.json`, the canonical config
+ has 193,883 train, 10,972 validation, and 10,972 test examples; the xtreme config provides 4,948 validation
+ and 3,862 test examples (for training, use the canonical train split).
+
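+ A quick way to check the split sizes after loading (continuing the sketch above; `liputan6` is the
+ `DatasetDict` returned by `load_dataset`):
+
+ ```
+ for split_name, split in liputan6.items():
+     print(split_name, split.num_rows)
+ ```
+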
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @inproceedings{Koto2020Liputan6AL,
+   title={Liputan6: A Large-scale Indonesian Dataset for Text Summarization},
+   author={Fajri Koto and Jey Han Lau and Timothy Baldwin},
+   booktitle={AACL/IJCNLP},
+   year={2020}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"canonical": {"description": "In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com,\nan online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop\nbenchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual\nBERT-based models. We include a thorough error analysis by examining machine-generated summaries that have\nlow ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive\nsummarization models.\n", "citation": "@inproceedings{id_liputan6,\n author = {Fajri Koto, Jey Han Lau, Timothy Baldwin},\n title = {Liputan6: A Large-scale Indonesian Dataset for Text Summarization},\n year = {2020},\n url = {https://arxiv.org/abs/2011.00679},\n}\n", "homepage": "https://arxiv.org/abs/2011.00679", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "clean_article": {"dtype": "string", "id": null, "_type": "Value"}, "clean_summary": {"dtype": "string", "id": null, "_type": "Value"}, "extractive_summary": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "id_liputan6", "config_name": "canonical", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 20944658, "num_examples": 10972, "dataset_name": "id_liputan6"}, "test": {"name": "test", "num_bytes": 20526768, "num_examples": 10972, "dataset_name": "id_liputan6"}, "train": {"name": "train", "num_bytes": 382245586, "num_examples": 193883, "dataset_name": "id_liputan6"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 423717012, "size_in_bytes": 423717012}, "xtreme": {"description": "In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com,\nan online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop\nbenchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual\nBERT-based models. We include a thorough error analysis by examining machine-generated summaries that have\nlow ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive\nsummarization models.\n", "citation": "@inproceedings{id_liputan6,\n author = {Fajri Koto, Jey Han Lau, Timothy Baldwin},\n title = {Liputan6: A Large-scale Indonesian Dataset for Text Summarization},\n year = {2020},\n url = {https://arxiv.org/abs/2011.00679},\n}\n", "homepage": "https://arxiv.org/abs/2011.00679", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "clean_article": {"dtype": "string", "id": null, "_type": "Value"}, "clean_summary": {"dtype": "string", "id": null, "_type": "Value"}, "extractive_summary": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "id_liputan6", "config_name": "xtreme", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 9652946, "num_examples": 4948, "dataset_name": "id_liputan6"}, "test": {"name": "test", "num_bytes": 7574550, "num_examples": 3862, "dataset_name": "id_liputan6"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 17227496, "size_in_bytes": 17227496}}
dummy/canonical/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f631dc2bdb5daf6217b37c8b1d1e12c6653f88bd78e7c946b0e7aff90a6482c8
+ size 14764
dummy/xtreme/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f631dc2bdb5daf6217b37c8b1d1e12c6653f88bd78e7c946b0e7aff90a6482c8
+ size 14764
id_liputan6.py ADDED
@@ -0,0 +1,175 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Large-scale Indonesian Summarization Dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import glob
+ import json
+ import logging
+ import os
+ import re
+ from pathlib import Path
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{id_liputan6,
+   author = {Fajri Koto, Jey Han Lau, Timothy Baldwin},
+   title = {Liputan6: A Large-scale Indonesian Dataset for Text Summarization},
+   year = {2020},
+   url = {https://arxiv.org/abs/2011.00679},
+ }
+ """
+
+ _DESCRIPTION = """\
+ In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com,
+ an online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop
+ benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual
+ BERT-based models. We include a thorough error analysis by examining machine-generated summaries that have
+ low ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive
+ summarization models.
+ """
+
+ _HOMEPAGE = "https://arxiv.org/abs/2011.00679"
+
+ _LICENSE = ""
+
+
+ class IdLiputan6Config(datasets.BuilderConfig):
+     """BuilderConfig for IdLiputan6"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for IdLiputan6.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(IdLiputan6Config, self).__init__(**kwargs)
+
+
+ class IdLiputan6(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         IdLiputan6Config(
+             name="canonical",
+             version=VERSION,
+             description="Canonical Liputan6 dataset",
+         ),
+         IdLiputan6Config(
+             name="xtreme",
+             version=VERSION,
+             description="Xtreme Liputan6 dataset",
+         ),
+     ]
+
+     @property
+     def manual_download_instructions(self):
+         return """\
+     You need to manually request the Liputan6 dataset using the form at https://github.com/fajri91/sum_liputan6/
+     and uncompress it. The dataset can then be loaded with
+     `datasets.load_dataset("id_liputan6", 'canonical', data_dir="<path/to/uncompressed_folder>")` or
+     `datasets.load_dataset("id_liputan6", 'xtreme', data_dir="<path/to/uncompressed_folder>")`.
+     """
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("string"),
+                 "url": datasets.Value("string"),
+                 "clean_article": datasets.Value("string"),
+                 "clean_summary": datasets.Value("string"),
+                 "extractive_summary": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # The manually downloaded archive is expected to contain {config}/dev
+         # and {config}/test folders (plus {config}/train for "canonical") of
+         # per-article JSON files.
+         data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+         if not os.path.exists(data_dir):
+             raise FileNotFoundError(
+                 "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('id_liputan6', "
+                 "'canonical', data_dir=...)`. Manual download instructions:\n{}".format(
+                     data_dir, self.manual_download_instructions
+                 )
+             )
+         split_generators = [
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "article_dir": os.path.join(data_dir, "{}/dev".format(self.config.name)),
+                     "split": "dev",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "article_dir": os.path.join(data_dir, "{}/test".format(self.config.name)),
+                     "split": "test",
+                 },
+             ),
+         ]
+         # Only the canonical config ships a training split; xtreme reuses it.
+         if self.config.name == "canonical":
+             split_generators.append(
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "article_dir": os.path.join(data_dir, "{}/train".format(self.config.name)),
+                         "split": "train",
+                     },
+                 )
+             )
+         return split_generators
+
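+     # Each per-article JSON file is expected to look roughly like the
+     # following (an assumption inferred from the parsing code below, not an
+     # official schema): sentences are lists of tokens, and
+     # "extractive_summary" holds sentence indices into "clean_article".
+     #
+     # {
+     #     "id": 26408,
+     #     "url": "https://www.liputan6.com/...",
+     #     "clean_article": [["Liputan6", ".", "com", ",", "Ambon", ...], ...],
+     #     "clean_summary": [["Konflik", "Ambon", ...], ...],
+     #     "extractive_summary": [0, 6]
+     # }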
+     def _generate_examples(self, article_dir, split):
+         # Regex pairs that undo the corpus tokenization, e.g.
+         # 'Liputan6 . com , Ambon : " Halo " ( tes )' ->
+         # 'Liputan6.com, Ambon: "Halo" (tes)'.
+         detokenizers = [
+             [re.compile(r"([Ll])iputan6 . com "), r"\1iputan6.com"],
+             [re.compile(r" ([.,:])"), r"\1"],
+             [re.compile(r"\( ([^)]+) \)"), r"(\1)"],
+             [re.compile(r"\" ([^\"]+) \""), r'"\1"'],
+             [re.compile(r"\[ ([^]]+) ]"), r"[\1]"],
+         ]
+         logging.info("⏳ Generating %s examples from = %s", split, article_dir)
+         guid = 0
+         # Article files are named by integer id (e.g. 26408.json), so sort numerically.
+         for path in sorted(
+             glob.glob(os.path.join(article_dir, "**/*.json"), recursive=True), key=lambda p: int(Path(p).stem)
+         ):
+             with open(path, encoding="utf-8") as f:
+                 data = json.load(f)
+             clean_article = " ".join([" ".join(i) for i in data["clean_article"]])
+             for d in detokenizers:
+                 clean_article = d[0].sub(d[1], clean_article)
+             clean_summary = " ".join([" ".join(i) for i in data["clean_summary"]])
+             for d in detokenizers:
+                 clean_summary = d[0].sub(d[1], clean_summary)
+             # The extractive summary is rebuilt from the indexed article sentences.
+             extractive_summary = " ".join([" ".join(data["clean_article"][i]) for i in data["extractive_summary"]])
+             for d in detokenizers:
+                 extractive_summary = d[0].sub(d[1], extractive_summary)
+             yield guid, {
+                 "id": str(data["id"]),
+                 "url": data["url"],
+                 "clean_article": clean_article,
+                 "clean_summary": clean_summary,
+                 "extractive_summary": extractive_summary,
+             }
+             guid += 1