Yeb Havinga committed on
Commit 69c5645
1 Parent(s): 8738eb3

Add README.md

README.md ADDED
---
pretty_name: mC4_nl_cleaned
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- nl
licenses:
- odc-by-1.0
multilinguality:
- monolingual
size_categories:
  tiny:
  - 1M<n<10M
  small:
  - 10M<n<100M
  medium:
  - 10M<n<100M
  large:
  - 10M<n<100M
  full:
  - 100M<n<1B
source_datasets:
- extended
task_categories:
- sequence-modeling
task_ids:
- language-modeling
paperswithcode_id: mc4
---

# Dataset Card for Clean Dutch mC4 🇳🇱

## Table of Contents

- [Dataset Card for Clean Dutch mC4](#dataset-card-for-clean-dutch-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Preprocessing](#preprocessing)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)

### Dataset Summary

A thoroughly cleaned version of the Dutch split of the multilingual colossal, cleaned version of Common Crawl's web crawl corpus (mC4),
based on the [Common Crawl dataset](https://commoncrawl.org).
The original version was prepared by [AllenAI](https://allenai.org/) and is hosted at [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).

### Preprocessing

The Dutch portion of mC4 was cleaned in a similar fashion to the cleaned English C4 version.
See [GitLab](https://gitlab.com/yhavinga/c4nlpreproc) for details.

In summary, the preprocessing procedure includes:

- Removing documents containing words from a selection of the [Dutch and English List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words).

- Removing sentences containing:

  - Fewer than 3 words.

  - A word longer than 1000 characters.

  - An end symbol not matching end-of-sentence punctuation.

  - Strings associated with JavaScript code (e.g. `{`), lorem ipsum, or policy information in Dutch or English.

- Removing documents (after sentence filtering):

  - Containing fewer than 5 sentences.

  - Containing fewer than 500 or more than 50,000 characters.

  - Not identified as predominantly Dutch by the `LangDetect` package.

Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch
shards of mC4 (1024 of ~220MB train, 8 of ~24MB validation) required roughly 10 hours due to the demanding steps of sentence
tokenization and language detection. The total size of compressed `.json.gz` files is roughly halved after the procedure.

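The sentence- and document-level filters above can be sketched as follows. This is a minimal illustration, not the actual pipeline: the real preprocessing (see the GitLab repository linked above) uses proper sentence tokenization and `LangDetect` for language identification, whereas here a naive regex split stands in and the language check is omitted.

```python
import re
from typing import Optional

# Thresholds taken from the filter list above.
MIN_WORDS_PER_SENTENCE = 3
MAX_WORD_LENGTH = 1000
MIN_SENTENCES_PER_DOC = 5
MIN_DOC_CHARS, MAX_DOC_CHARS = 500, 50_000
SENTENCE_END = ('.', '!', '?', '"', "'")  # accepted end-of-sentence symbols


def keep_sentence(sentence: str) -> bool:
    """Sentence-level filters: word count, word length, end punctuation, junk strings."""
    words = sentence.split()
    if len(words) < MIN_WORDS_PER_SENTENCE:
        return False
    if any(len(word) > MAX_WORD_LENGTH for word in words):
        return False
    if not sentence.rstrip().endswith(SENTENCE_END):
        return False
    if '{' in sentence or 'lorem ipsum' in sentence.lower():
        return False
    return True


def clean_document(text: str) -> Optional[str]:
    """Drop bad sentences, then apply the document-level checks.

    Returns the cleaned text, or None if the document should be removed.
    """
    # Naive sentence split on end punctuation followed by whitespace.
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', text) if keep_sentence(s)]
    if len(sentences) < MIN_SENTENCES_PER_DOC:
        return None
    cleaned = ' '.join(sentences)
    if not MIN_DOC_CHARS <= len(cleaned) <= MAX_DOC_CHARS:
        return None
    return cleaned
```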
## Dataset Structure

### Data Instances

An example from the dataset:

```
{
  'timestamp': '2019-02-22T15:37:25Z',
  'url': 'https://ondernemingen.bnpparibasfortis.be/nl/artikel?n=vijf-gouden-tips-voor-succesvol-zaken-doen-met-japan',
  'text': 'Japanse bedrijven zijn niet alleen hondstrouw aan hun leveranciers , ze betalen ook nog eens erg stipt. Alleen is het niet zo makkelijk er een voet tussen de deur te krijgen. Met de volgende tips hebt u alvast een streepje voor.\nIn Japan draait alles om vertrouwen. Neem voldoende tijd om een relatie op te bouwen.Aarzel niet om tijdig een lokale vertrouwenspersoon in te schakelen.\nJapan is een erg competitieve markt.Kwaliteit en prijs zijn erg belangrijk, u zult dus het beste van uzelf moeten geven. Gelukkig is de beloning groot. Japanse zakenlui zijn loyaal en betalen stipt!\nJapanners houden er eigenzinnige eisen op na. Kom dus niet aanzetten met uw standaardproducten voor de Europese markt. Zo moet een producent van diepvriesfrieten bijvoorbeeld perfect identieke frietjes kunnen leveren in mini- verpakkingen. Het goede nieuws is dat Japanners voor kwaliteit graag diep in hun buidel tasten.\nEn u dacht dat Europa lijdt aan reglementitis? Japanners kennen er ook wat van. Tal van voorschriften zeggen wat je wel en niet mag doen. Gelukkig zijn de regels helder geformuleerd.\nHet gebruik van het Engels is niet echt ingeburgerd in Japan. Betrek een tolk bij uw onderhandelingen en zorg voor correcte vertalingen van handleidingen of softwareprogramma’s.'
}
```

### Data Fields

The data contains the following fields:

- `url`: URL of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string

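As a small illustration of the schema above: all three fields are plain strings, and the `timestamp` uses an ISO-8601 format with a `Z` (UTC) suffix that can be parsed explicitly. The values below are stand-ins modelled on the example instance, not real records.

```python
from datetime import datetime, timezone

# Stand-in record with the three fields described above.
record = {
    "url": "https://example.nl/artikel",
    "text": "Voorbeeldtekst van een document.",
    "timestamp": "2019-02-22T15:37:25Z",
}

# Every field is a plain string.
assert all(isinstance(value, str) for value in record.values())

# Parse the ISO-8601 timestamp ('Z' marks UTC).
parsed = datetime.strptime(record["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
parsed = parsed.replace(tzinfo=timezone.utc)
print(parsed.isoformat())  # 2019-02-22T15:37:25+00:00
```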
### Data Splits

To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. For Dutch, the whole corpus of scraped text was divided into `1032` jsonl files, `1024` for training following the naming style `c4-nl.tfrecord-0XXXX-of-01024.json.gz` and 8 for validation following the naming style `c4-nl-validation.tfrecord-0000X-of-00008.json.gz`. The full set of preprocessed files takes roughly 215GB of disk space to download with Git LFS.

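The shards are gzip-compressed JSON Lines files, so a downloaded shard can also be inspected without the `datasets` library. A minimal sketch (the local path below is hypothetical; adjust it to wherever you cloned the repository):

```python
import gzip
import json


def iter_shard(path):
    """Yield one record (url, text, timestamp) per line of a .json.gz shard."""
    with gzip.open(path, mode="rt", encoding="utf-8") as handle:
        for line in handle:
            yield json.loads(line)


# Hypothetical local path following the naming style described above.
shard_path = "mc4_nl_cleaned/train/c4-nl.tfrecord-00000-of-01024.json.gz"
# for record in iter_shard(shard_path):
#     print(record["url"], len(record["text"]))
```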
For ease of use under different storage capacities, the following incremental splits are available (sizes are estimates). **Important**: the sizes in GB are estimates of the download size plus the preprocessed disk space, as shown in the table header.

|split |train size (docs, words, download + preproc disk space)|validation size|
|:-----|------------------------------------------------------:|--------------:|
|tiny  | 10M docs, 4B words (9 GB + 27 GB)                      | 12k docs      |
|small | 20M docs, 8B words (18 GB + 54 GB)                     | 24k docs      |
|medium| 50M docs, 20B words (47 GB + 135 GB)                   | 48k docs      |
|large | 75M docs, 30B words (71 GB + 203 GB)                   | 72k docs      |
|full  | 103M docs, 41B words (109 GB + 279 GB)                 | 96k docs      |

You can load any subset like this:

```python
from datasets import load_dataset

datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny')
```

Since splits are quite large, you may want to traverse them using the streaming mode available starting from Datasets v1.9.0:

```python
from datasets import load_dataset

mc4_nl_full_stream = load_dataset('yhavinga/mc4_nl_cleaned', "full", split='train', streaming=True)
print(next(iter(mc4_nl_full_stream)))  # Prints the example presented above
```

## Dataset Creation

Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.

## Considerations for Using the Data

### Social Impact of Dataset

With more than 200GB of cleaned Dutch text and more than 41B estimated words, this is by far the largest available corpus for the Dutch language.
The second-largest available dataset is [OSCAR](https://oscar-corpus.com/), which is only 39GB in size in its deduplicated variant.
Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performance observed for the English language.
This can in turn have important repercussions for the development of commercial language technology applications for the Dutch language.

### Discussion of Biases

Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will
inevitably reflect biases present in blog articles and comments on the Internet.
This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.

## Additional Information

### Dataset Curators

Authors at AllenAI are the original curators of the `mc4` corpus.
For inquiries or requests regarding the cleaned Dutch portion contained in this repository, please contact me at [[email protected]](mailto:[email protected])

### Licensing Information

AllenAI is releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.

### Citation Information

```
@article{2019t5,
  author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  journal = {arXiv e-prints},
  year = {2019},
  archivePrefix = {arXiv},
  eprint = {1910.10683},
}
```

### Contributions

Thanks to [[email protected]](mailto:[email protected]), [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for
providing the `cleaned_it_mc4` example that shows how to upload a dataset to the Hugging Face Hub.
mc4_nl_cleaned/validation/{cleaned_c4-nl-validation.tfrecord-00000-of-00004.json.gz → c4-nl-cleaned-validation.tfrecord-00000-of-00004.json.gz} RENAMED
File without changes
mc4_nl_cleaned/validation/{cleaned_c4-nl-validation.tfrecord-00001-of-00004.json.gz → c4-nl-cleaned-validation.tfrecord-00001-of-00004.json.gz} RENAMED
File without changes
mc4_nl_cleaned/validation/{cleaned_c4-nl-validation.tfrecord-00002-of-00004.json.gz → c4-nl-cleaned-validation.tfrecord-00002-of-00004.json.gz} RENAMED
File without changes
mc4_nl_cleaned/validation/{cleaned_c4-nl-validation.tfrecord-00003-of-00004.json.gz → c4-nl-cleaned-validation.tfrecord-00003-of-00004.json.gz} RENAMED
File without changes
requirements.txt ADDED
datasets>=1.14.0