polinaeterna (HF staff) committed
Commit 7155eaf
1 parent: c387b85

Squashed commit of the following:


commit b6399c365451c079863a8769c74b8681c05a7619
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 15:13:01 2022 +0300

refactor train gen kwargs

commit 2fa5e951962acadc262f19f68a6a630419ab52aa
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 14:20:01 2022 +0300

update readme

commit 128a631248a26f187c0fa7a7d4fcd1501f9f442c
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 14:12:22 2022 +0300

update readme

commit 33b92a758bf3b47d85b8507581dcff01eee1c2fc
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 13:58:27 2022 +0300

fix path to audio filenames txt

commit 6fdc248e9652cbea3af965c6f241cd639f05eb8a
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 13:57:18 2022 +0300

fix path to audio archives

commit eed468d14cda4046475e9de848dbdefc90bfc222
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 13:53:36 2022 +0300

fix incorrect syntax in python

commit be4ac67614c30c66b601d219afa09eec58d72a91
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 13:51:55 2022 +0300

refactor downloading functions, pass them directly to split

commit dcd9b5ffc1b779159ec175e4d120235f3f2d0115
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 13:19:27 2022 +0300

maybe that looks better?

commit 855ade3ef6ed567cf25d6396832f6f89e3452d2f
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 12:07:02 2022 +0300

config.data_dir -> config.data_root_dir: rename the config attribute to avoid confusion with the conventional data_dir attr

commit 81876b40e76c421b7e22c88ded106165a84baac3
Author: polinaeterna <[email protected]>
Date: Tue Jan 25 12:01:31 2022 +0300

add comments and docstrings

commit 85170593d3c2fc9f33411581dfe85d590848e655
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 21:28:05 2022 +0300

copy readme from canonical dataset

commit 41b5ff81dfa56a3e622c8ef6015efddbd2131090
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 21:19:29 2022 +0300

Revert "remove config.data_dir"

This reverts commit cec22c873ae6375681864ed7ab06d8501314423a.

commit cec22c873ae6375681864ed7ab06d8501314423a
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 21:04:03 2022 +0300

remove config.data_dir

commit d0ac74f672d7cf8d0d87078971d493c70b7ae804
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 20:09:04 2022 +0300

fix

commit 801b7887edf7ce3be207c510808a21e51b13c5e5
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 20:07:12 2022 +0300

do not check that limited supervision ids are in current archive

commit d57860626f1e385c0cb07615c73ba18e241d5408
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 19:24:35 2022 +0300

add an ugly way to check if curr archive contains limited supervision samples

commit 3ac0d45ef8cb0bb37c9c805a29cd507b22c4b63b
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 18:40:37 2022 +0300

fix argument

commit 66623209b6e90ecfa94e6aca1575c47005699f9d
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 18:37:01 2022 +0300

fix path to data in limited supervision again

commit 0fc5a31e4bec048271bc1ecd641a60459831b71d
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 18:18:24 2022 +0300

fix typo

commit 325e3df2268418c1aed50706d5fe868098b2fae3
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 18:16:54 2022 +0300

fix undefined data path

commit 825432ef8dfea7de19a01da5ce73e4318737412c
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 18:02:41 2022 +0300

fix split in limited supervision sets

commit 66c4bfc9b34b8abce0a3ac37c696a7eb5f4fc028
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 17:43:26 2022 +0300

move all downloading stuff outside the split generator (and pray it works...)

commit 896922f2d663e7fd3302d017c8fad475be5b8bf1
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 14:50:35 2022 +0300

add a comment about limited supervision

commit c763200735df8be5eb0a6ed01dd48e9905a28263
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 14:40:41 2022 +0300

check if archive should be iterated for limited supervision train splits

commit a9ce8085327b56213860ad580a3262d7ef24830d
Author: polinaeterna <[email protected]>
Date: Mon Jan 24 12:25:41 2022 +0300

add splits for limited supervision

commit b9b8f989003a32eb7c70f7b7b049aca3dd62f315
Author: polinaeterna <[email protected]>
Date: Sun Jan 23 22:55:13 2022 +0300

remove commented lines

commit b59b839adf83cb206ac0c25239fb80091160b20f
Author: polinaeterna <[email protected]>
Date: Sun Jan 23 22:46:52 2022 +0300

remove debugging prints :))

commit 995cfa8cfb23ad005e7bde5a918f58a4606b75dc
Author: polinaeterna <[email protected]>
Date: Sun Jan 23 22:41:09 2022 +0300

fix path to audio archives names file

commit b5444229d1aff92c210d5eb378c66af10e033674
Author: polinaeterna <[email protected]>
Date: Sun Jan 23 22:37:44 2022 +0300

add files with audio archives names

commit 72593ae313f8e72d4ec1f342c03a7cae90e84f8f
Author: polinaeterna <[email protected]>
Date: Sun Jan 23 22:25:17 2022 +0300

remove test filenames file

commit 2d0f71633fe221546bbfafaa10f97392f5524a1e
Author: polinaeterna <[email protected]>
Date: Sat Jan 22 16:49:51 2022 +0300

try to iterate over archives

commit 1cd2249105bf64fdf0f97369ef88957d39fa4f89
Author: polinaeterna <[email protected]>
Date: Sat Jan 22 16:43:47 2022 +0300

add tiny list of audio archives names for testing

commit 481bf5c8dbafe68cab6801157cd9d80288653e6e
Author: polinaeterna <[email protected]>
Date: Fri Jan 21 19:30:28 2022 +0300

debug with prints yeah :))

commit efcfe059a362d71d5d3fca86a660d9d7d4e1d64b
Author: polinaeterna <[email protected]>
Date: Fri Jan 21 17:44:17 2022 +0300

try to understand what's going on :))

commit ffc389e5f08e53f4fac649281e05207dfe442b40
Author: polinaeterna <[email protected]>
Date: Fri Jan 21 16:43:42 2022 +0300

try to open transcript file

commit 5b23cc0a43a2ee12a3581b7cd49248b89b8f9133
Author: polinaeterna <[email protected]>
Date: Fri Jan 21 16:30:22 2022 +0300

add streaming support

commit f0805d172c0c35f7804ff9d561541d18dcb7a53c
Author: polinaeterna <[email protected]>
Date: Fri Jan 21 13:47:16 2022 +0300

fix key error in string formatting

commit ed8ed17e506df3b95a9b2aafc5c4e6d97fea9be3
Author: polinaeterna <[email protected]>
Date: Thu Jan 20 20:16:33 2022 +0300

check it works

commit 067884c3894bad7c40f68f1e42243447c81f1345
Author: polinaeterna <[email protected]>
Date: Thu Jan 20 16:23:30 2022 +0300

copy loading script from canonical version

README.md ADDED
@@ -0,0 +1,196 @@
+ ---
+ pretty_name: MultiLingual LibriSpeech
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ - expert-generated
+ languages:
+ - de
+ - nl
+ - fr
+ - it
+ - es
+ - pt
+ - pl
+ licenses:
+ - cc-by-4.0
+ multilinguality:
+ - multilingual
+ paperswithcode_id: librispeech-1
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - speech-processing
+ task_ids:
+ - automatic-speech-recognition
+ ---
+
+ # Dataset Card for MultiLingual LibriSpeech
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [MultiLingual LibriSpeech ASR corpus](http://www.openslr.org/94)
+ - **Repository:** [Needs More Information]
+ - **Paper:** [MLS: A Large-Scale Multilingual Dataset for Speech Research](https://arxiv.org/abs/2012.03411)
+ - **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/dataset/multilingual-librispeech)
+
+ ### Dataset Summary
+
+ This is a streamable version of the Multilingual LibriSpeech (MLS) dataset.
+ The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94) to make them easier to stream.
+
+ The MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of
+ 8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese, and Polish.
+
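+ A minimal usage sketch (assuming the dataset is loaded under the canonical `multilingual_librispeech` name; adjust the path to wherever this script is hosted):
+
+ ```python
+ from datasets import load_dataset
+
+ # streaming=True iterates over the restructured archives on the fly,
+ # without downloading a whole language first
+ mls = load_dataset("multilingual_librispeech", "polish", split="train", streaming=True)
+ print(next(iter(mls))["text"])
+ ```
+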
+ ### Supported Tasks and Leaderboards
+
+ - `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it into written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard at https://paperswithcode.com/dataset/multilingual-librispeech, which ranks models by their WER.
+
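+ For orientation, WER can be computed with any standard implementation. A sketch using the third-party `jiwer` package (an assumption; it is not required by this dataset):
+
+ ```python
+ from jiwer import wer  # assumed third-party package, `pip install jiwer`
+
+ reference = "broń nas stary człowieku"
+ hypothesis = "broń na stary człowiek"
+
+ # WER = (substitutions + deletions + insertions) / number of reference words
+ print(wer(reference, hypothesis))  # 0.5 -> two of the four words are wrong
+ ```
+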
+ ### Languages
+
+ The dataset is derived from read audiobooks from LibriVox and consists of 8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese, and Polish.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point comprises the path to the audio file (usually called `file`) and its transcription (called `text`). Some additional information about the speaker and the passage that contains the transcription is provided.
+
+ ```
+ {'file': '10900_6473_000030.flac',
+  'audio': {'path': '10900_6473_000030.flac',
+            'array': array([-1.52587891e-04,  6.10351562e-05,  0.00000000e+00, ...,
+                             4.27246094e-04,  5.49316406e-04,  4.57763672e-04]),
+            'sampling_rate': 16000},
+  'text': 'więc czego chcecie odemnie spytałem wysłuchawszy tego zadziwiającego opowiadania broń nas stary człowieku broń zakrzyknęli równocześnie obaj posłowie\n',
+  'speaker_id': 10900,
+  'chapter_id': 6473,
+  'id': '10900_6473_000030'}
+ ```
+
+ ### Data Fields
+
+ - file: A filename in .flac format.
+
+ - audio: A dictionary containing the path to the audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
+
+ - text: the transcription of the audio file.
+
+ - id: unique id of the data sample.
+
+ - speaker_id: unique id of the speaker. The same speaker id can appear in multiple data samples.
+
+ - chapter_id: id of the audiobook chapter that includes the transcription.
+
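+ A short sketch of the indexing order and of resampling via the `Audio` feature (same repository-name assumption as above; the 8 kHz target is only an example):
+
+ ```python
+ from datasets import Audio, load_dataset
+
+ mls = load_dataset("multilingual_librispeech", "polish", split="test")
+
+ sample = mls[0]["audio"]        # decodes exactly one file
+ print(sample["sampling_rate"])  # 16000
+
+ # decode to 8 kHz on access instead (example rate, not required by the dataset)
+ mls = mls.cast_column("audio", Audio(sampling_rate=8_000))
+ print(mls[0]["audio"]["sampling_rate"])  # 8000
+ ```
+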
+ ### Data Splits
+
+ | Language   | Train  | Train.9h | Train.1h | Dev  | Test |
+ | ---------- | ------ | -------- | -------- | ---- | ---- |
+ | german     | 469942 | 2194     | 241      | 3469 | 3394 |
+ | dutch      | 374287 | 2153     | 234      | 3095 | 3075 |
+ | french     | 258213 | 2167     | 241      | 2416 | 2426 |
+ | spanish    | 220701 | 2110     | 233      | 2408 | 2385 |
+ | italian    | 59623  | 2173     | 240      | 1248 | 1262 |
+ | portuguese | 37533  | 2116     | 236      | 826  | 871  |
+ | polish     | 25043  | 2173     | 238      | 512  | 520  |
+
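+ The 9-hour and 1-hour limited supervision subsets are exposed as extra split names. A sketch (same repository-name assumption as above):
+
+ ```python
+ from datasets import load_dataset
+
+ train_9h = load_dataset("multilingual_librispeech", "german", split="train.9h")
+ train_1h = load_dataset("multilingual_librispeech", "german", split="train.1h")
+ print(len(train_9h), len(train_1h))  # 2194 241, per the table above
+ ```
+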
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ The dataset consists of recordings of people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
+
+ ### Citation Information
+
+ ```
+ @article{Pratap2020MLSAL,
+   title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
+   author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
+   journal={ArXiv},
+   year={2020},
+   volume={abs/2012.03411}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten)
+ and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
data/mls_dutch/dev/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28279fa229ef215b02fe18eee220d4cc8d114aed4b584cd0a1a1dc626435bfa3
+ size 212
data/mls_dutch/test/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5c3d86d686bf6ac44b5bea8a55794f6757ad33f54c0650fa1f94c86efe97dc7
+ size 229
data/mls_dutch/train/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4155d13c8965fd7342a9e39ebb6c316ac300ef103990e650f94a70a5369fd9e2
+ size 10286
data/mls_french/dev/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:117970940e38665bafeb9be051d088002f44c773567357d8b1b819ff8157b4ff
+ size 718
data/mls_french/test/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5d5f929561671757bc312de49b418b11ab6b06095e6abc8e8200a8dd7fd6dc0
+ size 590
data/mls_french/train/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f192a1678643267bfed6285cbb41fdf15fba267a2b8ca25401144c56b842964
+ size 12708
data/mls_german/dev/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:436fca0b2b2ceed1a19d7af648c9c2fc5c953d379acdf019bec873a0ed05d2d1
+ size 769
data/mls_german/test/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52868aab3ebbf54c69d3ca3d225c76515e82bfcab6774eff955bec73ba7390b4
+ size 823
data/mls_german/train/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a93b5b67e23f9bfa86df020d4237c667243a3ce1409025491e0f0bdfdd3bd6f
+ size 22234
data/mls_italian/dev/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51fe48e0a32c01dfde79a9d6cba5b9b57675b176ca16e937a0222172052ea674
+ size 271
data/mls_italian/test/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3603c9a85a56e916c8edc788ee0cf7a01186a715adf56d7d9e22908250045f5
+ size 290
data/mls_italian/train/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f4b8f34ccc9753982b469b227af1a604227cd28c0fa29d1159a9352193c489c
+ size 3945
data/mls_polish/dev/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c41a98e9aa2079ae6769033d1727e33d3e6299387a9d6e2a69ff050794a026cf
+ size 85
data/mls_polish/test/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a84188ac3d3d21a4850b3a02e5f5215ac0f0de7068073349db94b6fba1890ed2
+ size 83
data/mls_polish/train/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd05ac34dcaafe28b81bbaecc815a192944435daf99f82b2d0e10ee775e77695
+ size 979
data/mls_portuguese/dev/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:afdadb1fb6ebff8ac712a3b19eddfa7451b1db2c6f715ac5ba85150ae0855d3d
+ size 281
data/mls_portuguese/test/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:681f7dbba5b8a436546c848fa466a945339d1e8ecd317189009308ef5e293a0b
+ size 265
data/mls_portuguese/train/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2825cb4c0f1592256df304d8aac976961778952c7455bb6083613f1166d80fab
+ size 2836
data/mls_spanish/dev/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ecd0d1e6e930b63d03466764d16f53c1648eac3defaec366c68ff388a550132
+ size 426
data/mls_spanish/test/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00fc16d5a0d5ec0657336b87e39d307b0da29eef0c707012815f9fad80c877f8
+ size 574
data/mls_spanish/train/audio_filenames.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e81b4d55933f2806287653016914ec2633d5b7ecfccf51e19efb977f66310ddf
+ size 9363
multilingual_librispeech.py ADDED
@@ -0,0 +1,233 @@
+ # coding=utf-8
+ # Copyright 2022 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """Multilingual Librispeech automatic speech recognition dataset."""
+
+
+ from functools import partial
+ import os
+
+ import datasets
+ from datasets.tasks import AutomaticSpeechRecognition
+
+
+ _CITATION = """\
+ @article{Pratap2020MLSAL,
+   title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
+   author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
+   journal={ArXiv},
+   year={2020},
+   volume={abs/2012.03411}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This is a streamable version of the Multilingual LibriSpeech (MLS) dataset.
+ The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94)
+ to make them easier to stream.
+
+ The MLS dataset is a large multilingual corpus suitable for speech research.
+ The dataset is derived from read audiobooks from LibriVox and consists of 8 languages:
+ English, German, Dutch, Spanish, French, Italian, Portuguese, Polish.
+ """
+
+ _URL = "http://www.openslr.org/94"
+ _DL_URL_FORMAT = "data/mls_{name}"
+
+
+ class MultilingualLibrispeechConfig(datasets.BuilderConfig):
+     """BuilderConfig for MultilingualLibrispeech."""
+
+     def __init__(self, name, **kwargs):
+         """
+         Args:
+             name: `string`, name of dataset config (=language)
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(MultilingualLibrispeechConfig, self).__init__(
+             version=datasets.Version("2.1.0", ""), name=name, **kwargs
+         )
+         # relative path to full data inside a repo (for example `data/mls_german`)
+         self.data_root_dir = _DL_URL_FORMAT.format(name=name)
+
+
+ class MultilingualLibrispeech(datasets.GeneratorBasedBuilder):
+     """Multilingual Librispeech dataset."""
+
+     BUILDER_CONFIGS = [
+         MultilingualLibrispeechConfig(name="german", description="German LibriSpeech dataset"),
+         MultilingualLibrispeechConfig(name="dutch", description="Dutch LibriSpeech dataset"),
+         MultilingualLibrispeechConfig(name="french", description="French LibriSpeech dataset"),
+         MultilingualLibrispeechConfig(name="spanish", description="Spanish LibriSpeech dataset"),
+         MultilingualLibrispeechConfig(name="italian", description="Italian LibriSpeech dataset"),
+         MultilingualLibrispeechConfig(name="portuguese", description="Portuguese LibriSpeech dataset"),
+         MultilingualLibrispeechConfig(name="polish", description="Polish LibriSpeech dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "file": datasets.Value("string"),
+                     "audio": datasets.features.Audio(sampling_rate=16_000),
+                     "text": datasets.Value("string"),
+                     "speaker_id": datasets.Value("int64"),
+                     "chapter_id": datasets.Value("int64"),
+                     "id": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=("file", "text"),
+             homepage=_URL,
+             citation=_CITATION,
+             task_templates=[AutomaticSpeechRecognition(audio_file_path_column="file", transcription_column="text")],
+         )
+
+     def _split_generators(self, dl_manager):
+
+         download_transcript = partial(
+             download_extract_transcript, dl_manager=dl_manager, root_dir=self.config.data_root_dir
+         )
+         download_audio = partial(
+             download_audio_archives, dl_manager=dl_manager, root_dir=self.config.data_root_dir
+         )
+         download_limited_ids = partial(
+             download_extract_limited_ids, dl_manager=dl_manager, root_dir=self.config.data_root_dir
+         )
+
+         train_kwargs = {
+             "transcript_path": download_transcript(split="train"),
+             "audio_archives": download_audio(split="train"),
+         }
+
+         train_splits = [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN, gen_kwargs=train_kwargs
+             ),
+             datasets.SplitGenerator(
+                 name="train.9h",
+                 gen_kwargs={
+                     **train_kwargs,
+                     "limited_ids_paths": download_limited_ids(sub_folder="limited_supervision/9hr"),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name="train.1h",
+                 gen_kwargs={
+                     **train_kwargs,
+                     "limited_ids_paths": download_limited_ids(sub_folder="limited_supervision/1hr"),
+                 },
+             ),
+         ]
+
+         return train_splits + [
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION, gen_kwargs={
+                     "transcript_path": download_transcript(split="dev"),
+                     "audio_archives": download_audio(split="dev"),
+                 }
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST, gen_kwargs={
+                     "transcript_path": download_transcript(split="test"),
+                     "audio_archives": download_audio(split="test"),
+                 }
+             ),
+         ]
+
+     def _generate_examples(self, transcript_path, audio_archives, limited_ids_paths=None):
+         """Generate examples from a Multilingual LibriSpeech data dir."""
+         transcripts = dict()
+         with open(transcript_path, "r", encoding="utf-8") as file:
+             for line in file:
+                 audio_id, transcript = line.split("\t")
+                 transcripts[audio_id] = transcript
+
+         limited_ids, limited_ids_archives_names = [], []
+         if limited_ids_paths:
+             for path in limited_ids_paths:
+                 with open(path, "r", encoding="utf-8") as file:
+                     limited_ids.extend([line.strip() for line in file.readlines()])
+
+             limited_ids = set(limited_ids)
+
+         for audio_archive in audio_archives:
+             # TODO: skip an archive entirely if it doesn't contain any of the needed ids
+             # if limited_ids and audio_archive not in limited_ids_archives_names:
+             #     continue
+
+             for audio_filename, file in audio_archive:
+                 speaker_id, chapter_id = audio_filename.split("_")[:2]
+                 speaker_id, chapter_id = int(speaker_id), int(chapter_id)
+                 audio_id = audio_filename.split(".flac")[0]
+                 audio_transcript = transcripts[audio_id]
+
+                 if limited_ids and audio_id not in limited_ids:
+                     # this can only be true for the limited supervision sets ("train.9h" and "train.1h")
+                     continue
+
+                 yield audio_filename, {
+                     "file": audio_filename,
+                     "audio": {"path": audio_filename, "bytes": file.read()},
+                     "text": audio_transcript,
+                     "speaker_id": speaker_id,
+                     "chapter_id": chapter_id,
+                     "id": audio_id,
+                 }
+
+
+ def download_extract_limited_ids(dl_manager, root_dir, sub_folder):
+     """Download and extract all handles.txt files containing ids for the limited supervision train sets."""
+
+     sub_path = os.path.join(root_dir, "train", sub_folder)
+
+     if sub_folder.endswith("9hr"):
+         limited_ids_paths = [os.path.join(sub_path, "handles.txt")]
+     else:  # => sub_folder.endswith("1hr")
+         # in case of 1 hour limited supervision ("train.1h") there are always 6 subfolders like:
+         # "limited_supervision/1hr/0/handles.txt", "limited_supervision/1hr/1/handles.txt", ...
+         limited_ids_paths = [os.path.join(sub_path, str(i), "handles.txt") for i in range(6)]
+
+     limited_ids_paths = dl_manager.download_and_extract(limited_ids_paths)
+
+     return limited_ids_paths
+
+
+ def download_extract_transcript(dl_manager, root_dir, split):
+     """Download and extract the file with audio transcriptions."""
+     transcript_path = os.path.join(root_dir, split, "transcripts.txt")
+     return dl_manager.download_and_extract(transcript_path)
+
+
+ def download_audio_archives(dl_manager, root_dir, split):
+     """Prepare archives with audio files for iterating over them.
+
+     Returns:
+         audio_archives (List `Generator`): list of generators to iterate over files in each audio archive.
+     """
+
+     # each split contains many .tar.gz archives with its audio files
+     # audio_filenames.txt contains the names of these archives
+     split_dir = os.path.join(root_dir, split)
+     audio_filenames_path = dl_manager.download_and_extract(os.path.join(split_dir, "audio_filenames.txt"))
+
+     with open(audio_filenames_path, "r", encoding="utf-8") as file:
+         audio_filenames = [line.strip() for line in file.readlines()]
+
+     archive_paths = dl_manager.download([os.path.join(split_dir, "audio", filename) for filename in audio_filenames])
+     audio_archives = [dl_manager.iter_archive(archive_path) for archive_path in archive_paths]
+
+     return audio_archives
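For context on the generators returned by `download_audio_archives`: `dl_manager.iter_archive` yields `(path_inside_archive, file_object)` tuples, which is exactly what `_generate_examples` unpacks above. A rough standard-library equivalent (a sketch, not the actual `datasets` implementation; the archive name is hypothetical):

```python
import tarfile

def iter_archive(archive_path):
    """Yield (filename, file object) pairs from a .tar.gz archive,
    mimicking datasets' DownloadManager.iter_archive."""
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member)

# hypothetical archive name, for illustration only
for name, file in iter_archive("audio_0.tar.gz"):
    print(name, len(file.read()))
```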