Files changed (2)
  1. README (1).md +188 -0
  2. gitattributes.txt +38 -0
README (1).md ADDED
@@ -0,0 +1,188 @@
---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
task_ids:
- keyword-spotting
pretty_name: MInDS-14
language_bcp47:
- en
- en-GB
- en-US
- en-AU
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
tags:
- speech-recognition
---

# MInDS-14

## Dataset Description

- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 500 MB

MInDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.

## Example

MInDS-14 can be downloaded and used as follows:

```py
from datasets import load_dataset

minds_14 = load_dataset("PolyAI/minds14", "fr-FR")  # for French
# to download all data for multilingual fine-tuning, uncomment the following line
# minds_14 = load_dataset("PolyAI/minds14", "all")

# inspect the dataset structure
print(minds_14)

# load an audio sample on the fly
audio_input = minds_14["train"][0]["audio"]  # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"]  # first intent class id
intent = minds_14["train"].features["intent_class"].names[intent_class]

# use audio_input and intent_class to fine-tune your model for audio classification
```
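The last lookup above needs no download to understand: `features["intent_class"]` is a `ClassLabel`, so the stored integer simply indexes into its `names` list. A minimal sketch of that mapping, where the label names are an illustrative stand-in rather than the dataset's real label order:

```python
# Sketch of the ClassLabel-style id -> name lookup used above.
# These names are an illustrative stand-in for the 14 real intent labels.
names = ["abroad", "address", "app_error", "atm_limit"]

intent_class = 1  # integer id as stored in the dataset
intent = names[intent_class]
print(intent)  # address
```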

## Dataset Structure

We show detailed information for the example configuration `fr-FR` of the dataset.
All other configurations have the same structure.

### Data Instances

**fr-FR**

- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB

An example of a data instance of the config `fr-FR` looks as follows:

```
{
  "path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
  "audio": {
    "path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
    "array": array(
      [0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32
    ),
    "sampling_rate": 8000,
  },
  "transcription": "je souhaite changer mon adresse",
  "english_transcription": "I want to change my address",
  "intent_class": 1,
  "lang_id": 6,
}
```
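The `audio` dict above pairs the decoded waveform with its sampling rate, so a clip's duration follows directly as array length divided by sampling rate. A minimal sketch with synthetic stand-in data (the real arrays come from the 8 kHz recordings):

```python
# Duration of an audio example: number of samples / samples per second.
# The waveform here is synthetic stand-in data (2 s of silence at 8 kHz),
# not a real MInDS-14 recording.
audio = {"array": [0.0] * 16000, "sampling_rate": 8000}

duration_s = len(audio["array"]) / audio["sampling_rate"]
print(duration_s)  # 2.0
```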

### Data Fields
The data fields are the same among all splits.

- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, sampling rate and path to the audio file
- **transcription** (str): Transcription of the audio file
- **english_transcription** (str): English transcription of the audio file
- **intent_class** (int): Class id of the intent
- **lang_id** (int): Id of the language

### Data Splits
Every config has only the `"train"` split, containing *ca.* 600 examples.
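Because only a `train` split ships, evaluation requires carving out a held-out set yourself, e.g. with `Dataset.train_test_split`. A plain-Python sketch of the same idea over illustrative indices (`n` stands in for `len(minds_14["train"])`):

```python
import random

# Reproducible 90/10 split over the ~600 examples of one config.
# n is illustrative; in practice use len(minds_14["train"]).
n = 600
indices = list(range(n))
random.Random(0).shuffle(indices)  # fixed seed for reproducibility

cut = int(0.9 * n)
train_idx, test_idx = indices[:cut], indices[cut:]
print(len(train_idx), len(test_idx))  # 540 60
```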

## Dataset Creation

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

All datasets are licensed under the [Creative Commons Attribution 4.0 International license (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

```
@article{DBLP:journals/corr/abs-2104-08524,
  author     = {Daniela Gerz and
                Pei{-}Hao Su and
                Razvan Kusztos and
                Avishek Mondal and
                Michal Lis and
                Eshan Singhal and
                Nikola Mrksic and
                Tsung{-}Hsien Wen and
                Ivan Vulic},
  title      = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
  journal    = {CoRR},
  volume     = {abs/2104.08524},
  year       = {2021},
  url        = {https://arxiv.org/abs/2104.08524},
  eprinttype = {arXiv},
  eprint     = {2104.08524},
  timestamp  = {Mon, 26 Apr 2021 17:25:10 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
gitattributes.txt ADDED
@@ -0,0 +1,38 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text