update model card
README.md CHANGED
@@ -6,17 +6,29 @@ tags:
 - automatic-speech-recognition
 - mozilla-foundation/common_voice_8_0
 - generated_from_trainer
+- robust-speech-event
 datasets:
--
+- mozilla-foundation/common_voice_8_0
 model-index:
-- name:
-  results:
+- name: XLS-R-300M Uzbek CV8
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: Common Voice 8
+      type: mozilla-foundation/common_voice_8_0
+      args: uz
+    metrics:
+    - name: Test WER (no LM)
+      type: wer
+      value: 32.88
+    - name: Test CER (no LM)
+      type: cer
+      value: 6.53
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# xls-r-uzbek-cv8
+# XLS-R-300M Uzbek CV8

 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UZ dataset.
 It achieves the following results on the evaluation set:
@@ -26,17 +38,24 @@ It achieves the following results on the evaluation set:

 ## Model description

-
+For a description of the model architecture, see [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
+
+The model vocabulary consists of the [Modern Latin alphabet for Uzbek](https://en.wikipedia.org/wiki/Uzbek_alphabet), with punctuation removed.
+Note that the characters <‘> and <’> do not count as punctuation, as <‘> modifies <o> and <g>, and <’> indicates the glottal stop or a long vowel.

 ## Intended uses & limitations

-
+This model is expected to be of some utility for low-fidelity use cases such as:
+- Draft video captions
+- Indexing of recorded broadcasts
+
+The model is not reliable enough to use as a substitute for live captions for accessibility purposes, and it should not be used in a manner that would infringe the privacy of any of the contributors to the Common Voice dataset or of any other speakers.

 ## Training and evaluation data

-
+50% of the official Common Voice `train` split was used as training data, 50% of the official `dev` split was used as validation data, and the full `test` split was used for final evaluation.

-
+The kenlm language model was compiled from the target sentences of the `train` and `other` splits.

 ### Training hyperparameters

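The updated card does not include a usage snippet. A minimal sketch of running inference with the 🤗 `transformers` ASR pipeline is shown below; the repo id `your-username/xls-r-uzbek-cv8` and the audio file name are placeholders, and the audio is assumed to be a 16 kHz mono recording.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this checkpoint.
MODEL_ID = "your-username/xls-r-uzbek-cv8"

asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# "sample.wav" is a hypothetical 16 kHz mono recording in Uzbek.
prediction = asr("sample.wav", chunk_length_s=30)
print(prediction["text"])
```

For long recordings, `chunk_length_s` makes the pipeline process the audio in 30-second windows; adjust it to the material being transcribed.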
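The vocabulary note states that punctuation was removed while <‘> and <’> were kept. The card does not show the actual preprocessing code, so the following is only a sketch of that kind of normalization, assuming a plain regex over lower-cased text; the sample sentence is made up.

```python
import re

# Keep lower-case Latin letters, the <‘> modifier (o‘, g‘), the glottal-stop
# mark <’>, and spaces; drop everything else. This is an illustrative guess,
# not the card author's actual cleaning script.
_DROP = re.compile(r"[^a-z‘’ ]+")

def normalize(text: str) -> str:
    text = text.lower()
    text = _DROP.sub("", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize("O‘zbek tili go‘zal til!"))  # -> "o‘zbek tili go‘zal til"
```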
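The split described under "Training and evaluation data" (half of `train` for training, half of `dev` for validation, the full `test` set for evaluation) can be approximated with 🤗 `datasets` percentage slicing, as sketched below. The card does not say whether the halves were contiguous or shuffled, so the `[:50%]` slices are an assumption; `dev` corresponds to the `validation` split name on the Hub, and Common Voice 8 requires an authenticated download.

```python
from datasets import load_dataset

DATASET = "mozilla-foundation/common_voice_8_0"

# Percentage slicing takes the first half of each split; the author's actual
# 50% selection may have been different (e.g. a shuffled subset).
train_ds = load_dataset(DATASET, "uz", split="train[:50%]", use_auth_token=True)
valid_ds = load_dataset(DATASET, "uz", split="validation[:50%]", use_auth_token=True)
test_ds = load_dataset(DATASET, "uz", split="test", use_auth_token=True)

print(len(train_ds), len(valid_ds), len(test_ds))
```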
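The card mentions a kenlm language model compiled from the target sentences of the `train` and `other` data, but gives no build command. The sketch below shows one common way to do it: dump the `sentence` column to a text file and pipe it through kenlm's `lmplz` binary (assumed to be on `PATH`). The 5-gram order and the absence of extra text normalization are assumptions.

```python
import subprocess
from datasets import load_dataset

DATASET = "mozilla-foundation/common_voice_8_0"

# Collect target sentences from the train and other splits.
sentences = []
for split in ("train", "other"):
    ds = load_dataset(DATASET, "uz", split=split, use_auth_token=True)
    sentences.extend(ds["sentence"])

with open("uz_corpus.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sentences))

# Build an ARPA n-gram model with kenlm's lmplz (order 5 chosen here, not
# necessarily what the author used).
with open("uz_corpus.txt", "rb") as src, open("uz_5gram.arpa", "wb") as dst:
    subprocess.run(["lmplz", "-o", "5"], stdin=src, stdout=dst, check=True)
```

Note that the WER/CER values reported in the model-index are labeled "no LM", i.e. they come from plain CTC decoding without this language model.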