update model card README.md
README.md CHANGED
@@ -1,8 +1,6 @@
 ---
 license: apache-2.0
 tags:
-- automatic-speech-recognition
-- google/fleurs
 - generated_from_trainer
 datasets:
 - fleurs
@@ -15,15 +13,15 @@ model-index:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name:
+      name: fleurs
       type: fleurs
       config: ps_af
       split: test
-      args:
+      args: ps_af
     metrics:
     - name: Wer
       type: wer
-      value: 0.
+      value: 0.5117667121418826
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,10 +29,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # facebook/wav2vec2-xls-r-300m
 
-This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the
+This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Wer: 0.
+- Loss: 0.9505
+- Wer: 0.5118
 - Cer: 0.1969
 
 ## Model description
@@ -54,16 +52,16 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 7.5e-
-- train_batch_size:
-- eval_batch_size:
+- learning_rate: 7.5e-07
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
-- gradient_accumulation_steps:
+- gradient_accumulation_steps: 2
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 1000
-- training_steps:
+- training_steps: 6000
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -80,7 +78,8 @@ The following hyperparameters were used during training:
 | 0.5935 | 50.63 | 4000 | 0.1969 | 0.9162 | 0.5156 |
 | 0.5174 | 56.96 | 4500 | 0.1972 | 0.9287 | 0.5140 |
 | 0.5462 | 63.29 | 5000 | 0.1974 | 0.9370 | 0.5138 |
-| 0.5564 | 69.62 | 5500 | 0.
+| 0.5564 | 69.62 | 5500 | 0.1977 | 0.9461 | 0.5148 |
+| 0.5252 | 75.95 | 6000 | 0.9505 | 0.5118 | 0.1969 |
 
 
 ### Framework versions
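For readers reproducing this run, the hyperparameters added in this commit map onto a `transformers.TrainingArguments` configuration roughly like the sketch below. This is an illustrative reconstruction, not the training script from this repository; the `output_dir` and the evaluation/logging cadence are assumptions (the 500-step interval is inferred from the results table).

```python
from transformers import TrainingArguments

# Illustrative sketch only: mirrors the hyperparameters listed in the card.
# output_dir and the eval/logging cadence are assumptions, not from the commit.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-ps_af",  # hypothetical name
    learning_rate=7.5e-7,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,           # 16 x 2 = 32 total train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=6000,
    fp16=True,                               # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,                          # results are reported every 500 steps
    logging_steps=500,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)
```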
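The Wer and Cer figures reported in the card are standard word and character error rates. A minimal way to compute them is with the `evaluate` library (assumed here; the card does not state which toolkit was used):

```python
import evaluate

# Word and character error rate, as reported in the card.
# The strings below are made-up placeholders for real references/predictions.
wer = evaluate.load("wer")
cer = evaluate.load("cer")

references = ["this is a reference transcript"]
predictions = ["this is a reference transcrpt"]

print("WER:", wer.compute(references=references, predictions=predictions))
print("CER:", cer.compute(references=references, predictions=predictions))
```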
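A minimal inference sketch for the fine-tuned checkpoint, assuming it is published under a repo id like the placeholder below (substitute the actual id of this repository) and that the input is 16 kHz mono audio, e.g. a clip from the ps_af test split of google/fleurs:

```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual model id for this card.
asr = pipeline("automatic-speech-recognition", model="<username>/wav2vec2-xls-r-300m-ps_af")

# Transcribe a 16 kHz mono recording (e.g. exported from the ps_af test split of google/fleurs).
print(asr("sample.wav")["text"])
```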