Update adding evaluation
README.md
CHANGED
@@ -31,7 +31,12 @@ More information needed
 
 ## Training and evaluation data
 
-
+The model was trained on the Spanish subset of the Common Voice 17.0 dataset (mozilla-foundation/common_voice_17_0). Both the base model, whisper-large-v3-turbo, and the fine-tuned model, whisper-large-v3-turbo-es, were evaluated using Word Error Rate (WER) on the test split of the same dataset. The results are as follows:
+
+- WER for whisper-large-v3-turbo (base): 10.18%
+- WER for whisper-large-v3-turbo-es (fine-tuned): 2.69%
+
+This significant reduction in WER shows that fine-tuning on Spanish audio improved transcription accuracy compared to the original base model.
 
 ## Training procedure
 
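For reference, the WER metric used in the evaluation above can be sketched as a small self-contained function: word-level edit distance divided by the number of reference words. This is a minimal illustration, not the exact evaluation script; the transcripts below are made-up examples, not actual Common Voice samples.

```python
# Minimal sketch of Word Error Rate (WER): edit distance over words
# divided by the number of reference words.

def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words
    # (substitutions, insertions, deletions all cost 1).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Toy example: one substitution in three reference words -> WER of 1/3.
print(wer("hola como estas", "hola como esta"))
```

In practice a library such as `jiwer` or the Hugging Face `evaluate` package is typically used for this, since they also handle text normalization (casing, punctuation) consistently across models.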