Commit acec38c (parent ee719e1) committed by MatthiasZ

End of training

Files changed (1): README.md (+78, -0)
---
library_name: transformers
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
datasets:
- MatthiasZ/whisper_large_v3_turbo_annota_2
metrics:
- wer
model-index:
- name: whisper_large_v3_turbo_annota_2
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: whisper_large_v3_turbo_annota_2
      type: MatthiasZ/whisper_large_v3_turbo_annota_2
      args: 'config: de, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 21.886674395921897
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper_large_v3_turbo_annota_2

This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the whisper_large_v3_turbo_annota_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3910
- Wer: 21.8867
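
A minimal inference sketch using the `transformers` ASR pipeline. The repo id `MatthiasZ/whisper_large_v3_turbo_annota_2` is assumed to match the dataset namespace above, the German-language hint follows from `config: de` in the metadata, and `sample.wav` is a hypothetical placeholder file:

```python
# Minimal inference sketch (repo id and language are assumptions based on
# the card metadata; sample.wav is a hypothetical audio file).
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="MatthiasZ/whisper_large_v3_turbo_annota_2",  # assumed repo id
    torch_dtype=torch.float16,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

result = asr("sample.wav", generate_kwargs={"language": "german"})
print(result["text"])
```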

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
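
The metadata above points to `MatthiasZ/whisper_large_v3_turbo_annota_2` with `config: de, split: test`. A minimal sketch of loading that evaluation split with 🤗 Datasets, assuming the dataset is accessible and `de` is a valid config name:

```python
# Sketch of loading the evaluation split named in the card metadata
# (assumes the dataset is accessible and "de" is a config name).
from datasets import load_dataset

eval_set = load_dataset(
    "MatthiasZ/whisper_large_v3_turbo_annota_2",
    "de",          # config from the model-index args (assumption)
    split="test",  # split from the model-index args
)
print(eval_set)
```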

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
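
A hedged sketch of how these settings map onto `Seq2SeqTrainingArguments`. The numeric values come from the list above; `output_dir`, the eval cadence, and `predict_with_generate` are assumptions (the cadence is inferred from the 2000-step evaluation intervals in the results table below):

```python
# Sketch of Seq2SeqTrainingArguments matching the listed hyperparameters.
# output_dir, eval cadence, and predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper_large_v3_turbo_annota_2",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=6000,
    fp16=True,                   # "Native AMP" mixed precision
    eval_strategy="steps",       # inferred from the results table
    eval_steps=2000,
    predict_with_generate=True,  # assumed, typical for Whisper evaluation
)
```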

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4409        | 0.3333 | 2000 | 0.4489          | 23.5761 |
| 0.4317        | 0.6667 | 4000 | 0.4141          | 22.9669 |
| 0.3881        | 1.0    | 6000 | 0.3910          | 21.8867 |
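
The Wer column above is a word error rate expressed in percent. A minimal sketch of computing such a figure with the `evaluate` library; the prediction and reference lists are hypothetical placeholders:

```python
# Sketch of computing WER as a percentage with the `evaluate` library.
# `predictions` and `references` are hypothetical placeholders.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["das ist ein test"]         # model transcripts
references = ["das ist ein kleiner test"]  # ground-truth transcripts

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}%")
```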

### Framework versions

- Transformers 4.46.2
- PyTorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3