kimas1269 committed
Commit: d27d531
Parent: ff8d5ec

End of training

README.md CHANGED
@@ -5,6 +5,8 @@ license: apache-2.0
 base_model: openai/whisper-medium
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: Whisper Medium Zh - Kimas
   results: []
@@ -16,6 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 # Whisper Medium Zh - Kimas
 
 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.0635
+- Wer: 100.0
 
 ## Model description
 
@@ -35,15 +40,33 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 4
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 4000
+- training_steps: 10000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step  | Validation Loss | Wer      |
+|:-------------:|:-----:|:-----:|:---------------:|:--------:|
+| 0.1461        | 0.28  | 1000  | 0.1406          | 100.0    |
+| 0.0803        | 0.57  | 2000  | 0.1181          | 100.0    |
+| 0.0715        | 0.85  | 3000  | 0.1039          | 100.0    |
+| 0.0255        | 1.14  | 4000  | 0.0925          | 100.0207 |
+| 0.0199        | 1.42  | 5000  | 0.0810          | 100.0    |
+| 0.027         | 1.7   | 6000  | 0.0767          | 100.0207 |
+| 0.0328        | 1.99  | 7000  | 0.0706          | 100.0    |
+| 0.0026        | 2.27  | 8000  | 0.0700          | 100.0    |
+| 0.0082        | 2.56  | 9000  | 0.0646          | 100.0    |
+| 0.0099        | 2.84  | 10000 | 0.0635          | 100.0    |
+
+
 ### Framework versions
 
 - Transformers 4.36.0.dev0
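The updated hyperparameters map one-to-one onto Hugging Face `Seq2SeqTrainingArguments`, with an effective batch size of 8 × 2 gradient-accumulation steps = 16. A minimal sketch of the equivalent configuration, assuming the standard Transformers fine-tuning loop (the training script itself is not part of this commit; `output_dir` and the 1000-step eval cadence are assumptions, the latter inferred from the results table):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed in the updated README;
# not the author's actual script.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-zh",   # placeholder (assumption)
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,      # effective train batch size: 8 * 2 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,                          # "Native AMP" mixed-precision training
    evaluation_strategy="steps",        # assumption: eval every 1000 steps,
    eval_steps=1000,                    # matching the results table above
)
```

The Adam settings listed in the card (betas=(0.9,0.999), epsilon=1e-08) are the Trainer defaults, so they need no explicit arguments here.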
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9aa7e975d377cd14e5c2fecc73258d96b2d168c65f8f1d4052c21a860034efe8
+oid sha256:d2f88622ba50b16effdf8d32ecd900059bb4631d2eb25e458d54624def50029f
 size 3055544304
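The pointer swaps the sha256 while the size stays at 3055544304 bytes, i.e. the commit replaces the weights with a checkpoint of identical byte length. A minimal sketch for checking a downloaded copy against the new pointer, assuming the file has been pulled locally (the path is a placeholder):

```python
import hashlib
from pathlib import Path

def lfs_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return the hex sha256 stored as `oid` in the LFS pointer."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

weights = Path("model.safetensors")   # local path (assumption)
print(lfs_sha256(weights))            # expect d2f88622... after this commit
print(weights.stat().st_size)         # expect 3055544304, unchanged by this commit
```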
runs/Nov14_10-30-24_DESKTOP-SFJ2QT7/events.out.tfevents.1699929026.DESKTOP-SFJ2QT7 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f63750b7272ddd1c00c3ccace92042b912d066b97315697ee8f2fd6bb0a84a4a
-size 70696
+oid sha256:5be47c52ac306f132cba55049428534f0879cb4df00be101311204d1eccccb31
+size 71368
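The TensorBoard event file grows from 70696 to 71368 bytes as the new evaluation points are appended. A minimal sketch for inspecting its scalars, assuming the `tensorboard` package is installed and the run directory has been downloaded (the tag names are assumptions; exact tags vary by Trainer version):

```python
# Load and print the logged scalars from the updated event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/Nov14_10-30-24_DESKTOP-SFJ2QT7")
ea.Reload()                        # parse events.out.tfevents.* in the directory
print(ea.Tags()["scalars"])        # e.g. ["train/loss", "eval/loss", "eval/wer"] (assumed)
for event in ea.Scalars("eval/loss"):   # assumed tag name
    print(event.step, event.value)
```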