FunPang committed on
Commit 836c4bd
1 Parent(s): d4cb136

FunPang/whisper-small-Cantonese-fine-tunet

README.md CHANGED
@@ -1,11 +1,11 @@
 ---
-base_model: openai/whisper-small
 library_name: transformers
 license: apache-2.0
-metrics:
-- wer
+base_model: openai/whisper-small
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: whisper-small-Cantonese-fine-tune
   results: []
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.1513
-- Wer: 100.0
+- Loss: 3.0590
+- Wer: 135.7143
 
 ## Model description
 
@@ -44,15 +44,15 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 10
+- lr_scheduler_warmup_steps: 5
 - training_steps: 100
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer   |
-|:-------------:|:-----:|:----:|:---------------:|:-----:|
-| 0.0014        | 25.0  | 100  | 3.1513          | 100.0 |
+| Training Loss | Epoch | Step | Validation Loss | Wer      |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 0.0012        | 25.0  | 100  | 3.0590          | 135.7143 |
 
 
 ### Framework versions
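The training hyperparameters listed in the card above correspond to Hugging Face `Seq2SeqTrainingArguments`. The sketch below is a minimal, hedged reconstruction of that configuration as changed in this commit (warmup steps dropped from 10 to 5); the `output_dir`, learning rate, and batch sizes are not visible in this diff, so they are placeholders rather than values taken from the card.

```python
# Minimal sketch, not the repo's actual training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-Cantonese-fine-tune",  # assumed output path
    seed=42,                   # "seed: 42"
    lr_scheduler_type="linear",
    warmup_steps=5,            # updated from 10 in this commit
    max_steps=100,             # "training_steps: 100"
    fp16=True,                 # "mixed_precision_training: Native AMP"
    adam_beta1=0.9,            # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,         # epsilon=1e-08
)
```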
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f1648688a4c4acc5eff6518b522f2be81dbddc69af00a39149e514b9b8d68f01
+oid sha256:84917dc543af18096228a90d034cf665c9e48903cf80a2c8e3e6513c695357dc
 size 966995080
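The weight files in this commit are Git LFS pointers: the repository tracks only the sha256 oid and byte size, and the actual payload lives in LFS storage. A small sketch (an assumption, not part of this repo) of checking a downloaded `model.safetensors` against the new oid recorded above:

```python
# Minimal sketch: verify a downloaded file against its LFS pointer oid.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

expected = "84917dc543af18096228a90d034cf665c9e48903cf80a2c8e3e6513c695357dc"
print(sha256_of("model.safetensors") == expected)
```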
runs/Sep24_23-06-58_asus2/events.out.tfevents.1727244419.asus2.43884.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98e2d7dc2b23edf96a926521dc8754d0d15650664fef62871ec09f799c2ebde5
+size 7345
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0a20f505c3eec51400d030f0345bba9a2fcdeb71e36acd7b6b7a27e1e27442f0
+oid sha256:ffb58d3a528966202c4046a018ca3b1557f0d2b5913c05d80f0ef02407e6fb1d
 size 5432
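For completeness, a hedged usage sketch for the updated checkpoint; the repo id is assumed from the commit header and the audio path is a placeholder, not a file in this repository. Note that WER can exceed 100% because insertions, deletions, and substitutions are all counted against the reference length, so the 135.71 WER reported above indicates the model is still far from converged on this evaluation set.

```python
# Minimal inference sketch, assuming the repo id below; not taken from the card.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="FunPang/whisper-small-Cantonese-fine-tunet",  # assumed repo id
    device=0 if torch.cuda.is_available() else -1,
)
print(asr("example_cantonese.wav")["text"])  # placeholder audio file
```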