End of training
README.md CHANGED
@@ -2,12 +2,12 @@
 language:
 - ko
 license: apache-2.0
+base_model: openai/whisper-base
 tags:
 - hf-asr-leaderboard
 - generated_from_trainer
 datasets:
 - hyojin99/EBRC
-base_model: openai/whisper-base
 model-index:
 - name: ft_model
   results: []
@@ -20,8 +20,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the EBRC dataset.
 It achieves the following results on the evaluation set:
--
--
+- eval_loss: 0.1907
+- eval_cer: 10.7153
+- eval_runtime: 1285.9622
+- eval_samples_per_second: 3.888
+- eval_steps_per_second: 0.486
+- step: 0
 
 ## Model description
 
@@ -50,18 +54,6 @@ The following hyperparameters were used during training:
 - training_steps: 6000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Cer     |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.3649        | 0.4   | 1000 | 0.3399          | 18.5291 |
-| 0.2559        | 0.8   | 2000 | 0.2578          | 14.4563 |
-| 0.1746        | 1.2   | 3000 | 0.2247          | 13.3580 |
-| 0.1639        | 1.6   | 4000 | 0.2017          | 11.5004 |
-| 0.1251        | 2.0   | 5000 | 0.1909          | 10.8103 |
-| 0.1051        | 2.4   | 6000 | 0.1886          | 10.6428 |
-
-
 ### Framework versions
 
 - Transformers 4.39.0.dev0
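The updated card above describes a Korean speech-recognition fine-tune of whisper-base that reaches roughly 10.7 CER on the EBRC evaluation set. As a minimal usage sketch (not part of this commit), the checkpoint can be loaded through the transformers ASR pipeline; the repository id `hyojin99/ft_model` is an assumption derived from the `name: ft_model` entry in the card and may differ from the actual repository path.

```python
# Minimal inference sketch for the fine-tuned Whisper checkpoint described in the card.
# Assumption: the model is published as "hyojin99/ft_model" (taken from the model-index
# name); replace it with the real repository id if it differs.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="hyojin99/ft_model",  # assumed repo id
)

# Force Korean transcription, matching the card's `language: ko` tag.
result = asr(
    "sample_ko.wav",  # path to a local Korean audio file
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```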
runs/Mar12_22-35-36_0764078b8668/events.out.tfevents.1710307806.0764078b8668.2736.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7aa01be76a0a91737bcb9b3c10ba8eb00d667157f3206a5c1e9f7e0386f59465
+size 341
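The added run file is tracked with Git LFS, so the diff stores only the three-line pointer (spec version, object hash, and size of the underlying object) rather than the TensorBoard event data itself. A hedged sketch for retrieving the actual file with `huggingface_hub`, again assuming the repository id `hyojin99/ft_model`:

```python
# Download the TensorBoard event file behind the LFS pointer shown above.
# Assumption: repo id "hyojin99/ft_model"; substitute the real repository path.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="hyojin99/ft_model",
    filename="runs/Mar12_22-35-36_0764078b8668/events.out.tfevents.1710307806.0764078b8668.2736.1",
)
print(local_path)  # local cache path of the downloaded event file
```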