ibrahimj committed
Commit 0ef2a3c
1 Parent(s): 895230c

End of training

README.md CHANGED
@@ -1,20 +1,33 @@
 ---
 base_model: nadsoft/Hamsa-large-v0.1-beta
 tags:
+- whisper-event
 - generated_from_trainer
+datasets:
+- nadsoft/QASR-Speech-Resource
 metrics:
 - wer
 model-index:
-- name: hamsa-pretrained
-  results: []
+- name: hamsa-large-pretrained
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: nadsoft/QASR-Speech-Resource default
+      type: nadsoft/QASR-Speech-Resource
+    metrics:
+    - name: Wer
+      type: wer
+      value: 29.205723913714138
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# hamsa-pretrained
+# hamsa-large-pretrained
 
-This model is a fine-tuned version of [nadsoft/Hamsa-large-v0.1-beta](https://huggingface.co/nadsoft/Hamsa-large-v0.1-beta) on an unknown dataset.
+This model is a fine-tuned version of [nadsoft/Hamsa-large-v0.1-beta](https://huggingface.co/nadsoft/Hamsa-large-v0.1-beta) on the nadsoft/QASR-Speech-Resource default dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.4344
 - Wer: 29.2057
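Usage note (not part of the commit): the updated model card describes a Whisper-style fine-tune for automatic speech recognition, so it can be loaded with the standard `transformers` ASR pipeline. A minimal sketch follows; the repo id and audio file name are placeholders, not values taken from this diff.

```python
# Minimal inference sketch, assuming the checkpoint from this commit is published
# under a repo id like "nadsoft/hamsa-large-pretrained" (hypothetical; the exact id
# is not stated in this diff) and follows the Whisper architecture of the base model.
from transformers import pipeline

asr = pipeline(
    task="automatic-speech-recognition",
    model="nadsoft/hamsa-large-pretrained",  # hypothetical repo id
    chunk_length_s=30,                       # Whisper-style 30 s windows for long audio
)

# Transcribe a local audio file (16 kHz mono works best for Whisper-family models).
result = asr("sample_arabic_audio.wav")
print(result["text"])
```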
all_results.json ADDED
@@ -0,0 +1,12 @@
+{
+    "epoch": 0.18,
+    "eval_loss": 0.4343973398208618,
+    "eval_runtime": 6177.873,
+    "eval_samples_per_second": 0.81,
+    "eval_steps_per_second": 0.203,
+    "eval_wer": 29.205723913714138,
+    "train_loss": 0.7804906897136143,
+    "train_runtime": 406099.0091,
+    "train_samples_per_second": 0.689,
+    "train_steps_per_second": 0.086
+}
eval_results.json ADDED
@@ -0,0 +1,8 @@
+{
+    "epoch": 0.18,
+    "eval_loss": 0.4343973398208618,
+    "eval_runtime": 6177.873,
+    "eval_samples_per_second": 0.81,
+    "eval_steps_per_second": 0.203,
+    "eval_wer": 29.205723913714138
+}
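The `eval_wer` value above is a word error rate expressed in percent. The evaluation script itself is not part of this commit, but a figure like this is commonly computed with the Hugging Face `evaluate` library, as in the sketch below (the transcripts shown are made-up placeholders).

```python
# Sketch of how a WER figure like eval_wer is typically computed with `evaluate`;
# the actual evaluation code behind this commit is not included in the diff.
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical reference transcripts and model predictions.
references = ["مرحبا بكم في النشرة", "الطقس اليوم مشمس"]
predictions = ["مرحبا بكم في نشرة", "الطقس اليوم مشمس"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.2f}%")  # compute() returns a fraction; multiply by 100 for percent
```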
runs/Jan30_13-41-29_ip-10-0-3-5.eu-west-1.compute.internal/events.out.tfevents.1707034551.ip-10-0-3-5.eu-west-1.compute.internal.3710.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cd39bb5c5483e189918cd9904b69e3c713a62423a9792f80d1f3797cfc6fa698
+size 412
train_results.json ADDED
@@ -0,0 +1,7 @@
+{
+    "epoch": 0.18,
+    "train_loss": 0.7804906897136143,
+    "train_runtime": 406099.0091,
+    "train_samples_per_second": 0.689,
+    "train_steps_per_second": 0.086
+}
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
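For context, the files added in this commit (train_results.json, eval_results.json, all_results.json, trainer_state.json) are the artifacts the `transformers` Trainer typically writes at the end of a run. The sketch below shows the usual end-of-training calls; the actual training script is not part of this diff, and `trainer` is assumed to be an already-configured `transformers.Trainer`.

```python
# Sketch of the end-of-training bookkeeping that typically produces these files;
# `trainer` is assumed to be a configured transformers.Trainer (not shown here).
train_result = trainer.train()

trainer.log_metrics("train", train_result.metrics)
trainer.save_metrics("train", train_result.metrics)  # train_results.json (also merged into all_results.json)

eval_metrics = trainer.evaluate()
trainer.log_metrics("eval", eval_metrics)
trainer.save_metrics("eval", eval_metrics)           # eval_results.json (also merged into all_results.json)

trainer.save_state()                                 # trainer_state.json
trainer.push_to_hub(commit_message="End of training")  # uploads the run artifacts to the Hub
```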