omvishesh committed
Commit dd3a6b6
1 Parent(s): aa4fe02

End of training

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4689
+- Loss: 0.4392
 
 ## Model description
 
@@ -35,32 +35,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 4
-- eval_batch_size: 2
+- learning_rate: 2e-05
+- train_batch_size: 2
+- eval_batch_size: 1
 - seed: 42
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 32
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 200
-- training_steps: 1000
+- lr_scheduler_warmup_steps: 40
+- training_steps: 250
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 0.754         | 0.7333 | 100  | 0.6016          |
-| 0.588         | 1.4665 | 200  | 0.5398          |
-| 0.557         | 2.1998 | 300  | 0.5056          |
-| 0.5447        | 2.9331 | 400  | 0.5043          |
-| 0.5307        | 3.6664 | 500  | 0.4880          |
-| 0.5185        | 4.3996 | 600  | 0.4909          |
-| 0.5221        | 5.1329 | 700  | 0.4802          |
-| 0.5043        | 5.8662 | 800  | 0.4712          |
-| 0.5034        | 6.5995 | 900  | 0.4686          |
-| 0.5023        | 7.3327 | 1000 | 0.4689          |
+| Training Loss | Epoch   | Step | Validation Loss |
+|:-------------:|:-------:|:----:|:---------------:|
+| 0.6078        | 2.2535  | 40   | 0.4783          |
+| 0.5393        | 4.5070  | 80   | 0.4533          |
+| 0.4864        | 6.7606  | 120  | 0.4480          |
+| 0.4846        | 9.0141  | 160  | 0.4493          |
+| 0.4628        | 11.2676 | 200  | 0.4383          |
+| 0.4731        | 13.5211 | 240  | 0.4392          |
 
 
 ### Framework versions
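The total_train_batch_size values on both sides of the diff follow directly from the per-device batch size and the gradient accumulation steps. A minimal sketch of that relation, assuming single-device training (the card does not state the device count, so `devices` defaulting to 1 is an assumption):

```python
def effective_batch_size(per_device: int, grad_accum: int, devices: int = 1) -> int:
    """Total examples contributing to one optimizer step.

    `devices=1` is an assumption; multi-GPU training multiplies accordingly.
    """
    return per_device * grad_accum * devices

# New configuration in this commit: 2 per device * 4 accumulation steps = 8
assert effective_batch_size(2, 4) == 8
# Previous configuration: 4 per device * 8 accumulation steps = 32
assert effective_batch_size(4, 8) == 32
```

So while the per-device batch dropped from 4 to 2, the bigger change in effective batch size (32 to 8) comes from reducing gradient accumulation from 8 to 4.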
runs/Oct24_11-08-03_om/events.out.tfevents.1729748297.om.17272.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c64f4dfbb02f3eca4f563f88ec8d5df0bc5d0f0097b6dede7b8dbd24730655d6
-size 13223
+oid sha256:9ace9f6eacb076dff1f72d7b8eed8fa647049488b5b2f0b9b6561a0ba9244632
+size 13788
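The tfevents entry above is a Git LFS pointer, not the binary log itself: the repository tracks only the `version`, `oid`, and `size` lines shown in the diff, and the actual object is fetched by hash. A small sketch of reading those fields (the `parse_lfs_pointer` helper is hypothetical, written here for illustration):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "<key> <value>"
        fields[key] = value
    return fields

# The pointer committed in this change:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:9ace9f6eacb076dff1f72d7b8eed8fa647049488b5b2f0b9b6561a0ba9244632
size 13788"""

info = parse_lfs_pointer(pointer)
assert info["size"] == "13788"
assert info["oid"].startswith("sha256:")
```

The `size` field (13223 before, 13788 after) is the byte size of the underlying object, which is why it changes whenever the training logs are rewritten.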