nimrita committed
Commit 1325f94
1 Parent(s): 7a9fd60

End of training

Files changed (1)
  1. README.md +9 -14
README.md CHANGED
@@ -3,6 +3,7 @@ license: mit
  base_model: microsoft/speecht5_tts
  tags:
  - text-to-speech
+ - generated_from_trainer
  datasets:
  - voxpopuli
  model-index:
@@ -17,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5157
+ - Loss: 0.4915
 
  ## Model description
 
@@ -36,7 +37,7 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 1e-05
+ - learning_rate: 0.0001
  - train_batch_size: 2
  - eval_batch_size: 2
  - seed: 42
@@ -44,21 +45,15 @@ The following hyperparameters were used during training:
  - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 4000
+ - lr_scheduler_warmup_steps: 200
+ - training_steps: 500
 
  ### Training results
 
- | Training Loss | Epoch  | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.5152        | 91.95  | 500  | 0.5177          |
- | 0.4734        | 183.91 | 1000 | 0.5199          |
- | 0.4555        | 275.86 | 1500 | 0.5129          |
- | 0.4464        | 367.82 | 2000 | 0.5133          |
- | 0.437         | 459.77 | 2500 | 0.5097          |
- | 0.4342        | 551.72 | 3000 | 0.5152          |
- | 0.426         | 643.68 | 3500 | 0.5153          |
- | 0.426         | 735.63 | 4000 | 0.5157          |
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.4214        | 45.98 | 250  | 0.4956          |
+ | 0.3978        | 91.95 | 500  | 0.4915          |
 
 
  ### Framework versions
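
For context, here is a minimal sketch (not part of the commit) of how the hyperparameters introduced in this change could be expressed with `Seq2SeqTrainingArguments` from `transformers`. The output directory name, `gradient_accumulation_steps=16` (inferred from train_batch_size 2 and total_train_batch_size 32; the same total could also come from multiple devices), and the 250-step evaluation cadence are assumptions, not values recorded in the model card.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the card's hyperparameters onto TrainingArguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli",  # assumed name, not from the commit
    learning_rate=1e-4,              # 0.0001 after this commit (was 1e-05)
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # assumption: 2 x 16 = total_train_batch_size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,                # 200 after this commit (was 500)
    max_steps=500,                   # 500 after this commit (was 4000)
    evaluation_strategy="steps",     # assumption: evaluate every 250 steps,
    eval_steps=250,                  # matching the results table above
)
```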