lithish2602 committed on
Commit 4e6e639
1 Parent(s): 09d78ae

Training in progress, step 4000
README.md CHANGED
@@ -17,8 +17,6 @@ should probably proofread and complete it, then remove this comment. -->
 # speecht5_tts_ta
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_17_0 dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.5230
 
 ## Model description
 
@@ -37,7 +35,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 5e-05
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
@@ -45,20 +43,9 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 32
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 1000
 - training_steps: 4000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 0.9121 | 500.0 | 1000 | 0.4994 |
-| 0.8363 | 1000.0 | 2000 | 0.5093 |
-| 0.7777 | 1500.0 | 3000 | 0.5234 |
-| 0.7586 | 2000.0 | 4000 | 0.5230 |
-
-
 ### Framework versions
 
 - Transformers 4.46.0.dev0
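The hyperparameter list in the README diff can be sketched as a plain config dict. This is an illustrative mapping onto Hugging Face `TrainingArguments`-style key names, not something stated in the diff itself; in particular, `gradient_accumulation_steps` is an inferred assumption (total_train_batch_size 32 / train_batch_size 16 = 2 on a single device).

```python
# Sketch of the training configuration implied by the diff's hyperparameters.
# Key names follow TrainingArguments conventions (assumption); the diff only
# lists the values, not the argument names used in the training script.
config = {
    "learning_rate": 5e-05,              # new value in this commit (was 1e-05)
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 2,    # inferred: 32 total / 16 per device
    "lr_scheduler_type": "linear",
    "max_steps": 4000,                   # "training_steps: 4000"
    "fp16": True,                        # "mixed_precision_training: Native AMP"
}

# The effective batch size reported in the README is the product of the
# per-device batch size and the accumulation steps (on one device).
total_train_batch_size = (
    config["per_device_train_batch_size"] * config["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 32
```

Note that this commit also drops `lr_scheduler_warmup_steps: 1000` from the README, so the linear schedule presumably decays from step 0 with no warmup phase.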
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8604c7fb221bfdfa521e2866dc1f717e104ede80d4ee86da20be2eec573f5a3a
+oid sha256:5ffb108605d11a7e6967d411e96038a558d9c6678fae256fa759d4dec485ff55
 size 577789320
runs/Oct24_08-17-22_2c77d78e9598/events.out.tfevents.1729757881.2c77d78e9598.341.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59f3e61cc9a47ab3598d294d73ef4ac4871ade5e9234ff89b8ee70920e022ecd
+size 41780
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:900e4a04b853013c8813da3e18fcb4128f7a187bfd148b3460f44e552139c029
+oid sha256:8b6d2270aa17081a51abe166a3fb5159a3e85a3c00fa7ece0555d2b836a5999a
 size 5368
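The binary files in this commit (`model.safetensors`, `training_args.bin`, the TensorBoard events file) are tracked with Git LFS, so the diffs above show only the three-line pointer files, not the blobs themselves. A minimal sketch of reading such a pointer, using the `training_args.bin` pointer from this commit as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", separated by a single space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new training_args.bin pointer from this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:8b6d2270aa17081a51abe166a3fb5159a3e85a3c00fa7ece0555d2b836a5999a
size 5368"""

fields = parse_lfs_pointer(pointer)
algo, _, digest = fields["oid"].partition(":")

# To verify a downloaded blob against the pointer, one would compare
# hashlib.sha256(blob).hexdigest() to `digest` and len(blob) to
# int(fields["size"]).
print(algo, fields["size"])
```

Because the SHA-256 `oid` changes whenever the content changes, an identical `size` (as with `training_args.bin` here, 5368 bytes before and after) does not mean the file is unchanged.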