Olivia111 committed
Commit
d1f844e
1 Parent(s): 84e9943

End of training

Files changed (2)
  1. README.md +75 -0
  2. generation_config.json +9 -0
README.md ADDED
@@ -0,0 +1,75 @@
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_en64_lr301
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# speecht5_finetuned_en64_lr301

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4580
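
Since the usage sections below are still empty, here is a minimal inference sketch. It assumes the checkpoint lives at `Olivia111/speecht5_finetuned_en64_lr301` (a repo id inferred from the committer and model name, not confirmed by the card) and follows the standard `transformers` SpeechT5 API; the random speaker embedding is only a placeholder for a real 512-dim x-vector.

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Assumed repo id; replace with the actual Hub path of this checkpoint.
model = SpeechT5ForTextToSpeech.from_pretrained("Olivia111/speecht5_finetuned_en64_lr301")
# The base model's processor is assumed here; the fine-tuned repo may ship its own.
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5-hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
# SpeechT5 conditions on a 512-dim x-vector; a random vector stands in here.
speaker_embeddings = torch.randn(1, 512)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```
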
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a hedged reconstruction as `Seq2SeqTrainingArguments` follows the list:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1500
- mixed_precision_training: Native AMP

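A sketch of these settings as `Seq2SeqTrainingArguments`, not the author's actual script: the output directory, save cadence, and logging target are assumptions, while the 100-step eval interval is read off the results table below. The listed Adam betas and epsilon are the optimizer defaults, so no explicit optimizer arguments are needed.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_en64_lr301",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # effective train batch: 32 * 16 = 512
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1500,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=100,                  # matches the 100-step cadence in the table below
    save_steps=100,                  # assumed to match the eval cadence
    report_to=[],                    # assumed; the card names no logger
)
```
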
### Training results

| Training Loss | Epoch    | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.4634        | 55.1724  | 100  | 0.3961          |
| 0.3883        | 110.3448 | 200  | 0.3942          |
| 0.3614        | 165.5172 | 300  | 0.4229          |
| 0.3456        | 220.6897 | 400  | 0.4271          |
| 0.3355        | 275.8621 | 500  | 0.4273          |
| 0.3257        | 331.0345 | 600  | 0.4478          |
| 0.3194        | 386.2069 | 700  | 0.4437          |
| 0.3116        | 441.3793 | 800  | 0.4586          |
| 0.3053        | 496.5517 | 900  | 0.4518          |
| 0.2994        | 551.7241 | 1000 | 0.4535          |
| 0.2969        | 606.8966 | 1100 | 0.4594          |
| 0.2946        | 662.0690 | 1200 | 0.4521          |
| 0.2904        | 717.2414 | 1300 | 0.4569          |
| 0.2889        | 772.4138 | 1400 | 0.4573          |
| 0.2911        | 827.5862 | 1500 | 0.4580          |

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,9 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "decoder_start_token_id": 2,
  "eos_token_id": 2,
  "max_length": 1876,
  "pad_token_id": 1,
  "transformers_version": "4.42.4"
}
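
These settings are picked up automatically at generation time; a short sketch, assuming the same unconfirmed repo id as in the inference example above, shows how to inspect them directly:

```python
from transformers import GenerationConfig

# Assumed repo id, as in the inference sketch above.
gen_config = GenerationConfig.from_pretrained("Olivia111/speecht5_finetuned_en64_lr301")
print(gen_config.max_length)              # 1876 decoder steps, per the file above
print(gen_config.decoder_start_token_id)  # 2
```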