Jonnhan committed on
Commit fd0f70d
1 Parent(s): d1a3278

Update README.md

Files changed (1)
1. README.md +68 -67
README.md CHANGED
@@ -1,67 +1,68 @@
- ---
- license: mit
- base_model: microsoft/speecht5_tts
- tags:
- - generated_from_trainer
- datasets:
- - facebook/voxpopuli
- model-index:
- - name: speecht5_finetuned_voxpopuli_nl_10000
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # speecht5_finetuned_voxpopuli_nl_10000
-
- This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.4738
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 2
- - eval_batch_size: 2
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 16
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 2500
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.5645 | 1.5619 | 500 | 0.5125 |
- | 0.5299 | 3.1238 | 1000 | 0.4888 |
- | 0.5206 | 4.6857 | 1500 | 0.4778 |
- | 0.5118 | 6.2476 | 2000 | 0.4747 |
- | 0.5148 | 7.8094 | 2500 | 0.4738 |
-
-
- ### Framework versions
-
- - Transformers 4.44.0
- - Pytorch 2.1.1+cu118
- - Datasets 2.20.0
- - Tokenizers 0.19.1
 
 
+ ---
+ license: mit
+ base_model: microsoft/speecht5_tts
+ tags:
+ - generated_from_trainer
+ datasets:
+ - facebook/voxpopuli
+ model-index:
+ - name: speecht5_finetuned_voxpopuli_nl_10000
+   results: []
+ pipeline_tag: text-to-speech
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # speecht5_finetuned_voxpopuli_nl_10000
+
+ This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4738
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 2500
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.5645 | 1.5619 | 500 | 0.5125 |
+ | 0.5299 | 3.1238 | 1000 | 0.4888 |
+ | 0.5206 | 4.6857 | 1500 | 0.4778 |
+ | 0.5118 | 6.2476 | 2000 | 0.4747 |
+ | 0.5148 | 7.8094 | 2500 | 0.4738 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.0
+ - Pytorch 2.1.1+cu118
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
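
Since the commit tags the model as text-to-speech, here is a minimal inference sketch, assuming the checkpoint is published under the committer's namespace as `Jonnhan/speecht5_finetuned_voxpopuli_nl_10000` and using the standard `transformers` SpeechT5 API with CMU Arctic x-vector speaker embeddings; the repo id, example sentence, and output path are illustrative assumptions, not part of the committed card.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Repo id is an assumption (committer namespace + model name from the card).
repo_id = "Jonnhan/speecht5_finetuned_voxpopuli_nl_10000"

processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions on a speaker embedding; CMU Arctic x-vectors are a common choice.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

# Dutch example sentence, since the model name suggests the VoxPopuli "nl" split.
inputs = processor(text="Hallo, dit is een korte testzin.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)

sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```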
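The hyperparameters listed in the card map roughly onto a `transformers` `Seq2SeqTrainingArguments` configuration. The sketch below is inferred from the card, not the script actually used for this commit; `output_dir`, the evaluation/save cadence, and the fp16 flag are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_nl_10000",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective batch size: 2 * 8 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2500,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
    eval_strategy="steps",  # assumed; the card reports eval loss every 500 steps
    eval_steps=500,
    save_steps=500,
    # The Adam betas=(0.9, 0.999) and epsilon=1e-08 in the card are the library defaults.
)
```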