sumet committed
Commit 6960f5b
1 parent: 7b53c0b

update model card README.md

Files changed (1): README.md (+16 −22)
README.md CHANGED

```diff
@@ -1,23 +1,25 @@
 ---
+language:
+- nl
 license: mit
 base_model: microsoft/speecht5_tts
 tags:
 - generated_from_trainer
 datasets:
-- voxpopuli
+- facebook/voxpopuli
 model-index:
-- name: speecht5_finetuned_voxpopuli_nl
+- name: speec T5 NL - Sumet
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# speecht5_finetuned_voxpopuli_nl
+# speec T5 NL - Sumet
 
-This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
+This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Vox Populi NL dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4817
+- Loss: nan
 
 ## Model description
 
@@ -36,36 +38,28 @@
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
-- train_batch_size: 16
+- learning_rate: 0.003
+- train_batch_size: 8
 - eval_batch_size: 4
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 1000
+- training_steps: 4000
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.7685        | 0.43  | 100  | 0.6720          |
-| 0.7072        | 0.86  | 200  | 0.6247          |
-| 0.6094        | 1.29  | 300  | 0.5385          |
-| 0.5648        | 1.72  | 400  | 0.5098          |
-| 0.5602        | 2.15  | 500  | 0.5003          |
-| 0.5337        | 2.58  | 600  | 0.4931          |
-| 0.5357        | 3.01  | 700  | 0.4881          |
-| 0.5315        | 3.44  | 800  | 0.4841          |
-| 0.5248        | 3.87  | 900  | 0.4828          |
-| 0.5281        | 4.3   | 1000 | 0.4817          |
+| 0.0           | 0.54  | 1000 | nan             |
+| 0.0           | 1.09  | 2000 | nan             |
+| 0.0           | 1.63  | 3000 | nan             |
+| 0.0           | 2.18  | 4000 | nan             |
 
 
 ### Framework versions
 
-- Transformers 4.31.0.dev0
+- Transformers 4.31.0
 - Pytorch 2.0.1+cu118
-- Datasets 2.13.1
+- Datasets 2.14.0
 - Tokenizers 0.13.3
```
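The removed hyperparameters spell out the old run's effective batch size (train_batch_size 16 × gradient_accumulation_steps 2 = total_train_batch_size 32), and both runs use a linear scheduler with 500 warmup steps. A minimal sketch of that arithmetic and schedule, assuming the usual warmup-then-linear-decay shape (the `linear_lr` function is ours, mirroring what `transformers`' `get_linear_schedule_with_warmup` computes):

```python
# Effective batch size of the old run (values from the removed card lines).
train_batch_size = 16
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32

def linear_lr(step, base_lr, warmup_steps=500, total_steps=4000):
    """Learning rate at a given step: linear warmup from 0 to base_lr
    over warmup_steps, then linear decay back to 0 at total_steps.
    Illustrative only -- not the Trainer's actual implementation."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(total_train_batch_size)        # 32
print(linear_lr(500, 0.003))         # peak LR reached at end of warmup: 0.003
print(linear_lr(4000, 0.003))        # decayed to 0.0 at the final step
```

With a peak of 0.003 (300× the earlier 1e-05), a diverging loss (`nan` in the new results table) is a plausible outcome, though the card itself does not state the cause.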