thiagolira committed
Commit 768df84
1 Parent(s): c48d6c3

End of training

README.md CHANGED
@@ -1,40 +1,36 @@
 ---
-language:
-- la
 license: mit
+base_model: facebook/w2v-bert-2.0
 tags:
 - generated_from_trainer
-datasets:
-- thiagolira/LatinYoutube
 metrics:
 - wer
-base_model: facebook/w2v-bert-2.0
 model-index:
 - name: CiceroASR
   results: []
 ---
 
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
 
 # CiceroASR
 
-This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0)
-for the transcription of Classical Latin!
-
-Example from the Aeneid:
-<video controls src="https://cdn-uploads.huggingface.co/production/uploads/5fc7944e8a82cc0bcf7cc51d/hYNFr2od1EKDlRRdzJmzR.webm"></video>
-Transcription:
-**arma virumque cano** (Of arms and the man I sing)
-
-Example from Genesis:
-<video controls src="https://cdn-uploads.huggingface.co/production/uploads/5fc7944e8a82cc0bcf7cc51d/9Q6DfG2h8FkABnl55DLBH.webm"></video>
-Transcription (note the small error):
-**creavit deus chaelum et terram** (In the beginning God created the heaven and the earth)
-
-It achieves the following results on the evaluation set of my dataset [Latin Youtube](https://huggingface.co/datasets/thiagolira/LatinYoutube):
-- Loss: 0.5026
-- Wer: 0.1651
+This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.5395
+- Wer: 0.2220
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
 
 ## Training procedure
@@ -57,24 +53,26 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer    |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 2.9864        | 1.14  | 50   | 2.4639          | 1.0    |
-| 0.7134        | 2.27  | 100  | 0.4891          | 0.3601 |
-| 0.5196        | 3.41  | 150  | 0.5267          | 0.3022 |
-| 0.3779        | 4.55  | 200  | 0.4407          | 0.2369 |
-| 0.3818        | 5.68  | 250  | 0.4516          | 0.2360 |
-| 0.3           | 6.82  | 300  | 0.4365          | 0.2379 |
-| 0.3252        | 7.95  | 350  | 0.4238          | 0.2183 |
-| 0.2736        | 9.09  | 400  | 0.4609          | 0.2034 |
-| 0.1588        | 10.23 | 450  | 0.4007          | 0.2239 |
-| 0.1223        | 11.36 | 500  | 0.4892          | 0.1987 |
-| 0.0859        | 12.5  | 550  | 0.5393          | 0.1772 |
-| 0.0575        | 13.64 | 600  | 0.4629          | 0.1744 |
-| 0.0464        | 14.77 | 650  | 0.5026          | 0.1651 |
+| 3.6548        | 0.94  | 50   | 2.8634          | 0.9990 |
+| 2.2055        | 1.89  | 100  | 1.0921          | 0.9727 |
+| 1.667         | 2.83  | 150  | 0.7201          | 0.4615 |
+| 1.3148        | 3.77  | 200  | 0.6431          | 0.3866 |
+| 0.9899        | 4.72  | 250  | 0.5561          | 0.3116 |
+| 0.9629        | 5.66  | 300  | 0.6027          | 0.3817 |
+| 0.7557        | 6.6   | 350  | 0.7145          | 0.3145 |
+| 0.9143        | 7.55  | 400  | 0.4926          | 0.2610 |
+| 0.5837        | 8.49  | 450  | 0.5396          | 0.2619 |
+| 0.7037        | 9.43  | 500  | 0.5076          | 0.2746 |
+| 0.5986        | 10.38 | 550  | 0.5224          | 0.2415 |
+| 0.5288        | 11.32 | 600  | 0.5332          | 0.2259 |
+| 0.5034        | 12.26 | 650  | 0.5436          | 0.2249 |
+| 0.4897        | 13.21 | 700  | 0.5171          | 0.2162 |
+| 0.4738        | 14.15 | 750  | 0.5395          | 0.2220 |
 
 
 ### Framework versions
 
-- Transformers 4.38.0
+- Transformers 4.38.1
 - Pytorch 2.1.0+cu121
 - Datasets 2.17.1
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
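The regenerated card drops the old usage examples, so nothing in the diff shows how to call the model. Below is a minimal inference sketch, assuming the checkpoint lives at the hypothetical repo id `thiagolira/CiceroASR`, that the repo ships a processor alongside the `Wav2Vec2BertForCTC` weights, and that `sample.wav` is a placeholder 16 kHz recording of spoken Latin:

```python
# Minimal sketch, not an official usage example.
import torch
import torchaudio
from transformers import AutoProcessor, Wav2Vec2BertForCTC

repo = "thiagolira/CiceroASR"  # hypothetical repo id, inferred from author + model name
processor = AutoProcessor.from_pretrained(repo)
model = Wav2Vec2BertForCTC.from_pretrained(repo)
model.eval()

# sample.wav is a placeholder; downmix to mono and resample to 16 kHz.
waveform, sr = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))  # e.g. ['arma virumque cano']
```

Greedy argmax decoding is sufficient for a plain CTC head like this one; nothing in the commit suggests an external language model was trained for rescoring.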
config.json CHANGED
@@ -9,7 +9,7 @@
9
  "architectures": [
10
  "Wav2Vec2BertForCTC"
11
  ],
12
- "attention_dropout": 0.01,
13
  "bos_token_id": 1,
14
  "classifier_proj_size": 768,
15
  "codevector_dim": 768,
@@ -30,14 +30,14 @@
30
  "initializer_range": 0.02,
31
  "intermediate_size": 4096,
32
  "layer_norm_eps": 1e-05,
33
- "layerdrop": 0.0,
34
  "left_max_position_embeddings": 64,
35
  "mask_feature_length": 10,
36
  "mask_feature_min_masks": 0,
37
  "mask_feature_prob": 0.0,
38
  "mask_time_length": 10,
39
  "mask_time_min_masks": 2,
40
- "mask_time_prob": 0.0,
41
  "max_source_positions": 5000,
42
  "model_type": "wav2vec2-bert",
43
  "num_adapter_layers": 1,
@@ -47,7 +47,7 @@
47
  "num_hidden_layers": 24,
48
  "num_negatives": 100,
49
  "output_hidden_size": 1024,
50
- "pad_token_id": 26,
51
  "position_embeddings_type": "relative_key",
52
  "proj_codevector_dim": 768,
53
  "right_max_position_embeddings": 8,
@@ -74,9 +74,9 @@
74
  1
75
  ],
76
  "torch_dtype": "float32",
77
- "transformers_version": "4.38.0",
78
  "use_intermediate_ffn_before_adapter": false,
79
  "use_weighted_layer_sum": false,
80
- "vocab_size": 29,
81
  "xvector_output_dim": 512
82
  }
 
9
  "architectures": [
10
  "Wav2Vec2BertForCTC"
11
  ],
12
+ "attention_dropout": 0.0,
13
  "bos_token_id": 1,
14
  "classifier_proj_size": 768,
15
  "codevector_dim": 768,
 
30
  "initializer_range": 0.02,
31
  "intermediate_size": 4096,
32
  "layer_norm_eps": 1e-05,
33
+ "layerdrop": 0.1,
34
  "left_max_position_embeddings": 64,
35
  "mask_feature_length": 10,
36
  "mask_feature_min_masks": 0,
37
  "mask_feature_prob": 0.0,
38
  "mask_time_length": 10,
39
  "mask_time_min_masks": 2,
40
+ "mask_time_prob": 0.05,
41
  "max_source_positions": 5000,
42
  "model_type": "wav2vec2-bert",
43
  "num_adapter_layers": 1,
 
47
  "num_hidden_layers": 24,
48
  "num_negatives": 100,
49
  "output_hidden_size": 1024,
50
+ "pad_token_id": 28,
51
  "position_embeddings_type": "relative_key",
52
  "proj_codevector_dim": 768,
53
  "right_max_position_embeddings": 8,
 
74
  1
75
  ],
76
  "torch_dtype": "float32",
77
+ "transformers_version": "4.38.1",
78
  "use_intermediate_ffn_before_adapter": false,
79
  "use_weighted_layer_sum": false,
80
+ "vocab_size": 31,
81
  "xvector_output_dim": 512
82
  }
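These config changes are regularization and vocabulary updates rather than architecture changes: `mask_time_prob` 0.0 → 0.05 enables SpecAugment-style time masking during training, `layerdrop` 0.0 → 0.1 randomly skips encoder layers while training, and the character vocabulary grew from 29 to 31 tokens with the pad token moving to id 28. A quick sketch to confirm the values on the published checkpoint (the repo id is again an assumption):

```python
from transformers import Wav2Vec2BertConfig

# Hypothetical repo id, inferred from the commit's author and model name.
cfg = Wav2Vec2BertConfig.from_pretrained("thiagolira/CiceroASR")

print(cfg.mask_time_prob)  # 0.05: SpecAugment-style time masking (training only)
print(cfg.layerdrop)       # 0.1: each encoder layer skipped with p=0.1 during training
print(cfg.vocab_size)      # 31: output size of the CTC head, must match the tokenizer
print(cfg.pad_token_id)    # 28: also serves as the CTC blank token by default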
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d624783ea559e63e5ccbfeebf113f523d4392cca9f5b7c87c88fc6b1de66f24f
-size 2422933460
+oid sha256:e0b3ed488437a89e7f7698e03e1cb4024a3cdec1b502422e851d049a93c2271d
+size 2422945860
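The `model.safetensors` entry above is a Git LFS pointer, so the diff records only a new SHA-256 and byte size (roughly 2.4 GB), not the weights themselves. To materialize the actual file without cloning the whole repo, something like the following works (repo id again assumed):

```python
from huggingface_hub import hf_hub_download

# Downloads and caches the resolved LFS object, not the 3-line pointer file.
path = hf_hub_download(repo_id="thiagolira/CiceroASR", filename="model.safetensors")
print(path)  # local cache path to the weights
```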
runs/Feb23_21-28-37_a1c0a7f9db93/events.out.tfevents.1708723755.a1c0a7f9db93.399.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55794fd39574b5d1aec9c9a815fb1c1c894f978cb20027394f512e6e8c2b2c95
+size 17640
runs/Feb23_21-50-41_a1c0a7f9db93/events.out.tfevents.1708725089.a1c0a7f9db93.399.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be87027f3a12f08445baa3f46e60af792834320cd20f19dbb16ccb24c1e3120d
+size 17467
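The two files added under `runs/` are TensorBoard event logs (also stored via LFS). Once fetched, they can be read programmatically; a sketch using TensorBoard's event reader, with the tag names guessed from what `Trainer` typically logs:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point at one of the run directories added in this commit.
acc = EventAccumulator("runs/Feb23_21-50-41_a1c0a7f9db93")
acc.Reload()

print(acc.Tags()["scalars"])           # lists available tags, e.g. 'train/loss'
for event in acc.Scalars("eval/wer"):  # tag name is a guess based on Trainer defaults
    print(event.step, event.value)
```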
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8ef64dfc202d707f0f052d97cb9910b9c8c80ea2ffbf68cc9e4078fe8463c324
-size 4920
+oid sha256:7700e5ed7df84a1d2583138d2e5556600a67acb82570114f01dfea9cae123ba0
+size 4856
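Finally, `training_args.bin` is a pickled `TrainingArguments` object saved by `Trainer`. A sketch of inspecting it once downloaded, with the usual caveat that unpickling executes arbitrary code, so only load files from sources you trust:

```python
import torch
from transformers import TrainingArguments  # imported for the type hint; pickle resolves it anyway

# Under the Pytorch 2.1.0 pinned by this card, torch.load unpickles by default;
# on PyTorch >= 2.6 you would need torch.load(..., weights_only=False).
args: TrainingArguments = torch.load("training_args.bin")
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```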