soBeauty committed on
Commit
a36212a
1 Parent(s): 5f63f4f

End of training

Files changed (3)
  1. README.md +70 -0
  2. generation_config.json +5 -0
  3. pytorch_model.bin +1 -1
README.md ADDED
@@ -0,0 +1,70 @@
+ ---
+ license: apache-2.0
+ base_model: bert-base-multilingual-cased
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: 20231005-1-bert-base-multilingual-cased-new
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # 20231005-1-bert-base-multilingual-cased-new
+
+ This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Accuracy: 0.6240
+ - Loss: 1.6828
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 20
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Accuracy | Validation Loss |
+ |:-------------:|:-----:|:----:|:--------:|:---------------:|
+ | 2.9403        | 1.82  | 200  | 0.4411   | 2.4884          |
+ | 2.4143        | 3.64  | 400  | 0.4908   | 2.1693          |
+ | 2.1466        | 5.45  | 600  | 0.5377   | 1.9990          |
+ | 2.0429        | 7.27  | 800  | 0.5424   | 2.1102          |
+ | 1.9514        | 9.09  | 1000 | 0.5680   | 1.8748          |
+ | 1.8498        | 10.91 | 1200 | 0.5826   | 1.8680          |
+ | 1.8097        | 12.73 | 1400 | 0.5960   | 1.8489          |
+ | 1.737         | 14.55 | 1600 | 0.6364   | 1.6621          |
+ | 1.7203        | 16.36 | 1800 | 0.6298   | 1.6846          |
+ | 1.6172        | 18.18 | 2000 | 0.6527   | 1.5969          |
+ | 1.6564        | 20.0  | 2200 | 0.6240   | 1.6828          |
+
+
+ ### Framework versions
+
+ - Transformers 4.34.0
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.14.5
+ - Tokenizers 0.14.0
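
As a usage note for the card above: the checkpoint can be loaded with the standard `transformers` API. The sketch below assumes the repository id is `soBeauty/20231005-1-bert-base-multilingual-cased-new` (committer plus model name) and that the model keeps the masked-language-modeling head of its BERT base; neither is stated explicitly in the card.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# Assumed repository id (committer name + model name); adjust if the repo lives elsewhere.
repo_id = "soBeauty/20231005-1-bert-base-multilingual-cased-new"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id)

# Fill-mask is assumed from the masked-LM base model; the card does not name the task.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"Paris is the capital of {tokenizer.mask_token}."))
```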
generation_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "_from_model_config": true,
+   "pad_token_id": 0,
+   "transformers_version": "4.34.0"
+ }
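
The new `generation_config.json` only records the padding token id and the `transformers` version that wrote it. A small sketch of inspecting it directly, under the same assumed repository id as above:

```python
from transformers import GenerationConfig

# Assumed repository id; generation_config.json is picked up automatically by
# model.generate(), but it can also be loaded and inspected on its own.
gen_cfg = GenerationConfig.from_pretrained(
    "soBeauty/20231005-1-bert-base-multilingual-cased-new"
)
print(gen_cfg.pad_token_id)  # 0, as written in the file above
```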
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e630f93d9e030f1aafb6a6f4d99577b4c5f24646ae18396680ee6929f293d0bf
+ oid sha256:02ddf3b225a7198e2ba048ceafca63b5c10f5a646d63ccf42ed70ae6e748be75
  size 711964213
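
`pytorch_model.bin` is tracked with Git LFS, so the diff only swaps the SHA-256 in the pointer file; the weights themselves (~712 MB) live in LFS storage. A minimal integrity check against the new pointer, assuming the weights file has been downloaded locally:

```python
import hashlib
import os

path = "pytorch_model.bin"  # hypothetical local path to the downloaded weights

# Hash in 1 MiB chunks to avoid holding ~712 MB in memory at once.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

# Values taken from the new LFS pointer above.
assert digest.hexdigest() == "02ddf3b225a7198e2ba048ceafca63b5c10f5a646d63ccf42ed70ae6e748be75"
assert os.path.getsize(path) == 711964213
```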