AAmeni committed
Commit e3fc9ef
1 Parent(s): c2915d3

End of training

README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ license: mit
+ base_model: nielsr/lilt-xlm-roberta-base
+ tags:
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: LILT-id
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # LILT-id
+
+ This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4723
+ - Precision: 0.9132
+ - Recall: 0.8998
+ - F1: 0.9064
+ - Accuracy: 0.9467
+
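For quick reference, a minimal inference sketch follows. It is an editorial addition rather than part of the committed card: the hub id `AAmeni/LILT-id` is assumed from the commit author and model name, and the example words and 0-1000-normalized bounding boxes stand in for real OCR output.

```python
# Hedged sketch: load the fine-tuned LiLT checkpoint for token classification.
# "AAmeni/LILT-id" is a hypothetical hub id; adjust to the actual repository.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo_id = "AAmeni/LILT-id"  # assumed, not taken from this commit
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

# Example words with 0-1000-normalized boxes, standing in for OCR output.
words = ["Invoice", "No.", "12345"]
word_boxes = [[48, 84, 156, 100], [160, 84, 210, 100], [214, 84, 270, 100]]

encoding = tokenizer(words, is_split_into_words=True,
                     return_tensors="pt", truncation=True)

# Expand word-level boxes to token level; special tokens get an empty box.
bbox = [word_boxes[i] if i is not None else [0, 0, 0, 0]
        for i in encoding.word_ids(batch_index=0)]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits

labels = [model.config.id2label[p] for p in logits.argmax(-1).squeeze(0).tolist()]
print(list(zip(encoding.tokens(), labels)))
```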
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 2
+ - eval_batch_size: 4
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 30
+ - mixed_precision_training: Native AMP
+
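Mapped onto the `Trainer` API, these settings look roughly like the sketch below (an editorial illustration, not the actual training script from this commit); the 200-step evaluation interval is inferred from the results table, and dataset preparation plus the `compute_metrics` function are assumed and omitted.

```python
# Hedged sketch of TrainingArguments matching the hyperparameters above.
# Data loading, preprocessing, and compute_metrics are intentionally omitted.
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="LILT-id",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                  # "Native AMP" mixed precision
    eval_strategy="steps",      # checkpoints in the table are 200 steps apart
    eval_steps=200,
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
# trainer.train()
```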
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | No log | 2.4390 | 200 | 0.2933 | 0.875 | 0.8557 | 0.8653 | 0.9208 |
+ | No log | 4.8780 | 400 | 0.3650 | 0.8934 | 0.8606 | 0.8767 | 0.9257 |
+ | 0.3014 | 7.3171 | 600 | 0.3822 | 0.9055 | 0.8900 | 0.8977 | 0.9386 |
+ | 0.3014 | 9.7561 | 800 | 0.4052 | 0.8980 | 0.8826 | 0.8903 | 0.9386 |
+ | 0.0365 | 12.1951 | 1000 | 0.4668 | 0.8966 | 0.8900 | 0.8933 | 0.9386 |
+ | 0.0365 | 14.6341 | 1200 | 0.4664 | 0.9123 | 0.8900 | 0.9010 | 0.9435 |
+ | 0.0365 | 17.0732 | 1400 | 0.4993 | 0.8978 | 0.8802 | 0.8889 | 0.9370 |
+ | 0.0091 | 19.5122 | 1600 | 0.4723 | 0.9132 | 0.8998 | 0.9064 | 0.9467 |
+ | 0.0091 | 21.9512 | 1800 | 0.4826 | 0.9089 | 0.9022 | 0.9055 | 0.9467 |
+ | 0.0004 | 24.3902 | 2000 | 0.4790 | 0.9086 | 0.8998 | 0.9042 | 0.9467 |
+ | 0.0004 | 26.8293 | 2200 | 0.4807 | 0.9086 | 0.8998 | 0.9042 | 0.9467 |
+ | 0.0004 | 29.2683 | 2400 | 0.4818 | 0.9086 | 0.8998 | 0.9042 | 0.9467 |
+
+
+ ### Framework versions
+
+ - Transformers 4.42.4
+ - Pytorch 2.3.1+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f4a4447c2dcab4ef4d3f9d31cc0b44b2046628d1a826694655fa839632bf5f13
+ oid sha256:7eb81cea921175752d392692a4c751b2a2905d7f08709366aab6138a5f90f4e3
  size 1134316848
runs/Jul24_15-32-50_f0094826bc04/events.out.tfevents.1721835171.f0094826bc04.470.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6308d53cd62627652f2c35a6c5c24b5e029a269c7382e3a6210b08ce26c74348
- size 11707
+ oid sha256:12fcba15a8ac0fc6a463de8a1271bf259e318996bc56e7b6f7d512c553d6643b
+ size 12061
runs/Jul24_15-32-50_f0094826bc04/events.out.tfevents.1721837605.f0094826bc04.470.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30fc7f4c8e8466c9201abe3065c06fc8c00d826b84ef2f3095d7a38a7a39138b
+ size 1032