gokuls committed on
Commit
a5b9444
1 Parent(s): 6ec7009

update model card README.md

Files changed (1)
  1. README.md +88 -0
README.md ADDED
---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_qqp
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: qqp
      split: validation
      args: qqp
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8760079149146673
    - name: F1
      type: f1
      value: 0.8322233006459385
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hBERTv2_qqp

This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2](https://huggingface.co/gokuls/bert_12_layer_model_v2) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4723
- Accuracy: 0.8760
- F1: 0.8322
- Combined Score: 0.8541

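The checkpoint can be used directly for duplicate-question (sentence-pair) classification. Below is a minimal usage sketch assuming the repository id `gokuls/hBERTv2_qqp` (inferred from the model name above) and the Transformers pipeline API; the label mapping (`LABEL_0`/`LABEL_1` vs. named labels) depends on the checkpoint's config and should be verified.

```python
from transformers import pipeline

# Minimal sketch: sentence-pair classification with this checkpoint.
# "gokuls/hBERTv2_qqp" is an assumed repository id inferred from the model name above.
classifier = pipeline("text-classification", model="gokuls/hBERTv2_qqp")

# QQP pairs two questions; the pipeline accepts a pair as a text/text_pair dict.
result = classifier({
    "text": "How do I learn Python quickly?",
    "text_pair": "What is the fastest way to learn Python?",
})
print(result)  # e.g. [{'label': ..., 'score': ...}]; label names depend on the model config
```
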
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

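As a point of reference, here is a minimal sketch of how these settings might be expressed as Transformers `TrainingArguments`; the output directory and the evaluation/save strategies are assumptions not stated in this card, and the Adam betas/epsilon above match the optimizer defaults used by `Trainer`.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the hyperparameters listed above.
# output_dir and the evaluation/save strategies are illustrative assumptions.
training_args = TrainingArguments(
    output_dir="hBERTv2_qqp",
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="epoch",  # the results table reports one evaluation per epoch
    save_strategy="epoch",
)
```

Although `num_epochs` was set to 50, the results table below stops at epoch 8, which suggests training ended early (for example via early stopping); the card does not state this explicitly.
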
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.4179        | 1.0   | 1422  | 0.3830          | 0.8252   | 0.7916 | 0.8084         |
| 0.2978        | 2.0   | 2844  | 0.3507          | 0.8357   | 0.7906 | 0.8131         |
| 0.2318        | 3.0   | 4266  | 0.3129          | 0.8651   | 0.8160 | 0.8406         |
| 0.1765        | 4.0   | 5688  | 0.3540          | 0.8700   | 0.8328 | 0.8514         |
| 0.1305        | 5.0   | 7110  | 0.4276          | 0.8734   | 0.8267 | 0.8500         |
| 0.1003        | 6.0   | 8532  | 0.4078          | 0.8748   | 0.8292 | 0.8520         |
| 0.0788        | 7.0   | 9954  | 0.4069          | 0.8767   | 0.8345 | 0.8556         |
| 0.0625        | 8.0   | 11376 | 0.4723          | 0.8760   | 0.8322 | 0.8541         |

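The accuracy and F1 columns follow the standard GLUE QQP metric, and the combined score appears to be their simple mean (for example, (0.8760 + 0.8322) / 2 = 0.8541 in the final row). Below is a sketch of reproducing these numbers with the `evaluate` library (not listed in the framework versions, so an assumption), using placeholder predictions.

```python
import evaluate

# GLUE QQP metric: returns accuracy and F1 for duplicate-question classification.
qqp_metric = evaluate.load("glue", "qqp")

# Placeholder predictions/labels; in practice these come from Trainer.predict(...).
predictions = [1, 0, 1, 1]
references = [1, 0, 0, 1]

scores = qqp_metric.compute(predictions=predictions, references=references)
combined = (scores["accuracy"] + scores["f1"]) / 2  # "Combined Score" column
print(scores, combined)
```
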
### Framework versions

- Transformers 4.26.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.10.1
- Tokenizers 0.13.2