haryoaw committed on
Commit
ac6b213
1 Parent(s): 5867cf9

Initial Commit

Files changed (5)
  1. README.md +109 -0
  2. config.json +53 -0
  3. eval_result_ner.json +1 -0
  4. model.safetensors +3 -0
  5. training_args.bin +3 -0
README.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: microsoft/mdeberta-v3-base
+ tags:
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: scenario-non-kd-scr-ner-full-mdeberta_data-univner_full66
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # scenario-non-kd-scr-ner-full-mdeberta_data-univner_full66
+
+ This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3189
+ - Precision: 0.6246
+ - Recall: 0.5741
+ - F1: 0.5983
+ - Accuracy: 0.9618
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 66
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 30
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:-------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | 0.3108 | 0.2910 | 500 | 0.2429 | 0.3192 | 0.2232 | 0.2627 | 0.9338 |
+ | 0.1942 | 0.5821 | 1000 | 0.1982 | 0.4385 | 0.3207 | 0.3705 | 0.9447 |
+ | 0.1438 | 0.8731 | 1500 | 0.1652 | 0.4817 | 0.4732 | 0.4774 | 0.9522 |
+ | 0.1089 | 1.1641 | 2000 | 0.1553 | 0.5015 | 0.5252 | 0.5131 | 0.9558 |
+ | 0.0849 | 1.4552 | 2500 | 0.1575 | 0.5738 | 0.5229 | 0.5471 | 0.9587 |
+ | 0.0794 | 1.7462 | 3000 | 0.1483 | 0.5344 | 0.5574 | 0.5457 | 0.9588 |
+ | 0.071 | 2.0373 | 3500 | 0.1583 | 0.5706 | 0.5640 | 0.5673 | 0.9600 |
+ | 0.0435 | 2.3283 | 4000 | 0.1623 | 0.5559 | 0.5660 | 0.5609 | 0.9588 |
+ | 0.0471 | 2.6193 | 4500 | 0.1561 | 0.5563 | 0.5905 | 0.5729 | 0.9607 |
+ | 0.0462 | 2.9104 | 5000 | 0.1557 | 0.5755 | 0.5960 | 0.5856 | 0.9610 |
+ | 0.028 | 3.2014 | 5500 | 0.1784 | 0.5995 | 0.6136 | 0.6065 | 0.9623 |
+ | 0.026 | 3.4924 | 6000 | 0.1895 | 0.6169 | 0.5555 | 0.5846 | 0.9620 |
+ | 0.0257 | 3.7835 | 6500 | 0.1790 | 0.6020 | 0.6068 | 0.6044 | 0.9621 |
+ | 0.025 | 4.0745 | 7000 | 0.1943 | 0.6036 | 0.6048 | 0.6042 | 0.9625 |
+ | 0.0138 | 4.3655 | 7500 | 0.2013 | 0.5832 | 0.6203 | 0.6012 | 0.9619 |
+ | 0.0165 | 4.6566 | 8000 | 0.2146 | 0.6079 | 0.5865 | 0.5970 | 0.9621 |
+ | 0.0163 | 4.9476 | 8500 | 0.2123 | 0.6071 | 0.5827 | 0.5947 | 0.9615 |
+ | 0.0099 | 5.2386 | 9000 | 0.2290 | 0.6196 | 0.5957 | 0.6074 | 0.9627 |
+ | 0.0093 | 5.5297 | 9500 | 0.2274 | 0.6019 | 0.6143 | 0.6081 | 0.9616 |
+ | 0.0105 | 5.8207 | 10000 | 0.2326 | 0.6102 | 0.5806 | 0.5950 | 0.9618 |
+ | 0.0096 | 6.1118 | 10500 | 0.2335 | 0.6016 | 0.6198 | 0.6106 | 0.9617 |
+ | 0.0062 | 6.4028 | 11000 | 0.2542 | 0.6230 | 0.5866 | 0.6043 | 0.9626 |
+ | 0.0069 | 6.6938 | 11500 | 0.2510 | 0.6216 | 0.6066 | 0.6140 | 0.9627 |
+ | 0.0079 | 6.9849 | 12000 | 0.2457 | 0.5980 | 0.6067 | 0.6023 | 0.9620 |
+ | 0.0051 | 7.2759 | 12500 | 0.2603 | 0.6330 | 0.5842 | 0.6076 | 0.9626 |
+ | 0.0054 | 7.5669 | 13000 | 0.2627 | 0.6237 | 0.6025 | 0.6129 | 0.9625 |
+ | 0.0053 | 7.8580 | 13500 | 0.2680 | 0.5916 | 0.6227 | 0.6068 | 0.9617 |
+ | 0.0045 | 8.1490 | 14000 | 0.2709 | 0.6004 | 0.6068 | 0.6036 | 0.9619 |
+ | 0.0033 | 8.4400 | 14500 | 0.2873 | 0.6024 | 0.5940 | 0.5982 | 0.9616 |
+ | 0.0043 | 8.7311 | 15000 | 0.2806 | 0.6167 | 0.6032 | 0.6099 | 0.9624 |
+ | 0.0048 | 9.0221 | 15500 | 0.2733 | 0.6091 | 0.5918 | 0.6003 | 0.9623 |
+ | 0.0028 | 9.3132 | 16000 | 0.2804 | 0.5862 | 0.6188 | 0.6021 | 0.9618 |
+ | 0.0029 | 9.6042 | 16500 | 0.2829 | 0.6201 | 0.6019 | 0.6109 | 0.9624 |
+ | 0.0031 | 9.8952 | 17000 | 0.2828 | 0.6154 | 0.5989 | 0.6070 | 0.9620 |
+ | 0.0029 | 10.1863 | 17500 | 0.2876 | 0.6075 | 0.6094 | 0.6085 | 0.9625 |
+ | 0.0026 | 10.4773 | 18000 | 0.3005 | 0.6329 | 0.5859 | 0.6085 | 0.9623 |
+ | 0.0025 | 10.7683 | 18500 | 0.2942 | 0.6063 | 0.6201 | 0.6131 | 0.9619 |
+ | 0.0025 | 11.0594 | 19000 | 0.2948 | 0.6115 | 0.6102 | 0.6108 | 0.9622 |
+ | 0.0017 | 11.3504 | 19500 | 0.2995 | 0.6143 | 0.5965 | 0.6052 | 0.9621 |
+ | 0.002 | 11.6414 | 20000 | 0.2930 | 0.6022 | 0.6061 | 0.6042 | 0.9616 |
+ | 0.002 | 11.9325 | 20500 | 0.3087 | 0.6222 | 0.5910 | 0.6062 | 0.9624 |
+ | 0.0018 | 12.2235 | 21000 | 0.3114 | 0.5903 | 0.6217 | 0.6056 | 0.9617 |
+ | 0.0014 | 12.5146 | 21500 | 0.3189 | 0.6246 | 0.5741 | 0.5983 | 0.9618 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.1.1+cu121
+ - Datasets 2.14.5
+ - Tokenizers 0.19.1
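
The card above describes a token-classification (NER) fine-tune of mDeBERTa-v3. Below is a minimal usage sketch, assuming the checkpoint is published on the Hub under the repo id `haryoaw/scenario-non-kd-scr-ner-full-mdeberta_data-univner_full66` (inferred from the model name) and that predictions surface as the generic `LABEL_0`..`LABEL_6` ids from `config.json`:

```python
# Sketch: load the fine-tuned NER checkpoint for token classification.
# Assumption: the repo id below is inferred from the model name, not confirmed by the card.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

repo_id = "haryoaw/scenario-non-kd-scr-ner-full-mdeberta_data-univner_full66"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

# Aggregate sub-word pieces into word-level entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Barack Obama visited Jakarta in 2010."))  # entities come back as LABEL_* ids
```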
config.json ADDED
@@ -0,0 +1,53 @@
+ {
+   "_name_or_path": "microsoft/mdeberta-v3-base",
+   "architectures": [
+     "DebertaV2ForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0",
+     "1": "LABEL_1",
+     "2": "LABEL_2",
+     "3": "LABEL_3",
+     "4": "LABEL_4",
+     "5": "LABEL_5",
+     "6": "LABEL_6"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_2": 2,
+     "LABEL_3": 3,
+     "LABEL_4": 4,
+     "LABEL_5": 5,
+     "LABEL_6": 6
+   },
+   "layer_norm_eps": 1e-07,
+   "max_position_embeddings": 512,
+   "max_relative_positions": -1,
+   "model_type": "deberta-v2",
+   "norm_rel_ebd": "layer_norm",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 0,
+   "pooler_dropout": 0,
+   "pooler_hidden_act": "gelu",
+   "pooler_hidden_size": 768,
+   "pos_att_type": [
+     "p2c",
+     "c2p"
+   ],
+   "position_biased_input": false,
+   "position_buckets": 256,
+   "relative_attention": true,
+   "share_att_key": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.2",
+   "type_vocab_size": 0,
+   "vocab_size": 251000
+ }
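
The config describes a 6-layer DeBERTa-v2 encoder (versus 12 layers in `microsoft/mdeberta-v3-base`) with seven generic token-classification labels. A small sketch of inspecting it with `AutoConfig`, assuming `config.json` has been downloaded locally:

```python
# Sketch: inspect the architecture declared in config.json.
# Assumption: the file is available locally as "./config.json"; a Hub repo id would also work.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("./config.json")
print(config.model_type)         # "deberta-v2"
print(config.num_hidden_layers)  # 6 -- reduced from the 12-layer base encoder
print(config.num_labels)         # 7 generic labels (LABEL_0 .. LABEL_6)
print(config.id2label)
```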
eval_result_ner.json ADDED
@@ -0,0 +1 @@
+ {"ceb_gja": {"precision": 0.23853211009174313, "recall": 0.5306122448979592, "f1": 0.32911392405063294, "accuracy": 0.9127413127413128}, "en_pud": {"precision": 0.4764292878635908, "recall": 0.4418604651162791, "f1": 0.45849420849420847, "accuracy": 0.9488571968265962}, "de_pud": {"precision": 0.1264559068219634, "recall": 0.2925890279114533, "f1": 0.17659018297995932, "accuracy": 0.8513431156532746}, "pt_pud": {"precision": 0.5701275045537341, "recall": 0.5696087352138307, "f1": 0.5698680018206645, "accuracy": 0.9606100739095143}, "ru_pud": {"precision": 0.021717911176183505, "recall": 0.0859073359073359, "f1": 0.03467082197117258, "accuracy": 0.6510979075174373}, "sv_pud": {"precision": 0.5355113636363636, "recall": 0.3663751214771623, "f1": 0.4350836699365262, "accuracy": 0.9484168588802684}, "tl_trg": {"precision": 0.2, "recall": 0.5652173913043478, "f1": 0.29545454545454547, "accuracy": 0.9019073569482289}, "tl_ugnayan": {"precision": 0.05660377358490566, "recall": 0.18181818181818182, "f1": 0.08633093525179857, "accuracy": 0.8705560619872379}, "zh_gsd": {"precision": 0.589873417721519, "recall": 0.6075619295958279, "f1": 0.5985870263326911, "accuracy": 0.947968697968698}, "zh_gsdsimp": {"precision": 0.5759096612296111, "recall": 0.601572739187418, "f1": 0.5884615384615384, "accuracy": 0.945054945054945}, "hr_set": {"precision": 0.7624548736462093, "recall": 0.7526728439059159, "f1": 0.7575322812051649, "accuracy": 0.9729183841714757}, "da_ddt": {"precision": 0.6935064935064935, "recall": 0.5973154362416108, "f1": 0.641826923076923, "accuracy": 0.9741594333034022}, "en_ewt": {"precision": 0.6009708737864078, "recall": 0.5689338235294118, "f1": 0.5845136921624173, "accuracy": 0.9627843965414193}, "pt_bosque": {"precision": 0.66, "recall": 0.6518518518518519, "f1": 0.6559006211180124, "accuracy": 0.9690986813505289}, "sr_set": {"precision": 0.8066429418742586, "recall": 0.8028335301062574, "f1": 0.8047337278106509, "accuracy": 0.970843183609141}, "sk_snk": {"precision": 0.39908256880733944, "recall": 0.28524590163934427, "f1": 0.33269598470363293, "accuracy": 0.9193624371859297}, "sv_talbanken": {"precision": 0.6772486772486772, "recall": 0.6530612244897959, "f1": 0.6649350649350649, "accuracy": 0.9942091573833244}}
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18628f037c91bb5d858410f0e2fd34f5f800a11647a2cf80a87968a05f048210
+ size 942800188
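
`model.safetensors` is stored through Git LFS; the pointer above records only the SHA-256 oid and byte size of the actual weights file. A sketch for verifying a downloaded copy against that oid, assuming the resolved file (not the pointer) sits at `./model.safetensors`:

```python
# Sketch: check a downloaded model.safetensors against the Git LFS pointer's sha256 oid.
# Assumption: "./model.safetensors" is the real ~943 MB weights file, not the LFS pointer text.
import hashlib

expected_oid = "18628f037c91bb5d858410f0e2fd34f5f800a11647a2cf80a87968a05f048210"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

print("OK" if h.hexdigest() == expected_oid else "hash mismatch")
```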
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05a8021f3a912241cda262a88e81c99de440c29b94ef83056cdfdc4a1b19155e
+ size 5304
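
`training_args.bin` is the pickled `TrainingArguments` object saved by the `Trainer`. A sketch for inspecting it, assuming compatible `torch` and `transformers` versions are installed and the file is local; unpickling executes code from the file, so only do this for artifacts you trust:

```python
# Sketch: inspect the serialized TrainingArguments (assumed local file "training_args.bin").
# Unpickling runs arbitrary code -- only load files from sources you trust.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate)                # expected 3e-05 per the card
print(args.per_device_train_batch_size)  # expected 32
print(args.num_train_epochs)             # expected 30
print(args.seed)                         # expected 66
```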