asahi417 committed
Commit 9a868a1
Parent: 53d0409

model update

README.md CHANGED
@@ -14,7 +14,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.8303968253968254
+ value: 0.780436507936508
  - task:
  name: Analogy Questions (SAT full)
  type: multiple-choice-qa
@@ -25,7 +25,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.7192513368983957
+ value: 0.5588235294117647
  - task:
  name: Analogy Questions (SAT)
  type: multiple-choice-qa
@@ -36,7 +36,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.7091988130563798
+ value: 0.5608308605341247
  - task:
  name: Analogy Questions (BATS)
  type: multiple-choice-qa
@@ -47,7 +47,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.8043357420789328
+ value: 0.6731517509727627
  - task:
  name: Analogy Questions (Google)
  type: multiple-choice-qa
@@ -58,7 +58,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.948
+ value: 0.856
  - task:
  name: Analogy Questions (U2)
  type: multiple-choice-qa
@@ -69,7 +69,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.6798245614035088
+ value: 0.5570175438596491
  - task:
  name: Analogy Questions (U4)
  type: multiple-choice-qa
@@ -80,7 +80,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.6643518518518519
+ value: 0.5439814814814815
  - task:
  name: Analogy Questions (ConceptNet Analogy)
  type: multiple-choice-qa
@@ -91,7 +91,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.4865771812080537
+ value: 0.30453020134228187
  - task:
  name: Analogy Questions (TREX Analogy)
  type: multiple-choice-qa
@@ -102,7 +102,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.6338797814207651
+ value: 0.4644808743169399
  - task:
  name: Analogy Questions (NELL-ONE Analogy)
  type: multiple-choice-qa
@@ -113,7 +113,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.6633333333333333
+ value: 0.635
  - task:
  name: Lexical Relation Classification (BLESS)
  type: classification
@@ -124,10 +124,10 @@ model-index:
  metrics:
  - name: F1
  type: f1
- value: 0.9169805635076088
+ value: 0.9067349706192557
  - name: F1 (macro)
  type: f1_macro
- value: 0.9133613159985977
+ value: 0.901083105005463
  - task:
  name: Lexical Relation Classification (CogALexV)
  type: classification
@@ -138,10 +138,10 @@ model-index:
  metrics:
  - name: F1
  type: f1
- value: 0.8643192488262911
+ value: 0.8112676056338028
  - name: F1 (macro)
  type: f1_macro
- value: 0.709680204738525
+ value: 0.6148092103324919
  - task:
  name: Lexical Relation Classification (EVALution)
  type: classification
@@ -152,10 +152,10 @@ model-index:
  metrics:
  - name: F1
  type: f1
- value: 0.6782231852654388
+ value: 0.6305525460455038
  - name: F1 (macro)
  type: f1_macro
- value: 0.665196173208286
+ value: 0.6268772505825797
  - task:
  name: Lexical Relation Classification (K&H+N)
  type: classification
@@ -166,10 +166,10 @@ model-index:
  metrics:
  - name: F1
  type: f1
- value: 0.9568060095986646
+ value: 0.9433122348195033
  - name: F1 (macro)
  type: f1_macro
- value: 0.8745909398702613
+ value: 0.8521358889073605
  - task:
  name: Lexical Relation Classification (ROOT09)
  type: classification
@@ -180,34 +180,34 @@ model-index:
  metrics:
  - name: F1
  type: f1
- value: 0.9150736446255092
+ value: 0.895017235976183
  - name: F1 (macro)
  type: f1_macro
- value: 0.9142555280970402
+ value: 0.8915314929167794
 
  ---
  # relbert/relbert-roberta-base-nce-semeval2012-2
 
- RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for more details on fine-tuning).
+ RelBERT based on [roberta-base](https://huggingface.co/roberta-base) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for more details on fine-tuning).
  This model achieves the following results on the relation understanding tasks:
  - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-2/raw/main/analogy.forward.json)):
- - Accuracy on SAT (full): 0.7192513368983957
- - Accuracy on SAT: 0.7091988130563798
- - Accuracy on BATS: 0.8043357420789328
- - Accuracy on U2: 0.6798245614035088
- - Accuracy on U4: 0.6643518518518519
- - Accuracy on Google: 0.948
- - Accuracy on ConceptNet Analogy: 0.4865771812080537
- - Accuracy on T-Rex Analogy: 0.6338797814207651
- - Accuracy on NELL-ONE Analogy: 0.6633333333333333
+ - Accuracy on SAT (full): 0.5588235294117647
+ - Accuracy on SAT: 0.5608308605341247
+ - Accuracy on BATS: 0.6731517509727627
+ - Accuracy on U2: 0.5570175438596491
+ - Accuracy on U4: 0.5439814814814815
+ - Accuracy on Google: 0.856
+ - Accuracy on ConceptNet Analogy: 0.30453020134228187
+ - Accuracy on T-Rex Analogy: 0.4644808743169399
+ - Accuracy on NELL-ONE Analogy: 0.635
  - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-2/raw/main/classification.json)):
- - Micro F1 score on BLESS: 0.9169805635076088
- - Micro F1 score on CogALexV: 0.8643192488262911
- - Micro F1 score on EVALution: 0.6782231852654388
- - Micro F1 score on K&H+N: 0.9568060095986646
- - Micro F1 score on ROOT09: 0.9150736446255092
+ - Micro F1 score on BLESS: 0.9067349706192557
+ - Micro F1 score on CogALexV: 0.8112676056338028
+ - Micro F1 score on EVALution: 0.6305525460455038
+ - Micro F1 score on K&H+N: 0.9433122348195033
+ - Micro F1 score on ROOT09: 0.895017235976183
  - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-2/raw/main/relation_mapping.json)):
- - Accuracy on Relation Mapping: 0.8303968253968254
+ - Accuracy on Relation Mapping: 0.780436507936508
 
 
  ### Usage
@@ -224,7 +224,7 @@ vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
 
  ### Training hyperparameters
 
- - model: roberta-large
+ - model: roberta-base
  - max_length: 64
  - epoch: 10
  - batch: 32
@@ -239,7 +239,7 @@ vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
  - split_valid: validation
  - loss_function: nce
  - classification_loss: False
- - loss_function_config: {'temperature': 0.05, 'num_negative': 100, 'num_positive': 10}
+ - loss_function_config: {'temperature': 0.05, 'num_negative': 400, 'num_positive': 10}
  - augment_negative_by_positive: True
 
  See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-2/raw/main/finetuning_config.json).
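
For reference, the `get_embedding` call quoted in the hunk headers above comes from the README's Usage section. A minimal sketch of that flow, assuming the `relbert` package's `RelBERT` class as shown there; with the switch to roberta-base, `n_dim` should now be 768 rather than 1024:

```python
# Minimal usage sketch, assuming the RelBERT API quoted in the hunk headers.
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-nce-semeval2012-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (n_dim, ); 768 for roberta-base
```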
analogy.forward.json CHANGED
@@ -1 +1 @@
- {"semeval2012_relational_similarity/validation": 0.7974683544303798, "scan/test": 0.2908415841584158, "sat_full/test": 0.7192513368983957, "sat/test": 0.7091988130563798, "u2/test": 0.6798245614035088, "u4/test": 0.6643518518518519, "google/test": 0.948, "bats/test": 0.8043357420789328, "t_rex_relational_similarity/test": 0.6338797814207651, "conceptnet_relational_similarity/test": 0.4865771812080537, "nell_relational_similarity/test": 0.6633333333333333}
+ {"semeval2012_relational_similarity/validation": 0.6962025316455697, "scan/test": 0.2332920792079208, "sat_full/test": 0.5588235294117647, "sat/test": 0.5608308605341247, "u2/test": 0.5570175438596491, "u4/test": 0.5439814814814815, "google/test": 0.856, "bats/test": 0.6731517509727627, "t_rex_relational_similarity/test": 0.4644808743169399, "conceptnet_relational_similarity/test": 0.30453020134228187, "nell_relational_similarity/test": 0.635}
classification.json CHANGED
@@ -1 +1 @@
- {"lexical_relation_classification/BLESS": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9169805635076088, "test/f1_macro": 0.9133613159985977, "test/f1_micro": 0.9169805635076088, "test/p_macro": 0.9065498330870753, "test/p_micro": 0.9169805635076088, "test/r_macro": 0.9208392163252331, "test/r_micro": 0.9169805635076088, "test/f1/attri": 0.9212207239176722, "test/p/attri": 0.9102384291725105, "test/r/attri": 0.9324712643678161, "test/f1/coord": 0.9544175576814856, "test/p/coord": 0.9474860335195531, "test/r/coord": 0.9614512471655329, "test/f1/event": 0.8538071065989848, "test/p/event": 0.8285714285714286, "test/r/event": 0.8806282722513089, "test/f1/hyper": 0.9296987087517934, "test/p/hyper": 0.9337175792507204, "test/r/hyper": 0.9257142857142857, "test/f1/mero": 0.8896146309601568, "test/p/mero": 0.867515923566879, "test/r/mero": 0.9128686327077749, "test/f1/random": 0.931409168081494, "test/p/random": 0.9517696044413602, "test/r/random": 0.9119015957446809}, "lexical_relation_classification/CogALexV": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.8643192488262911, "test/f1_macro": 0.709680204738525, "test/f1_micro": 0.8643192488262911, "test/p_macro": 0.7459301562952307, "test/p_micro": 0.8643192488262911, "test/r_macro": 0.6794130847403439, "test/r_micro": 0.8643192488262911, "test/f1/ANT": 0.7822222222222223, "test/p/ANT": 0.8380952380952381, "test/r/ANT": 0.7333333333333333, "test/f1/HYPER": 0.6134453781512605, "test/p/HYPER": 0.6596385542168675, "test/r/HYPER": 0.5732984293193717, "test/f1/PART_OF": 0.7286063569682152, "test/p/PART_OF": 0.8054054054054054, "test/r/PART_OF": 0.6651785714285714, "test/f1/RANDOM": 0.937519923493784, "test/p/RANDOM": 0.9147744945567652, "test/r/RANDOM": 0.9614253023864008, "test/f1/SYN": 0.48660714285714285, "test/p/SYN": 0.5117370892018779, "test/r/SYN": 0.46382978723404256}, "lexical_relation_classification/EVALution": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.6782231852654388, "test/f1_macro": 0.665196173208286, "test/f1_micro": 0.6782231852654388, "test/p_macro": 0.6665905293786667, "test/p_micro": 0.6782231852654388, "test/r_macro": 0.6653300709395378, "test/r_micro": 0.6782231852654388, "test/f1/Antonym": 0.7931456548347613, "test/p/Antonym": 0.8059701492537313, "test/r/Antonym": 0.7807228915662651, "test/f1/HasA": 0.6541353383458647, "test/p/HasA": 0.7016129032258065, "test/r/HasA": 0.6126760563380281, "test/f1/HasProperty": 0.8126888217522659, "test/p/HasProperty": 0.7911764705882353, "test/r/HasProperty": 0.8354037267080745, "test/f1/IsA": 0.6145610278372591, "test/p/IsA": 0.6042105263157894, "test/r/IsA": 0.6252723311546841, "test/f1/MadeOf": 0.6363636363636364, "test/p/MadeOf": 0.6222222222222222, "test/r/MadeOf": 0.6511627906976745, "test/f1/PartOf": 0.6622073578595318, "test/p/PartOf": 0.6428571428571429, "test/r/PartOf": 0.6827586206896552, "test/f1/Synonym": 0.483271375464684, "test/p/Synonym": 0.49808429118773945, "test/r/Synonym": 0.4693140794223827}, "lexical_relation_classification/K&H+N": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9568060095986646, "test/f1_macro": 0.8745909398702613, "test/f1_micro": 0.9568060095986646, "test/p_macro": 0.8722419340096175, "test/p_micro": 0.9568060095986646, "test/r_macro": 0.8771975453603155, "test/r_micro": 0.9568060095986646, "test/f1/false": 0.9696655047096344, "test/p/false": 0.9703131957844738, "test/r/false": 0.9690186777349541, "test/f1/hypo": 0.9182389937106918, "test/p/hypo": 0.9258536585365854, "test/r/hypo": 0.9107485604606526, "test/f1/mero": 0.6490872210953347, "test/p/mero": 0.6324110671936759, "test/r/mero": 0.6666666666666666, "test/f1/sibl": 0.9613720399653843, "test/p/sibl": 0.9603898145237346, "test/r/sibl": 0.9623562765789888}, "lexical_relation_classification/ROOT09": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9150736446255092, "test/f1_macro": 0.9142555280970402, "test/f1_micro": 0.9150736446255092, "test/p_macro": 0.9123400631967442, "test/p_micro": 0.9150736446255092, "test/r_macro": 0.916280536276508, "test/r_micro": 0.9150736446255092, "test/f1/COORD": 0.9762050030506406, "test/p/COORD": 0.9720534629404617, "test/r/COORD": 0.9803921568627451, "test/f1/HYPER": 0.8489296636085627, "test/p/HYPER": 0.8401937046004843, "test/r/HYPER": 0.857849196538937, "test/f1/RANDOM": 0.9176319176319176, "test/p/RANDOM": 0.9247730220492867, "test/r/RANDOM": 0.9106002554278416}}
+ {"lexical_relation_classification/BLESS": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9067349706192557, "test/f1_macro": 0.901083105005463, "test/f1_micro": 0.9067349706192557, "test/p_macro": 0.8997912343328448, "test/p_micro": 0.9067349706192557, "test/r_macro": 0.9030508070759272, "test/r_micro": 0.9067349706192557, "test/f1/attri": 0.9096005606166784, "test/p/attri": 0.8878248974008208, "test/r/attri": 0.9324712643678161, "test/f1/coord": 0.9407783417935703, "test/p/coord": 0.936026936026936, "test/r/coord": 0.9455782312925171, "test/f1/event": 0.8609341825902335, "test/p/event": 0.8729817007534983, "test/r/event": 0.8492146596858638, "test/f1/hyper": 0.9136163982430454, "test/p/hyper": 0.9369369369369369, "test/r/hyper": 0.8914285714285715, "test/f1/mero": 0.8599348534201954, "test/p/mero": 0.8365019011406845, "test/r/mero": 0.8847184986595175, "test/f1/random": 0.9216342933690557, "test/p/random": 0.9284750337381916, "test/r/random": 0.9148936170212766}, "lexical_relation_classification/CogALexV": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.8112676056338028, "test/f1_macro": 0.6148092103324919, "test/f1_micro": 0.8112676056338028, "test/p_macro": 0.6304195397027895, "test/p_micro": 0.8112676056338028, "test/r_macro": 0.6012687546642258, "test/r_micro": 0.8112676056338028, "test/f1/ANT": 0.61731843575419, "test/p/ANT": 0.6207865168539326, "test/r/ANT": 0.6138888888888889, "test/f1/HYPER": 0.5638148667601683, "test/p/HYPER": 0.6072507552870091, "test/r/HYPER": 0.5261780104712042, "test/f1/PART_OF": 0.5849056603773586, "test/p/PART_OF": 0.62, "test/r/PART_OF": 0.5535714285714286, "test/f1/RANDOM": 0.9080070887707427, "test/p/RANDOM": 0.8951715374841169, "test/r/RANDOM": 0.9212160836874795, "test/f1/SYN": 0.39999999999999997, "test/p/SYN": 0.4088888888888889, "test/r/SYN": 0.39148936170212767}, "lexical_relation_classification/EVALution": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.6305525460455038, "test/f1_macro": 0.6268772505825797, "test/f1_micro": 0.6305525460455038, "test/p_macro": 0.637097321721357, "test/p_micro": 0.6305525460455038, "test/r_macro": 0.6212371311593516, "test/r_micro": 0.6305525460455038, "test/f1/Antonym": 0.711779448621554, "test/p/Antonym": 0.741514360313316, "test/r/Antonym": 0.6843373493975904, "test/f1/HasA": 0.6733333333333333, "test/p/HasA": 0.6392405063291139, "test/r/HasA": 0.7112676056338029, "test/f1/HasProperty": 0.796324655436447, "test/p/HasProperty": 0.7854984894259819, "test/r/HasProperty": 0.8074534161490683, "test/f1/IsA": 0.5868392664509169, "test/p/IsA": 0.5811965811965812, "test/r/IsA": 0.5925925925925926, "test/f1/MadeOf": 0.609271523178808, "test/p/MadeOf": 0.7076923076923077, "test/r/MadeOf": 0.5348837209302325, "test/f1/PartOf": 0.6223776223776224, "test/p/PartOf": 0.6312056737588653, "test/r/PartOf": 0.6137931034482759, "test/f1/Synonym": 0.3882149046793761, "test/p/Synonym": 0.37333333333333335, "test/r/Synonym": 0.4043321299638989}, "lexical_relation_classification/K&H+N": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9433122348195033, "test/f1_macro": 0.8521358889073605, "test/f1_micro": 0.9433122348195033, "test/p_macro": 0.8689695129753637, "test/p_micro": 0.9433122348195033, "test/r_macro": 0.8378283080134359, "test/r_micro": 0.9433122348195033, "test/f1/false": 0.955429255160036, "test/p/false": 0.9640808934500453, "test/r/false": 0.9469315149718351, "test/f1/hypo": 0.9023437499999999, "test/p/hypo": 0.9184890656063618, "test/r/hypo": 0.8867562380038387, "test/f1/mero": 0.6018099547511312, "test/p/mero": 0.6584158415841584, "test/r/mero": 0.5541666666666667, "test/f1/sibl": 0.948960595718275, "test/p/sibl": 0.9348922512608895, "test/r/sibl": 0.9634588124114034}, "lexical_relation_classification/ROOT09": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.895017235976183, "test/f1_macro": 0.8915314929167794, "test/f1_micro": 0.895017235976183, "test/p_macro": 0.8924056043823589, "test/p_micro": 0.895017235976183, "test/r_macro": 0.8910879079225317, "test/r_micro": 0.895017235976183, "test/f1/COORD": 0.9739866908650938, "test/p/COORD": 0.961768219832736, "test/r/COORD": 0.9865196078431373, "test/f1/HYPER": 0.7987341772151898, "test/p/HYPER": 0.8184176394293126, "test/r/HYPER": 0.7799752781211372, "test/f1/RANDOM": 0.901873610670054, "test/p/RANDOM": 0.8970309538850284, "test/r/RANDOM": 0.9067688378033205}}
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "roberta-large",
+ "_name_or_path": "roberta-base",
  "architectures": [
  "RobertaModel"
  ],
@@ -9,19 +9,19 @@
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
- "hidden_size": 1024,
+ "hidden_size": 768,
  "initializer_range": 0.02,
- "intermediate_size": 4096,
+ "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
- "num_attention_heads": 16,
- "num_hidden_layers": 24,
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "relbert_config": {
  "aggregation_mode": "average_no_mask",
- "template": "I wasn\u2019t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>"
+ "template": "I wasn\u2019t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>\u2019s <mask>"
  },
  "torch_dtype": "float32",
  "transformers_version": "4.26.1",
finetuning_config.json CHANGED
@@ -1,6 +1,6 @@
  {
- "template": "I wasn\u2019t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>",
- "model": "roberta-large",
+ "template": "I wasn\u2019t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>\u2019s <mask>",
+ "model": "roberta-base",
  "max_length": 64,
  "epoch": 10,
  "batch": 32,
@@ -17,7 +17,7 @@
  "classification_loss": false,
  "loss_function_config": {
  "temperature": 0.05,
- "num_negative": 100,
+ "num_negative": 400,
  "num_positive": 10
  },
  "augment_negative_by_positive": true
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2625a24982aa7546e5d7c5bb28293ef3adbfdf6393b31806150e3ba3272c8759
- size 1421575277
+ oid sha256:b4c70aba115b9aea37f6758c47c99150b652bf727eee744bc8147b8054e24ae1
+ size 498652017
relation_mapping.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer.json CHANGED
@@ -53,8 +53,7 @@
  "pre_tokenizer": {
  "type": "ByteLevel",
  "add_prefix_space": false,
- "trim_offsets": true,
- "use_regex": true
+ "trim_offsets": true
  },
  "post_processor": {
  "type": "RobertaProcessing",
@@ -72,8 +71,7 @@
  "decoder": {
  "type": "ByteLevel",
  "add_prefix_space": true,
- "trim_offsets": true,
- "use_regex": true
+ "trim_offsets": true
  },
  "model": {
  "type": "BPE",
tokenizer_config.json CHANGED
@@ -6,7 +6,7 @@
  "errors": "replace",
  "mask_token": "<mask>",
  "model_max_length": 512,
- "name_or_path": "roberta-large",
+ "name_or_path": "roberta-base",
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "special_tokens_map_file": null,