model update

- README.md: +9 -9
- relation_mapping.json: +1 -0
README.md CHANGED
@@ -2,7 +2,7 @@
 datasets:
 - relbert/semeval2012_relational_similarity
 model-index:
-- name: relbert/roberta-large-semeval2012-average-prompt-c-nce-classification
+- name: relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated
   results:
   - task:
       name: Relation Mapping
@@ -14,7 +14,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.6846626984126984
   - task:
       name: Analogy Questions (SAT full)
       type: multiple-choice-qa
@@ -153,27 +153,27 @@ model-index:
       value: 0.9039135057812416

 ---
-# relbert/roberta-large-semeval2012-average-prompt-c-nce-classification
+# relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated

 RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
 [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
 Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
 It achieves the following results on the relation understanding tasks:
-- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification/raw/main/analogy.json)):
+- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated/raw/main/analogy.json)):
     - Accuracy on SAT (full): 0.31283422459893045
     - Accuracy on SAT: 0.3086053412462908
     - Accuracy on BATS: 0.46192329071706506
     - Accuracy on U2: 0.34649122807017546
     - Accuracy on U4: 0.3611111111111111
     - Accuracy on Google: 0.63
-- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification/raw/main/classification.json)):
+- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated/raw/main/classification.json)):
     - Micro F1 score on BLESS: 0.8457134247400934
     - Micro F1 score on CogALexV: 0.846244131455399
     - Micro F1 score on EVALution: 0.6262188515709642
     - Micro F1 score on K&H+N: 0.9545802323155039
     - Micro F1 score on ROOT09: 0.9044186775305547
-- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification/raw/main/relation_mapping.json)):
-    - Accuracy on Relation Mapping: 0.
+- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
+    - Accuracy on Relation Mapping: 0.6846626984126984


 ### Usage
@@ -184,7 +184,7 @@ pip install relbert
 and activate the model as below.
 ```python
 from relbert import RelBERT
-model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-c-nce-classification")
+model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated")
 vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
 ```
@@ -216,7 +216,7 @@ The following hyperparameters were used during training:
 - n_sample: 640
 - gradient_accumulation: 8

-The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification/raw/main/trainer_config.json).
+The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

 ### Reference
 If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
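The Usage block in the diff above stops at extracting a single pair embedding. As a usage note beyond the commit itself, here is a minimal sketch of how the (1024,)-dimensional relation embeddings might be compared with cosine similarity; it assumes only the `RelBERT` constructor and `get_embedding` call shown in the README, and numpy and the word pairs are illustrative choices, not part of the model card.

```python
# Minimal sketch (not part of this commit): comparing relation embeddings
# with cosine similarity. Only the RelBERT constructor and get_embedding
# call come from the README; numpy and the word pairs are assumptions.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated")

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

capital = model.get_embedding(['Tokyo', 'Japan'])    # shape (1024,)
analog = model.get_embedding(['Paris', 'France'])    # same relation type
control = model.get_embedding(['Tokyo', 'sushi'])    # different relation

# A well-trained relation embedding should place the analogous pair closer.
print(cosine(capital, analog) > cosine(capital, control))
```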
relation_mapping.json ADDED
@@ -0,0 +1 @@
+{"accuracy": 0.6846626984126984, "prediction": [{"source": ["solar system", "sun", "planet", "mass", "attracts", "revolves", "gravity"], "true": ["atom", "nucleus", "electron", "charge", "attracts", "revolves", "electromagnetism"], "pred": ["electron", "nucleus", "atom", "charge", "attracts", "revolves", "electromagnetism"], "alignment_match": false, "accuracy": 0.7142857142857143, "similarity": 0.9999999995263773, "similarity_true": 0.9999999995263773}, {"source": ["water", "flows", "pressure", "water tower", "bucket", "filling", "emptying", "hydrodynamics"], "true": ["heat", "transfers", "temperature", "burner", "kettle", "heating", "cooling", "thermodynamics"], "pred": ["heat", "transfers", "temperature", "heating", "burner", "kettle", "cooling", "thermodynamics"], "alignment_match": false, "accuracy": 0.625, "similarity": 0.999400519347137, "similarity_true": 0.9993167488552953}, {"source": ["waves", "shore", "reflects", "water", "breakwater", "rough", "calm", "crashing"], "true": ["sounds", "wall", "echoes", "air", "insulation", "loud", "quiet", "vibrating"], "pred": ["sounds", "wall", "insulation", "air", "vibrating", "loud", "quiet", "echoes"], "alignment_match": false, "accuracy": 0.625, "similarity": 0.9995384246491622, "similarity_true": 0.9995384246491622}, {"source": ["combustion", "fire", "fuel", "burning", "hot", "intense", "oxygen", "carbon dioxide"], "true": ["respiration", "animal", "food", "breathing", "living", "vigorous", "oxygen", "carbon dioxide"], "pred": ["respiration", "food", "breathing", "living", "animal", "vigorous", "oxygen", "carbon dioxide"], "alignment_match": false, "accuracy": 0.5, "similarity": 0.9999999995756266, "similarity_true": 0.9999999995756266}, {"source": ["sound", "low", "high", "echoes", "loud", "quiet", "horn"], "true": ["light", "red", "violet", "reflects", "bright", "dim", "lens"], "pred": ["light", "dim", "red", "reflects", "bright", "violet", "lens"], "alignment_match": false, "accuracy": 0.5714285714285714, "similarity": 0.9993150101307127, "similarity_true": 0.9993150101307127}, {"source": ["projectile", "trajectory", "earth", "parabolic", "air", "gravity", "attracts"], "true": ["planet", "orbit", "sun", "elliptical", "space", "gravity", "attracts"], "pred": ["planet", "orbit", "sun", "elliptical", "space", "gravity", "attracts"], "alignment_match": true, "accuracy": 1, "similarity": 0.9999999995249175, "similarity_true": 0.9999999995249175}, {"source": ["breeds", "selection", "conformance", "artificial", "popularity", "breeding", "domesticated"], "true": ["species", "competition", "adaptation", "natural", "fitness", "mating", "wild"], "pred": ["species", "adaptation", "fitness", "natural", "competition", "mating", "wild"], "alignment_match": false, "accuracy": 0.5714285714285714, "similarity": 0.9995177178326048, "similarity_true": 0.9995177178326048}, {"source": ["ball", "billiards", "speed", "table", "bouncing", "moving", "slow", "fast"], "true": ["molecules", "gas", "temperature", "container", "pressing", "moving", "cold", "hot"], "pred": ["molecules", "gas", "temperature", "container", "pressing", "moving", "hot", "cold"], "alignment_match": false, "accuracy": 0.75, "similarity": 0.9994770022558482, "similarity_true": 0.9994731336966969}, {"source": ["computer", "processing", "erasing", "write", "read", "memory", "outputs", "inputs", "bug"], "true": ["mind", "thinking", "forgetting", "memorize", "remember", "memory", "muscles", "senses", "mistake"], "pred": ["mind", "thinking", "forgetting", "remember", "memorize", "memory", "muscles", "senses", "mistake"], "alignment_match": false, "accuracy": 0.7777777777777778, "similarity": 0.9995428622103467, "similarity_true": 0.9995428622103467}, {"source": ["slot machines", "reels", "spinning", "winning", "losing"], "true": ["bacteria", "genes", "mutating", "reproducing", "dying"], "pred": ["mutating", "bacteria", "genes", "reproducing", "dying"], "alignment_match": false, "accuracy": 0.4, "similarity": 0.9987115865180716, "similarity_true": 0.9985854105903229}, {"source": ["war", "soldier", "destroy", "fighting", "defeat", "attacks", "weapon"], "true": ["argument", "debater", "refute", "arguing", "acceptance", "criticizes", "logic"], "pred": ["logic", "debater", "refute", "arguing", "acceptance", "criticizes", "argument"], "alignment_match": false, "accuracy": 0.7142857142857143, "similarity": 0.9992227740647648, "similarity_true": 0.9992227740647648}, {"source": ["buyer", "merchandise", "buying", "selling", "returning", "valuable", "worthless"], "true": ["believer", "belief", "accepting", "advocating", "rejecting", "true", "false"], "pred": ["believer", "belief", "true", "false", "accepting", "advocating", "rejecting"], "alignment_match": false, "accuracy": 0.2857142857142857, "similarity": 0.999088218620857, "similarity_true": 0.9988750156517561}, {"source": ["foundations", "buildings", "supporting", "solid", "weak", "crack"], "true": ["reasons", "theories", "confirming", "rational", "dubious", "flaw"], "pred": ["reasons", "theories", "dubious", "rational", "confirming", "flaw"], "alignment_match": false, "accuracy": 0.6666666666666666, "similarity": 0.999241776871482, "similarity_true": 0.9992348984789574}, {"source": ["obstructions", "destination", "route", "traveller", "traveling", "companion", "arriving"], "true": ["difficulties", "goal", "plan", "person", "problem solving", "partner", "succeeding"], "pred": ["difficulties", "goal", "plan", "person", "problem solving", "partner", "succeeding"], "alignment_match": true, "accuracy": 1, "similarity": 0.9993953629931627, "similarity_true": 0.9993953629931627}, {"source": ["money", "allocate", "budget", "effective", "cheap", "expansive"], "true": ["time", "invest", "schedule", "efficient", "quick", "slow"], "pred": ["time", "invest", "schedule", "efficient", "slow", "quick"], "alignment_match": false, "accuracy": 0.6666666666666666, "similarity": 0.9994473421881276, "similarity_true": 0.9994473421881276}, {"source": ["seeds", "planted", "fruitful", "fruit", "grow", "wither", "blossom"], "true": ["ideas", "inspired", "productive", "product", "develop", "fail", "succeed"], "pred": ["ideas", "inspired", "productive", "product", "develop", "fail", "succeed"], "alignment_match": true, "accuracy": 1, "similarity": 0.9993327827484848, "similarity_true": 0.9993327827484848}, {"source": ["machine", "working", "turned on", "turned off", "broken", "power", "repair"], "true": ["mind", "thinking", "awake", "asleep", "confused", "intelligence", "therapy"], "pred": ["mind", "thinking", "awake", "asleep", "confused", "intelligence", "therapy"], "alignment_match": true, "accuracy": 1, "similarity": 0.999309220616769, "similarity_true": 0.999309220616769}, {"source": ["object", "hold", "weight", "heavy", "light"], "true": ["idea", "understand", "analyze", "important", "trivial"], "pred": ["idea", "analyze", "trivial", "understand", "important"], "alignment_match": false, "accuracy": 0.2, "similarity": 0.9990302641293591, "similarity_true": 0.9989765156581157}, {"source": ["follow", "leader", "path", "follower", "lost", "wanders", "twisted", "straight"], "true": ["understand", "speaker", "argument", "listener", "misunderstood", "digresses", "complicated", "simple"], "pred": ["understand", "speaker", "argument", "listener", "complicated", "misunderstood", "digresses", "simple"], "alignment_match": false, "accuracy": 0.625, "similarity": 0.9992124395971402, "similarity_true": 0.9990858804827876}, {"source": ["seeing", "light", "illuminating", "darkness", "view", "hidden"], "true": ["understanding", "knowledge", "explaining", "confusion", "interpretation", "secret"], "pred": ["understanding", "knowledge", "explaining", "confusion", "interpretation", "secret"], "alignment_match": true, "accuracy": 1, "similarity": 0.9992551227116007, "similarity_true": 0.9992551227116007}]}
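The top-level `accuracy` in the new file is the unweighted mean of the per-problem `accuracy` fields over the 20 relation-mapping problems (it averages to exactly the reported 0.6846626984126984). A short sanity-check sketch, not part of the commit, assuming a local checkout of this repository:

```python
# Sanity check (not part of this commit): the top-level "accuracy" in
# relation_mapping.json is the unweighted mean of the per-problem
# "accuracy" fields. Assumes relation_mapping.json is in the working dir.
import json

with open("relation_mapping.json") as f:
    result = json.load(f)

per_problem = [p["accuracy"] for p in result["prediction"]]
mean_acc = sum(per_problem) / len(per_problem)

print(len(per_problem))  # 20 relation-mapping problems
print(mean_acc)          # 0.6846626984126984
assert abs(mean_acc - result["accuracy"]) < 1e-9
```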