model update
README.md CHANGED

@@ -14,7 +14,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.9348214285714286
   - task:
       name: Analogy Questions (SAT full)
       type: multiple-choice-qa
@@ -173,7 +173,7 @@ It achieves the following results on the relation understanding tasks:
 - Micro F1 score on K&H+N: 0.962092230646171
 - Micro F1 score on ROOT09: 0.898464431212786
 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-e-nce/raw/main/relation_mapping.json)):
-    - Accuracy on Relation Mapping: 0.
+    - Accuracy on Relation Mapping: 0.9348214285714286
 
 
 ### Usage
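
The second hunk's context ends at the `### Usage` heading, whose body is not part of this diff. As a minimal sketch of how the updated model is typically loaded (assuming the `relbert` Python package and its `RelBERT.get_embedding` API, which this commit does not show), a word pair can be embedded like this:

```python
# Hedged sketch: assumes the relbert package (pip install relbert) exposes a
# RelBERT class with a get_embedding method, as on RelBERT model cards.
# Not part of this diff.
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-large-semeval2012-average-prompt-e-nce")

# Embed a single word pair; with a roberta-large backbone the vector is 1024-dimensional.
vector = model.get_embedding(["Tokyo", "Japan"])
print(len(vector))  # 1024
```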