model update
README.md CHANGED

```diff
@@ -14,7 +14,7 @@ model-index:
       metrics:
       - name: Accuracy
         type: accuracy
-        value: 0.
+        value: 0.9619047619047619
   - task:
       name: Analogy Questions (SAT full)
       type: multiple-choice-qa
@@ -173,7 +173,7 @@ It achieves the following results on the relation understanding tasks:
 - Micro F1 score on K&H+N: 0.9611184530847882
 - Micro F1 score on ROOT09: 0.9012848636790974
 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-b-nce/raw/main/relation_mapping.json)):
-- Accuracy on Relation Mapping: 0.
+- Accuracy on Relation Mapping: 0.9619047619047619
 
 
 ### Usage
```