init
README.md CHANGED
@@ -2,7 +2,7 @@
 datasets:
 - relbert/semeval2012_relational_similarity
 model-index:
-- name: relbert/relbert-albert-base-nce-
+- name: relbert/relbert-albert-base-nce-semeval2012
   results:
   - task:
       name: Relation Mapping
@@ -186,11 +186,11 @@ model-index:
       value: 0.8539588731534683
 
 ---
-# relbert/relbert-albert-base-nce-
+# relbert/relbert-albert-base-nce-semeval2012
 
 RelBERT based on [albert-base-v2](https://huggingface.co/albert-base-v2) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for more details on fine-tuning).
 This model achieves the following results on the relation understanding tasks:
-- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-albert-base-nce-
+- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-albert-base-nce-semeval2012/raw/main/analogy.forward.json)):
     - Accuracy on SAT (full): 0.4037433155080214
     - Accuracy on SAT: 0.39762611275964393
     - Accuracy on BATS: 0.5919955530850473
@@ -200,13 +200,13 @@ This model achieves the following results on the relation understanding tasks:
     - Accuracy on ConceptNet Analogy: 0.25671140939597314
     - Accuracy on T-Rex Analogy: 0.32786885245901637
     - Accuracy on NELL-ONE Analogy: 0.4766666666666667
-- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-albert-base-nce-
+- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-albert-base-nce-semeval2012/raw/main/classification.json)):
     - Micro F1 score on BLESS: 0.880066295012807
     - Micro F1 score on CogALexV: 0.7826291079812207
     - Micro F1 score on EVALution: 0.5812567713976164
     - Micro F1 score on K&H+N: 0.9288446824789595
     - Micro F1 score on ROOT09: 0.8558445628329677
-- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-albert-base-nce-
+- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-albert-base-nce-semeval2012/raw/main/relation_mapping.json)):
     - Accuracy on Relation Mapping: 0.7952777777777778
 
 
@@ -218,7 +218,7 @@ pip install relbert
 and activate the model as below.
 ```python
 from relbert import RelBERT
-model = RelBERT("relbert/relbert-albert-base-nce-
+model = RelBERT("relbert/relbert-albert-base-nce-semeval2012")
 vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (n_dim, )
 ```
 
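The `get_embedding` call shown in the usage snippet returns one vector per word pair; relation embeddings for different pairs can then be compared with cosine similarity to score how alike their relations are, which is also how analogy candidates are typically ranked. A minimal self-contained sketch — the `cosine_similarity` helper and the toy vectors below are illustrative stand-ins, not part of the `relbert` API:

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# In practice u and v would come from the model, e.g.
#   u = model.get_embedding(['Tokyo', 'Japan'])
#   v = model.get_embedding(['Paris', 'France'])
u = [1.0, 2.0, 3.0]
v = [2.0, 4.0, 6.0]
print(round(cosine_similarity(u, v), 4))  # 1.0 for parallel vectors
```

A higher score means the two pairs are predicted to stand in a more similar relation.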
@@ -242,7 +242,7 @@ vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (n_dim, )
 - loss_function_config: {'temperature': 0.05, 'num_negative': 400, 'num_positive': 10}
 - augment_negative_by_positive: True
 
-See the full configuration at [config file](https://huggingface.co/relbert/relbert-albert-base-nce-
+See the full configuration in the [config file](https://huggingface.co/relbert/relbert-albert-base-nce-semeval2012/raw/main/finetuning_config.json).
 
 ### Reference
 If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.emnlp-main.712/).
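For intuition on the `loss_function_config` entries above: in an NCE-style contrastive objective, each positive pair competes against the sampled negatives in a softmax, and `temperature` scales the similarity scores before that softmax (a small value like 0.05 sharpens the distribution). The sketch below is a generic InfoNCE-style illustration under those assumed conventions, not the `relbert` implementation, and the similarity scores are made up:

```python
import math

def info_nce_loss(pos_score, neg_scores, temperature=0.05):
    # Temperature-scaled contrastive loss: the positive similarity
    # competes against the negative similarities in a softmax.
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(pos_score / temperature - log_denom)

# Toy cosine-similarity scores: one positive pair vs. a few negatives.
# The loss is small when the positive clearly outscores the negatives.
loss = info_nce_loss(0.9, [0.1, -0.2, 0.3], temperature=0.05)
print(loss)
```

With `num_negative: 400` and `num_positive: 10`, each batch draws many such negatives per positive, and `augment_negative_by_positive: True` would additionally reuse other positives as negatives.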