Update README.md
README.md CHANGED
@@ -88,9 +88,10 @@ This dataset tests the capabilities of language models to correctly capture the
 
 We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning), and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can detect the WEP matching human-annotated probabilities.
 The dataset can be used as natural language inference data (context, premise, label) or as multiple-choice question answering (context, valid_hypothesis, invalid_hypothesis).
-Code: https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing
 
-
+Code: [colab](https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing)
+
+**Accepted at Starsem2023** (The 12th Joint Conference on Lexical and Computational Semantics). Temporary citation:
 
 ```bib
 @article{sileo2022probing,
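A minimal usage sketch of the two readings mentioned in the README (NLI and multiple-choice QA). The dataset ID and the exact column names below are assumptions inferred from the README text, not confirmed by this commit:

```python
# Sketch only: assumes the dataset is on the Hugging Face Hub under an ID like
# "sileod/probability_words_nli" and that its columns include the fields named
# in the README; verify both before relying on this.
from datasets import load_dataset

ds = load_dataset("sileod/probability_words_nli")  # assumed dataset ID
train = ds["train"]

# Check which of the README's fields are actually present before using them.
print(train.column_names)

row = train[0]
# NLI reading: pair the context with a hypothesis and read off the label.
# MCQA reading: choose between valid_hypothesis and invalid_hypothesis.
for field in ("context", "hypothesis", "label",
              "valid_hypothesis", "invalid_hypothesis"):
    if field in row:
        print(field, "->", row[field])
```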