---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'probability_words_nli'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- open-domain-qa
- multiple-choice-qa
- natural-language-inference
tags:
- wep
- words of estimative probability
- probability
- logical reasoning
- soft logic
- nli
- natural-language-inference
- reasoning
- logic
---

# Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article

This dataset tests the capability of language models to correctly capture the meaning of words denoting probabilities (WEP), e.g. "probably" or "almost certain". We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning), and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can identify the WEP that matches a human-annotated probability.
The dataset can be used as natural language inference data (context, hypothesis, label) or as multiple-choice question answering data (context, valid_hypothesis, invalid_hypothesis).
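
A minimal loading sketch for both framings; the repository id `sileod/probability_words_nli` and the exact split/field names are assumptions based on the description above:

```python
# Usage sketch; repository id and field names are assumptions, not confirmed by this card.
from datasets import load_dataset

dataset = load_dataset("sileod/probability_words_nli")
example = dataset["train"][0]

# NLI framing: classify whether the hypothesis follows from the context.
print(example["context"])
print(example["hypothesis"], "->", example["label"])

# Multiple-choice framing: pick the valid hypothesis for the context.
print("valid:", example["valid_hypothesis"])
print("invalid:", example["invalid_hypothesis"])
```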

```bibtex
@article{sileo2022probing,
  title={Probing neural language models for understanding of words of estimative probability},
  author={Sileo, Damien and Moens, Marie-Francine},
  journal={arXiv preprint arXiv:2211.03358},
  year={2022}
}
```