---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: semi-supervised-exaggeration-detection-of
pretty_name: Scientific Exaggeration Detection
size_categories:
- n<1K
source_datasets: []
tags:
- scientific text
- scholarly text
- inference
- fact checking
- misinformation
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
---

# Dataset Card for Scientific Exaggeration Detection

## Dataset Description

- **Homepage:** https://github.com/copenlu/scientific-exaggeration-detection
- **Repository:** https://github.com/copenlu/scientific-exaggeration-detection
- **Paper:** https://aclanthology.org/2021.emnlp-main.845.pdf

### Dataset Summary

Public trust in science depends on honest and factual communication of scientific papers. However, recent studies have demonstrated a tendency of news media to misrepresent scientific papers by exaggerating their findings. Given this, we present a formalization of and study into the problem of exaggeration detection in science communication. While there is an abundance of scientific papers and popular media articles written about them, very rarely do the articles include a direct link to the original paper, making data collection challenging. We address this by curating a set of labeled press release/abstract pairs from existing expert-annotated studies on exaggeration in press releases of scientific papers, suitable for benchmarking the performance of machine learning models on the task. Using limited data from this and previous studies on exaggeration detection in science, we introduce MT-PET, a multi-task version of Pattern Exploiting Training (PET), which leverages knowledge from complementary cloze-style QA tasks to improve few-shot learning. We demonstrate that MT-PET outperforms PET and supervised learning both when data is limited and when there is an abundance of data for the main task.

## Dataset Structure

The training and test data are derived from the InSciOut studies of [Sumner et al. 2014](https://www.bmj.com/content/349/bmj.g7015) and [Bratton et al. 2019](https://pubmed.ncbi.nlm.nih.gov/31728413/). The splits have the following fields:

```
original_file_id: The ID of the original spreadsheet in the Sumner/Bratton data from which the annotations are derived
press_release_conclusion: The conclusion sentence from the press release
press_release_strength: The strength label for the press release
abstract_conclusion: The conclusion sentence from the abstract
abstract_strength: The strength label for the abstract
exaggeration_label: The final exaggeration label
```
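
A minimal sketch of loading the data and inspecting these fields with the `datasets` library. The Hub ID (`copenlu/scientific-exaggeration-detection`) and the `train`/`test` split names are assumptions here; adjust them to match the actual repository.

```python
from datasets import load_dataset

# Hub ID and split names are assumptions; change them to match the repository.
data = load_dataset("copenlu/scientific-exaggeration-detection")

# Inspect one labeled press release/abstract pair.
example = data["test"][0]
print(example["press_release_conclusion"])
print(example["abstract_conclusion"])
print(example["press_release_strength"], example["abstract_strength"])
print(example["exaggeration_label"])
```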

The exaggeration label is one of `same`, `exaggerates`, or `downplays`. The strength label is one of the following:

```
0: Statement of no relationship
1: Statement of correlation
2: Conditional statement of causation
3: Statement of causation
```
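
The card does not spell out how `exaggeration_label` relates to the two strength labels; a plausible reading, consistent with how the paper frames exaggeration, is a comparison of claim strength in the press release versus the abstract. The helper below is an illustrative assumption, not the official labeling procedure:

```python
def infer_exaggeration(press_release_strength: int, abstract_strength: int) -> str:
    """Illustration only: compare claim strength (0-3) in the press release vs. the abstract.

    The released labels come from expert annotation; this just shows one consistent
    way the three classes could relate to the strength scores.
    """
    if press_release_strength > abstract_strength:
        return "exaggerates"
    if press_release_strength < abstract_strength:
        return "downplays"
    return "same"
```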

## Dataset Creation

See section 4 of the [paper](https://aclanthology.org/2021.emnlp-main.845.pdf) for details on how the dataset was curated. The original InSciOut data can be found [here](https://figshare.com/articles/dataset/InSciOut/903704).

## Citation

```
@inproceedings{wright2021exaggeration,
  title = {{Semi-Supervised Exaggeration Detection of Health Science Press Releases}},
  author = {Dustin Wright and Isabelle Augenstein},
  booktitle = {Proceedings of EMNLP},
  publisher = {Association for Computational Linguistics},
  year = 2021
}
```

Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset.