David Wadden committed on
Commit
8bf74e5
1 Parent(s): 076df5b

Copy over version from SciFact.

Files changed (2)
  1. README.md +90 -0
  2. covidfact_entailment.py +157 -0
README.md ADDED
@@ -0,0 +1,90 @@
---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc-by-nc-2.0
multilinguality:
- monolingual
pretty_name: SciFact
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
paperswithcode_id: scifact
dataset_info:
  features:
  - name: claim_id
    dtype: int32
  - name: claim
    dtype: string
  - name: abstract_id
    dtype: int32
  - name: title
    dtype: string
  - name: abstract
    sequence: string
  - name: verdict
    dtype: string
  - name: evidence
    sequence: int32
  splits:
  - name: train
    num_bytes: 1649655
    num_examples: 919
  - name: validation
    num_bytes: 605262
    num_examples: 340
  download_size: 3115079
  dataset_size: 2254917
---

# Dataset Card for "scifact_entailment"

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)

## Dataset Description

- **Homepage:** [https://scifact.apps.allenai.org/](https://scifact.apps.allenai.org/)
- **Repository:** <https://github.com/allenai/scifact>
- **Paper:** [Fact or Fiction: Verifying Scientific Claims](https://aclanthology.org/2020.emnlp-main.609/)
- **Point of Contact:** [David Wadden](mailto:[email protected])

### Dataset Summary

SciFact is a dataset of 1.4K expert-written scientific claims, paired with evidence-containing abstracts and annotated with labels and rationales.

For more information on the dataset, see [allenai/scifact](https://huggingface.co/datasets/allenai/scifact).
This repo contains the same data, reformatted as an entailment task: a single instance pairs a claim with a paper title and abstract, together with an entailment label and the indices of any evidence sentences.

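For orientation, here is a minimal loading sketch (not part of the original card; it assumes a `datasets` version that can still run script-based loaders, and the exact path or Hub repo id is an assumption):

```python
from datasets import load_dataset

# Load through the loader script shipped in this repo; substitute the Hub
# repo id if the dataset is published there (the exact id is an assumption).
ds = load_dataset("covidfact_entailment.py", trust_remote_code=True)
example = ds["train"][0]
print(example["claim"], "->", example["verdict"])
```
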
## Dataset Structure

### Data Fields

- `claim_id`: An `int32` claim identifier.
- `claim`: A `string`.
- `abstract_id`: An `int32` abstract identifier.
- `title`: A `string`.
- `abstract`: A list of `string`s, one for each sentence in the abstract.
- `verdict`: The fact-checking verdict, a `string`: `SUPPORT`, `CONTRADICT`, or `NEI`.
- `evidence`: The `int32` indices of the abstract sentences that provide evidence for the verdict (empty when the verdict is `NEI`); a sample record follows this list.

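For concreteness, a single record has the following shape (a sketch; the field values below are invented for illustration, not drawn from the data):

```python
# Illustrative record; the values are invented, only the shapes match the schema.
example = {
    "claim_id": 13,
    "claim": "An illustrative claim sentence.",
    "abstract_id": 4983,
    "title": "An illustrative paper title",
    "abstract": ["Sentence 0.", "Sentence 1.", "Sentence 2."],
    "verdict": "SUPPORT",
    "evidence": [1],  # index into `abstract`
}
```
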
### Data Splits

|        | train | validation |
|--------|------:|-----------:|
| claims |   919 |        340 |
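
To confirm these counts after loading (a sketch reusing the `ds` object from the loading example above):

```python
# Expect {'train': 919, 'validation': 340} per the table above.
print({name: split.num_rows for name, split in ds.items()})
```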
covidfact_entailment.py ADDED
@@ -0,0 +1,157 @@
"""Scientific fact-checking dataset. Verifies claims based on citation sentences
using evidence from the cited abstracts. Formatted as a paragraph-level entailment task."""


import json

import datasets


_CITATION = """\
@inproceedings{Wadden2020FactOF,
    title={Fact or Fiction: Verifying Scientific Claims},
    author={David Wadden and Shanchuan Lin and Kyle Lo and Lucy Lu Wang and Madeleine van Zuylen and Arman Cohan and Hannaneh Hajishirzi},
    booktitle={EMNLP},
    year={2020},
}
"""

_DESCRIPTION = """\
SciFact is a dataset of 1.4K expert-written scientific claims, paired with evidence-containing abstracts and annotated with labels and rationales.
"""

_URL = "https://scifact.s3-us-west-2.amazonaws.com/release/latest/data.tar.gz"


def flatten(xss):
    """Flatten a list of lists into a single list."""
    return [x for xs in xss for x in xs]


class ScifactEntailmentConfig(datasets.BuilderConfig):
    """BuilderConfig for ScifactEntailment."""

    def __init__(self, **kwargs):
        """
        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(ScifactEntailmentConfig, self).__init__(
            version=datasets.Version("1.0.0", ""), **kwargs
        )


class ScifactEntailment(datasets.GeneratorBasedBuilder):
    """SciFact claims paired with cited abstracts, as a paragraph-level entailment task."""

    VERSION = datasets.Version("0.1.0")

    def _info(self):
        features = {
            "claim_id": datasets.Value("int32"),
            "claim": datasets.Value("string"),
            "abstract_id": datasets.Value("int32"),
            "title": datasets.Value("string"),
            "abstract": datasets.features.Sequence(datasets.Value("string")),
            "verdict": datasets.Value("string"),
            "evidence": datasets.features.Sequence(datasets.Value("int32")),
        }

        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            features=datasets.Features(features),
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage="https://scifact.apps.allenai.org/",
            citation=_CITATION,
        )

    @staticmethod
    def _read_tar_file(f):
        """Read a jsonl file from the archive into a list of dicts."""
        res = []
        for row in f:
            this_row = json.loads(row.decode("utf-8"))
            res.append(this_row)

        return res

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # dl_manager is a datasets.download.DownloadManager that can be used
        # to download and extract URLs.
        archive = dl_manager.download(_URL)
        for path, f in dl_manager.iter_archive(archive):
            if path == "data/corpus.jsonl":
                corpus = self._read_tar_file(f)
                corpus = {x["doc_id"]: x for x in corpus}
            elif path == "data/claims_train.jsonl":
                claims_train = self._read_tar_file(f)
            elif path == "data/claims_dev.jsonl":
                claims_validation = self._read_tar_file(f)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "claims": claims_train,
                    "corpus": corpus,
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "claims": claims_validation,
                    "corpus": corpus,
                    "split": "validation",
                },
            ),
        ]

    def _generate_examples(self, claims, corpus, split):
        """Yields examples."""
        # Loop over claims and put evidence together with each claim.
        id_ = -1  # Will increment to 0 on first iteration.
        for claim in claims:
            evidence = {int(k): v for k, v in claim["evidence"].items()}
            for cited_doc_id in claim["cited_doc_ids"]:
                cited_doc = corpus[cited_doc_id]
                abstract_sents = [sent.strip() for sent in cited_doc["abstract"]]

                if cited_doc_id in evidence:
                    this_evidence = evidence[cited_doc_id]
                    # All evidence entries for a claim / document pair share
                    # the same label, so take the label from the first one.
                    verdict = this_evidence[0]["label"]
                    evidence_sents = flatten(
                        [entry["sentences"] for entry in this_evidence]
                    )
                else:
                    # No evidence for this document: Not Enough Info.
                    verdict = "NEI"
                    evidence_sents = []

                instance = {
                    "claim_id": claim["id"],
                    "claim": claim["claim"],
                    "abstract_id": cited_doc_id,
                    "title": cited_doc["title"],
                    "abstract": abstract_sents,
                    "verdict": verdict,
                    "evidence": evidence_sents,
                }

                id_ += 1
                yield id_, instance
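
To smoke-test the loader end-to-end, one option is to instantiate the builder directly (a sketch, not part of the commit; it assumes network access, that the script is importable as `covidfact_entailment`, and a `datasets` version that still supports script-based builders):

```python
from covidfact_entailment import ScifactEntailment

# Download the SciFact release, build both splits, and print one row each.
builder = ScifactEntailment()
builder.download_and_prepare()
ds = builder.as_dataset()
for name in ("train", "validation"):
    row = ds[name][0]
    print(name, row["claim_id"], row["verdict"])
```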