Shahbaz Syed committed on
Commit b049fc6 · 1 Parent(s): 6e2c38e

Add first version of Conclugen

Files changed (2)
  1. README.md +164 -0
  2. conclugen.py +140 -0
README.md ADDED
@@ -0,0 +1,164 @@
# Dataset Card for ConcluGen

## Table of Contents
- [Dataset Card for ConcluGen](#dataset-card-for-conclugen)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://zenodo.org/record/4818134
- **Repository:** https://github.com/webis-de/acl21-informative-conclusion-generation
- **Paper:** Generating Informative Conclusions for Argumentative Texts
- **Leaderboard:** [N/A]
- **Point of Contact:** [email protected]

### Dataset Summary

The ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics.

The corpus has three variants: aspects, topics, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions.

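A minimal loading sketch with the Hugging Face `datasets` library. This assumes the corpus is published on the Hub as `webis/conclugen`, the repository referenced by the loading script in this commit; each variant is exposed as a separate configuration:

```
from datasets import load_dataset

# Hub id taken from the loading script's _REPO constant (assumption for illustration).
# One configuration per corpus variant.
base = load_dataset("webis/conclugen", "base")        # no argumentative knowledge
aspects = load_dataset("webis/conclugen", "aspects")  # argument aspects encoded
targets = load_dataset("webis/conclugen", "targets")  # conclusion targets encoded
topic = load_dataset("webis/conclugen", "topic")      # discussion topic encoded
```
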
### Supported Tasks and Leaderboards

Argument Summarization, Conclusion Generation

### Languages

English ('en') as spoken by Reddit users on the [r/changemyview](https://old.reddit.com/r/changemyview/) subreddit.

## Dataset Structure

### Data Instances

An example consists of a unique 'id', an 'argument', and its 'conclusion'.

```
{'id': 'ee11c116-23df-4795-856e-8b6c6626d5ed',
'argument': "In my opinion, the world would be a better place if alcohol was illegal. I've done a little bit of research to get some numbers, and I was quite shocked at what I found. Source On average, one in three people will be involved in a drunk driving crash in their lifetime. In 2011, 9,878 people died in drunk driving crashes Drunk driving costs each adult in this country almost 500 per year. Drunk driving costs the United States 132 billion a year. Every day in America, another 27 people die as a result of drunk driving crashes. Almost every 90 seconds, a person is injured in a drunk driving crash. These are just the driving related statistics. They would each get reduced by at least 75 if the sale of alcohol was illegal. I just don't see enough positives to outweigh all the deaths and injuries that result from irresponsible drinking. Alcohol is quite literally a drug, and is also extremely addicting. It would already be illegal if not for all these pointless ties with culture. Most people wouldn't even think to live in a world without alcohol, but in my opinion that world would be a better, safer, and more productive one. , or at least defend the fact that it's legal.",
'conclusion': 'I think alcohol should be illegal.'}
```

### Data Fields

- `id`: a string identifier for each example.
- `argument`: the argumentative text.
- `conclusion`: the conclusion of the argumentative text.

### Data Splits

The data is split into train, validation, and test splits for each variation of the dataset (including base).

|         | Train   | Validation | Test  |
|---------|---------|------------|-------|
| Base    | 123,539 | 12,354     | 1,373 |
| Aspects | 122,040 | 12,192     | 1,359 |
| Targets | 110,867 | 11,068     | 1,238 |
| Topic   | 123,538 | 12,354     | 1,374 |

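For orientation, a short sketch (under the same `webis/conclugen` Hub id assumption as above) that checks the split sizes and inspects the fields of one training example:

```
from datasets import load_dataset

base = load_dataset("webis/conclugen", "base")

# Split sizes as defined by the loading script.
print({split: len(base[split]) for split in ("train", "validation", "test")})

# Each record carries the three fields documented above.
example = base["train"][0]
print(example["id"])
print(example["argument"][:200])
print(example["conclusion"])
```
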
## Dataset Creation

### Curation Rationale

ConcluGen was built as a first step towards argument summarization technology. The [rules of the subreddit](https://old.reddit.com/r/changemyview/wiki/rules) ensure high-quality data suitable for the task.

### Source Data

#### Initial Data Collection and Normalization

Reddit [ChangeMyView](https://old.reddit.com/r/changemyview/)

#### Who are the source language producers?

Users of the subreddit [r/changemyview](https://old.reddit.com/r/changemyview/). Further demographic information is unavailable from the data source.

### Annotations

The dataset is augmented with automatically extracted knowledge such as the argument's aspects, the discussion topic, and possible conclusion targets.

#### Annotation process

[N/A]

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

Only the argumentative text and its conclusion are provided. No personal information about the posters is included.

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data, which is unclear.

### Citation Information

```
@inproceedings{syed:2021,
  author    = {Shahbaz Syed and
               Khalid Al Khatib and
               Milad Alshomary and
               Henning Wachsmuth and
               Martin Potthast},
  editor    = {Chengqing Zong and
               Fei Xia and
               Wenjie Li and
               Roberto Navigli},
  title     = {Generating Informative Conclusions for Argumentative Texts},
  booktitle = {Findings of the Association for Computational Linguistics: {ACL/IJCNLP}
               2021, Online Event, August 1-6, 2021},
  pages     = {3482--3493},
  publisher = {Association for Computational Linguistics},
  year      = {2021},
  url       = {https://doi.org/10.18653/v1/2021.findings-acl.306},
  doi       = {10.18653/v1/2021.findings-acl.306}
}
```

conclugen.py ADDED
@@ -0,0 +1,140 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""ConcluGen Dataset"""


import json

import datasets

_CITATION = """\
@inproceedings{syed:2021,
  author    = {Shahbaz Syed and
               Khalid Al Khatib and
               Milad Alshomary and
               Henning Wachsmuth and
               Martin Potthast},
  editor    = {Chengqing Zong and
               Fei Xia and
               Wenjie Li and
               Roberto Navigli},
  title     = {Generating Informative Conclusions for Argumentative Texts},
  booktitle = {Findings of the Association for Computational Linguistics: {ACL/IJCNLP}
               2021, Online Event, August 1-6, 2021},
  pages     = {3482--3493},
  publisher = {Association for Computational Linguistics},
  year      = {2021},
  url       = {https://doi.org/10.18653/v1/2021.findings-acl.306},
  doi       = {10.18653/v1/2021.findings-acl.306}
}
"""


_DESCRIPTION = """\
The ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics.

The corpus has three variants: aspects, topics, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions.
"""

_HOMEPAGE = "https://zenodo.org/record/4818134"

_LICENSE = "https://creativecommons.org/licenses/by/4.0/legalcode"


_REPO = "https://huggingface.co/datasets/webis/conclugen/resolve/main"

_URLS = {
    'base_train': f"{_REPO}/base_train.jsonl",
    'base_validation': f"{_REPO}/base_validation.jsonl",
    'base_test': f"{_REPO}/base_test.jsonl",
    'aspects_train': f"{_REPO}/aspects_train.jsonl",
    'aspects_validation': f"{_REPO}/aspects_validation.jsonl",
    'aspects_test': f"{_REPO}/aspects_test.jsonl",
    'targets_train': f"{_REPO}/targets_train.jsonl",
    'targets_validation': f"{_REPO}/targets_validation.jsonl",
    'targets_test': f"{_REPO}/targets_test.jsonl",
    'topic_train': f"{_REPO}/topic_train.jsonl",
    'topic_validation': f"{_REPO}/topic_validation.jsonl",
    'topic_test': f"{_REPO}/topic_test.jsonl"
}


class ConcluGen(datasets.GeneratorBasedBuilder):
    """ConcluGen: argumentative texts paired with their conclusions, collected from the ChangeMyView subreddit."""

    VERSION = datasets.Version("1.1.0")
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="base", version=VERSION, description="The base version of the dataset with no argumentative knowledge."),
        datasets.BuilderConfig(name="aspects", version=VERSION, description="Variation with argument aspects encoded."),
        datasets.BuilderConfig(name="targets", version=VERSION, description="Variation with conclusion targets encoded."),
        datasets.BuilderConfig(name="topic", version=VERSION, description="Variation with discussion topic encoded."),
    ]

    DEFAULT_CONFIG_NAME = "base"

    def _info(self):
        features = datasets.Features(
            {
                "argument": datasets.Value("string"),
                "conclusion": datasets.Value("string"),
                "id": datasets.Value("string")
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # Download the data files for the selected configuration (base, aspects, targets, or topic).
        train_file = dl_manager.download(_URLS[self.config.name + "_train"])
        validation_file = dl_manager.download(_URLS[self.config.name + "_validation"])
        test_file = dl_manager.download(_URLS[self.config.name + "_test"])
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "data_file": train_file,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "data_file": validation_file,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "data_file": test_file,
                },
            )
        ]

130
+ def _generate_examples(self, data_file):
131
+ """ Yields examples as (key, example) tuples. """
132
+ with open(data_file, encoding="utf-8") as f:
133
+ for row in f:
134
+ data = json.loads(row)
135
+ id_ = data['id']
136
+ yield id_, {
137
+ "argument": data['argument'],
138
+ "conclusion": data["conclusion"],
139
+ "id": id_
140
+ }