# PIE Dataset Card for "aae2"
This is a PyTorch-IE wrapper for the Argument Annotated Essays v2 (AAE2) dataset (paper and homepage). Since the AAE2 dataset is published in the BRAT standoff format, this dataset builder is based on the PyTorch-IE brat dataset loading script.
Therefore, the `aae2` dataset as described here follows the data structure from the PIE brat dataset card.
## Dataset Summary
Argument Annotated Essays Corpus (AAEC) (Stab and Gurevych, 2017) contains student essays. A stance for a controversial theme is expressed by a major claim component as well as claim components, and premise components justify or refute the claims. Attack and support labels are defined as relations. The span covers a statement, which can stand in isolation as a complete sentence, according to the AAEC annotation guidelines. All components are annotated with minimum boundaries of a clause or sentence excluding so-called "shell" language such as On the other hand and Hence. (Morio et al., 2022, p. 642)
There is no premise that links to another premise or claim in a different paragraph; that means each argumentation tree structure is complete within a single paragraph. Therefore, it is possible to train a model on full documents or just at the paragraph level, which is usually less memory-intensive (Eger et al., 2017, p. 16).
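As a minimal illustration of this property, the following sketch (assuming the dataset loaded as in the Usage section below; the `relations` and `text` fields follow the PIE brat schema) checks that no raw relation crosses a paragraph boundary:

```python
# Sketch: check that raw relations never cross paragraph boundaries.
# Assumes the pie/aae2 dataset loaded as shown in the Usage section below.
from pie_datasets import load_dataset

doc = load_dataset("pie/aae2")["train"][0]

def paragraph_index(offset: int, text: str) -> int:
    # paragraphs in the essay text are separated by newlines
    return text.count("\n", 0, offset)

for rel in doc.relations:
    assert paragraph_index(rel.head.start, doc.text) == paragraph_index(
        rel.tail.start, doc.text
    )
```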
## Supported Tasks and Leaderboards
- Tasks: Argumentation Mining, Component Identification, Component Classification, Structure Identification
- Leaderboard: [More Information Needed]
## Languages
The language in the dataset is English (persuasive essays).
## Dataset Variants
The `aae2` dataset comes in a single version (`default`) with `BratDocumentWithMergedSpans` as document type. Note that this is in contrast to the base brat dataset, where the document type for the `default` variant is `BratDocument`. The reason is that the AAE2 dataset is published with single-fragment spans only. Since there is no need to merge any fragments, the document type `BratDocumentWithMergedSpans` is easier to handle for most of the task modules.
## Data Schema
See PIE-Brat Data Schema.
## Usage
```python
from pie_datasets import load_dataset, builders

# load default version
datasets = load_dataset("pie/aae2")

doc = datasets["train"][0]
assert isinstance(doc, builders.brat.BratDocumentWithMergedSpans)
```
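To peek at the raw annotation layers, something like the following should work (a sketch; the `spans` and `relations` layer names follow the PIE brat schema referenced above):

```python
# Sketch: inspect the raw BRAT layers of the first training document
print(doc.text[:100])

for span in list(doc.spans)[:3]:
    # each span is a LabeledSpan with start/end offsets and a label
    print(span.label, doc.text[span.start : span.end])

for relation in list(doc.relations)[:3]:
    # each relation is a BinaryRelation between two spans
    print(relation.label, relation.head.label, "->", relation.tail.label)
```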
## Data Splits
| Statistics | Train | Test |
| --- | --- | --- |
| No. of documents | 322 | 80 |
| Components:<br/>- `MajorClaim`<br/>- `Claim`<br/>- `Premise` | <br/>598<br/>1202<br/>3023 | <br/>153<br/>304<br/>809 |
| Relations\*:<br/>- `supports`<br/>- `attacks` | <br/>3820<br/>405 | <br/>1021<br/>92 |
\* includes all relations between claims and premises as well as all claim attributions.
See further statistics in Stab & Gurevych (2017), p. 650, Table A.1.
## Label Descriptions

### Components
| Components | Count | Percentage |
| --- | --- | --- |
| `MajorClaim` | 751 | 12.3 % |
| `Claim` | 1506 | 24.7 % |
| `Premise` | 3832 | 62.9 % |
- `MajorClaim` is the root node of the argumentation structure and represents the author's standpoint on the topic. Essay bodies either support or attack the author's standpoint expressed in the major claim. The major claim can be mentioned multiple times in a single document.
- `Claim` constitutes the central component of each argument. Each claim has at least one premise and takes the stance attribute value "for" or "against" with regard to the major claim.
- `Premise` gives the reasons of the argument; it is linked either to a claim or to another premise.
Note that relations between `MajorClaim` and `Claim` were not annotated; instead, each claim carries an `Attribute` annotation with value `for` or `against`, which indicates its relation to the `MajorClaim`. In addition, when two unrelated `Claim`s appear in one paragraph, no relation between them is annotated either.
### Relations
| Relations | Count | Percentage |
| --- | --- | --- |
| support: `supports` | 3613 | 94.3 % |
| attack: `attacks` | 219 | 5.7 % |
- "Each premise
p
has one outgoing relation (i.e., there is a relation that has p as source component) and none or several incoming relations (i.e., there can be a relation withp
as target component)." - "A
Claim
can exhibit several incoming relations but no outgoing relation." (S&G, 2017, p. 68) - "The relations from the claims of the arguments to the major claim are dotted since we will not explicitly annotated them. The relation of each argument to the major claim is indicated by a stance attribute of each claim. This attribute can either be for or against as illustrated in figure 1.4." (Stab & Gurevych, Guidelines for Annotating Argumentation Structures in Persuasive Essays, 2015, p. 5)
See further description in Stab & Gurevych (2017), p. 627, and the annotation guidelines.
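These constraints can be spot-checked programmatically; a sketch operating on the raw `doc` from the Usage example above (layer and field names follow the PIE brat schema):

```python
# Sketch: verify that every premise has at most one outgoing relation
# (exactly one in the annotated data, per the quote above)
from collections import Counter

# count outgoing relations per source span (keyed by object identity)
outgoing = Counter(id(rel.head) for rel in doc.relations)

for span in doc.spans:
    if span.label == "Premise":
        assert outgoing[id(span)] <= 1
```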
## Document Converters
The dataset provides document converters for the following target document types:
- `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations` with layers:
  - `labeled_spans`: `LabeledSpan` annotations, converted from `BratDocumentWithMergedSpans`'s `spans`
    - labels: `MajorClaim`, `Claim`, `Premise`
  - `binary_relations`: `BinaryRelation` annotations, converted from `BratDocumentWithMergedSpans`'s `relations`
    - there are two conversion methods that convert `Claim` attributes to their relations to `MajorClaim` (also see the label-count changes after this relation conversion below):
      - `connect_first` (default setting):
        - build a `supports` or `attacks` relation from each `Claim` to the first `MajorClaim`, depending on the `Claim`'s attribute (`for` or `against`), and
        - build a `semantically_same` relation from each following `MajorClaim` to the first `MajorClaim`
      - `connect_all`:
        - build a `supports` or `attacks` relation from each `Claim` to every `MajorClaim`, and
        - build no relations between the `MajorClaim`s
    - labels: `supports`, `attacks`, and `semantically_same` if `connect_first` is used
- `pytorch_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions` with layers:
  - `labeled_spans`, as above
  - `binary_relations`, as above
  - `labeled_partitions`: `LabeledSpan` annotations, created by splitting `BratDocumentWithMergedSpans`'s `text` at new lines (`\n`)
    - every partition is labeled as `paragraph`
See here for the document type definitions.
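The converted documents can be obtained, for instance, with `to_document_type` on the loaded dataset (a minimal sketch, assuming the usual `pie_datasets` conversion API):

```python
from pie_datasets import load_dataset
from pytorch_ie.documents import TextDocumentWithLabeledSpansAndBinaryRelations

dataset = load_dataset("pie/aae2")
# applies the registered document converter for the target type
converted = dataset.to_document_type(TextDocumentWithLabeledSpansAndBinaryRelations)

doc = converted["train"][0]
print(doc.labeled_spans[0])     # a MajorClaim / Claim / Premise span
print(doc.binary_relations[0])  # e.g. a supports relation
```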
## Label Statistics after Document Conversion
When converting from `BratDocumentWithMergedSpans` to `TextDocumentWithLabeledSpansAndBinaryRelations` or `TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions`, we apply a relation-conversion method (see above) that changes the label counts for the relations, as follows:
### `connect_first` (default)
| Relations | Count | Percentage |
| --- | --- | --- |
| support: `supports` | 4841 | 85.1 % |
| attack: `attacks` | 497 | 8.7 % |
| other: `semantically_same` | 349 | 6.2 % |
### `connect_all`
| Relations | Count | Percentage |
| --- | --- | --- |
| support: `supports` | 5958 | 89.3 % |
| attack: `attacks` | 715 | 10.7 % |
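For illustration, the two conversion strategies can be sketched roughly as follows (hypothetical helper functions; not the actual `pie_datasets` implementation):

```python
# Hypothetical sketch of the two relation-conversion strategies described above.
# A "claim" is paired with its stance attribute value ("for" or "against").

def connect_first(major_claims, claims_with_stance):
    first = major_claims[0]
    relations = []
    for claim, stance in claims_with_stance:
        label = "supports" if stance == "for" else "attacks"
        relations.append((claim, label, first))
    # link every further MajorClaim back to the first one
    for other in major_claims[1:]:
        relations.append((other, "semantically_same", first))
    return relations

def connect_all(major_claims, claims_with_stance):
    relations = []
    for claim, stance in claims_with_stance:
        label = "supports" if stance == "for" else "attacks"
        # one relation from each claim to every MajorClaim
        for major_claim in major_claims:
            relations.append((claim, label, major_claim))
    return relations
```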
## Dataset Creation

### Curation Rationale
"The identification of argumentation structures involves several subtasks like separating argumentative from non-argumentative text units (Moens et al. 2007; Florou et al. 2013), classifying argument components into claims and premises (Mochales-Palau and Moens 2011; Rooney, Wang, and Browne 2012; Stab and Gurevych 2014b), and identifying argumentative relations (Mochales-Palau and Moens 2009; Peldszus 2014; Stab and Gurevych 2014b). However, an approach that covers all subtasks is still missing. However, an approach that covers all subtasks is still missing. Furthermore, most approaches operate locally and do not optimize the global argumentation structure.
"In addition, to the lack of end-to-end approaches for parsing argumentation structures, there are relatively few corpora annotated with argumentation structures at the discourse-level." (p. 621)
"Our primary motivation for this work is to create argument analysis methods for argumentative writing support systems and to achieve a better understanding of argumentation structures." (p. 622)
### Source Data
Persuasive essays were collected from essayforum.com (see the essay prompts, along with the essays' `id`s, here).
#### Initial Data Collection and Normalization
"We randomly selected 402 English essays with a description of the writing prompt from essayforum.com. This online forum is an active community that provides correction and feedback about different texts such as research papers, essays, or poetry. For example, students post their essays in order to receive feedback about their writing skills while preparing for standardized language tests. The corpus includes 7,116 sentences with 147,271 tokens." (p. 630)
#### Who are the source language producers?

### Annotations

#### Annotation process
The annotations were done using the BRAT Rapid Annotation Tool (Stenetorp et al., 2012).
All three annotators independently annotated a random subset of 80 essays. The remaining 322 essays were annotated by the expert annotator.
The authors evaluated inter-annotator agreement using observed agreement and Fleiss' κ (Fleiss, 1971) for each label on each of the sub-tasks, namely component identification, component classification, and relation identification. The results are reported in Tables 2–4 of their paper.
#### Who are the annotators?
Three non-native speakers; one of the three is an expert annotator.
### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset
"[Computational Argumentation] have broad application potential in various areas such as legal decision support (Mochales-Palau and Moens 2009), information retrieval (Carstens and Toni 2015), policy making (Sardianos et al. 2015), and debating technologies (Levy et al. 2014; Rinott et al. 2015)." (p. 619)
### Discussion of Biases

### Other Known Limitations
The relations between claims and major claims are not explicitly annotated.
"The proportion of non-argumentative text amounts to 47,474 tokens (32.2%) and 1,631 sentences (22.9%). The number of sentences with several argument components is 583, of which 302 include several components with different types (e.g., a claim followed by premise)... [T]he identification of argument components requires the separation of argumentative from non-argumentative text units and the recognition of component boundaries at the token level...The proportion of paragraphs with unlinked argument components (e.g., unsupported claims without incoming relations) is 421 (23%). Thus, methods that link all argument components in a paragraph are only of limited use for identifying the argumentation structures in our corpus.
"Most of the arguments are convergent—that is, the depth of the argument is 1. The number of arguments with serial structure is 236 (20.9%)." (p. 634)
## Additional Information

### Dataset Curators

### Licensing Information
License: License description by TU Darmstadt
Funding: This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant no. I/82806 and by the German Federal Ministry of Education and Research (BMBF) as part of the Software Campus project AWS under grant no. 01IS12054.
### Citation Information
```bibtex
@article{stab2017parsing,
  title={Parsing argumentation structures in persuasive essays},
  author={Stab, Christian and Gurevych, Iryna},
  journal={Computational Linguistics},
  volume={43},
  number={3},
  pages={619--659},
  year={2017},
  publisher={MIT Press}
}

@misc{https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2422,
  url = {https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2422},
  author = {Stab, Christian and Gurevych, Iryna},
  keywords = {Argument Mining, 409-06 Informationssysteme, Prozess- und Wissensmanagement, 004},
  publisher = {Technical University of Darmstadt},
  year = {2017},
  copyright = {License description},
  title = {Argument Annotated Essays (version 2)}
}
```
### Contributions
Thanks to @ArneBinder and @idalr for adding this dataset.