---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- en
- de
- tr
pretty_name: Wikipedia Deletion Discussions with stance and policy labels
size_categories:
- 100K<n<1M
tags:
- wikimedia
---
# Dataset Card for Wiki-Stance
## Dataset Details
### Dataset Description
This is the Wiki-Stance dataset introduced in the EMNLP 2023 paper "[Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions](https://aclanthology.org/2023.emnlp-main.361/)".
A pre-print version of the paper is available on [arXiv](https://arxiv.org/abs/2310.05779).
### Dataset Sources
- **Repository:** https://github.com/copenlu/wiki-stance
- **Paper:** https://aclanthology.org/2023.emnlp-main.361/
### Column name descriptions:
- *title* - Title of the Wikipedia page under consideration for deletion
- *username* - Wikipedia username of the author of the comment
- *timestamp* - Timestamp of the comment
- *decision* - Stance label for the comment in the original language
- *comment* - Text of the deletion discussion comment by a Wikipedia editor
- *topic* - Topic for the stance task (Usually "Deletion of [Title]")
- *en_label* - English translation of the *decision* stance label
- *policy* - Wikipedia policy code relevant for the comment
- *policy_title* - Title of Wikipedia policy relevant for the comment
- *policy_index* - Index of the Wikipedia policy (specific to our dataset)
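A minimal sketch of loading the dataset and inspecting these fields with the Hugging Face `datasets` library; the repository id and split names below are assumptions, so check the header of this page and the files tab for the exact layout.

```python
from datasets import load_dataset

# Hypothetical repository id; replace it with the id shown at the top of this page.
ds = load_dataset("frimelle/wiki-stance")

example = ds["train"][0]
print(example["title"])         # Wikipedia page under consideration for deletion
print(example["comment"])       # editor comment, with policy mentions removed
print(example["decision"])      # stance label in the original language
print(example["en_label"])      # English stance label: keep / delete / merge / comment
print(example["policy_title"])  # title of the Wikipedia policy the comment refers to
```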
## Uses
The dataset was created to support content moderation on Wikipedia through stance detection and policy prediction for comments in Articles for Deletion discussions across three language editions of Wikipedia.
### Direct Use
This dataset can be used for stance detection in discussions to support content moderation, and to predict the policies referenced in communities that rely on predefined standards and guidelines.
The dataset has not yet been tested outside of the Wikipedia context, but it could contribute to content moderation at large.
It could also be used for transparent stance detection, i.e., stance detection that refers to a policy, with applications beyond Wikipedia.
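As a rough illustration of this use case (not the models from the paper), the sketch below trains a simple TF-IDF baseline to predict the English stance label from the topic and comment; it reuses the hypothetical repository id from the loading example above and assumes splits named `train` and `test`.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

ds = load_dataset("frimelle/wiki-stance")  # hypothetical repo id, see above

def to_text(example):
    # Concatenate the stance topic and the comment into a single input string.
    return f'{example["topic"]} [SEP] {example["comment"]}'

X_train = [to_text(e) for e in ds["train"]]
y_train = [e["en_label"] for e in ds["train"]]
X_test = [to_text(e) for e in ds["test"]]
y_test = [e["en_label"] for e in ds["test"]]

clf = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```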
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
The dataset is based on the Wikipedia Articles for Deletion discussions on three language Wikipedias (English, German, Turkish) from 2005 (2006 for Turkish) to 2022.
#### Data Collection and Processing
We identify the article deletion discussion archive pages for the English, German, and Turkish Wikipedias, respectively, and retrieve all deletion discussions in the considered time frame through the respective MediaWiki APIs
([English](https://en.wikipedia.org/w/api.php), [German](https://de.wikipedia.org/w/api.php), [Turkish](https://tr.wikipedia.org/w/api.php)).
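For illustration, one such deletion-discussion page can be retrieved through the MediaWiki API roughly as follows; the log-page title is an arbitrary example, and the authors' actual crawling code is in the GitHub repository linked above.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "parse",
    # Arbitrary example of an AfD log page; the dataset covers 2005-2022.
    "page": "Wikipedia:Articles for deletion/Log/2022 January 1",
    "prop": "wikitext",
    "format": "json",
    "formatversion": 2,
}
resp = requests.get(API, params=params, timeout=30)
wikitext = resp.json()["parse"]["wikitext"]
print(wikitext[:500])  # raw wikitext of that day's deletion discussions
```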
From those pages, we select comments that mention a Wikipedia page, identified by the prefix `[[WP:` or `[[Wikipedia:`.
We find that these generally refer to policies, either by name or through policy abbreviations such as WP:NOTE ([Wikipedia:Notability](https://en.wikipedia.org/wiki/Wikipedia:Notability)).
If the policy abbreviations link to a policy page, the Wikimedia API resolves them and returns the actual policy or Wikipedia page title.
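A rough sketch of how such prefix-based mentions could be picked out of a comment's wikitext (the pattern is illustrative, not the exact expression used to build the dataset):

```python
import re

# Match [[WP:...]] and [[Wikipedia:...]] links, capturing the shortcut or page title.
POLICY_LINK = re.compile(r"\[\[(?:WP|Wikipedia):([^\]|]+)(?:\|[^\]]*)?\]\]")

comment = "Delete per [[WP:NOTE|notability]] and [[Wikipedia:Verifiability]]."
print(POLICY_LINK.findall(comment))
# ['NOTE', 'Verifiability'] -> shortcuts are then resolved to policy titles via the API
```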
For each of the three languages, we retrieve the full policy page through the Wikimedia API, manually select the policies that are actual policy pages, and discard
other Wikipedia pages, such as articles.
We further discard all policies that are mentioned infrequently across all comments in the respective language's deletion discussions (threshold of 100 mentions in English, 10 in German, and 2 in Turkish, due to the varying dataset sizes).
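A toy sketch of this frequency filter, using the thresholds stated above and made-up policy counts:

```python
from collections import Counter

MIN_MENTIONS = {"en": 100, "de": 10, "tr": 2}  # per-language thresholds, as stated above

# In practice this list would hold the policy mentioned by each comment in one language.
policies_per_comment = ["Notability", "Verifiability", "Notability", "Notability (music)"]
counts = Counter(policies_per_comment)
kept = {policy for policy, n in counts.items() if n >= MIN_MENTIONS["en"]}
```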
To collapse sub-policies with the same or similar meaning, or subcategories of one policy, into the main policy, we merge them based on the link from the sub-policy to the main policy in the policy page text; e.g., notability criteria for specific
article types, such as Wikipedia:Notability (music), were merged into the Wikipedia:Notability policy.
This was done manually, based on the original as well as machine-translated versions of the policy texts, by an annotator proficient in German and English with a basic understanding of Turkish.
As the majority of comments refer to only one policy, we keep only one policy per comment by selecting the first policy mentioned.
We further remove all mentions of policies from the comments using regular expressions, which often breaks the grammaticality of the sentence but is necessary to prevent leakage of label information.
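A sketch of what this removal could look like, again with an illustrative pattern rather than the exact regular expressions used for the dataset; the output also shows how removal tends to break the sentence:

```python
import re

POLICY_MENTION = re.compile(r"\[\[(?:WP|Wikipedia):[^\]]*\]\]")

comment = "Delete per [[WP:NOTE]], this clearly fails our notability bar."
print(POLICY_MENTION.sub("", comment))
# "Delete per , this clearly fails our notability bar."
```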
The stance labels (keep, delete, merge, and comment) can be expressed in different forms or spelled differently.
We manually identify the different ways the labels may be expressed and aggregate them into the four standard labels.
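For instance, a simple normalization table could map observed variants onto the four standard labels (the variants listed here are examples only, not the full mapping used for the dataset):

```python
# Example variants only; the real mapping covers many more spellings and the
# German and Turkish label words as well.
LABEL_VARIANTS = {
    "keep": {"keep", "speedy keep", "strong keep"},
    "delete": {"delete", "speedy delete", "strong delete"},
    "merge": {"merge", "merge and redirect"},
    "comment": {"comment", "note"},
}

def normalize(raw_label: str):
    raw = raw_label.strip().lower()
    for canonical, variants in LABEL_VARIANTS.items():
        if raw in variants:
            return canonical
    return None  # unrecognised variant, left for manual inspection

print(normalize("Speedy Keep"))  # -> "keep"
```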
We create a multilingual dataset by semi-automatically linking the policies across the three languages: each German or Turkish policy is mapped to the corresponding English policy, if one exists, using the Wikipedia interlanguage links.
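Interlanguage links can be queried through the MediaWiki API roughly like this; the German policy title is an arbitrary example:

```python
import requests

API = "https://de.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "titles": "Wikipedia:Relevanzkriterien",  # German notability criteria, as an example
    "prop": "langlinks",
    "lllang": "en",
    "format": "json",
    "formatversion": 2,
}
page = requests.get(API, params=params, timeout=30).json()["query"]["pages"][0]
links = page.get("langlinks", [])
print(links[0]["title"] if links else None)  # English counterpart, if one exists
```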
The dataset is split into train/test/dev sets. For English and German, the split is 80%/15%/5%; due to the low number of comments in Turkish, we altered the Turkish split to ensure at least 200 test examples.
#### Who are the source data producers?
The data creators are Wikipedia editors contributing to the Articles for Deletion discussions in the respective Wikipedia language edition.
#### Annotation process
The annotations are derived from the discussion comments themselves: the stance label of a comment is the label the editor expressed in that comment, and the policy label is the policy the editor mentions.
#### Who are the annotators?
Accordingly, the editors can be seen as the annotators.
#### Personal and Sensitive Information
All data collected from an online community should be treated as sensitive information, especially to preserve the privacy of the editors.
## Bias, Risks, and Limitations
Data from online communities should be treated with respect and with care not to overstep the wishes of its creators.
The data provided in this dataset is only a snapshot of the communities' discussions, as it focuses on the comments that mention policies (only around 20% of comments in English, and around 2% in German and Turkish).
### Recommendations
We would like to discourage work that identifies editors or works with editor information on an individual level in any form.
## Citation
If you find our dataset helpful, please cite our work using the following BibTeX entry:
```bibtex
@inproceedings{kaffee-etal-2023-article,
title = "Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual {W}ikipedia Editor Discussions",
author = "Kaffee, Lucie-Aim{\'e}e and
Arora, Arnav and
Augenstein, Isabelle",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.361",
doi = "10.18653/v1/2023.emnlp-main.361",
pages = "5891--5909",
abstract = "The moderation of content on online platforms is usually non-transparent. On Wikipedia, however, this discussion is carried out publicly and editors are encouraged to use the content moderation policies as explanations for making moderation decisions. Currently, only a few comments explicitly mention those policies {--} 20{\%} of the English ones, but as few as 2{\%} of the German and Turkish comments. To aid in this process of understanding how content is moderated, we construct a novel multilingual dataset of Wikipedia editor discussions along with their reasoning in three languages. The dataset contains the stances of the editors (keep, delete, merge, comment), along with the stated reason, and a content moderation policy, for each edit decision. We demonstrate that stance and corresponding reason (policy) can be predicted jointly with a high degree of accuracy, adding transparency to the decision-making process. We release both our joint prediction models and the multilingual content moderation dataset for further research on automated transparent content moderation.",
}
```