---
license: pddl
task_categories:
- text-classification
tags:
- croissant
size_categories:
- 10M<n<100M
language:
- en
configs:
- config_name: train10k_val2k_test2k_edit_diff
data_files:
- split: train
path: train10k_val2k_test2k_edit_diff/train.csv
- split: validation
path: train10k_val2k_test2k_edit_diff/val.csv
- split: test
path: train10k_val2k_test2k_edit_diff/test.csv
- config_name: train10k_val2k_test2k_sentence
data_files:
- split: train
path: train10k_val2k_test2k_sentence/train.csv
- split: validation
path: train10k_val2k_test2k_sentence/val.csv
- split: test
path: train10k_val2k_test2k_sentence/test.csv
- config_name: train50k_val2k_test2k_edit_diff
data_files:
- split: train
path: train50k_val2k_test2k_edit_diff/train.csv
- split: validation
path: train50k_val2k_test2k_edit_diff/val.csv
- split: test
path: train50k_val2k_test2k_edit_diff/test.csv
- config_name: train50k_val2k_test2k_sentence
data_files:
- split: train
path: train50k_val2k_test2k_sentence/train.csv
- split: validation
path: train50k_val2k_test2k_sentence/val.csv
- split: test
path: train50k_val2k_test2k_sentence/test.csv
- config_name: train100k_val2k_test2k_edit_diff
data_files:
- split: train
path: train100k_val2k_test2k_edit_diff/train.csv
- split: validation
path: train100k_val2k_test2k_edit_diff/val.csv
- split: test
path: train100k_val2k_test2k_edit_diff/test.csv
- config_name: train100k_val2k_test2k_sentence
data_files:
- split: train
path: train100k_val2k_test2k_sentence/train.csv
- split: validation
path: train100k_val2k_test2k_sentence/val.csv
- split: test
path: train100k_val2k_test2k_sentence/test.csv
- config_name: train200k_val2k_test2k_edit_diff
data_files:
- split: train
path: train200k_val2k_test2k_edit_diff/train.csv
- split: validation
path: train200k_val2k_test2k_edit_diff/val.csv
- split: test
path: train200k_val2k_test2k_edit_diff/test.csv
- config_name: train200k_val2k_test2k_sentence
data_files:
- split: train
path: train200k_val2k_test2k_sentence/train.csv
- split: validation
path: train200k_val2k_test2k_sentence/val.csv
- split: test
path: train200k_val2k_test2k_sentence/test.csv
- config_name: train400k_val2k_test2k_edit_diff
data_files:
- split: train
path: train400k_val2k_test2k_edit_diff/train.csv
- split: validation
path: train400k_val2k_test2k_edit_diff/val.csv
- split: test
path: train400k_val2k_test2k_edit_diff/test.csv
- config_name: train400k_val2k_test2k_sentence
data_files:
- split: train
path: train400k_val2k_test2k_sentence/train.csv
- split: validation
path: train400k_val2k_test2k_sentence/val.csv
- split: test
path: train400k_val2k_test2k_sentence/test.csv
---

# WikiEditBias Dataset
WikiEditBias is a Wikipedia editorial bias dataset for the task of detecting bias in Wikipedia's historical revisions. It is built by tracking Wikipedia revisions and the corresponding editors' bias labels from the MediaWiki Historical Dump.
## Uses

### Direct Use
```python
from datasets import load_dataset

# Pick one of the configs listed in the metadata above, e.g. the 10k edit-diff setting.
dataset = load_dataset("fgs218ok/WikiEditBias", "train10k_val2k_test2k_edit_diff")
```
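The loaded `DatasetDict` exposes the train, validation, and test splits directly. A minimal inspection sketch, assuming the `train10k_val2k_test2k_edit_diff` config loaded above:

```python
# Split sizes and one training example (fields: label, old_text, new_text).
print(dataset)
print(dataset["train"][0])

# Convert a split to pandas for a quick look at the label balance.
train_df = dataset["train"].to_pandas()
print(train_df["label"].value_counts())
```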
## Dataset Structure
The WikiEditBias dataset is released in two formats:
- Edit diff format: sentence pairs extracted from sentence-level differences between Wikipedia revisions. Each .csv file has three fields:
  - `label`: 0 for a non-biased/neutral edit, 1 for a biased edit.
  - `old_text`: the sentence text before the edit.
  - `new_text`: the sentence text after the edit.
- Sentence format: individual sentences extracted from Wikipedia revisions, with two fields:
  - `label`: 0 for a non-biased/neutral edit, 1 for a biased edit.
  - `text`: the sentence text of the edit.

Each format is provided at five training-set sizes (10k, 50k, 100k, 200k, and 400k examples), each paired with 2k validation and 2k test examples.
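As a rough illustration of the text-classification task, the sketch below trains a simple bag-of-words baseline on the sentence format. The choice of the 10k config and of a TF-IDF + logistic-regression model is illustrative only and not part of the dataset.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load the smallest sentence-format config (illustrative choice).
dataset = load_dataset("fgs218ok/WikiEditBias", "train10k_val2k_test2k_sentence")

# Fit a TF-IDF + logistic regression baseline on the train split.
vectorizer = TfidfVectorizer(max_features=50_000)
X_train = vectorizer.fit_transform(dataset["train"]["text"])
y_train = dataset["train"]["label"]

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Evaluate on the held-out test split.
X_test = vectorizer.transform(dataset["test"]["text"])
y_test = dataset["test"]["label"]
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```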