---
license: pddl
task_categories:
- text-classification
tags:
- croissant
size_categories:
- 10M<n<100M
language:
- en
configs:
  - config_name: train10k_val2k_test2k_edit_diff
    data_files:
      - split: train
        path: train10k_val2k_test2k_edit_diff/train.csv
      - split: validation
        path: train10k_val2k_test2k_edit_diff/val.csv
      - split: test
        path: train10k_val2k_test2k_edit_diff/test.csv
  - config_name: train10k_val2k_test2k_sentence
    data_files:
      - split: train
        path: train10k_val2k_test2k_sentence/train.csv
      - split: validation
        path: train10k_val2k_test2k_sentence/val.csv
      - split: test
        path: train10k_val2k_test2k_sentence/test.csv
  - config_name: train50k_val2k_test2k_edit_diff
    data_files:
      - split: train
        path: train50k_val2k_test2k_edit_diff/train.csv
      - split: validation
        path: train50k_val2k_test2k_edit_diff/val.csv
      - split: test
        path: train50k_val2k_test2k_edit_diff/test.csv
  - config_name: train50k_val2k_test2k_sentence
    data_files:
      - split: train
        path: train50k_val2k_test2k_sentence/train.csv
      - split: validation
        path: train50k_val2k_test2k_sentence/val.csv
      - split: test
        path: train50k_val2k_test2k_sentence/test.csv
  - config_name: train100k_val2k_test2k_edit_diff
    data_files:
      - split: train
        path: train100k_val2k_test2k_edit_diff/train.csv
      - split: validation
        path: train100k_val2k_test2k_edit_diff/val.csv
      - split: test
        path: train100k_val2k_test2k_edit_diff/test.csv
  - config_name: train100k_val2k_test2k_sentence
    data_files:
      - split: train
        path: train100k_val2k_test2k_sentence/train.csv
      - split: validation
        path: train100k_val2k_test2k_sentence/val.csv
      - split: test
        path: train100k_val2k_test2k_sentence/test.csv
  - config_name: train200k_val2k_test2k_edit_diff
    data_files:
      - split: train
        path: train200k_val2k_test2k_edit_diff/train.csv
      - split: validation
        path: train200k_val2k_test2k_edit_diff/val.csv
      - split: test
        path: train200k_val2k_test2k_edit_diff/test.csv
  - config_name: train200k_val2k_test2k_sentence
    data_files:
      - split: train
        path: train200k_val2k_test2k_sentence/train.csv
      - split: validation
        path: train200k_val2k_test2k_sentence/val.csv
      - split: test
        path: train200k_val2k_test2k_sentence/test.csv
  - config_name: train400k_val2k_test2k_edit_diff
    data_files:
      - split: train
        path: train400k_val2k_test2k_edit_diff/train.csv
      - split: validation
        path: train400k_val2k_test2k_edit_diff/val.csv
      - split: test
        path: train400k_val2k_test2k_edit_diff/test.csv
  - config_name: train400k_val2k_test2k_sentence
    data_files:
      - split: train
        path: train400k_val2k_test2k_sentence/train.csv
      - split: validation
        path: train400k_val2k_test2k_sentence/val.csv
      - split: test
        path: train400k_val2k_test2k_sentence/test.csv
---
# WikiEditBias Dataset

WikiEditBias (Wikipedia Editorial Bias) is a dataset for detecting editorial bias in Wikipedia's historical revisions. It was generated by tracking Wikipedia revisions and the corresponding editors' bias labels from the [MediaWiki Historical Dump](https://dumps.wikimedia.org/other/mediawiki_history/readme.html).

## Uses

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

```python
from datasets import load_dataset

# A config name is required because the dataset ships multiple configurations
# (see the list above), e.g. the 10k-scale sentence format:
dataset = load_dataset("fgs218ok/WikiEditBias", "train10k_val2k_test2k_sentence")
```

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The WikiEditBias Dataset comes in two data formats:
* Edit-diff format: contains sentence pairs extracted from sentence-level differences between Wikipedia revisions. Each `.csv` file has three fields:
  * `label`: 0 for non-biased/neutral edits, 1 for biased edits.
  * `old_text`: the sentence-level text before the edit.
  * `new_text`: the sentence-level text after the edit.
* Sentence format: contains individual sentences extracted from Wikipedia revisions. Each `.csv` file has two fields:
  * `label`: 0 for non-biased/neutral edits, 1 for biased edits.
  * `text`: the sentence-level text of the edit.

For each format, five training-set scales are provided: 10k, 50k, 100k, 200k, and 400k.
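The two schemas can be illustrated with a small self-contained sketch. The rows below are invented examples for illustration only, not taken from the dataset; only the column names and label encoding follow the description above:

```python
import csv
import io

# Hypothetical rows in the edit-diff format: one sentence pair per edit.
edit_diff_csv = io.StringIO(
    "label,old_text,new_text\n"
    '1,"The policy was a disaster.","The policy was controversial."\n'
    '0,"He was born in 1970.","He was born in 1970 in Paris."\n'
)

# Hypothetical rows in the sentence format: one sentence per edit.
sentence_csv = io.StringIO(
    "label,text\n"
    '1,"The policy was a disaster."\n'
    '0,"He was born in 1970 in Paris."\n'
)

edit_diff_rows = list(csv.DictReader(edit_diff_csv))
sentence_rows = list(csv.DictReader(sentence_csv))

# Both formats share the same label encoding: "0" = neutral, "1" = biased.
for row in edit_diff_rows + sentence_rows:
    assert row["label"] in ("0", "1")
```

The same field layout applies at every scale; only the number of training rows changes between the 10k and 400k configurations.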