---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: pile-detoxify
size_categories:
- 1M<n<10M
source_datasets:
- extended|the_pile
tags:
- toxicity
task_categories:
- text-classification
- other
task_ids:
- acceptability-classification
- hate-speech-detection
- text-scoring
---

# Dataset Card for pile-detoxify

## Dataset Description

- **Repository:** https://github.com/tomekkorbak/aligned-pretraining-objectives
- **Paper:** arXiv link to be added

### Dataset Summary

This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated with the toxicity of each sentence.
Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify).
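
As a quick-start illustration, the dataset can be loaded with the `datasets` library. This is a minimal sketch: the repository id `tomekkorbak/pile-detoxify` is an assumption inferred from the companion PII dataset linked below, not something this card confirms.

```python
# Minimal loading sketch; the repo id below is an assumption, not from the card.
from datasets import load_dataset

dataset = load_dataset("tomekkorbak/pile-detoxify", split="train")

# Each row pairs a list of sentences with per-sentence toxicity scores.
row = dataset[0]
print(row["texts"][:3])   # first sentences of the document
print(row["scores"][:3])  # matching per-sentence Detoxify toxicity scores
print(row["avg_score"], row["num_sents"])
```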

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which consists of English text.

## Dataset Structure

### Data Instances

The dataset contains 1,949,977 documents (rows).

### Data Fields

- texts (sequence): a list of the sentences in the document, segmented using spaCy
- meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which the document originated
- scores (sequence): a toxicity score for each sentence in the `texts` column, as predicted by [Detoxify](https://github.com/unitaryai/detoxify)
- avg_score (float64): the average of the scores in the `scores` column
- num_sents (int64): the number of sentences (and scores) in the document
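
To show how these fields fit together, here is a hedged sketch that keeps only low-toxicity documents. The 0.01 threshold is an arbitrary illustrative value, not one from the paper, and the repo id is the same assumption as in the loading sketch above.

```python
# Hedged sketch: filter documents by average Detoxify score.
from datasets import load_dataset

dataset = load_dataset("tomekkorbak/pile-detoxify", split="train")  # assumed repo id

# 0.01 is an arbitrary illustrative threshold, not a value from the paper.
clean_subset = dataset.filter(lambda row: row["avg_score"] < 0.01)
print(f"Kept {len(clean_subset)} of {len(dataset)} documents")
```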

### Data Splits

Training set only.

## Dataset Creation

### Curation Rationale

This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text.

### Source Data

#### Initial Data Collection and Normalization

This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile).

#### Who are the source language producers?

Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset.

### Annotations

#### Annotation process

Each sentence was scored using [Detoxify](https://github.com/unitaryai/detoxify), a toxic comment classifier.
We used the `unbiased` model, which is based on the 124M-parameter [RoBERTa](https://arxiv.org/abs/1907.11692) and trained on the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).
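
For concreteness, the per-document annotation step can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: the spaCy model name `en_core_web_sm` and the `score_document` helper are assumptions, since the card only says sentences were segmented with spaCy and scored with Detoxify's `unbiased` model.

```python
# Illustrative reconstruction of the annotation step, not the authors' code.
import spacy
from detoxify import Detoxify

nlp = spacy.load("en_core_web_sm")  # assumed pipeline; the card names none
model = Detoxify("unbiased")        # the model variant named in the card

def score_document(text: str) -> dict:  # hypothetical helper
    sentences = [sent.text for sent in nlp(text).sents]
    if not sentences:
        return {"texts": [], "scores": [], "avg_score": 0.0, "num_sents": 0}
    # Detoxify returns a dict of per-label scores; keep the main "toxicity" label.
    scores = model.predict(sentences)["toxicity"]
    return {
        "texts": sentences,
        "scores": scores,
        "avg_score": sum(scores) / len(scores),
        "num_sents": len(sentences),
    }
```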

#### Who are the annotators?

Annotations were machine-generated by [Detoxify](https://github.com/unitaryai/detoxify); there were no human annotators.

### Personal and Sensitive Information

This dataset contains all personally identifiable information and toxic text that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile).

## Considerations for Using the Data

### Social Impact of Dataset

This dataset contains examples of toxic text and personally identifiable information.
(A version of this dataset with personally identifiable information annotated is [available here](https://huggingface.co/datasets/tomekkorbak/pile-pii-scrubadub).)
Please take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information.
This dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained on it will avoid generating toxic text.
We do not recommend deploying models trained on this data.

### Discussion of Biases

This dataset inherits all of the biases of The Pile, as discussed in [its paper](https://arxiv.org/abs/2101.00027).

### Other Known Limitations

The toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.

## Additional Information

### Dataset Curators

[The Pile](https://huggingface.co/datasets/the_pile)

### Licensing Information

From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central is covered by the [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE).

### Citation Information

Paper information to be added.

### Contributions

[The Pile](https://huggingface.co/datasets/the_pile)