---
pipeline_tag: text-classification
license: apache-2.0
---
<div align="center">

**⚠️**
This model card was extracted from the original model: [unitary/multilingual-toxic-xlm-roberta](https://huggingface.co/unitary/multilingual-toxic-xlm-roberta)

**⚠️ Disclaimer:**
The Hugging Face models currently give different results from the detoxify library (see the issue [here](https://github.com/unitaryai/detoxify/issues/15)). For the most up-to-date models, we recommend using the models from https://github.com/unitaryai/detoxify

# 🙊 Detoxify
## Toxic Comment Classification with ⚡ Pytorch Lightning and 🤗 Transformers

![CI testing](https://github.com/unitaryai/detoxify/workflows/CI%20testing/badge.svg)
![Lint](https://github.com/unitaryai/detoxify/workflows/Lint/badge.svg)

</div>

![Examples image](examples.png)

## Description

Trained models & code to predict toxic comments on 3 Jigsaw challenges: Toxic comment classification, Unintended Bias in Toxic comments, Multilingual toxic comment classification.

Built by [Laura Hanu](https://laurahanu.github.io/) at [Unitary](https://www.unitary.ai/), where we are working to stop harmful content online by interpreting visual content in context.

Dependencies:
- For inference:
  - 🤗 Transformers
  - ⚡ Pytorch Lightning
- For training, you will also need:
  - Kaggle API (to download data)


| Challenge | Year | Goal | Original Data Source | Detoxify Model Name | Top Kaggle Leaderboard Score | Detoxify Score |
|-|-|-|-|-|-|-|
| [Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) | 2018 | build a multi-headed model capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate. | Wikipedia Comments | `original` | 0.98856 | 0.98636 |
| [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) | 2019 | build a model that recognizes toxicity and minimizes unintended bias with respect to mentions of identities, using a dataset labelled for identity mentions and optimizing a metric designed to measure unintended bias. | Civil Comments | `unbiased` | 0.94734 | 0.93639 |
| [Jigsaw Multilingual Toxic Comment Classification](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification) | 2020 | build effective multilingual models | Wikipedia Comments + Civil Comments | `multilingual` | 0.9536 | 0.91655* |

*Score not directly comparable since it is obtained on the provided validation set and not on the test set. To be updated when the test labels are made available.

It is also worth noting that the top leaderboard scores were achieved using model ensembles. The purpose of this library was to build something user-friendly and straightforward to use.

## Limitations and ethical considerations

If words that are associated with swearing, insults or profanity are present in a comment, it is likely to be classified as toxic regardless of the tone or intent of the author, e.g. humorous or self-deprecating comments. This could introduce biases against already vulnerable minority groups.

The intended use of this library is for research purposes, for fine-tuning on carefully constructed datasets that reflect real-world demographics, and/or to aid content moderators in flagging harmful content more quickly.

Some useful resources about the risk of different biases in toxicity or hate speech detection are:
- [The Risk of Racial Bias in Hate Speech Detection](https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf)
- [Automated Hate Speech Detection and the Problem of Offensive Language](https://arxiv.org/pdf/1703.04009.pdf)
- [Racial Bias in Hate Speech and Abusive Language Detection Datasets](https://arxiv.org/pdf/1905.12516.pdf)

## Quick prediction

The `multilingual` model has been trained on 7 different languages, so it should only be tested on: `english`, `french`, `spanish`, `italian`, `portuguese`, `turkish` or `russian`.

```bash
# install detoxify
pip install detoxify
```
```python
from detoxify import Detoxify

# each model takes in either a string or a list of strings
results = Detoxify('original').predict('example text')

results = Detoxify('unbiased').predict(['example text 1', 'example text 2'])

input_text = ['example text', 'exemple de texte', 'texto de ejemplo', 'testo di esempio',
              'texto de exemplo', 'örnek metin', 'пример текста']
results = Detoxify('multilingual').predict(input_text)

# optional to display results nicely (will need to pip install pandas)
import pandas as pd

print(pd.DataFrame(results, index=input_text).round(5))
```
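
`predict` returns a dictionary mapping each label the model was trained on to a probability (one value per input when a list is passed in). A minimal sketch of turning those scores into binary flags; the 0.5 cut-off is an illustrative choice, not a library default:

```python
from detoxify import Detoxify

scores = Detoxify('original').predict(['you are lovely', 'example text'])

# scores has the form {'toxic': [...], 'severe_toxic': [...], ...};
# the 0.5 threshold below is an arbitrary example value.
flagged = {label: [p >= 0.5 for p in probs] for label, probs in scores.items()}
print(flagged)
```
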
For more details check the Prediction section below.

## Labels
All challenges have a toxicity label. The toxicity labels represent the aggregate ratings of up to 10 annotators, according to the following schema:
- **Very Toxic** (a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective)
- **Toxic** (a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective)
- **Hard to Say**
- **Not Toxic**
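
In the underlying datasets these ratings are aggregated into fractional targets. As a rough sketch of the idea (the exact aggregation is described on the Kaggle data page; treating the target as the fraction of annotators who chose Toxic or Very Toxic is an approximation for illustration):

```python
# Illustrative only: aggregate per-annotator ratings into a fractional toxicity target.
ratings = ['Toxic', 'Not Toxic', 'Very Toxic', 'Hard to Say', 'Not Toxic']

toxic_votes = sum(r in ('Toxic', 'Very Toxic') for r in ratings)
target = toxic_votes / len(ratings)  # 0.4 for the ratings above

print(round(target, 2))
```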

More information about the labelling schema can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data).

### Toxic Comment Classification Challenge
This challenge includes the following labels:

- `toxic`
- `severe_toxic`
- `obscene`
- `threat`
- `insult`
- `identity_hate`

### Jigsaw Unintended Bias in Toxicity Classification
This challenge has 2 types of labels: the main toxicity labels and some additional identity labels that represent the identities mentioned in the comments.

Only identities with more than 500 examples in the test set (combined public and private) are included during training as additional labels and in the evaluation calculation.

The main toxicity labels are:
- `toxicity`
- `severe_toxicity`
- `obscene`
- `threat`
- `insult`
- `identity_attack`
- `sexual_explicit`

Identity labels used:
- `male`
- `female`
- `homosexual_gay_or_lesbian`
- `christian`
- `jewish`
- `muslim`
- `black`
- `white`
- `psychiatric_or_mental_illness`

A complete list of all the identity labels available can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data).
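
As an illustration of the 500-example cut-off mentioned above, a hedged pandas sketch; the file name and the 0.5 rule for counting an identity mention are assumptions made for this example, not the repository's exact preprocessing:

```python
import pandas as pd

# Assumed file name for the expanded test labels released after the Kaggle challenge.
test = pd.read_csv('test_private_expanded.csv')

identity_columns = ['male', 'female', 'homosexual_gay_or_lesbian', 'christian',
                    'jewish', 'muslim', 'black', 'white', 'psychiatric_or_mental_illness']

# Count a row as an example of an identity when its fractional annotation is >= 0.5
# (an assumption for this sketch), then keep identities with more than 500 examples.
counts = (test[identity_columns] >= 0.5).sum()
print(counts[counts > 500].index.tolist())
```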


### Jigsaw Multilingual Toxic Comment Classification

Since this challenge combines the data from the previous 2 challenges, it includes all labels from above; however, the final evaluation is only on:

- `toxicity`

## How to run

First, install the dependencies:
```bash
# clone project
git clone https://github.com/unitaryai/detoxify

# create virtual env
python3 -m venv toxic-env
source toxic-env/bin/activate

# install project
pip install -e detoxify
cd detoxify

# for training
pip install -r requirements.txt
```

## Prediction

Trained models summary:

| Model name | Transformer type | Data from |
|:--:|:--:|:--:|
| `original` | `bert-base-uncased` | Toxic Comment Classification Challenge |
| `unbiased` | `roberta-base` | Unintended Bias in Toxicity Classification |
| `multilingual` | `xlm-roberta-base` | Multilingual Toxic Comment Classification |

For a quick prediction, you can run the example script on a comment directly or on a txt file containing a list of comments.
```bash
# load model via torch.hub
python run_prediction.py --input 'example' --model_name original

# load model from checkpoint path
python run_prediction.py --input 'example' --from_ckpt_path model_path

# save results to a .csv file
python run_prediction.py --input test_set.txt --model_name original --save_to results.csv

# to see usage
python run_prediction.py --help
```

Checkpoints can be downloaded from the latest release or via the Pytorch hub API with the following names:
- `toxic_bert`
- `unbiased_toxic_roberta`
- `multilingual_toxic_xlm_r`
```python
import torch

model = torch.hub.load('unitaryai/detoxify', 'toxic_bert')
```

Importing detoxify in python:

```python
from detoxify import Detoxify

results = Detoxify('original').predict('some text')

results = Detoxify('unbiased').predict(['example text 1', 'example text 2'])

input_text = ['example text', 'exemple de texte', 'texto de ejemplo', 'testo di esempio',
              'texto de exemplo', 'örnek metin', 'пример текста']
results = Detoxify('multilingual').predict(input_text)

# to display results nicely
import pandas as pd

print(pd.DataFrame(results, index=input_text).round(5))
```
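
Recent releases of the detoxify library also document a `device` argument for running inference on GPU; a short sketch, assuming such a release is installed:

```python
import torch
from detoxify import Detoxify

# fall back to CPU when no CUDA device is available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
results = Detoxify('original', device=device).predict('some text')
```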


## Training

If you do not already have a Kaggle account:

- you need to create one to be able to download the data
- go to My Account and click on Create New API Token - this will download a kaggle.json file
- make sure this file is located in `~/.kaggle`

```bash
# create data directory
mkdir jigsaw_data
cd jigsaw_data

# download data
kaggle competitions download -c jigsaw-toxic-comment-classification-challenge

kaggle competitions download -c jigsaw-unintended-bias-in-toxicity-classification

kaggle competitions download -c jigsaw-multilingual-toxic-comment-classification
```
## Start Training
### Toxic Comment Classification Challenge

```bash
python create_val_set.py

python train.py --config configs/Toxic_comment_classification_BERT.json
```
### Unintended Bias in Toxicity Challenge

```bash
python train.py --config configs/Unintended_bias_toxic_comment_classification_RoBERTa.json
```
### Multilingual Toxic Comment Classification

This is trained in 2 stages. First, train on all available data, and second, train only on the translated versions of the first challenge.

The [translated data](https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api) can be downloaded from Kaggle in French, Spanish, Italian, Portuguese, Turkish, and Russian (the languages available in the test set).

```bash
# stage 1
python train.py --config configs/Multilingual_toxic_comment_classification_XLMR.json

# stage 2
python train.py --config configs/Multilingual_toxic_comment_classification_XLMR_stage2.json
```
### Monitor progress with tensorboard

```bash
tensorboard --logdir=./saved
```
## Model Evaluation

### Toxic Comment Classification Challenge

This challenge is evaluated on the mean AUC score of all the labels.

```bash
python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv
```
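
For reference, a minimal sketch of the metric itself: the mean of the per-label ROC AUC scores, computed with scikit-learn. The arrays below are random placeholders for this example, not outputs of `evaluate.py`:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']

# Placeholder data: binary ground truth and predicted probabilities, shaped (n_samples, n_labels).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, len(labels)))
y_pred = rng.random(size=(100, len(labels)))

# The challenge metric: mean of the per-label ROC AUC scores.
mean_auc = np.mean([roc_auc_score(y_true[:, i], y_pred[:, i]) for i in range(len(labels))])
print(round(mean_auc, 5))
```
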
### Unintended Bias in Toxicity Challenge

This challenge is evaluated on a novel bias metric that combines different AUC scores to balance overall performance. More information on this metric can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation).

```bash
python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv

# to get the final bias metric
python model_eval/compute_bias_metric.py
```
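
As background on how that final score is put together (following the Kaggle evaluation page, not the exact code in `model_eval/compute_bias_metric.py`): the per-identity subgroup AUC, BPSN AUC and BNSP AUC are each reduced with a generalized power mean (p = -5) and combined with the overall AUC using equal weights of 0.25. A sketch with placeholder numbers:

```python
import numpy as np

def power_mean(values, p=-5):
    # Generalized mean used by the competition metric; p = -5 emphasises the worst subgroups.
    values = np.asarray(values, dtype=float)
    return np.mean(values ** p) ** (1.0 / p)

# Placeholder per-identity AUCs for illustration only.
subgroup_aucs = [0.91, 0.88, 0.93]  # subgroup AUC per identity
bpsn_aucs = [0.90, 0.86, 0.92]      # background positive, subgroup negative AUC
bnsp_aucs = [0.94, 0.92, 0.95]      # background negative, subgroup positive AUC
overall_auc = 0.95

final_score = 0.25 * overall_auc + 0.25 * (
    power_mean(subgroup_aucs) + power_mean(bpsn_aucs) + power_mean(bnsp_aucs))
print(round(final_score, 5))
```
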
### Multilingual Toxic Comment Classification

This challenge is evaluated on the AUC score of the main toxic label.

```bash
python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv
```

### Citation
```
@misc{Detoxify,
  title={Detoxify},
  author={Hanu, Laura and {Unitary team}},
  howpublished={Github. https://github.com/unitaryai/detoxify},
  year={2020}
}
```