Update README.md
The model is used for classifying a text as **Abusive** (Hatespeech and Offensive) or **Normal**. The model is trained using data from Gab and Twitter, and human rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence.
The dataset and models are available here: https://github.com/punyajoy/HateXplain
**Details of usage**
Please use the **Model_Rational_Label** class inside models.py to load the models.
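A minimal loading sketch along those lines is shown below. It assumes `models.py` from the HateXplain repository is importable, and the checkpoint identifier used here is illustrative, not confirmed by this README; the exact forward signature and label order are defined in `models.py`.

```python
# Sketch: load the classifier together with its rationale predictor head.
# Assumptions (not stated in this README): models.py from the HateXplain
# repo is on the Python path, and MODEL_PATH is a placeholder checkpoint id.
import torch
from transformers import AutoTokenizer
from models import Model_Rational_Label  # class defined in the repo's models.py

MODEL_PATH = "Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two"  # illustrative

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = Model_Rational_Label.from_pretrained(MODEL_PATH)
model.eval()

inputs = tokenizer("He is a good person", return_tensors="pt")
with torch.no_grad():
    # The model returns sentence-level classification logits and
    # token-level rationale logits (exact outputs per models.py).
    class_logits, rationale_logits = model(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
    )

# Class probabilities over Abusive/Normal; label order follows the checkpoint.
probs = torch.softmax(class_logits, dim=1)
```

The rationale logits can be inspected per token to see which words drove an Abusive prediction, mirroring the human rationales used during training.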
**For more details about our paper**