Datasets:
Tasks: Text Classification
Formats: csv
Sub-tasks: hate-speech-detection
Languages: English
Size: 1K - 10K
ArXiv:
License:
Paul Röttger
committed on
Commit • 65cd5e7
Parent(s): 2fefa23
Update README.md
README.md CHANGED
@@ -24,94 +24,77 @@ task_ids:
 ## Dataset Description
 
-- **Point of Contact:**
-
-### Supported Tasks and Leaderboards
-
-[More Information Needed]
-
-### Languages
-
-[More Information Needed]
 
 ## Dataset Structure
 
-[More Information Needed]
-
-### Data Fields
-
-[More Information Needed]
-
-### Data Splits
-
-[More Information Needed]
-
-## Dataset Creation
-
-### Curation Rationale
-
-[More Information Needed]
-
-### Source Data
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
-
-## Additional Information
-
-### Dataset Curators
-
-[More Information Needed]
-
-### Licensing Information
-
-[More Information Needed]
 
 ### Citation Information
 
-### Contributions
-
-Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
 ## Dataset Description
 
+HateCheck is a suite of functional tests for hate speech detection models. It contains 3,728 test cases in 29 functional tests.
+
+- **Paper:** Röttger et al. (2021) - HateCheck: Functional Tests for Hate Speech Detection Models. https://aclanthology.org/2021.acl-long.4/
+- **Repository:** https://github.com/paul-rottger/hatecheck-data
+- **Point of Contact:** [email protected]
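Since the whole suite ships as a single CSV file, it can be read with standard tooling. A minimal sketch using only the Python standard library (the two sample rows below are invented for illustration and only cover a subset of the columns; a real run would read `test.csv` from the repository above):

```python
import csv
import io

# Invented sample rows mimicking the schema of test.csv
# (the real file holds 3,728 validated test cases).
sample = io.StringIO(
    "functionality,case_id,test_case,label_gold,target_ident\n"
    "derog_neg_emote_h,1,I hate women.,hateful,women\n"
    "ident_neutral_nh,2,We are Muslims.,non-hateful,Muslims\n"
)

# Each row becomes a dict keyed by column name.
cases = list(csv.DictReader(sample))
print(len(cases))              # 2
print(cases[0]["label_gold"])  # hateful
```

To use the real data, replace the `io.StringIO` stand-in with `open("test.csv", newline="", encoding="utf-8")`.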
 ## Dataset Structure
 
+"test.csv" contains all 3,728 validated test cases. Each test case (row) has the following attributes:
+
+**functionality**
+The shorthand for the functionality tested by the test case.
+
+**case_id**
+The unique ID of the test case (assigned to each of the 3,901 cases we initially generated).
+
+**test_case**
+The text of the test case.
+
+**label_gold**
+The gold standard label (hateful/non-hateful) of the test case. All test cases within a given functionality have the same gold standard label.
+
+**target_ident**
+Where applicable, the protected group targeted or referenced by the test case. We cover seven protected groups in the test suite: women, trans people, gay people, black people, disabled people, Muslims and immigrants.
+
+**direction**
+For hateful cases, the binary secondary label indicating whether they are *directed* at an individual as part of a protected group or aimed at the group in *general*.
+
+**focus_words**
+Where applicable, the key word or phrase in a given test case (e.g. "cut their throats").
+
+**focus_lemma**
+Where applicable, the corresponding lemma (e.g. "cut sb. throat").
+
+**ref_case_id**
+For hateful cases, where applicable, the ID of the simpler hateful case which was perturbed to generate them. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted.
+
+**ref_templ_id**
+The equivalent, but for template IDs.
+
+**templ_id**
+The unique ID of the template from which the test case was generated (assigned to each of the 866 cases and templates from which we generated the 3,901 initial cases).
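Because all test cases within a functionality share a gold label, model performance is naturally reported as accuracy per functional test. A sketch of that breakdown, stdlib only (the `(functionality, gold, prediction)` triples are invented; in practice they would come from `test.csv` plus an actual classifier's outputs):

```python
from collections import defaultdict

# Invented (functionality, gold label, model prediction) triples.
results = [
    ("derog_neg_emote_h", "hateful", "hateful"),
    ("derog_neg_emote_h", "hateful", "non-hateful"),
    ("ident_neutral_nh", "non-hateful", "non-hateful"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for functionality, gold, pred in results:
    totals[functionality] += 1
    hits[functionality] += int(gold == pred)

# Accuracy per functional test.
accuracy = {f: hits[f] / totals[f] for f in totals}
print(accuracy)  # {'derog_neg_emote_h': 0.5, 'ident_neutral_nh': 1.0}
```

This is the per-functionality view the paper uses to expose specific model weaknesses that a single held-out-test-set accuracy would hide.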
 ### Citation Information
 
+When using HateCheck, please cite our ACL paper:
+
+@inproceedings{rottger-etal-2021-hatecheck,
+    title = "{H}ate{C}heck: Functional Tests for Hate Speech Detection Models",
+    author = {R{\"o}ttger, Paul  and
+      Vidgen, Bertie  and
+      Nguyen, Dong  and
+      Waseem, Zeerak  and
+      Margetts, Helen  and
+      Pierrehumbert, Janet},
+    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
+    month = aug,
+    year = "2021",
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2021.acl-long.4",
+    doi = "10.18653/v1/2021.acl-long.4",
+    pages = "41--58",
+    abstract = "Detecting online hate is a difficult task that even state-of-the-art models struggle with. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score. However, this approach makes it difficult to identify specific model weak points. It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, we introduce HateCheck, a suite of functional tests for hate speech detection models. We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders. We craft test cases for each functionality and validate their quality through a structured annotation process. To illustrate HateCheck{'}s utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.",
+}