---
language:
- en
- yo
- ha
- ig
- pcm
size_categories:
- 10K<n<100K
task_categories:
- text-classification
extra_gated_prompt: >-
You agree to not use the dataset to conduct any activity that causes harm to
human subjects.
extra_gated_fields:
  Please provide more information on why you need this dataset and how you plan to use it:
    type: text
---

# Dataset Card for NaijaHate
NaijaHate is a hate speech dataset tailored to the Nigerian context. It contains 35,976 annotated Nigerian tweets, including 29,999 tweets randomly sampled from Nigerian Twitter. For a complete description of the data, please refer to the reference paper.
## Source Data

This dataset was sourced from a large Twitter dataset of 2.2 billion tweets posted between March 2007 and July 2023, forming the timelines of 2.8 million Twitter users with a profile location in Nigeria.
## Dataset Structure
The dataset is made of four components, indicated in the `dataset` column: two components used for training a hate speech model (`stratified` and `al`) and two components used for model evaluation (`eval` and `random`). We detail each component below:

- `stratified`: 1,607 tweets collected through stratified sampling, using both hate-related and community-related keywords as seeds
- `al`: 2,405 tweets sampled through active learning
- `eval`: 1,965 tweets from the general Nigerian Twitter dataset with a high likelihood of being hateful according to 10 benchmarked hate speech classifiers
- `random`: 29,999 tweets randomly sampled from the general Nigerian Twitter dataset
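Since all four components share one table, the `dataset` column can be used to separate training and evaluation pools. A minimal sketch with pandas, assuming the column names described above (the toy rows are made up for illustration):

```python
import pandas as pd

# Toy stand-in for the real data; in practice you would load the
# released files (e.g. via the `datasets` library) instead.
df = pd.DataFrame(
    {
        "text": ["tweet a", "tweet b", "tweet c", "tweet d"],
        "dataset": ["stratified", "al", "eval", "random"],
        "class": [2, 1, 0, 0],
    }
)

# Training pool: stratified + active-learning samples.
train = df[df["dataset"].isin(["stratified", "al"])]

# Evaluation pools, kept separate from training.
eval_set = df[df["dataset"] == "eval"]
random_set = df[df["dataset"] == "random"]

print(len(train), len(eval_set), len(random_set))  # 2 1 1
```

Keeping `random` as a held-out evaluation set preserves the representative-sample property the dataset was built for.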
## Annotation
We recruited a team of four Nigerian annotators, two female and two male, each of them from one of the four most populated Nigerian ethnic groups -- Hausa, Yoruba, Igbo and Fulani. We followed a prescriptive approach by instructing annotators to strictly adhere to extensive annotation guidelines describing our taxonomy of hate speech (see reference paper for full guidelines). Tweets are annotated as belonging to one of three classes:
- hateful (`2` in the `class` column) if it contains an attack on an individual or a group based on the perceived possession of a certain characteristic (e.g., gender, race)
- offensive (`1` in the `class` column) if it contains a personal attack or an insult that does not target an individual based on their identity
- neutral (`0` in the `class` column) if it is neither hateful nor offensive
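The numeric scheme above can be decoded with a small mapping. The label-to-id mapping comes from the card; the helper name is ours:

```python
# Three-class label scheme from the annotation guidelines.
CLASS_LABELS = {0: "neutral", 1: "offensive", 2: "hateful"}


def decode_class(class_id: int) -> str:
    """Return the human-readable label for a `class` column value."""
    return CLASS_LABELS[class_id]


print(decode_class(2))  # hateful
```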
If a tweet is labeled as hateful, it is also annotated for the communities being targeted. The possible target communities in our dataset, each stored in its own column, are:

- Christians (`christian` column)
- Muslims (`muslim`)
- Northerners (`northerner`)
- Southerners (`southerner`)
- Hausas (`hausa`)
- Fulanis (`fulani`)
- Yorubas (`yoruba`)
- Igbos (`igbo`)
- Women (`women`)
- LGBTQ+ (`lgbtq+`)
- Herdsmen (`herdsmen`)
- Biafra (`biafra`)
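A sketch of how the per-community columns might be read back into a list of targets. This assumes the columns are binary indicators (1 = targeted), which the card implies but does not state explicitly; the example row is made up:

```python
# Target-community columns listed in the card, in card order.
TARGET_COLUMNS = [
    "christian", "muslim", "northerner", "southerner", "hausa", "fulani",
    "yoruba", "igbo", "women", "lgbtq+", "herdsmen", "biafra",
]


def targeted_communities(row: dict) -> list[str]:
    """List the communities flagged as targeted in one annotated row."""
    return [col for col in TARGET_COLUMNS if row.get(col) == 1]


example = {"class": 2, "northerner": 1, "women": 1, "igbo": 0}
print(targeted_communities(example))  # ['northerner', 'women']
```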
Each tweet was labeled by three annotators. For the three-class annotation task, all three annotators agreed on 90% of labeled tweets, two out of three agreed in 9.5% of cases, and all three disagreed in 0.5% of cases (Krippendorff's alpha = 0.7).
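The agreement shares above can be computed from the raw annotation triples by counting the most common label per tweet. An illustrative sketch (the triples below are made up; the reported 90% / 9.5% / 0.5% figures come from the full dataset):

```python
from collections import Counter

# One (annotator_1, annotator_2, annotator_3) triple per tweet.
annotations = [
    (2, 2, 2),  # all three agree
    (1, 1, 1),  # all three agree
    (0, 0, 1),  # two out of three agree
    (0, 1, 2),  # all three disagree
]

full, majority, none = 0, 0, 0
for triple in annotations:
    top_count = Counter(triple).most_common(1)[0][1]
    if top_count == 3:
        full += 1
    elif top_count == 2:
        majority += 1
    else:
        none += 1

print(full, majority, none)  # 2 1 1
```

Krippendorff's alpha additionally corrects these raw agreement rates for chance agreement, which is why it is reported alongside them.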
## Language Composition
We further detail the share of each language by dataset component below:
| Language | Stratified + active learning sets (%) | Random set (%) |
|---|---|---|
| English | 74.2 | 77 |
| English & Nigerian Pidgin | 11 | 1.5 |
| English & Yoruba | 4.2 | - |
| Nigerian Pidgin | 3.6 | 7.3 |
| English & Hausa | 2.2 | - |
| Hausa | 1 | 1.2 |
| Yoruba | - | 1 |
| URLs | - | 6 |
| Emojis | - | 2.3 |
## BibTeX entry and citation information
Please cite the reference paper if you use this dataset.
```bibtex
@inproceedings{tonneau-etal-2024-naijahate,
    title = "{N}aija{H}ate: Evaluating Hate Speech Detection on {N}igerian {T}witter Using Representative Data",
    author = "Tonneau, Manuel and
      Quinta De Castro, Pedro and
      Lasri, Karim and
      Farouq, Ibrahim and
      Subramanian, Lakshmi and
      Orozco-Olvera, Victor and
      Fraiberger, Samuel",
    editor = "Ku, Lun-Wei and
      Martins, Andre and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.488",
    pages = "9020--9040",
    abstract = "To address the global issue of online hate, hate speech detection (HSD) systems are typically developed on datasets from the United States, thereby failing to generalize to English dialects from the Majority World. Furthermore, HSD models are often evaluated on non-representative samples, raising concerns about overestimating model performance in real-world settings. In this work, we introduce NaijaHate, the first dataset annotated for HSD which contains a representative sample of Nigerian tweets. We demonstrate that HSD evaluated on biased datasets traditionally used in the literature consistently overestimates real-world performance by at least two-fold. We then propose NaijaXLM-T, a pretrained model tailored to the Nigerian Twitter context, and establish the key role played by domain-adaptive pretraining and finetuning in maximizing HSD performance. Finally, owing to the modest performance of HSD systems in real-world conditions, we find that content moderators would need to review about ten thousand Nigerian tweets flagged as hateful daily to moderate 60{\%} of all hateful content, highlighting the challenges of moderating hate speech at scale as social media usage continues to grow globally. Taken together, these results pave the way towards robust HSD systems and a better protection of social media users from hateful content in low-resource settings.",
}
```