Datasets:
Tasks: Text Classification
Formats: csv
Languages: Portuguese
Size: 10K - 100K
ArXiv:
Tags: hate-speech-detection
License:

FpOliveira committed · Commit 088ace6 · 1 Parent(s): 3e73d86

Update README.md

README.md CHANGED
@@ -61,52 +61,10 @@ To safeguard user identity and uphold the integrity of this dataset, all user me
 
 ## Annotation and voting process
 
-
-
-
-
-The following table offers a brief summary of the annotators' profiles and qualifications:
-
-#### Table 1 – Annotators
-
-| Annotator   | Gender | Education                                | Political  | Color |
-|-------------|--------|------------------------------------------|------------|-------|
-| Annotator 1 | Female | Ph.D. Candidate in civil engineering     | Far-left   | White |
-| Annotator 2 | Male   | Master's candidate in human rights       | Far-left   | Black |
-| Annotator 3 | Female | Master's degree in behavioral psychology | Liberal    | White |
-| Annotator 4 | Male   | Master's degree in behavioral psychology | Right-wing | Black |
-| Annotator 5 | Female | Ph.D. Candidate in behavioral psychology | Liberal    | Black |
-| Annotator 6 | Male   | Ph.D. Candidate in linguistics           | Far-left   | White |
-| Annotator 7 | Female | Ph.D. Candidate in civil engineering     | Liberal    | White |
-| Annotator 8 | Male   | Ph.D. Candidate in civil engineering     | Liberal    | Black |
-| Annotator 9 | Male   | Master's degree in behavioral psychology | Far-left   | White |
-
-
-
-
-
-User
-make more scientific:
-TuPyE, the expanded version of TuPy, includes 43,668 annotated documents curated for hate speech detection across various social networks.
-This extended dataset incorporates additional annotations and merges with datasets from Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022) and 10,000 (ten thousand) of orignal came from TuPy.
-The combination of new annotations and integration with cited datasets enhances TuPyE's effectiveness for developing advanced hate speech detection models using
-ML and NLP techniques.
-ChatGPT
-TuPyE, an enhanced iteration of TuPy, encompasses a compilation of 43,668 meticulously annotated documents specifically selected for the purpose of hate speech detection within diverse social network contexts. This augmented dataset integrates supplementary annotations and amalgamates with datasets sourced from Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022), complemented by an infusion of 10,000 original documents from the TuPy corpus.
-
-The amalgamation of these new annotations and the seamless integration with the aforementioned cited datasets significantly amplifies the efficacy of TuPyE. This augmentation facilitates the development of sophisticated hate speech detection models, leveraging Machine Learning (ML) and Natural Language Processing (NLP) techniques.
-
-User
-mkae it more professional :
-To consolidate data from these prominent works in the domain of automatic hate speech detection in Portuguese, we established a database by merging labeled document sets from Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022). To ensure consistency and compatibility in our dataset, we applied the following guidelines for text integration:
-
-Fortuna et al. (2019) constructed a database comprising 5,670 tweets, each labeled by three distinct annotators to determine the presence or absence of hate speech. To maintain consistency, we employed a simple majority-voting process for document classification;
-
-The corpus compiled by Leite et al. (2020) consists of 21,000 tweets labeled by 129 volunteers, with each text assessed by three different evaluators. This study encompassed six types of toxic speech: (i) homophobia, (ii) racism, (iii) xenophobia, (iv) offensive language, (v) obscene language, and (vi) misogyny. To align with our definition of hate speech, texts containing solely offensive and/or obscene language were excluded from the hate speech categorization. Following this criterion, we applied a straightforward majority-voting process for classification;
-
-Vargas et al. (2022) compiled a collection of 7,000 comments extracted from the Instagram platform, which were labeled by three annotators. These data had previously undergone a simple majority-voting process, eliminating the need for additional text classification procedures.
-ChatGPT
-In the pursuit of advancing the field of automatic hate speech detection in Portuguese, our team undertook the meticulous task of creating a comprehensive database. This endeavor involved the integration of labeled document sets from seminal studies in the domain, specifically those conducted by Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022). To ensure the highest degree of consistency and compatibility within our dataset, we adhered to stringent guidelines for text integration, detailed as follows:
+In the pursuit of advancing the field of automatic hate speech detection in Portuguese, our team undertook the meticulous task of creating a comprehensive database.
+This endeavor involved the integration of labeled document sets from seminal studies in the domain, specifically those conducted by Fortuna et al. (2019),
+Leite et al. (2020), and Vargas et al. (2022). To ensure the highest degree of consistency and compatibility within our dataset,
+we adhered to stringent guidelines for text integration, detailed as follows:
 
 1. Fortuna et al. (2019): This study presented a dataset of 5,670 tweets, each annotated by three independent evaluators to ascertain the presence or absence of hate speech. In our integration process, we adopted a simple majority-voting mechanism to classify each document, ensuring a consistent approach to hate speech identification across the dataset.
 
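Both the removed and the added text describe the same aggregation rule: each document's final label is the simple majority of its three annotators' votes. A minimal sketch of that rule, assuming illustrative label names and data layout rather than the dataset's actual schema:

```python
from collections import Counter

def majority_vote(votes):
    """Return the label chosen by most annotators.

    With three annotators and binary labels (as in the merged
    corpora), a strict majority always exists, so no tie-breaking
    rule is needed.
    """
    (label, _count), = Counter(votes).most_common(1)
    return label

# Hypothetical annotations: three independent votes per document.
annotations = {
    "doc_1": ["hate", "hate", "not-hate"],
    "doc_2": ["not-hate", "not-hate", "not-hate"],
}
labels = {doc: majority_vote(v) for doc, v in annotations.items()}
# labels == {"doc_1": "hate", "doc_2": "not-hate"}
```

The same vote counts drive the Leite et al. (2020) integration as well, after texts labeled only as offensive or obscene are excluded from the hate-speech category.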