|
--- |
|
tags: |
|
- text-classification |
|
- toxicity |
|
- Twitter |
|
base_model: cardiffnlp/twitter-roberta-base-sentiment |
|
widget: |
|
- text: No estoy de acuerdo contigo, pero respeto tu opinión.
|
license: mit |
|
language: |
|
- es |
|
pipeline_tag: text-classification |
|
library_name: transformers |
|
--- |
|
|
|
# Fine-tuned roBERTa for Toxicity Classification in Spanish
|
|
|
This is a fine-tuned roBERTa-base model built on [Twitter-roBERTa-base for Sentiment Analysis](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment), which was trained on ~58M tweets. It was fine-tuned on a gold-standard dataset for toxicity and incivility, built from protest-event interactions in Spanish.
|
|
|
The dataset comprises ~5M data points from three Latin American protest events: (a) protests against coronavirus-related measures and judicial reform in Argentina during August 2020; (b) protests against education budget cuts in Brazil in May 2019; and (c) the social outburst in Chile stemming from protests against the underground fare hike in October 2019. Since the aim is a gold standard for digital interactions in Spanish, we prioritise Argentinian and Chilean data.
|
|
|
- [GitHub repository](https://github.com/training-datalab/gold-standard-toxicity)

- [Dataset on Zenodo](https://zenodo.org/doi/10.5281/zenodo.12574288)

- [Reference paper](https://arxiv.org/abs/2409.09741)
|
|
|
**Labels: NONTOXIC and TOXIC.** |
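
The label mapping can be inspected directly from the model configuration. A minimal sketch, assuming the repository id below (a placeholder, not this model's confirmed path) and that `id2label` follows the labels listed above:

```python
from transformers import AutoConfig

# Hypothetical repository id: replace with this model's actual path on the Hub.
config = AutoConfig.from_pretrained("training-datalab/twitter-roberta-toxicity-es")

# Expected per this card: a mapping over the labels NONTOXIC and TOXIC,
# e.g. {0: 'NONTOXIC', 1: 'TOXIC'} (the index order is an assumption).
print(config.id2label)
```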
|
|
|
## Example of Classification |
|
|
|
WIP |
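
In the meantime, a minimal classification sketch using the `transformers` pipeline API; the repository id is a placeholder for this model's actual path on the Hub, and the input sentence is purely illustrative:

```python
from transformers import pipeline

# Hypothetical repository id: replace with this model's actual path on the Hub.
classifier = pipeline(
    "text-classification",
    model="training-datalab/twitter-roberta-toxicity-es",
)

# Illustrative Spanish input; the model returns one of the labels above.
result = classifier("No estoy de acuerdo contigo, pero respeto tu opinión.")
print(result)  # e.g. [{'label': 'NONTOXIC', 'score': ...}]
```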
|
|
|
## Validation Metrics |
|
|
|
- Accuracy: 0.790 |
|
- Precision: 0.920 |
|
- Recall: 0.657 |
|
- F1-Score: 0.767 |
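
As a consistency check, the reported F1-score is the harmonic mean of the precision and recall above:

```python
# F1 is the harmonic mean of precision and recall.
precision, recall = 0.920, 0.657
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.767, matching the reported F1-score
```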