{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:49.720351Z" }, "title": "ToxCCIn: Toxic Content Classification with Interpretability", "authors": [ { "first": "Tong", "middle": [], "last": "Xiang", "suffix": "", "affiliation": { "laboratory": "IR Lab", "institution": "Georgetown University", "location": { "country": "USA" } }, "email": "" }, { "first": "Sean", "middle": [], "last": "Macavaney", "suffix": "", "affiliation": { "laboratory": "IR Lab", "institution": "Georgetown University", "location": { "country": "USA" } }, "email": "sean.macavaney@glasgow.ac.uk" }, { "first": "Eugene", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "IR Lab", "institution": "Georgetown University", "location": { "country": "USA" } }, "email": "eugene@ir.cs.georgetown.edu" }, { "first": "Nazli", "middle": [], "last": "Goharian", "suffix": "", "affiliation": { "laboratory": "IR Lab", "institution": "Georgetown University", "location": { "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Despite the recent successes of transformerbased models in terms of effectiveness on a variety of tasks, their decisions often remain opaque to humans. Explanations are particularly important for tasks like offensive language or toxicity detection on social media because a manual appeal process is often in place to dispute automatically flagged content. In this work, we propose a technique to improve the interpretability of these models, based on a simple and powerful assumption: a post is at least as toxic as its most toxic span. We incorporate this assumption into transformer models by scoring a post based on the maximum toxicity of its spans and augmenting the training process to identify correct spans. We find this approach effective and can produce explanations that exceed the quality of those provided by Logistic Regression analysis (often regarded as a highly-interpretable model), according to a human study.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Despite the recent successes of transformerbased models in terms of effectiveness on a variety of tasks, their decisions often remain opaque to humans. Explanations are particularly important for tasks like offensive language or toxicity detection on social media because a manual appeal process is often in place to dispute automatically flagged content. In this work, we propose a technique to improve the interpretability of these models, based on a simple and powerful assumption: a post is at least as toxic as its most toxic span. We incorporate this assumption into transformer models by scoring a post based on the maximum toxicity of its spans and augmenting the training process to identify correct spans. We find this approach effective and can produce explanations that exceed the quality of those provided by Logistic Regression analysis (often regarded as a highly-interpretable model), according to a human study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The rapidly increasing usage of social media has made communication easier but also has enabled users to spread questionable content (Matamoros-Fern\u00e1ndez, 2017), which sometimes even leads to real-world crimes (Johnson et al., 2019; Committee et al., 2017; Center, 2017) . 
To prevent this type of speech from jeopardizing others' ability to express themselves in online communities, many platforms prohibit content that is considered abusive, hate speech, or more generally, toxic. To enforce such policies, some platforms employ automatic content moderation, which uses machine learning techniques to detect and flag violating content (Djuric et al., 2015; Nobata et al., 2016; MacAvaney et al., 2019) .", "cite_spans": [ { "start": 210, "end": 232, "text": "(Johnson et al., 2019;", "ref_id": "BIBREF14" }, { "start": 233, "end": 256, "text": "Committee et al., 2017;", "ref_id": "BIBREF6" }, { "start": 257, "end": 270, "text": "Center, 2017)", "ref_id": "BIBREF4" }, { "start": 636, "end": 657, "text": "(Djuric et al., 2015;", "ref_id": "BIBREF10" }, { "start": 658, "end": 678, "text": "Nobata et al., 2016;", "ref_id": "BIBREF25" }, { "start": 679, "end": 702, "text": "MacAvaney et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Leveraging the development of pre-trained language models (Devlin et al., 2019; Liu et al., 2019b) and domain transfer learning (Gururangan et al., 2020; Sotudeh et al., 2020) , many models (Wiedemann et al., 2020) achieved high performance on toxicity detection (Zampieri et al., 2020) . However, directly deploying such systems could be problematic for the following reasons. Despite being highly effective, one major problem is that the decisions of the systems are largely opaque, i.e., it can be difficult to reason why the model made its decision (Waseem and Hovy, 2016) . This interpretability problem is especially apparent when compared to prior models, such as Logistic Regression, that transparently assign scores to each input feature (here, words) that can be used to justify the model's decision. Knowing how one decision is made is important in toxicity detection. On one hand, recent laws such as General Data Protection Regulation 1 highlighted the significance of interpretable models for users; on the other hand, interpretable models can assist online community moderators in reducing their time spent on checking each potentially problematic post. Since the purpose of these explanations is for human consumption, we consider a model to be interpretable if it can produce a set of words from the input text that humans would consider a reasonable justification for the model's decision.", "cite_spans": [ { "start": 58, "end": 79, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF9" }, { "start": 80, "end": 98, "text": "Liu et al., 2019b)", "ref_id": null }, { "start": 128, "end": 153, "text": "(Gururangan et al., 2020;", "ref_id": "BIBREF12" }, { "start": 154, "end": 175, "text": "Sotudeh et al., 2020)", "ref_id": "BIBREF29" }, { "start": 190, "end": 214, "text": "(Wiedemann et al., 2020)", "ref_id": "BIBREF33" }, { "start": 263, "end": 286, "text": "(Zampieri et al., 2020)", "ref_id": "BIBREF41" }, { "start": 553, "end": 576, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose a technique to improve the interpretability of transformer-based models like BERT (Devlin et al., 2019) and ELEC-TRA (Clark et al., 2020) for the task of toxicity detection in social media posts. We base our technique on a simple and powerful assumption: A post is at least as toxic as its most toxic span. 
In other words, the toxicity of a piece of text should be associated with the most toxic span identified in the text. To this end, we propose using neural multi-task model that is trained on (1) toxicity detection over (a) When the input sequence is toxic, the toxicity of the most toxic span is picked to represent the toxicity of the sentence.", "cite_spans": [ { "start": 107, "end": 128, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 142, "end": 162, "text": "(Clark et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(b) When the input sequence is not toxic, none of the spans are toxic, and thus the whole sequence is predicted as non-toxic. Figure 1 : Illustration of our proposed approach. We showcase applying our proposed approach to the transformerbased model, BERT, in this case. In the linear layer, darker color denotes a more toxic span.", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 134, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "the entire piece of text and (2) toxic span detection (i.e., identifying individual tokens in the text that are toxic). Rather than the typical transformer classification approach, our model predicts the toxicity of each individual term in the text and aggregates them via max pooling to predict the toxicity of the entire text (see Fig. 1 ). Through experiments on the Civil Comment Dataset (Borkan et al., 2019) , we find our proposed approach not only improves the classification effectiveness compared to models that are only trained on the classification task, but also helps when transferring the model to a similar task. More importantly, however, the structure of our model inherently generates explanations of the decision by selecting the terms with the highest toxicity scores. We find through a human study that these explanations exceed the quality of those provided by Logistic Regression-a model often regarded as highly interpretable. An error analysis has shown multiple insights into utilizing contextualized models in toxicity detection and lead future directions.", "cite_spans": [ { "start": 392, "end": 413, "text": "(Borkan et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 333, "end": 339, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Due to the ubiquity of online conversations, the need for automatic online toxicity detection has become crucial to promote healthy online discussions. Early research relied on surface-level features, e.g., bag-of-words approaches (Warner and Hirschberg, 2012; Waseem and Hovy, 2016; , and traditional machine learning methods. Although reported to be highly predictive (Schmidt and Wiegand, 2017) and easily interpretable, they suffer from the problem of false positives as the presence of certain patterns could lead to misclassification (Kwok and Wang, 2013) . For example, some slurs that frequently appear in African American English are usually picked as strong evidence of toxicity (Xia et al., 2020) ; these words are indeed innocuous and only confined within the black online community. These features also require such predictive terms to appear in both training and testing set to work effectively. Later on, neural textual representations have shown effectiveness in toxicity detection. Djuric et al. 
(2015) proposed using sentence-level embedding (Le and Mikolov, 2014) to represent the textual information and has shown great improvement against average over word-level embedding (Nobata et al., 2016) . These representations are usually utilized together with either linear classifiers such as Logistic Regression (Djuric et al., 2015) , or neural classifiers such as Convolutional Neural Networks (CNN) (Gamb\u00e4ck and Sikdar, 2017) and Long Short-Term Memory Networks (LSTM) (Badjatiya et al., 2017) . More recently, large-scale pre-trained language models such as Bidirectional Encoder Representation from Transformer (BERT) (Devlin et al., 2019) have shown great advantages in toxicity detection (Zampieri et al., 2019b (Zampieri et al., , 2020 , by learning contextual word embeddings instead of static embeddings.", "cite_spans": [ { "start": 231, "end": 260, "text": "(Warner and Hirschberg, 2012;", "ref_id": "BIBREF30" }, { "start": 261, "end": 283, "text": "Waseem and Hovy, 2016;", "ref_id": "BIBREF32" }, { "start": 370, "end": 397, "text": "(Schmidt and Wiegand, 2017)", "ref_id": "BIBREF28" }, { "start": 540, "end": 561, "text": "(Kwok and Wang, 2013)", "ref_id": "BIBREF16" }, { "start": 689, "end": 707, "text": "(Xia et al., 2020)", "ref_id": "BIBREF37" }, { "start": 999, "end": 1019, "text": "Djuric et al. (2015)", "ref_id": "BIBREF10" }, { "start": 1060, "end": 1082, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF17" }, { "start": 1194, "end": 1215, "text": "(Nobata et al., 2016)", "ref_id": "BIBREF25" }, { "start": 1329, "end": 1350, "text": "(Djuric et al., 2015)", "ref_id": "BIBREF10" }, { "start": 1419, "end": 1445, "text": "(Gamb\u00e4ck and Sikdar, 2017)", "ref_id": "BIBREF11" }, { "start": 1489, "end": 1513, "text": "(Badjatiya et al., 2017)", "ref_id": "BIBREF0" }, { "start": 1640, "end": 1661, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 1712, "end": 1735, "text": "(Zampieri et al., 2019b", "ref_id": "BIBREF40" }, { "start": 1736, "end": 1760, "text": "(Zampieri et al., , 2020", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Despite the fact that systems can achieve astonishing performance on given datasets, they suffer from the problem of lacking interpretability. One way of providing interpretability is to explain predictions (Belinkov and Glass, 2019) . For textual data, such explanations could be those sub-strings (i.e., spans) that significantly influence the models' judgments, which were named rationales in Zaidan et al. (2007) . Zhang et al. (2016) proposed a CNN model that exploits document labels and associated rationales for text classification. On toxicity detection, while most of the works were done focusing on improving the model performances, less attention was paid to interpretability. SemEval-2021 Task 5 provided the Toxic Spans Detection Dataset (TSDD) 2 where each sample is annotated with both post-level label and rationales. Mathew et al. (2020) also provided a hate speech dataset where samples are annotated with rationales and other labels; though they also provided baseline models that can incorporate rationales, these models were built to evaluate the effectiveness of their proposed dataset, and thus cannot be easily transferred to other settings.", "cite_spans": [ { "start": 207, "end": 233, "text": "(Belinkov and Glass, 2019)", "ref_id": "BIBREF1" }, { "start": 396, "end": 416, "text": "Zaidan et al. 
(2007)", "ref_id": "BIBREF38" }, { "start": 419, "end": 438, "text": "Zhang et al. (2016)", "ref_id": "BIBREF42" }, { "start": 835, "end": 855, "text": "Mathew et al. (2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We propose a neural multi-task model that can predict the toxicity and explain its prediction at the same time by providing a set of words that can justify its prediction. In this section, we introduce the assumption that empowers our model with interpretability. Then we present the proposed model's architecture and the multi-task training paradigm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "We begin with the following assumption: A post is at least as toxic as its most toxic span. This assumption suggests that if there is a word or phrase in a piece of text that is toxic, i.e., with a level of toxicity that is over a certain threshold, the toxicity level of the entire text is certainly over such threshold, and, therefore, should be considered toxic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assumption", "sec_num": "3.1" }, { "text": "This assumption can be formalized as follows. Let x = {x 1 , x 2 , ..., x n } denote the input sequence where n is the length of the sequence. Given the input x, we can define y as the toxicity label for the sequence and y as the toxicity labels for individual token. Let s = {s 1 , s 2 , ..., s n } be a model's prediction of toxicity for each token. By our assumption, we apply a max pooling operation over s:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assumption", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s = max(s)", "eq_num": "(1)" } ], "section": "Assumption", "sec_num": "3.1" }, { "text": "where s represents the predicted toxicity of the entire sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assumption", "sec_num": "3.1" }, { "text": "We acknowledge that this assumption may not always hold. In some cases, toxicity can be expressed in subtle or implicit ways, such as through sarcasm or metaphor (MacAvaney et al., 2019; Waseem et al., 2017) . In such cases, there is often not a clearly identifiable span that is toxic. However, these cases are difficult for any model to identify, and through our experimental results in Section 5, we find that this does not hinder the effectiveness of our model.", "cite_spans": [ { "start": 162, "end": 186, "text": "(MacAvaney et al., 2019;", "ref_id": "BIBREF20" }, { "start": 187, "end": 207, "text": "Waseem et al., 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Assumption", "sec_num": "3.1" }, { "text": "To detect the toxicity and learn the toxic spans at the same time, we propose to use a neural multitask learning framework (Caruana, 1997) . In our settings, we jointly train the model with two related tasks: (1) Toxicity Detection (at the sequence level), and (2) Toxic Span Detection (at the token level). These two tasks share all the parameters in the model. Our approach can be applied to any sequence encoder model (e.g., LSTM or transformer). 
Given the input sequence x, let the output of the sequence encoder be ", "cite_spans": [ { "start": 123, "end": 138, "text": "(Caruana, 1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Architecture & Methodology", "sec_num": "3.2" }, { "text": "H = {h 1 , h 2 , ..., h n }, H \u2208 R n\u00d7d . Here h i \u2208 R d denotes the i-th", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture & Methodology", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s = H \u2022 W", "eq_num": "(2)" } ], "section": "Architecture & Methodology", "sec_num": "3.2" }, { "text": "For the classification task, we use s as the predicted toxicity where s is calculated following the procedure mentioned in Eq. (1). For the span detection task, we directly leverage the output toxicity sequence s. This setup ensures that the model learns to predict the text as toxic if a span is toxic. For the purposes of training, let D 1 be the dataset for toxicity detection task, and D 2 be the dataset for toxic spans detection task. We construct the loss of the model L to be the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture & Methodology", "sec_num": "3.2" }, { "text": "L = \u03bb (x,y)\u2208D 1 L C (x, y) Loss for toxicity detection + (1 \u2212 \u03bb) (x,y)\u2208D 2 L S (x, y) Loss for toxic spans detection (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture & Methodology", "sec_num": "3.2" }, { "text": "where L C is the loss for the toxicity detection task and L S is the loss for the toxic spans detection task. \u03bb denotes a hyperparameter specifying the weight for each task. Here the toxicity detection task is a sequence classification task and the toxic spans detection task is a token classification task. We jointly train the model across tasks in an end-toend fashion, minimizing the Mean Square Error (MSE) loss for both tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture & Methodology", "sec_num": "3.2" }, { "text": "We conduct experiments to answer the following research questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "RQ1 Does our aforementioned assumption affect the model's performance at detecting toxic content?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "RQ2 Is our approach applicable to different transformer models?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "RQ3 Does our approach produce models that can generalize to different domains?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "RQ4 Does our model identify spans that improve the interpretability of model decisions?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We primarily train and evaluate our system using the Civil Comment Dataset (CCD) (Borkan et al., 2019) . For interpretability, we leverage the Toxic Spans Detection Dataset (TSDD) 2 in a multitask training paradigm. We also use the Offensive Language Identification Dataset (OLID) (Zampieri et al., 2019a) for the cross-domain evaluation. 
CCD The CCD (Borkan et al., 2019 ) is a largescale dataset with crowd-sourced post-level annotations for toxicity, provided by the Civil Comment platform. 3 Posts that are rude, disrespectful, or unreasonable (Borkan et al., 2019) are considered toxic based on the rating guidelines as published by the Perspective API . Through the platform, crowd-sourced raters were asked to rate comments as \"Very Toxic\", \"Toxic\", \"Hard to say\", or \"Not Toxic\". A toxicity score between zero and one of a post is the fraction of raters considering it to be toxic. We further cast the scores to binary labels by setting a threshold of 0.5 (i.e., at least half of the raters consider the post toxic). The dataset contains around 1.8 million posts in total, and 8% of them are labeled as toxic. TSDD TSDD 2 is a 10,000-sample subset of CCD, containing only toxic comments, marked up with individual spans that are toxic. Each post is annotated by three annotators. 528 posts have no annotated span since the annotators believe they are toxic as a whole without any explicit span.", "cite_spans": [ { "start": 81, "end": 102, "text": "(Borkan et al., 2019)", "ref_id": "BIBREF2" }, { "start": 281, "end": 305, "text": "(Zampieri et al., 2019a)", "ref_id": "BIBREF39" }, { "start": 351, "end": 371, "text": "(Borkan et al., 2019", "ref_id": "BIBREF2" }, { "start": 494, "end": 495, "text": "3", "ref_id": null }, { "start": 548, "end": 569, "text": "(Borkan et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "We use the OLID offensive language dataset (Zampieri et al., 2019b) to examine the domain-transferability. This dataset contains web posts with hierarchical annotation (Waseem et al., 2017) . We use its first layer, where the annotations indicate whether the content is offensive (32%) or non-offensive (67%). For our evaluation, we use the official 860-sample OLID test set.", "cite_spans": [ { "start": 43, "end": 67, "text": "(Zampieri et al., 2019b)", "ref_id": "BIBREF40" }, { "start": 168, "end": 189, "text": "(Waseem et al., 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "OLID", "sec_num": null }, { "text": "For the purpose of training and evaluation, we constructed a 30,000-sample curated CCD by mixing the clear-cut examples with the ambiguous ones, showcased in Table 1 . Here, 14,000 samples are used for training and the rest are for testing.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Curated CCD", "sec_num": null }, { "text": "We first sampled 7,000 highly toxic posts (toxicity score greater than 0.8) and 7,000 non-toxic posts (toxicity score less than 0.1) from CCD. Note that 3,000 of the toxic posts sampled were drawn from TSDD, which is still a subset of CDD, for the span annotations. These posts are considered to be easy since a great portion of the raters agreed on the judgment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curated CCD", "sec_num": null }, { "text": "We further sampled another 8,000 ambiguous posts that have toxicity scores between 0.1 and 0.3 and contain terms that frequently appear in the toxic posts. Terms annotated at least 20 times as part of toxic spans in TSDD are considered frequent, resulting in a list of 62. The top 20 terms are presented in Table 2 . 
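To make the selection procedure concrete, the sketch below shows one way to extract the frequent toxic terms from TSDD and sample the ambiguous posts from CCD. It only illustrates the steps described above: the column names (text, toxicity, toxic_spans), the pandas-based workflow, and the substring matching are our assumptions, not the datasets' actual schema or the original curation script.

```python
from collections import Counter

import pandas as pd

def frequent_toxic_terms(tsdd: pd.DataFrame, min_count: int = 20) -> set:
    """Terms annotated at least `min_count` times inside TSDD toxic spans."""
    counts = Counter()
    for _, row in tsdd.iterrows():
        for start, end in row["toxic_spans"]:          # character offsets per post
            for term in row["text"][start:end].lower().split():
                counts[term] += 1
    return {term for term, c in counts.items() if c >= min_count}

def sample_ambiguous(ccd: pd.DataFrame, terms: set, n: int = 8000) -> pd.DataFrame:
    """Posts with mild toxicity (0.1-0.3) that contain a frequent toxic term."""
    mild = ccd[ccd["toxicity"].between(0.1, 0.3)]
    # Naive substring check; token-level matching would be stricter.
    has_term = mild["text"].str.lower().apply(lambda t: any(term in t for term in terms))
    return mild[has_term].sample(n=n, random_state=0)
```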
We believe that these toxic terms are used in a non-toxic way in these posts, and, therefore, are good adversarial examples for the models to learn from the context instead of memorizing the frequent terms. To maintain an even proportion of toxic and non-toxic posts, we sample an additional 8,000 highly toxic posts.", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 314, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Curated CCD", "sec_num": null }, { "text": "Here, we describe the implementation details of our baselines and the proposed models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "4.2" }, { "text": "We use Logistic Regression (LR) classifier with lemmatized uni-gram and bi-gram features and L2 regularization as our baseline for both effectiveness evaluation and interpretability eval- uation. LR is shown to be effective in toxicity detection (Djuric et al., 2015; Waseem and Hovy, 2016) while naturally providing explanations for its predictions if utilized together with bag-of-tokens features. While there are other models that rely on the attention mechanisms to provide various interpretability (Rogers et al., 2020) , we choose to use LR instead of them. On the one hand, they usually need postprocessing to perform interpretation, which is less straightforward and intuitive compared to LR; on the other hand, there are still debates about whether attention mechanisms can produce meaningful explanations (Wiegreffe and Pinter, 2019; Jain and Wallace, 2019) . For our LR baseline, NLTK is used for preprocessing and scikit-learn is used for building the classification model. For the following transformer-based models, we utilize the transformers (Wolf et al., 2020) library with Adam (Kingma and Ba, 2015) optimizer. The learning rate is by default set to 2 \u00d7 10 \u22125 and the number of training epochs is tuned on the validation set. We also utilized the pre-trained BERT-base (Devlin et al., 2019) and ELECTRA-base (Clark et al., 2020) that are available in the Huggingface community. Input sen-tences are trimmed to a max length of 256 tokens.", "cite_spans": [ { "start": 246, "end": 267, "text": "(Djuric et al., 2015;", "ref_id": "BIBREF10" }, { "start": 268, "end": 290, "text": "Waseem and Hovy, 2016)", "ref_id": "BIBREF32" }, { "start": 503, "end": 524, "text": "(Rogers et al., 2020)", "ref_id": "BIBREF27" }, { "start": 815, "end": 843, "text": "(Wiegreffe and Pinter, 2019;", "ref_id": "BIBREF34" }, { "start": 844, "end": 867, "text": "Jain and Wallace, 2019)", "ref_id": "BIBREF13" }, { "start": 1058, "end": 1077, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF35" }, { "start": 1287, "end": 1308, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 1326, "end": 1346, "text": "(Clark et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "LR", "sec_num": null }, { "text": "As baselines, we evaluate typical sentence classification (CLS) architectures tuned on only the post-level labels by adding a linear layer on top of the [CLS] token. Both BERT and ELECTRA are trained with the crossentropy loss, which is the default setting in the transformers library.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT/ELECTRA-CLS", "sec_num": null }, { "text": "We also evaluate models that are only trained for the span detection (SP) task. 
It follows the architecture and methodology mentioned in Section 3.2, except only optimizing the toxic spans detection loss L S . Since only a portion of the samples contains toxic span annotations, the models are only trained on that subset. Beyond the toxic spans detection task, we also evaluate the toxicity detection performance for these models leveraging our proposed assumption, even though these models are not trained for toxicity detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT/ELECTRA-SP", "sec_num": null }, { "text": "Finally, we describe the implementation of our proposed Multi-Task (MT) models. The models are built with the architecture described in Section 3.2 and optimized with the joint loss L shown in Eq. (3). Since not all the input posts have the labels for toxic spans, the multi-task models are trained by interleaving samples with and without span information. Joint loss L is calculated for those samples with labels for both tasks; for samples with only post-level labels, we calculate only the classification loss L C . During training, we interleave the update with each kind of loss to ensure a balance update on the parameters. We specify the hyperparameter \u03bb to be 0.5 (i.e., weighting both tasks equally).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT/ELECTRA-MT", "sec_num": null }, { "text": "In this section, we answer the first three research questions by presenting and analyzing the classification effectiveness of the proposed models in various settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on Toxicity Detection", "sec_num": "5" }, { "text": "The in-domain toxicity detection performance is Table 3 : In-domain evaluation results for toxicity classification on the Curated CCD dataset. We report Precision (P), Recall (R), and F1 for each model on all categories. We also report the Macro-F1 for all models. The best F1 performance is indicated in bold.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 55, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results on Toxicity Detection", "sec_num": "5" }, { "text": "shown in Table 3 . Both of our proposed multi-task models BERT-MT and ELECTRA-MT achieve the best performance among all models we evaluated. Models using the [CLS] tokens perform the worst among others, even worse than the ones that are only trained on the span detection task. This suggests that the span information is capable of predicting the toxicity of the entire content and combining it with the post-level supervised information further improves the effectiveness. This answers RQ1: our proposed model based on our assumption improves the classification effectiveness. Furthermore, BERT-MT and ELECTRA-MT are equally effective in respect to various evaluation metrics; therefore, we validate RQ2: our approach generalizes to at least two pre-trained transformer models. Interestingly, LR has the highest precision among the baselines but the lowest recall, suggesting that the model may be relying on high-precision features such as racial slurs. This matches previous observations by MacAvaney et al. (2019) .", "cite_spans": [ { "start": 1004, "end": 1017, "text": "et al. 
(2019)", "ref_id": null } ], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results on Toxicity Detection", "sec_num": "5" }, { "text": "In a domain transfer setting, i.e., training on CDD and testing on OLID, our approach is competitive with models trained on OLID. In Table 4 , both BERT-MT and ELECTRA-MT obtain a competitive 0.77 macro-F1 without training on any example in OLID, even outperforming some leading systems reported on this dataset. Therefore, we confirm that our models remain effective in a domain-transfer setting (RQ3).", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results on Toxicity Detection", "sec_num": "5" }, { "text": "To better understand the limitations of our proposed models, we qualitatively analyze the predictions against the gold labels using BERT-MT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.1" }, { "text": "One major source of false positives comes from treating negative words, usually", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False Positives", "sec_num": null }, { "text": "Macro-F1 SVM (Zampieri et al., 2019a) 0.69 BiLSTM (Zampieri et al., 2019a) 0.75 BERT-FT (Liu et al., 2019a) 0.83 BERT-MT (ours, transfer) 0.77 ELECTRA-MT (ours, transfer) 0.77 Table 4 : Evaluation on the OLID. BERT-FT stands for the BERT model fine-tuned on the OLID data. Our system performs competitive in a completely transfer setting (only trained on CCD data). We report Macro-F1 for all models here. The best performance is in bold.", "cite_spans": [ { "start": 13, "end": 37, "text": "(Zampieri et al., 2019a)", "ref_id": "BIBREF39" }, { "start": 50, "end": 74, "text": "(Zampieri et al., 2019a)", "ref_id": "BIBREF39" }, { "start": 88, "end": 107, "text": "(Liu et al., 2019a)", "ref_id": "BIBREF18" }, { "start": 113, "end": 137, "text": "BERT-MT (ours, transfer)", "ref_id": null } ], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "adjectives, such as disgusting, lazy, incompetent, as strong signals for toxicity. Also, our proposed models tend to treat certain sub-words that frequently appear in toxic contexts as toxic spans. For example, the sub-word ##nt of magnificient is predicted as toxic (0.979 4 ) in the following sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Should have had the magnificient Doug Ford stump for Smith....LOL", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "This could be attributed to its high frequency in explicitly toxic words such as ignorant and arrogant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We also suspect that the model is over-leveraging expression patterns or co-occurrences for toxicity classification. For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "I have never seen a suspect identified as a \"brown\" man.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Here, the model misclassifies it as toxic and picks the word brown as the most toxic span with a toxicity score of 0.527. 
We first rule out the possibility that the toxicity comes from the negation by experimentally removing the word never from the sentence. We find that the toxicity score of the sentence increases to 0.762 with the word brown still as the most toxic span. We then examine by replacing the word suspect and keep the rest; when we replace suspect with girl or mom, the toxicity score for brown decreases to 0.334 and 0.141 respectively; when changing to prisoner or scammer, the toxicity score for word brown becomes 0.717 and 0.972. Among all the cases that we have experimented with, the word brown is consistently the most toxic word in the sentence. It seems that the model is learning the correlation between the noun suspect (or any other word positioned here) and the adjective brown. Though it might lead to false positives in rare cases, the predictive power that enables this phenomenon could potentially be the reason that our model is doing better in picking unseen words as toxic spans than the LR model since it infers the prediction without leveraging on the lexical information but the syntactic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Some comments are inherently hard for both human and machine classifiers to identify. There are cases where the sentences can reasonably be considered as non-toxic where our model is also predicting as such. For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False Negatives", "sec_num": null }, { "text": "Ignore the trolls Sheema......You are great and I always enjoy your pieces. LOL....I keep hearing Garland never got a hearing blah,blah,blah....It sucks being in the minority....Go win some elections.. When the headline reads \"Steve Bannon's porn and meth house\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False Negatives", "sec_num": null }, { "text": "In the first and second example, the speaker is not intended to be toxic even with the appearance of the word trolls and sucks. For the third example, the toxicity lies in the quoted text, which is also not the intention of the speaker. It is debatable that whether our model is indeed making mistakes on these naturally ambiguous comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False Negatives", "sec_num": null }, { "text": "Another major source of false negatives comes from the unawareness of the outside context, e.g., the target of the comment. For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False Negatives", "sec_num": null }, { "text": "Degenerate comment. I'm beginning to think the left lacks the mental capacity to reason.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False Negatives", "sec_num": null }, { "text": "SD-P SD-R SD-F1 LR 0.111 0.195 0.120 BERT-SP 0.836 0.798 0.792 ELECTRA-SP 0.840 0.807 0.798 BERT-MT 0.837 0.785 0.784 ELECTRA-MT 0.842 0.788 0.789 Table 5 : Evaluation results on toxic spans detection task. The best performance is bolded.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 154, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "These words/phrases such as degenerate, and lacks the mental capacity to, which could also be utilized in the neutral descriptions, are used here for expressing toxicity. 
It is easy for a human to reason out the target mentioned in the sentence and thus be aware of the toxicity raised; this is usually not the case for machine classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Also, many users intentionally modify the explicitly toxic words (e.g., replacing or removing characters) to obfuscate automatic detection while still keeping their intent clear to human (Djuric et al., 2015) , e.g., changing word idiots into I.d.i.o.t.s or modifying word asshole to a-hole. Even for advanced transformer models, they still need to learn deeper information or be incorporated with more human supervision to be fully aware of these cases.", "cite_spans": [ { "start": 187, "end": 208, "text": "(Djuric et al., 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Finally, we examine the interpretability of our models. We first leverage the existing span annotations to evaluate but discovered that they limit our analysis. To overcome the limitation, we further conduct a user study to evaluate the interpretability directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretability", "sec_num": "6" }, { "text": "We select a balanced 8,000-sample set from the test split of the Curated CCD. Here, non-toxic posts have toxicities between 0 and 0.1 and therefore are all considered to have no toxic spans; toxic samples are all from TSDD. We follow the ad-hoc evaluation metrics which are introduced in Da San Martino et al. (2019) and utilized in SemEval 2021 Task 5 2 -Span Detection Precision (SD-P), Recall (SD-R), and F1 (SD-F1). Due to the nature of the toxicity span detection task where instances span from single tokens to multiple sentences, the ad-hoc evaluation metrics give partial credits to imperfect matches at the character level. Given a post t, let the ground truth be a set of character offsets S t G and let one certain system A i return a set of character offsets S t A i . With the system A i and ground truth, the SD-P and SP-R on post t are then defined as follow:", "cite_spans": [ { "start": 295, "end": 316, "text": "Martino et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Span Detection as Interpretation", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P t = S t A i \u2229 S t G S t A i R t = S t A i \u2229 S t G S t G", "eq_num": "(4)" } ], "section": "Span Detection as Interpretation", "sec_num": "6.1" }, { "text": "With SD-P and SD-R defined, the SD-F1 is also defined:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Span Detection as Interpretation", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F t 1 = 2 \u2022 P t \u2022 R t P t + R t", "eq_num": "(5)" } ], "section": "Span Detection as Interpretation", "sec_num": "6.1" }, { "text": "We report the average values over all samples. The evaluation results are shown in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 90, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Span Detection as Interpretation", "sec_num": "6.1" }, { "text": "Models trained for the span detection task (*-SP) achieve the highest F1 scores with no surprise. 
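For reference, these per-post scores are simple to compute once the predicted and ground-truth spans are represented as sets of character offsets. The sketch below follows Eq. (4) and Eq. (5); the handling of posts with no annotated span is a common convention that we assume here rather than take from the official evaluation script.

```python
def span_detection_scores(predicted: set, gold: set):
    """Per-post SD-P, SD-R, SD-F1 over character-offset sets (Eq. 4-5)."""
    if not predicted and not gold:
        return 1.0, 1.0, 1.0        # both empty: treated as a perfect match (assumption)
    if not predicted or not gold:
        return 0.0, 0.0, 0.0        # exactly one empty: no overlap possible
    overlap = len(predicted & gold)
    p = overlap / len(predicted)
    r = overlap / len(gold)
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1
```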
However, our multi-task models provide nearly equal span detection effectiveness with far better performance on toxicity detection (See Table 3 ). Also, BERT-MT and ELECTRA-MT perform similarly here, further confirming that our approach can be used in various transformer models (RQ2).", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 241, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Span Detection as Interpretation", "sec_num": "6.1" }, { "text": "We also compared with LR, which is widely considered to be interpretable. The predicted span from the LR model is reconstructed from the bagof-token features. Features that contribute to the positive score for the sample are considered as the interpretation of the model. Our proposed models were shown to strongly outperform LR by a large margin. Note that LR's performance is hindered in this evaluation, as it was not trained for sequence classification (e.g., what CRF would do), despite being considered an interpretable model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Span Detection as Interpretation", "sec_num": "6.1" }, { "text": "The labeled toxic spans used for the evaluation in Section 6.1 are not annotated to be the interpretation for the toxicity of the post. We argue that it is not sufficient to evaluate the interpretability of prediction along with other known issues in automated evaluation Novikova et al., 2017) .", "cite_spans": [ { "start": 272, "end": 294, "text": "Novikova et al., 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "User Study", "sec_num": "6.2" }, { "text": "Therefore, we select 400 toxic samples from the test split of Curated CCD and 237 offensive samples from the testing set of OLID for the interpretability study. For each sample, three words with the highest predicted score from the models are picked as the explanation and are postprocessed to the same form to avoid identifying the models based on the types of token preprocessing. Annotators are asked to annotate for the toxicity (e.g., whether the sample is toxic) of the samples and pick the model with a better explanation. The order and the name of the models are hidden from the annotators to avoid biases. Although ELECTRA-MT and BERT-MT perform comparably in terms of F1, we found that qualitatively BERT-MT is better and therefore picked for the user study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Study", "sec_num": "6.2" }, { "text": "Despite the subjectivity of the annotation task, the annotators agreed in 78% of the cases. For our quantitative analysis, we use the samples that the annotators agree on. We also filter out 22 samples that both of the annotators considered non-toxic. Table 6 : Aggregated human preference during the interpretability experiment for CCD and OLID. We see that the annotators prefer the explanations from the BERT-MT over those from LR by a hefty margin.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "User Study", "sec_num": "6.2" }, { "text": "On average, our annotators prefer BERT-MT over LR on more than half of the samples. As shown in Table 6 , BERT-MT is considered to be more interpretable in both datasets by a wide margin. 
This result not only suggests that our proposed assumption provides interpretability to transformer models (RQ4) but also better explanations than the widely-known interpretable models.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "User Study", "sec_num": "6.2" }, { "text": "We now take a closer look at how BERT-MT model's explanations compare to those from LR. Some examples are shown in Table 7 . We find that the predictions from BERT-MT are more polarized, while those from LR tend to be neutral. This is because the usage of MSE loss in BERT-MT penalizes values for not being close to 0 or 1; for LR, the L2 regularization penalizes large weight values. It is also worth noting that these models differ in how term scores are aggregated over the post; BERT-MT takes the maximum, whereas LR takes the sum. For analysis, we highlight three cases here: When BERT-MT is preferred.", "cite_spans": [], "ref_spans": [ { "start": 115, "end": 122, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "User Study", "sec_num": "6.2" }, { "text": "BERT-MT can pick out the toxic spans more accurately by leveraging the context. In comparison, the LR model suffers from picking words based on frequency even with a non-toxic usage, resulting in a bias toward some entities, such as certain groups of people (See example A in Table 7 ). BERT-MT also takes advantage of the WordPiece tokenization which re- tains the toxic word pieces when users are altering words to avoid censorship; while LR lemmatizes the tokens and resulting in losing this information. When LR is preferred.", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 283, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "User Study", "sec_num": "6.2" }, { "text": "We observe that LR tends to predict more spans than BERT-MT, leading to higher recall, resulting in more comprehensive explanations. Take the example B in Table 7 for instance, the LR is able to pick out the word coward and clown which are considered to be non-toxic by BERT-MT. We conclude that BERT-MT is more cautious in predicting words as toxic. When both models are equal.", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 162, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "User Study", "sec_num": "6.2" }, { "text": "We find that both models are doing poorly if the sentence is implicitly toxic. Such indirect toxicity is carried either by negation (Example C in Table 7) or adversarially-modified toxic words (Example D in Table 7 ). Finally, there are also many cases where both models are doing equally well. This category of samples is generally easier to detect with explicit terms or slurs without ambiguity.", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 154, "text": "Table 7)", "ref_id": "TABREF6" }, { "start": 207, "end": 214, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "User Study", "sec_num": "6.2" }, { "text": "In this paper, we proposed a toxicity detection approach that builds in the interpretability by predicting the toxicity of a piece of text based on the toxicity level of its spans. We showed that our approach is more effective in both in-domain and cross-domain evaluation than baselines that were shown to be effective. 
By conducting a user study, we further showed that our approach generates better explanations of the classification decisions than what Logistic Regression produces, which is known to be interpretable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "7" }, { "text": "In the future, we plan to extend our current work in several ways. We plan to take the implicit toxicity into consideration to make our assumption more robust. Besides, we will dig more into toxicity detection with long sequences. We also plan to investigate methods that consider more subtle context and actors in the content to better distinguish different usages of the toxic terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "7" }, { "text": "https://gdpr-info.eu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://sites.google.com/view/ toxicspans", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The platform was shut down by the end of 2017: https://medium.com/@aja_15265/sayinggoodbye-to-civil-comments-41859d3a2b1d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Toxicity score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the ARCS Foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Deep learning for hate speech detection in tweets", "authors": [ { "first": "Pinkesh", "middle": [], "last": "Badjatiya", "suffix": "" }, { "first": "Shashank", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web Companion, WWW '17 Companion", "volume": "", "issue": "", "pages": "759--760", "other_ids": { "DOI": [ "10.1145/3041021.3054223" ] }, "num": null, "urls": [], "raw_text": "Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, WWW '17 Companion, page 759-760, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analysis methods in neural language processing: A survey", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "49--72", "other_ids": { "DOI": [ "10.1162/tacl_a_00254" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. 
Transactions of the Association for Computational Linguistics, 7:49-72.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Nuanced metrics for measuring unintended bias with real data for text classification", "authors": [ { "first": "Daniel", "middle": [], "last": "Borkan", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Sorensen", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vasserman", "suffix": "" } ], "year": 2019, "venue": "Companion Proceedings of The 2019 World Wide Web Conference, WWW '19", "volume": "", "issue": "", "pages": "491--500", "other_ids": { "DOI": [ "10.1145/3308560.3317593" ] }, "num": null, "urls": [], "raw_text": "Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced met- rics for measuring unintended bias with real data for text classification. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, page 491-500, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multitask learning. Machine learning", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1997, "venue": "", "volume": "28", "issue": "", "pages": "41--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The year in hate and extremism", "authors": [ { "first": "", "middle": [], "last": "Southern Poverty Law", "suffix": "" }, { "first": "", "middle": [], "last": "Center", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Southern Poverty Law Center. 2017. The year in hate and extremism.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Electra: Pretraining text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre- training text encoders as discriminators rather than generators. In International Conference on Learn- ing Representations.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Hate crime: Abuse, hate and extremism online. Fourteenth Report of Session", "authors": [ { "first": "", "middle": [], "last": "Home Affairs Committee", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Home Affairs Committee et al. 2017. Hate crime: Abuse, hate and extremism online. 
Fourteenth Re- port of Session 2016, 17.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Fine-grained analysis of propaganda in news article", "authors": [ { "first": "Giovanni", "middle": [], "last": "Da San", "suffix": "" }, { "first": "Seunghak", "middle": [], "last": "Martino", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Rostislav", "middle": [], "last": "Barr\u00f3n-Cede\u00f1o", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5636--5646", "other_ids": { "DOI": [ "10.18653/v1/D19-1565" ] }, "num": null, "urls": [], "raw_text": "Giovanni Da San Martino, Seunghak Yu, Alberto Barr\u00f3n-Cede\u00f1o, Rostislav Petrov, and Preslav Nakov. 2019. Fine-grained analysis of propaganda in news article. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 5636-5646, Hong Kong, China. As- sociation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Macy", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17", "volume": "", "issue": "", "pages": "512--515", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Confer- ence on Web and Social Media, ICWSM '17, pages 512-515.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Hate speech detection with comment embeddings", "authors": [ { "first": "Nemanja", "middle": [], "last": "Djuric", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Morris", "suffix": "" }, { "first": "Mihajlo", "middle": [], "last": "Grbovic", "suffix": "" }, { "first": "Vladan", "middle": [], "last": "Radosavljevic", "suffix": "" }, { "first": "Narayan", "middle": [], "last": "Bhamidipati", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion", "volume": "", "issue": "", "pages": "29--30", "other_ids": { "DOI": [ "10.1145/2740908.2742760" ] }, "num": null, "urls": [], "raw_text": "Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Gr- bovic, Vladan Radosavljevic, and Narayan Bhamidi- pati. 2015. Hate speech detection with comment em- beddings. In Proceedings of the 24th International Conference on World Wide Web, WWW '15 Com- panion, page 29-30, New York, NY, USA. Associa- tion for Computing Machinery.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using convolutional neural networks to classify hatespeech", "authors": [ { "first": "Bj\u00f6rn", "middle": [], "last": "Gamb\u00e4ck", "suffix": "" }, { "first": "Utpal", "middle": [], "last": "Kumar Sikdar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "85--90", "other_ids": { "DOI": [ "10.18653/v1/W17-3013" ] }, "num": null, "urls": [], "raw_text": "Bj\u00f6rn Gamb\u00e4ck and Utpal Kumar Sikdar. 2017. Us- ing convolutional neural networks to classify hate- speech. In Proceedings of the First Workshop on Abusive Language Online, pages 85-90, Vancouver, BC, Canada. Association for Computational Lin- guistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8342--8360", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.740" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Attention is not Explanation", "authors": [ { "first": "Sarthak", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3543--3556", "other_ids": { "DOI": [ "10.18653/v1/N19-1357" ] }, "num": null, "urls": [], "raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Hidden resilience and adaptive dynamics of the global online hate ecology", "authors": [ { "first": "", "middle": [], "last": "Nf Johnson", "suffix": "" }, { "first": "Johnson", "middle": [], "last": "Leahy", "suffix": "" }, { "first": "N", "middle": [], "last": "Restrepo", "suffix": "" }, { "first": "M", "middle": [], "last": "Velasquez", "suffix": "" }, { "first": "P", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "P", "middle": [], "last": "Manrique", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Devkota", "suffix": "" }, { "first": "", "middle": [], "last": "Wuchty", "suffix": "" } ], "year": 2019, "venue": "Nature", "volume": "573", "issue": "7773", "pages": "261--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "NF Johnson, R Leahy, N Johnson Restrepo, N Ve- lasquez, M Zheng, P Manrique, P Devkota, and Ste- fan Wuchty. 2019. Hidden resilience and adaptive dynamics of the global online hate ecology. Nature, 573(7773):261-265.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Locate the hate: Detecting tweets against blacks", "authors": [ { "first": "Irene", "middle": [], "last": "Kwok", "suffix": "" }, { "first": "Yuzhou", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, AAAI'13", "volume": "", "issue": "", "pages": "1621--1622", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Kwok and Yuzhou Wang. 2013. Locate the hate: Detecting tweets against blacks. In Proceedings of the Twenty-Seventh AAAI Conference on Artifi- cial Intelligence, AAAI'13, page 1621-1622. 
AAAI Press.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 31st International Conference on International Conference on Machine Learning", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proceed- ings of the 31st International Conference on Inter- national Conference on Machine Learning -Volume 32, ICML'14, page II-1188-II-1196. JMLR.org.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "NULI at SemEval-2019 task 6: Transfer learning for offensive language detection using bidirectional transformers", "authors": [ { "first": "Ping", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Zou", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "87--91", "other_ids": { "DOI": [ "10.18653/v1/S19-2011" ] }, "num": null, "urls": [], "raw_text": "Ping Liu, Wen Li, and Liang Zou. 2019a. NULI at SemEval-2019 task 6: Transfer learning for of- fensive language detection using bidirectional trans- formers. In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation, pages 87- 91, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Hate speech detection: Challenges and solutions", "authors": [ { "first": "Sean", "middle": [], "last": "Macavaney", "suffix": "" }, { "first": "", "middle": [], "last": "Hao-Ren", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Katina", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Russell", "suffix": "" }, { "first": "Ophir", "middle": [], "last": "Goharian", "suffix": "" }, { "first": "", "middle": [], "last": "Frieder", "suffix": "" } ], "year": 2019, "venue": "PLoS ONE", "volume": "14", "issue": "", "pages": "1--16", "other_ids": { "DOI": [ "10.1371/journal.pone.0221152" ] }, "num": null, "urls": [], "raw_text": "Sean MacAvaney, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. 2019. Hate speech detection: Challenges and solutions. PLoS ONE, 14:1-16.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A human evaluation of AMR-to-English generation systems", "authors": [ { "first": "Emma", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Shira", "middle": [], "last": "Wein", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "4773--4786", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.420" ] }, "num": null, "urls": [], "raw_text": "Emma Manning, Shira Wein, and Nathan Schneider. 2020. A human evaluation of AMR-to-English gen- eration systems. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 4773-4786, Barcelona, Spain (Online). 
Inter- national Committee on Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Platformed racism: the mediation and circulation of an australian race-based controversy on twitter, facebook and youtube", "authors": [ { "first": "Ariadna", "middle": [], "last": "Matamoros-Fern\u00e1ndez", "suffix": "" } ], "year": 2017, "venue": "Information, Communication & Society", "volume": "20", "issue": "6", "pages": "930--946", "other_ids": { "DOI": [ "10.1080/1369118X.2017.1293130" ] }, "num": null, "urls": [], "raw_text": "Ariadna Matamoros-Fern\u00e1ndez. 2017. Platformed racism: the mediation and circulation of an aus- tralian race-based controversy on twitter, facebook and youtube. Information, Communication & Soci- ety, 20(6):930-946.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hatexplain: A benchmark dataset for explainable hate speech detection", "authors": [ { "first": "Binny", "middle": [], "last": "Mathew", "suffix": "" }, { "first": "Punyajoy", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Seid Muhie Yimam", "suffix": "" }, { "first": "P", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Animesh", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "", "middle": [], "last": "Mukherjee", "suffix": "" } ], "year": 2020, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, P. Goyal, and Animesh Mukherjee. 2020. Hatexplain: A benchmark dataset for explain- able hate speech detection. ArXiv, abs/2012.10289.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Wikipedia talk labels: Toxicity", "authors": [ { "first": "Thain", "middle": [], "last": "Nithum", "suffix": "" }, { "first": "Dixon", "middle": [], "last": "Lucas", "suffix": "" }, { "first": "Wulczyn", "middle": [], "last": "Ellery", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thain Nithum, Dixon Lucas, and Wulczyn Ellery. 2017. Wikipedia talk labels: Toxicity.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Abusive language detection in online user content", "authors": [ { "first": "Chikashi", "middle": [], "last": "Nobata", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "Achint", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 25th International Conference on World Wide Web, WWW '16", "volume": "", "issue": "", "pages": "145--153", "other_ids": { "DOI": [ "10.1145/2872427.2883062" ] }, "num": null, "urls": [], "raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive lan- guage detection in online user content. In Proceed- ings of the 25th International Conference on World Wide Web, WWW '16, page 145-153, Republic and Canton of Geneva, CHE. 
International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Why we need new evaluation metrics for NLG", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Amanda", "middle": [ "Cercas" ], "last": "Curry", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2241--2252", "other_ids": { "DOI": [ "10.18653/v1/D17-1238" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cer- cas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. Association for Computa- tional Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A primer in BERTology: What we know about how BERT works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "842--866", "other_ids": { "DOI": [ "10.1162/tacl_a_00349" ] }, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Associ- ation for Computational Linguistics, 8:842-866.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A survey on hate speech detection using natural language processing", "authors": [ { "first": "Anna", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media", "volume": "", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/W17-1101" ] }, "num": null, "urls": [], "raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International Workshop on Natural Language Processing for So- cial Media, pages 1-10, Valencia, Spain. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "GUIR at SemEval-2020 task 12: Domain-tuned contextualized models for offensive language detection", "authors": [ { "first": "Sajad", "middle": [], "last": "Sotudeh", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "", "middle": [], "last": "Hao-Ren", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Macavaney", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ophir", "middle": [], "last": "Goharian", "suffix": "" }, { "first": "", "middle": [], "last": "Frieder", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1555--1561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sajad Sotudeh, Tong Xiang, Hao-Ren Yao, Sean MacA- vaney, Eugene Yang, Nazli Goharian, and Ophir Frieder. 2020. GUIR at SemEval-2020 task 12: Domain-tuned contextualized models for offensive language detection. In Proceedings of the Four- teenth Workshop on Semantic Evaluation, pages 1555-1561, Barcelona (online). International Com- mittee for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Detecting hate speech on the world wide web", "authors": [ { "first": "William", "middle": [], "last": "Warner", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Second Workshop on Language in Social Media", "volume": "", "issue": "", "pages": "19--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In Proceedings of the Second Workshop on Language in Social Me- dia, pages 19-26, Montr\u00e9al, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Understanding abuse: A typology of abusive language detection subtasks", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "78--84", "other_ids": { "DOI": [ "10.18653/v1/W17-3012" ] }, "num": null, "urls": [], "raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Lan- guage Online, pages 78-84, Vancouver, BC, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL Student Research Workshop", "volume": "", "issue": "", "pages": "88--93", "other_ids": { "DOI": [ "10.18653/v1/N16-2013" ] }, "num": null, "urls": [], "raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? 
predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "UHH-LT at SemEval-2020 task 12: Fine-tuning of pre-trained transformer networks for offensive language detection", "authors": [ { "first": "Gregor", "middle": [], "last": "Wiedemann", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Seid Muhie Yimam", "suffix": "" }, { "first": "", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1638--1644", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregor Wiedemann, Seid Muhie Yimam, and Chris Biemann. 2020. UHH-LT at SemEval-2020 task 12: Fine-tuning of pre-trained transformer networks for offensive language detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1638-1644, Barcelona (online). International Com- mittee for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Attention is not not explanation", "authors": [ { "first": "Sarah", "middle": [], "last": "Wiegreffe", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Pinter", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "11--20", "other_ids": { "DOI": [ "10.18653/v1/D19-1002" ] }, "num": null, "urls": [], "raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 11-20, Hong Kong, China. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Ex machina: Personal attacks seen at scale", "authors": [ { "first": "Ellery", "middle": [], "last": "Wulczyn", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" } ], "year": 2017, "venue": "Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee", "volume": "", "issue": "", "pages": "1391--1399", "other_ids": { "DOI": [ "10.1145/3038912.3052591" ] }, "num": null, "urls": [], "raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Pro- ceedings of the 26th International Conference on World Wide Web, WWW '17, page 1391-1399, Re- public and Canton of Geneva, CHE. 
International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Demoting racial bias in hate speech detection", "authors": [ { "first": "Mengzhou", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Anjalie", "middle": [], "last": "Field", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media", "volume": "", "issue": "", "pages": "7--14", "other_ids": { "DOI": [ "10.18653/v1/2020.socialnlp-1.2" ] }, "num": null, "urls": [], "raw_text": "Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detection. In Proceedings of the Eighth International Work- shop on Natural Language Processing for Social Me- dia, pages 7-14, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Using \"annotator rationales\" to improve machine learning for text categorization", "authors": [ { "first": "Omar", "middle": [], "last": "Zaidan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Piatko", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", "volume": "", "issue": "", "pages": "260--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using \"annotator rationales\" to improve machine learning for text categorization. In Human Lan- guage Technologies 2007: The Conference of the North American Chapter of the Association for Com- putational Linguistics; Proceedings of the Main Conference, pages 260-267, Rochester, New York. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Predicting the type and target of offensive posts in social media", "authors": [ { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Noura", "middle": [], "last": "Farra", "suffix": "" }, { "first": "Ritesh", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1415--1420", "other_ids": { "DOI": [ "10.18653/v1/N19-1144" ] }, "num": null, "urls": [], "raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (Of-fensEval)", "authors": [ { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Noura", "middle": [], "last": "Farra", "suffix": "" }, { "first": "Ritesh", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "75--86", "other_ids": { "DOI": [ "10.18653/v1/S19-2010" ] }, "num": null, "urls": [], "raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. SemEval-2019 task 6: Identifying and cat- egorizing offensive language in social media (Of- fensEval). In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation, pages 75- 86, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "SemEval-2020 task 12: Multilingual offensive language identification in social media (Offen-sEval 2020)", "authors": [ { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Pepa", "middle": [], "last": "Atanasova", "suffix": "" }, { "first": "Georgi", "middle": [], "last": "Karadzhov", "suffix": "" }, { "first": "Hamdy", "middle": [], "last": "Mubarak", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1425--1447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 task 12: Multilingual offen- sive language identification in social media (Offen- sEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425- 1447, Barcelona (online). International Committee for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Rationale-augmented convolutional neural networks for text classification", "authors": [ { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Iain", "middle": [], "last": "Marshall", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "795--804", "other_ids": { "DOI": [ "10.18653/v1/D16-1076" ] }, "num": null, "urls": [], "raw_text": "Ye Zhang, Iain Marshall, and Byron C. Wallace. 2016. Rationale-augmented convolutional neural networks for text classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 795-804, Austin, Texas. 
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "hidden state of the final layer, where d denotes the length of the hidden embedding. Unlike what Devlin et al. (2019) do for sequence classification where the linear layer is only stacked on the [CLS] token, we place it on top of the whole output sequence H. Let W \u2208 R d\u00d71 denote the parameters of the linear layer, then we have:", "uris": null }, "TABREF0": { "num": null, "html": null, "type_str": "table", "text": "We need to tax the clueless, irresponsible idiots . 0.898 Toxic Don't take the bait of the troll . It's what they want.", "content": "
Comment | Toxicity | Label
Like me flagging your comments. | 0 | Non-toxic
We need to tax the clueless, irresponsible idiots. | 0.898 | Toxic
Don't take the bait of the troll. It's what they want. | 0.167 | Non-toxic
" }, "TABREF1": { "num": null, "html": null, "type_str": "table", "text": "Three samples from Curated CCD. We highlight the toxic spans in the toxic sample with purple underline .We mark the potentially toxic words in the non-toxic sample with pink double underline .", "content": "
Word | Freq. \u2193 | Word | Freq. \u2193
stupid | 1085 | idiotic | 119
idiot | 572 | ridiculous | 103
idiots | 378 | ass | 102
ignorant | 270 | fools | 100
stupidity | 253 | damn | 100
dumb | 185 | racist | 97
moron | 163 | loser | 91
fool | 163 | morons | 88
pathetic | 145 | hypocrite | 65
crap | 127 | shit | 62
" }, "TABREF2": { "num": null, "html": null, "type_str": "table", "text": "", "content": "" }, "TABREF5": { "num": null, "html": null, "type_str": "table", "text": "#neck followers are doing all they can to make america white again .LRredneck follower are doing all they can to make america white again .BBERT-MT not only is he a liar , he ' s a coward too ! the clown is going down ! LR not only is he a liar , he ' s a coward too ! the clown is going down !", "content": "
# | Model | Example
A | BERT-MT | red#neck followers are doing all they can to make america white again .
A | LR | redneck follower are doing all they can to make america white again .
B | BERT-MT | not only is he a liar , he ' s a coward too ! the clown is going down !
B | LR | not only is he a liar , he ' s a coward too ! the clown is going down !
C | BERT-MT | wow you ' re un -smart .
C | LR | wow you ' re un-smart .
D | BERT-MT | your constant ability at being an a -hole liar doesn ' t make you great
D | LR | your constant ability at being an a-hole liar doe n't make you great
" }, "TABREF6": { "num": null, "html": null, "type_str": "table", "text": "Examples for qualitative analysis on interpretability. Here red color indicates toxicity, blue color indicates innocuousness, and white color represents neutral. Darker colors indicate more polarity.", "content": "" } } } }