{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:29.846110Z"
},
"title": "Hate Towards the Political Opponent: A Twitter Corpus Study of the 2020 US Elections on the Basis of Offensive Speech and Stance Detection",
"authors": [
{
"first": "Lara",
"middle": [],
"last": "Grimminger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"addrLine": "Pfaffenwaldring 5b",
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"addrLine": "Pfaffenwaldring 5b",
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The 2020 US Elections have been, more than ever before, characterized by social media campaigns and mutual accusations. We investigate in this paper if this manifests also in online communication of the supporters of the candidates Biden and Trump, by uttering hateful and offensive communication. We formulate an annotation task, in which we join the tasks of hateful/offensive speech detection and stance detection, and annotate 3000 Tweets from the campaign period, if they express a particular stance towards a candidate. Next to the established classes of favorable and against, we add mixed and neutral stances and also annotate if a candidate is mentioned without an opinion expression. Further, we annotate if the tweet is written in an offensive style. This enables us to analyze if supporters of Joe Biden and the Democratic Party communicate differently than supporters of Donald Trump and the Republican Party. A BERT baseline classifier shows that the detection if somebody is a supporter of a candidate can be performed with high quality (.89 F 1 for Trump and .91 F 1 for Biden), while the detection that somebody expresses to be against a candidate is more challenging (.79 F 1 and .64 F 1 , respectively). The automatic detection of hate/offensive speech remains challenging (with .53 F 1). Our corpus is publicly available and constitutes a novel resource for computational modelling of offensive language under consideration of stances.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The 2020 US Elections have been, more than ever before, characterized by social media campaigns and mutual accusations. We investigate in this paper if this manifests also in online communication of the supporters of the candidates Biden and Trump, by uttering hateful and offensive communication. We formulate an annotation task, in which we join the tasks of hateful/offensive speech detection and stance detection, and annotate 3000 Tweets from the campaign period, if they express a particular stance towards a candidate. Next to the established classes of favorable and against, we add mixed and neutral stances and also annotate if a candidate is mentioned without an opinion expression. Further, we annotate if the tweet is written in an offensive style. This enables us to analyze if supporters of Joe Biden and the Democratic Party communicate differently than supporters of Donald Trump and the Republican Party. A BERT baseline classifier shows that the detection if somebody is a supporter of a candidate can be performed with high quality (.89 F 1 for Trump and .91 F 1 for Biden), while the detection that somebody expresses to be against a candidate is more challenging (.79 F 1 and .64 F 1 , respectively). The automatic detection of hate/offensive speech remains challenging (with .53 F 1). Our corpus is publicly available and constitutes a novel resource for computational modelling of offensive language under consideration of stances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media are indispensable to political campaigns ever since Barack Obama used them so successfully in 2008 (Tumasjan et al., 2010) . Twitter in particular is a much-frequented form of communication with monthly 330 million active users This paper contains offensive language. (Clement, 2019) . The microblogging platform was credited to have played a key role in Donald Trump's rise to power (Stolee and Caton, 2018) . As Twitter enables users to express their opinions about topics and targets, the insights gained from detecting stance in political tweets can help monitor the voting base.",
"cite_spans": [
{
"start": 112,
"end": 135,
"text": "(Tumasjan et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 281,
"end": 296,
"text": "(Clement, 2019)",
"ref_id": "BIBREF3"
},
{
"start": 397,
"end": 421,
"text": "(Stolee and Caton, 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to the heated election of Trump in 2016, the world has also seen an increase of hate speech (Gao and Huang, 2017) . Defined as \"any communication that disparages a target group of people based on some characteristic such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristic\" (Nockelby, 2000) , hate speech is considered \"a particular form of offensive language\" (Warner and Hirschberg, 2012) . However, some authors also conflate hateful and offensive speech and define hate speech as explicitly or implicitly degrading a person or group (Gao and Huang, 2017) . Over the years, the use of hate speech in social media has increased (de Gibert et al., 2018) . Consequently, there is a growing need for approaches that detect hate speech automatically (Gao and Huang, 2017) .",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Gao and Huang, 2017)",
"ref_id": "BIBREF8"
},
{
"start": 337,
"end": 353,
"text": "(Nockelby, 2000)",
"ref_id": "BIBREF17"
},
{
"start": 424,
"end": 453,
"text": "(Warner and Hirschberg, 2012)",
"ref_id": "BIBREF25"
},
{
"start": 600,
"end": 621,
"text": "(Gao and Huang, 2017)",
"ref_id": "BIBREF8"
},
{
"start": 697,
"end": 717,
"text": "Gibert et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 811,
"end": 832,
"text": "(Gao and Huang, 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "From the perspective of natural language processing (NLP), the combination of political stance and hate speech detection provides promising classification tasks, namely determining the attitude a text displays towards a pre-determined target and the presence of hateful and offensive speech. In contrast to prior work on stance detection (Somasundaran and Wiebe, 2010; Mohammad et al., 2016, i.a.), we not only annotate if a text is favorable, against or does not mention the target at all (neither), but include whether the text of the tweet displays a mixed (both favorable and against) or neutral stance towards the targets. With this formulation we are also able to mark tweets that mention a target without taking a clear stance. To annotate hateful and offensive tweets, we follow the def-inition of Gao and Huang (2017) and adapt our guidelines to political discourse.",
"cite_spans": [
{
"start": 338,
"end": 368,
"text": "(Somasundaran and Wiebe, 2010;",
"ref_id": "BIBREF20"
},
{
"start": 369,
"end": 391,
"text": "Mohammad et al., 2016,",
"ref_id": "BIBREF15"
},
{
"start": 806,
"end": 826,
"text": "Gao and Huang (2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We publish a Twitter-corpus that is annotated both for stance and hate speech detection. We make this corpus of 3000 Tweets publicly available at https://www.ims.uni-stuttgart.de/ data/stance hof us2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Based on a manual analysis of these annotations, our results suggest that Tweets that express a stance against Biden contain more hate speech than those against Trump.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our baseline classification experiments show that the detection of the stance that somebody is in-favor of a candidate performs better than that somebody is against a candidate. Further, the detection of hate/offensive speech on this corpus remains challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In early work on hate speech detection, Spertus (1997) described various approaches to detect abusive and hostile messages occurring during online communication. More recent work also considered cyberbullying (Dinakar et al., 2012) and focused on the use of stereotypes in harmful messages (Warner and Hirschberg, 2012) . Most of the existing hate speech detection models are supervised learning approaches. Davidson et al. (2017) created a data set by collecting tweets that contained hate speech keywords from a crowd-sourced hate speech lexicon. They then categorized these tweets into hate speech, offensive language, and neither. Mandl et al. (2019) sampled their data from Twitter and partially from Facebook and experimented with binary as well as more fine-grained multi-class classifications. Their results suggest that systems based on deep neural networks performed best. Waseem and Hovy (2016) used a feature-based approach to explore several feature types. Burnap and Williams (2014) collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. The authors examined different classification methods with various features including n-grams, restricted n-grams, typed dependencies, and hateful terms. Schmidt and Wiegand (2017) outlined that the lack of a benchmark data set based on a commonly accepted definition of hate speech is challenging. Ro\u00df et al. (2016) found that there is low agreement among users when identifying hateful messages.",
"cite_spans": [
{
"start": 209,
"end": 231,
"text": "(Dinakar et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 290,
"end": 319,
"text": "(Warner and Hirschberg, 2012)",
"ref_id": "BIBREF25"
},
{
"start": 408,
"end": 430,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 635,
"end": 654,
"text": "Mandl et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 883,
"end": 905,
"text": "Waseem and Hovy (2016)",
"ref_id": "BIBREF26"
},
{
"start": 1228,
"end": 1254,
"text": "Schmidt and Wiegand (2017)",
"ref_id": "BIBREF19"
},
{
"start": 1373,
"end": 1390,
"text": "Ro\u00df et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech and Offensive Language",
"sec_num": "2.1"
},
{
"text": "For the SemEval 2019 Task 5, Basile et al. (2019) proposed two hate speech detection tasks on Spanish and English tweets which contained hateful messages against women and immigrants. Next to a binary classification, participating systems had to extract further features in harmful messages such as target identification. None of the submissions for the more fine-grained classification task in English could outperform the baseline of the task organizers. In case of Spanish, the best results were achieved by a linear-kernel SVM. The authors found that it was harder to detect further features than the presence of hate speech. The recent shared task on offensive language identification organized by Zampieri et al. (2020) was featured in five languages. For a more detailed overview, we refer to the surveys by Mladenovi\u0107 et al. (2021) ; Fortuna and Nunes (2018) ; Schmidt and Wiegand (2017) .",
"cite_spans": [
{
"start": 29,
"end": 49,
"text": "Basile et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 703,
"end": 725,
"text": "Zampieri et al. (2020)",
"ref_id": "BIBREF28"
},
{
"start": 815,
"end": 839,
"text": "Mladenovi\u0107 et al. (2021)",
"ref_id": "BIBREF14"
},
{
"start": 842,
"end": 866,
"text": "Fortuna and Nunes (2018)",
"ref_id": "BIBREF7"
},
{
"start": 869,
"end": 895,
"text": "Schmidt and Wiegand (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech and Offensive Language",
"sec_num": "2.1"
},
{
"text": "In contrast to this previous work, we provide data for a specific recent use case, and predefine two targets of interest to be analyzed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech and Offensive Language",
"sec_num": "2.1"
},
{
"text": "Related work on stance detection includes stance detection on congressional debates (Thomas et al., 2006) , online forums (Somasundaran and Wiebe, 2010) , Twitter (Mohammad et al., 2016 (Mohammad et al., , 2017 Aker et al., 2017; K\u00fc\u00e7\u00fck and Can, 2018; Lozhnikov et al., 2020) and comments on news (Lozhnikov et al., 2020) . Thomas et al. (2006) used a corpus of speeches from the US Congress and modeled their support/oppose towards a proposed legislation task. Somasundaran and Wiebe (2010) conducted experiments with sentiment and arguing expressions and used features based on modal verbs and sentiments for stance classification. For the SemEval 2016 Task 6 organized by Mohammad et al. (2016) , stance was detected from tweets. The task contained two stance detection subtasks for supervised and weakly supervised settings. In both classification tasks, tweet-target pairs needed to be classified as either Favor, Against or Neither. The baseline of the task organizers outperformed all systems' results that were submitted by task participants.",
"cite_spans": [
{
"start": 84,
"end": 105,
"text": "(Thomas et al., 2006)",
"ref_id": "BIBREF23"
},
{
"start": 122,
"end": 152,
"text": "(Somasundaran and Wiebe, 2010)",
"ref_id": "BIBREF20"
},
{
"start": 163,
"end": 185,
"text": "(Mohammad et al., 2016",
"ref_id": "BIBREF15"
},
{
"start": 186,
"end": 210,
"text": "(Mohammad et al., , 2017",
"ref_id": "BIBREF16"
},
{
"start": 211,
"end": 229,
"text": "Aker et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 230,
"end": 250,
"text": "K\u00fc\u00e7\u00fck and Can, 2018;",
"ref_id": "BIBREF10"
},
{
"start": 251,
"end": 274,
"text": "Lozhnikov et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 296,
"end": 320,
"text": "(Lozhnikov et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 323,
"end": 343,
"text": "Thomas et al. (2006)",
"ref_id": "BIBREF23"
},
{
"start": 674,
"end": 696,
"text": "Mohammad et al. (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "2.2"
},
{
"text": "In hope that sentiment features would have the same effect on stance detection as they have on sentiment prediction, Mohammad et al. (2017) concurrently annotated a set of tweets for both stance and sentiment. Although sentiment labels proved to be beneficial for stance detection, they were not sufficient. Instead of a target-specific stance classification, Aker et al. (2017) described an open stance classification approach to identify rumors on Twitter. The authors experimented with different classifiers and task-specific features which measured the level of confidence in a tweet. With the additional features, their approach outperformed state-of-the-art results on two benchmark sets.",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "Mohammad et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 360,
"end": 378,
"text": "Aker et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "2.2"
},
{
"text": "In addition to this previous work, we opted for a more fine-grained stance detection and not only annotated favor, against and neither towards a target but also whether the stance of the text was mixed or neutral. Further, we combine stance detection with hate/offensive speech detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "2.2"
},
{
"text": "Our goal is on the one side to create a new Twitter data set that combines stance and hate/offensive speech detection in the political domain. On the other side, we create this corpus to investigate the question how hate/offensive speech is distributed among different stances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
{
"text": "We used the Twitter API v 1.1. to fetch tweets for 6 weeks leading to the presidential election, on the election day and for 1 week after the election. As search terms, we use the mention of the presidential and vice presidential candidates and the outsider West; the mention of hashtags that show a voter's alignment such as the campaign slogans of the candidate websites, and further nicknames of the candidates. The list of search terms is: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
{
"text": "#Trump2020, #TrumpPence2020, #Biden2020, #BidenHarris2020, #Kanye2020, #MAGA2020, #BattleForTheSoulOfTheNation, #2020Vision, #VoteRed2020, #VoteBlue2020, Trump,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
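The collection step described above can be illustrated with a short sketch. This is a minimal, hypothetical example using the tweepy client, which the paper does not name (it only states that Twitter API v1.1 was used); the credentials, the per-term item count, and the stored fields are assumptions, and the search-term list is truncated in the extracted text.

```python
import tweepy

# Hypothetical credentials; the paper only states that Twitter API v1.1 was used.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Search terms as listed above (the list is truncated in the extracted text).
SEARCH_TERMS = [
    "#Trump2020", "#TrumpPence2020", "#Biden2020", "#BidenHarris2020",
    "#Kanye2020", "#MAGA2020", "#BattleForTheSoulOfTheNation",
    "#2020Vision", "#VoteRed2020", "#VoteBlue2020", "Trump",
]

tweets = []
for term in SEARCH_TERMS:
    # Standard v1.1 search; in tweepy 4.x the method is named `search_tweets`.
    for status in tweepy.Cursor(api.search, q=term, lang="en",
                                tweet_mode="extended").items(1000):
        tweets.append({"id": status.id_str, "text": status.full_text,
                       "search_term": term})
```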
{
"text": "From the 382.210 tweets, we sampled 3000 tweets for annotation. Given the text of a tweet, we rated the stance towards the targets Trump, Biden, and West in the text. The detected stance can be from one of the following labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2.1"
},
{
"text": "\u2022 Favor: Text argues in favor of the target",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2.1"
},
{
"text": "\u2022 Against: Text argues against the target",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2.1"
},
{
"text": "\u2022 Neither: Target is not mentioned; neither implicitly nor explicitly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2.1"
},
{
"text": "\u2022 Mixed: Text mentions positive as well as negative aspects about the target",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2.1"
},
{
"text": "\u2022 Neutral: Text states facts or recites quotes; unclear, whether text holds any position towards the target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2.1"
},
{
"text": "The default value shown in the annotation environment is Neither.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2.1"
},
{
"text": "The text was further annotated as being hateful and non-hateful. We did not separate if a group or a single person was targeted by hateful language. Further, we adapted the guidelines on hate speech annotation to be able to react to namecalling and down talking of the political opponent. Thus, we rated expressions such as \"Dementia Joe\" and \"DonTheCon\" as hateful/offensive (HOF).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2.1"
},
{
"text": "To evaluate the annotation guidelines (which we make available together with the data) we perform multiple annotation iterations with three annotators. Annotator 1 is a 22 year old male undergraduate student of computational linguistics who speaks German, English, Catalan, and Spanish. Annotator 2 is a 26 year old female undergraduate student of computational linguistics who speaks German and English. Annotator 3 is a 29 year old female graduate student of computational linguistics who speaks German and English. Annotator 1 and 2 annotated 300 tweets in three iterations with 100 tweets per iteration. After each iteration, the annotators discussed the tweets they rated differently and complemented the existing guidelines. Finally, Annotator 2 and 3 annotated 100 tweets with the improved guidelines to check whether the rules are clear and understandable, especially if read for the first time. Table 1 shows the result of Cohen's \u03ba of each iteration. In the first iteration, the agreement for HOF is purely random (\u22120.02\u03ba), the stance annotations show acceptable agreement (.83, .81\u03ba, respectively for Trump and Biden). West has not been mentioned in any of the 100 tweets. In a group discussion to identify the reasons for the substantial lack of agreement for HOF, we developed guidelines which described hateful and offensive speech in more detail and added further examples to our guidelines. We particularly stressed to annotate name-calling as hateful and offensive. This showed success in a second iteration with .42\u03ba for HOF agreement. The scores for Trump and Biden decreased slightly but still represented substantial agreement. We carried out another group discussion to discuss tweets where Annotator 1 and 2 chose different classes. We particularly refined the guidelines for class Neutral mentions and included offensive and hateful abbreviations such as \"POS\" (\"piece of shit\") and \"BS\" (\"bullshit\") which have been missed before. This led to a HOF agreement of .73\u03ba, while the stance agreement remained on similar levels (.81, .88).",
"cite_spans": [],
"ref_spans": [
{
"start": 904,
"end": 911,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.2.2"
},
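As an illustration of the agreement computation, the following minimal sketch shows how Cohen's κ for one iteration could be obtained with scikit-learn; the label sequences are invented stand-ins for the two annotators' ratings of the same tweets, not the actual annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Invented stand-ins for two annotators' stance labels on the same tweets.
annotator_1 = ["Favor", "Against", "Neither", "Favor", "Neutral", "Against"]
annotator_2 = ["Favor", "Against", "Favor", "Favor", "Neutral", "Mixed"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")
```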
{
"text": "As a concluding step, Annotator 2 and 3 rated 100 tweets. The annotators were provided with the guidelines established during the iterations between Annotator 1 and 2. 4 Results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.2.2"
},
{
"text": "We now analyze the corpus for the targets Trump and Biden to answer the question if supporters of Trump (and Pence) use more hateful and offensive speech than supporters of Biden (and Harris). Tables 2 and 3 show the distribution of the classes Favor, Against, Neither, Mixed, Neutral mentions and how often each class was labeled as HOF or Non-HOF (\u00acHOF). The data set is unbalanced: only 11.7% of the tweets are hateful/offensive. Furthermore, there are more tweets labeled as Favor, Against, and Neither than Mixed, or Neutral mentions for target Trump. In case of target Biden, more tweets are labeled as Favor and as Neither than as Against, Mixed, or Neutral mentions. In total, there were only 9 tweets about Kanye West in the annotated data set, which is why we do not present statistics about him.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.1"
},
{
"text": "Did Trump supporters use more hateful and offensive speech than supporters of Biden? A comparison of Tables 2 and 3 suggests that supporters of team Trump use slightly more often harmful and offensive speech with 12.9% than supporters of team Biden, with 11.4%. This indicates that Trump supporters use more hateful speech than supporters of Biden, yet, this difference is only minor. This is arguable a result of the aspect that HOF is also often expressed without naming the target explicitly. Furthermore, given the fact that we added offensive nicknames such as \"Sleepy Joe\" to our search terms, this result is biased.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 115,
"text": "Tables 2 and 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.1"
},
{
"text": "By means of pointwise mutual information we identified the top 10 words that are unlikely to occur in a tweet labeled as Non-Hateful. As Table 4 shows, these words are offensive and promote hate. This list also mirrors the limitations of our search terms as the adjective \"sleepy\" is part of the top 10.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.1"
},
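The PMI-based word ranking described above can be sketched as follows. This is a minimal illustration, assuming tweet-level co-occurrence counts and whitespace tokenization; the paper does not specify its exact estimation or preprocessing, and the variable names in the usage comment are hypothetical.

```python
import math
from collections import Counter

def pmi_ranking(texts, labels, target_label, k=10):
    """Rank words by pointwise mutual information with `target_label`.

    PMI(w, c) = log2(P(w, c) / (P(w) * P(c))), estimated from
    tweet-level co-occurrence counts.
    """
    n = len(texts)
    word_counts = Counter()
    joint_counts = Counter()
    class_count = sum(1 for label in labels if label == target_label)
    for text, label in zip(texts, labels):
        for word in set(text.lower().split()):
            word_counts[word] += 1
            if label == target_label:
                joint_counts[word] += 1
    scores = {
        word: math.log2((joint / n) / ((word_counts[word] / n) * (class_count / n)))
        for word, joint in joint_counts.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# e.g. the 10 words most associated with the HOF label (hypothetical inputs):
# top_hof = pmi_ranking(tweet_texts, hof_labels, target_label="HOF")
```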
{
"text": "Likewise, we identified the top 10 words that are unlikely to appear in a tweet labeled as Favor towards Trump and thus, argue against him. Next to hashtags that express a political preference for Biden, the top 10 list contains words that refer to Trump's taxes and a demand to vote. Similarly, the top 10 words that are unlikely to occur in a tweet labeled as Favoring Biden and therefore express the stance Against him, consist of adjectives Trump mocked him with (creepy, sleepy) as well as a reference to his son Hunter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.1"
},
{
"text": "Who is more targeted by hateful and offensive speech, Biden and the Democratic party or Trump and the Republican Party? We note that 26.7% of the tweets against target Biden contain hateful/offensive language, whereas only 18.5% of the tweets against target Trump are hateful/offensive. Thus, our results suggest that Biden and the Democratic Party are more often targets of hateful and offensive tweets than Trump and the Republican Party.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.1"
},
{
"text": "However, from this analysis we cannot draw that the offensive stems from supporters of the other party. Due to the limitations in our search terms we also note that there might be an unknown correlation of the search terms to HOF which we cannot entirely avoid. Further, these results should be interpreted with a grain of salt, given that the sampling procedure of the Twitter API is not entirely transparent. Table 5 : Precision, Recall, and F 1 of stance detection baseline for targets Trump and Biden",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 418,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.1"
},
{
"text": "Next to the goal to better understand the distribution of hate/offensive speech during the election in 2020, the data set constitutes an interesting resource valuable for the development of automatic detection systems. To support such development, we provide results of a baseline classifier. We used the pretrained BERT base model 1 (Devlin et al., 2019) and its TensorFlow implementation provided by HuggingFace 2 (Wolf et al., 2020) . Our data set was divided into 80% for training and 20% for testing.",
"cite_spans": [
{
"start": 334,
"end": 355,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 416,
"end": 435,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Classification Experiments",
"sec_num": "4.2"
},
{
"text": "Each model was trained with a batch size of 16, a learning rate (Adam) of 5 \u2022 10 \u22125 , a decay of 0.01, a maximal sentence length of 100 and a validation split of 0.2. Further, we set the number of epochs to 10 and saved the best model on the validation set for testing. Table 5 shows the results for stance detection prediction. We observe that not all classes can be predicted equally well. The two best predicted classes for Trump are Neither and Favor with a F 1 score of 0.95 and 0.89, respectively. These scores are followed by class Against with a F 1 score of 0.79. However, our model had difficulties to correctly predict the class Neutral, with a more limited precision and recall (.58 and .49). The class Mixed could not be predicted at all.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 277,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stance Classification Experiments",
"sec_num": "4.2"
},
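A minimal sketch of the described training setup follows, assuming the HuggingFace TensorFlow implementation mentioned above. The placeholder data, the mapping of the stated "decay of 0.01" to the legacy learning-rate decay argument, and the checkpoint callback are assumptions, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

# Placeholders for the 80% training split described above (assumed loaded).
train_texts = ["example tweet about the election"] * 10
train_labels = np.zeros(10, dtype=int)  # 5 classes: Favor/Against/Neither/Mixed/Neutral

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)

# Maximal sentence length of 100, as stated above.
encodings = tokenizer(train_texts, truncation=True, padding="max_length",
                      max_length=100, return_tensors="tf")

# Adam with learning rate 5e-5; treating the stated "decay of 0.01" as the
# legacy learning-rate decay argument is an assumption.
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5, decay=0.01)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Keep the best model on the 0.2 validation split across 10 epochs.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_loss",
    save_best_only=True, save_weights_only=True)

model.fit(dict(encodings), train_labels, batch_size=16, epochs=10,
          validation_split=0.2, callbacks=[checkpoint])
```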
{
"text": "These results only partially resemble for the target Biden: The classes Neither and Favor have the highest F 1 score with 0.96 and 0.91, respectively. In contrast to target Trump, the performance of our model to predict the class Against is much lower (.64 F 1 ). The F 1 score of class Neutral is low again with .59 and class Mixed could not be predicted. We conclude that stance can be detected from tweets. Yet, our results suggest that it Test data Davidson Mandl Ours",
"cite_spans": [
{
"start": 453,
"end": 461,
"text": "Davidson",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Classification Experiments",
"sec_num": "4.2"
},
{
"text": "Train data Table 6 : F1 scores of the hate speech detection baseline model trained and tested on different corpora is more challenging to predict fine-grained stance classes such as Mixed and Neutral mentions than the classes Favor, Against and Neither. This result is, at least partially, a consequence of the data distribution. The Mixed label has very few instances (20+47); the Neutral label is the second most seldomly annotated class, though it is substantially more frequent (341+326).",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stance Classification Experiments",
"sec_num": "4.2"
},
{
"text": "Similar to the stance detection baseline results, we now report results of a classifier (configured the same as the one in Section 4.2). To obtain an understanding how challenging the prediction is on our corpus, and how different the concept of hate/offensive speech is from existing resources, we perform this analysis across a set of corpora, as well as inside of each corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Corpus Hate Speech Detection Experiments",
"sec_num": "4.3"
},
{
"text": "To that end, we chose the following hate speech/offensive speech corpora:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Corpus Hate Speech Detection Experiments",
"sec_num": "4.3"
},
{
"text": "1. Data Set 1 by Davidson et al. (2017) .",
"cite_spans": [
{
"start": 17,
"end": 39,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Corpus Hate Speech Detection Experiments",
"sec_num": "4.3"
},
{
"text": "This corpus contains 24.783 tweets, categorized into into hateful, offensive, and neither. In our study, we we only use two classes, hateful/offensive and non-hateful. Therefore, we conflate the two classes, hateful and offensive, into one. We randomly split their data, available at https://github.com/t-davidson/ hate-speech-and-offensive-language, into 80% for training and 20% for testing. Mandl et al. (2019) . In this study, the authors conducted three classification experiments including a binary one, where 5.852 posts from Twitter and Facebook were classified into hate speech and non-offensive (Sub-task A). From their multi-lingual resource, we only need the English subset. We use the training data available at https://hasocfire.github.io/hasoc/2019/ dataset.html and perform a 80/20% train/test split. Table 6 shows the results for all combinations of training on the data by Davidson et al. (2017) , Mandl et al. (2019) , and ours (presented in this paper). When we only look at the results of the model when trained and tested on subcorpora from the same original source, we observe that there are some noteworthy differences. The recognition of HOF on the data by Davidson et al. (2017) shows a high .98 F 1 (note that this result cannot be compared to their original results, because we conflate two classes). Training and testing our baseline model on the data by Mandl et al. (2019) shows .56 F 1 but with a particularly limited recall. On our corpus, given that it is the smallest one, the model performs still comparably well with .53 F 1 . Precision and recall values are more balanced for the other corpora than for Mandl et al. (2019) . Note that these results are comparably low in comparison to other previously published classification approaches. However, they allow for a comparison of the performances between the different corpora. We particularly observe that the data set size seems to have an impact on the predictive performance.",
"cite_spans": [
{
"start": 394,
"end": 413,
"text": "Mandl et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 891,
"end": 913,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 916,
"end": 935,
"text": "Mandl et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 1182,
"end": 1204,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 1384,
"end": 1403,
"text": "Mandl et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 1641,
"end": 1660,
"text": "Mandl et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 817,
"end": 824,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-Corpus Hate Speech Detection Experiments",
"sec_num": "4.3"
},
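As a sketch of the preprocessing described for Data Set 1, the following assumes the public labeled_data.csv from the linked repository, whose class column encodes 0 = hate speech, 1 = offensive, 2 = neither; the conflation and the 80/20% split follow the description above, while the stratification, the random seed, and the file path are assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Davidson et al. (2017) data from the repository linked above;
# column layout assumed from the public labeled_data.csv.
df = pd.read_csv("labeled_data.csv")

# Conflate hateful (0) and offensive (1) into one HOF class; neither (2) -> non-HOF.
df["hof"] = df["class"].map({0: 1, 1: 1, 2: 0})

# 80/20% train/test split, as described above (stratification/seed are assumptions).
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42,
                                     stratify=df["hof"])
print(len(train_df), len(test_df))
```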
{
"text": "When we move to a comparison of models trained on one corpus and tested on another, we see that the subcorpora created for a binary classifica- tion experiment yield better results. The imbalance of labels caused by the conflation of two classes on the data by Davidson et al. (2017) led to weak predictions on the other subcorpora. Therefore, we conclude that the concept of hate/offensive speech between these different resources is not fully comparable, be it due to different instances, settings or annotators. The development of models that generalize across domains, corpora, and annotation guidelines is challenging.",
"cite_spans": [
{
"start": 261,
"end": 283,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set 2 by",
"sec_num": "2."
},
{
"text": "We now take a closer look at the tweets and their predicted classes to explore why tweets have been misclassified. We show examples in Table 7 for stance classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4"
},
{
"text": "Our model performed well when predicting the class Favor for both targets. The examples in Table 7 show a common pattern, namely that tweets being in favor of the respective target praise target's achievements and contain words of support such as builds, vote and right choice. Additionally, users often complement their tweets with target-related hashtags, including Trump2020, Trump2020LandslideVictory and Biden/Harris to stress their political preference. However, these hashtags can be misleading as they not always express support of the candidate. The 4th example contains the hashtag #Trump2020 and was therefore predicted to be in favor of Trump, while it actually argues against him. In the 5th example, the irony expressed by the quotation marks placed around the word science and the offensive expression BS for \"bullshit\" were not detected.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 98,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4"
},
{
"text": "Supporters of both candidates verbally attack each other over who to vote for and use hashtags and expressions to make the opposite side look poorly. Looking at tweets incorrectly labeled as Against, we see that in case of target Trump the string of insults addressing Biden and Harris possi- The democrats are literally the nazis. If they pack the courts and pass the 25th amendment Joe Biden and Kamala Harris will be in the exact same place that hindenburg and hitler were in. The 25th amendment is almost the same exact law hitler got passed in order to take power. bly confused our baseline and led to a misclassification of the tweet. Turning to Biden, the sentence Joe aka 46 was not detected to be positive and supportive. We also show a set of examples for hate/offensive speech detection in Table 8 . As the first tweet exemplifies, tweets correctly predicted as HOF often contain one or more hate and offensive key words, e.g. Creepy Joe. The first example also wishes Joe Biden to fall ill with Covid-19.",
"cite_spans": [],
"ref_spans": [
{
"start": 801,
"end": 808,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4"
},
{
"text": "However, although the 2nd example seems to contain offensive words such as \"badass\" and \"Suckit\", it is not meant in a hateful way. On the contrary, this tweet uses slang to express admiration and support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4"
},
{
"text": "The 3rd example clearly is hateful, comparing the Democratic Party to the Nazis and the position of Biden and Harris to Hindenburg and Hitler. However, apparently the word Nazis is not sufficient to communicate hate speech, while the other signals in this tweet are presumably infrequent in the corpus as well. These are interesting examples which show that hate/offensive speech detection requires at times world knowledge and common-sense reasoning (which BERT is arguable only capable of to a very limited extent).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4"
},
{
"text": "The results in Table 5 show that the disproportion among the classes Against, Favor, Neither, Mixed and Neutral mentions seen in Tables 2 and 3 are presumably influencing the performance. The classes Mixed and Neutral mentions contain less tweets than the other classes. Consequently, the model did not have the same amount of training data for these two classes and tweets that should be categorized as Neither or Neutral were misclassified. In addition to Mixed and Neutral mentions, the class Against of target Biden is also outweighed by the dominant classes Favor and Neither (see Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 5",
"ref_id": null
},
{
"start": 129,
"end": 143,
"text": "Tables 2 and 3",
"ref_id": "TABREF2"
},
{
"start": 586,
"end": 593,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "When looking at the distribution of hateful and offensive and non-hateful tweets, we see that our data set contains more non-hateful tweets. As a result, the classification is biased. While Davidson et al. (2017) created their data set with keywords from a hate speech lexicon and Mandl et al. (2019) sampled their data with hashtags and keywords for which hate speech can be expected, our data was collected by using, but not limited to, offensive and hateful mentions. Thus, our hate speech data is more imbalanced but provides interesting insights into how people talk politics on Twitter. We assume that our corpus exhibits a more realistic distribution of hate/offensive speech for a particular topic than a subset of already existing resources.",
"cite_spans": [
{
"start": 190,
"end": 212,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 281,
"end": 300,
"text": "Mandl et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "There may be some possible limitations in this study. Using Twitter as data source provides challenges, because tweets contain noise, spelling mistakes and incomplete sentences. Further, the specified search criteria mentioned above might have had an effect on the results. Next to the nicknames Trump uses for his opponents, most of the keywords used to collect tweets refer to political candidates. Mentions of the respective political parties such as \"Democrats\", \"Republicans\" etc. were not included in the search. Yet, during the annotation we realized that it was not possible to differentiate the candidates from their respective parties. Hence, tweets were annotated for political parties and candidates inferring from hashtags such as \"#VoteBlue2020\" that the tweeter argues in favor of Joe Biden.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "In this paper, we have investigated stance detection on political tweets and whether or not supporters of Trump use more hate speech than supporters of Biden (not significantly). We found that manual annotation is possible with acceptable agreement scores, and that automatic stance detection towards political candidates and parties is possible with good performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "The limitations of this study are twofold -on the one side, future work might want to consider to add the nicknames of all main candidates and explicitly include social media posts about the party, not only about the candidate, as we found a separation is often difficult. Further, we did not perform extensive hyperparameter optimization in our neural approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "We suggest that future work invests in developing computational models that work across corpora and are able to adapt to domain and time-specific as well as societal and situational expressions of hate and offensive language. This is required, as our corpus shows that some references to offensive content are realized by domain-specific and societal expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "This might be realized by combining offensive language detection and stance detection in a joint multi-task learning approach, potentially including other aspects like personality traits or specific emotions. We assume that such concepts can benefit from representations in joint models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "https://huggingface.co/bert-base-uncased 2 https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This project has been partially funded by Deutsche Forschungsgemeinschaft (projects SEAT, KL 2869/1-1 and CEAT, KL 2869/1-2). We thank Anne Kreuter and Miquel Luj\u00e1n for fruitful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Simple open stance classification for rumour analysis",
"authors": [
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "31--39",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_005"
]
},
"num": null,
"urls": [],
"raw_text": "Ahmet Aker, Leon Derczynski, and Kalina Bontcheva. 2017. Simple open stance classification for rumour analysis. In Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing, RANLP 2017, pages 31-39, Varna, Bulgaria.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hate speech, machine classification and statistical modelling of information flows on twitter: interpretation and communication for policy decision making",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "Internet",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Burnap and Matthew Williams. 2014. Hate speech, machine classification and statistical mod- elling of information flows on twitter: interpretation and communication for policy decision making. In Internet, Policy and Politics, Oxford, United King- dom.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Twitter: monthly active users worldwide",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Clement",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jessica Clement. 2019. Twitter: monthly active users worldwide. https://www.statista.com/statistics/ 282087/number-of-monthly-active-twitter-users/.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"W"
],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eleventh International Conference on Web and Social Media, ICWSM",
"volume": "",
"issue": "",
"pages": "512--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International Confer- ence on Web and Social Media, ICWSM, pages 512- 515, Montr\u00e9al, Qu\u00e9bec, Canada. AAAI Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Common sense reasoning for detection, prevention, and mitigation of cyberbullying",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Dinakar",
"suffix": ""
},
{
"first": "Birago",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Rosalind",
"middle": [],
"last": "Picard",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM Trans. Interact. Intell. Syst",
"volume": "2",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2362394.2362400"
]
},
"num": null,
"urls": [],
"raw_text": "Karthik Dinakar, Birago Jones, Catherine Havasi, Henry Lieberman, and Rosalind Picard. 2012. Com- mon sense reasoning for detection, prevention, and mitigation of cyberbullying. ACM Trans. Interact. Intell. Syst., 2(3).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A survey on automatic detection of hate speech in text",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Comput. Surv",
"volume": "51",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3232676"
]
},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on au- tomatic detection of hate speech in text. ACM Com- put. Surv., 51(4).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Detecting online hate speech using context aware models",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "260--266",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_036"
]
},
"num": null,
"urls": [],
"raw_text": "Lei Gao and Ruihong Huang. 2017. Detecting on- line hate speech using context aware models. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 260-266, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hate speech dataset from a white supremacy forum",
"authors": [
{
"first": "Ona",
"middle": [],
"last": "De Gibert",
"suffix": ""
},
{
"first": "Naiara",
"middle": [],
"last": "Perez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5102"
]
},
"num": null,
"urls": [],
"raw_text": "Ona de Gibert, Naiara Perez, Aitor Garc\u00eda-Pablos, and Montse Cuadros. 2018. Hate speech dataset from a white supremacy forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 11-20, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Stance detection on tweets: An SVM-based approach",
"authors": [
{
"first": "Dilek",
"middle": [],
"last": "K\u00fc\u00e7\u00fck",
"suffix": ""
},
{
"first": "Fazli",
"middle": [],
"last": "Can",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilek K\u00fc\u00e7\u00fck and Fazli Can. 2018. Stance detec- tion on tweets: An SVM-based approach. CoRR, abs/1803.08910.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Stance prediction for russian: Data and analysis",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Lozhnikov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Mazzara",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of 6th International Conference in Software Engineering for Defence Applications",
"volume": "",
"issue": "",
"pages": "176--186",
"other_ids": {
"DOI": [
"10.1007/978-3-030-14687-0_16"
]
},
"num": null,
"urls": [],
"raw_text": "Nikita Lozhnikov, Leon Derczynski, and Manuel Maz- zara. 2020. Stance prediction for russian: Data and analysis. In Proceedings of 6th International Conference in Software Engineering for Defence Ap- plications, pages 176-186, Cham. Springer Interna- tional Publishing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in indo-european languages",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "Sandip",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Daksh",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Mohana",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "Chintak",
"middle": [],
"last": "Mandlia",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Patel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19",
"volume": "",
"issue": "",
"pages": "14--17",
"other_ids": {
"DOI": [
"10.1145/3368567.3368584"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Mandl, Sandip Modha, Prasenjit Majumder, Daksh Patel, Mohana Dave, Chintak Mandlia, and Aditya Patel. 2019. Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in indo-european languages. In Pro- ceedings of the 11th Forum for Information Re- trieval Evaluation, FIRE '19, page 14-17, New",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Association for Computing Machinery",
"authors": [
{
"first": "N",
"middle": [
"Y"
],
"last": "York",
"suffix": ""
},
{
"first": "Usa",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Cyber-aggression, cyberbullying, and cyber-grooming: A survey and research challenges",
"authors": [
{
"first": "Miljana",
"middle": [],
"last": "Mladenovi\u0107",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "O\u0161mjanski",
"suffix": ""
},
{
"first": "Sta\u0161a Vuji\u010di\u0107",
"middle": [],
"last": "Stankovi\u0107",
"suffix": ""
}
],
"year": 2021,
"venue": "ACM Comput. Surv",
"volume": "54",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3424246"
]
},
"num": null,
"urls": [],
"raw_text": "Miljana Mladenovi\u0107, Vera O\u0161mjanski, and Sta\u0161a Vuji\u010di\u0107 Stankovi\u0107. 2021. Cyber-aggression, cyberbullying, and cyber-grooming: A survey and research challenges. ACM Comput. Surv., 54(1).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "SemEval-2016 task 6: Detecting stance in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "31--41",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1003"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31- 41, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stance and sentiment in tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Trans. Internet Technol",
"volume": "17",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3003433"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Trans. Internet Technol., 17(3).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hate speech. Encyclopedia of the American Constitution",
"authors": [
{
"first": "John",
"middle": [
"T"
],
"last": "Nockelby",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "3",
"issue": "",
"pages": "1277--1279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John T. Nockelby. 2000. Hate speech. Encyclopedia of the American Constitution, 3:1277-1279.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Measuring the reliability of hate speech annotations: The case of the european refugee crisis",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Ro\u00df",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rist",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Cabrera",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Kurowsky",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"Maximilian"
],
"last": "Wojatzki",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.17185/duepublico/42132"
]
},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Ro\u00df, Michael Rist, Guillermo Carbonell, Ben- jamin Cabrera, Nils Kurowsky, and Michael Max- imilian Wojatzki. 2016. Measuring the reliability of hate speech annotations: The case of the euro- pean refugee crisis. https://duepublico2.uni-due.de/ receive/duepublico mods 00042132.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International Workshop on Natural Language Processing for So- cial Media, pages 1-10, Valencia, Spain. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recognizing stances in ideological on-line debates",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text",
"volume": "",
"issue": "",
"pages": "116--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran and Janyce Wiebe. 2010. Rec- ognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Genera- tion of Emotion in Text, pages 116-124, Los Ange- les, CA. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Smokey: Automatic recognition of hostile messages",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Spertus",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence, AAAI'97/IAAI'97",
"volume": "",
"issue": "",
"pages": "1058--1065",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Spertus. 1997. Smokey: Automatic recognition of hostile messages. In Proceedings of the Four- teenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence, AAAI'97/IAAI'97, page 1058-1065. AAAI Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Twitter, trump, and the base: A shift to a new form of presidential talk?",
"authors": [
{
"first": "Galen",
"middle": [],
"last": "Stolee",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Caton",
"suffix": ""
}
],
"year": 2018,
"venue": "Signs and Society",
"volume": "6",
"issue": "",
"pages": "147--165",
"other_ids": {
"DOI": [
"10.1086/694755"
]
},
"num": null,
"urls": [],
"raw_text": "Galen Stolee and Steve Caton. 2018. Twitter, trump, and the base: A shift to a new form of presidential talk? Signs and Society, 6:147-165.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Get out the vote: Determining support or opposition from congressional floor-debate transcripts",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "327--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceed- ings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 327-335, Sydney, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Predicting elections with twitter: What 140 characters reveal about political sentiment",
"authors": [
{
"first": "Andranik",
"middle": [],
"last": "Tumasjan",
"suffix": ""
},
{
"first": "Timm",
"middle": [],
"last": "Sprenger",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Sandner",
"suffix": ""
},
{
"first": "Isabell",
"middle": [],
"last": "Welpe",
"suffix": ""
}
],
"year": 2010,
"venue": "Fourth International AAAI Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "178--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andranik Tumasjan, Timm Sprenger, Philipp Sandner, and Isabell Welpe. 2010. Predicting elections with twitter: What 140 characters reveal about political sentiment. In Fourth International AAAI Conference on Weblogs and Social Media, pages 178-185.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Detecting hate speech on the world wide web",
"authors": [
{
"first": "William",
"middle": [],
"last": "Warner",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Second Workshop on Language in Social Media",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In Proceedings of the Second Workshop on Language in Social Me- dia, pages 19-26, Montr\u00e9al, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "SemEval-2020 task 12: Multilingual offensive language identification in social media (Offen-sEval 2020)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "\u00c7a\u011fr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1425--1447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 task 12: Multilingual offen- sive language identification in social media (Offen- sEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425- 1447, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "Pence, Biden, Harris, Kanye, President, Sleepy Joe, Slow Joe, Phony Kamala, Monster Kamala.After removing duplicate tweets, the final corpus consists of 382.210 tweets. From these, there are 220.941 that contain Trump related hashtags and mentions, 230.629 tweets that carry hashtags and mentions associated with Biden and 1.412 tweets with hashtags and mentions related to Kanye West.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td>Stance</td><td/></tr><tr><td colspan=\"3\">Iteration Trump Biden West</td><td>HOF</td></tr><tr><td>1 A1+A2</td><td>0.83</td><td colspan=\"2\">0.81 0.00 \u22120.02</td></tr><tr><td>2 A1+A2</td><td>0.78</td><td>0.78 0.75</td><td>0.42</td></tr><tr><td>3 A1+A2</td><td>0.81</td><td>0.88 0.00</td><td>0.73</td></tr><tr><td>4 A2+A3</td><td>0.61</td><td>0.76 0.75</td><td>0.62</td></tr><tr><td colspan=\"4\">Table 1: Cohen's \u03ba for stance and hate/offensive speech</td></tr><tr><td>(HOF).</td><td/><td/><td/></tr></table>"
},
"TABREF2": {
"text": "Distribution of tweets about target Trump",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF3": {
"text": "shows that the inter-annotator agreement for HOF is 0.62, for tar-",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Class</td><td colspan=\"2\">HOF \u00acHOF</td><td/><td>%HOF</td></tr><tr><td>Favor</td><td>141</td><td colspan=\"2\">1095 1236</td><td>11.4</td></tr><tr><td>Against</td><td>108</td><td>296</td><td>404</td><td>26.7</td></tr><tr><td>Neither</td><td>87</td><td>900</td><td>987</td><td>8.8</td></tr><tr><td>Mixed</td><td>6</td><td>41</td><td>47</td><td>12.8</td></tr><tr><td>Neutral</td><td>10</td><td>316</td><td>326</td><td>3.1</td></tr><tr><td>Total</td><td>352</td><td colspan=\"2\">2648 3000</td><td>11.7</td></tr></table>"
},
"TABREF4": {
"text": "Distribution of tweets about target Biden",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>get Trump 0.61, for target Biden 0.76 and for target</td></tr><tr><td>West 0.75. These scores indicate substantial agree-</td></tr><tr><td>ment between Annotator 2 and 3 based on com-</td></tr><tr><td>prehensive guidelines. The final annotation of the</td></tr><tr><td>overall data set has been performed by Annotator 2.</td></tr></table>"
},
"TABREF6": {
"text": "Results of the pointwise mutual information calculation",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF7": {
"text": "Against .77 .81 .79 .67 .62 .64 Favor .88 .90 .89 .90 .93 .91 Mixed .00 .00 .00 .00 .00 .00 Neither .95 .95 .95 .93 .99 .96 Neutral .58 .49 .53 .59 .58 .59",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td colspan=\"3\">Target Trump</td><td colspan=\"3\">Target Biden</td></tr><tr><td>Class</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr></table>"
},
"TABREF8": {
"text": "WATCH. IT DID NOT HAVE TO BE LIKE THIS. #BidenHar-ris2020 will take steps to make us safe. Trump is happy to let us burn, and so he",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"4\">Target Pred Gold Text</td></tr><tr><td colspan=\"2\">Trump A</td><td>A</td><td>TWO HUNDRED THOUSAND PEOPLE HAVE DIED OF #COVID19 UN-</td></tr><tr><td/><td/><td/><td>DER Trump's is a #weakloser #VoteBidenHarrisToSaveAmerica #RepublicansForBiden</td></tr><tr><td>Trump</td><td>F</td><td>F</td><td>Trump is making all types of economic/peace deals. While Democrats are</td></tr><tr><td/><td/><td/><td>creating mobs tearing down historical statutes and destroying WHOLE cities.</td></tr><tr><td/><td/><td/><td>It is a NO brainer on who to vote for in 2020. Trump builds! Democrats</td></tr><tr><td/><td/><td/><td>DESTROY! #Trump2020. #Trump2020LandslideVictory</td></tr><tr><td colspan=\"2\">Trump A</td><td>F</td><td>President Trump please don't let this son of a bitch crazy creepy pedophile</td></tr><tr><td/><td/><td/><td>motherfucker of Joe Biden and his brown paper bag bitch of Kamala Harris win</td></tr><tr><td/><td/><td/><td>this election do not let them win.</td></tr><tr><td>Trump</td><td>F</td><td>A</td><td>#Trump2020? Wish full recovery to see him ask forgiveness to the country for</td></tr><tr><td/><td/><td/><td>his incompetence and lack of respect for the American people during Covid-19</td></tr><tr><td/><td/><td/><td>crisis.#VictoryRoad to #Biden/Harris</td></tr><tr><td>Biden</td><td>A</td><td>A</td><td>Joe Biden is a weak weak man in many ways. Jimmy Carter by half. 1/2 of</td></tr><tr><td/><td/><td/><td>America literally can't stand Kamala today. Women will hate on her viciously.</td></tr><tr><td/><td/><td/><td>Enjoy the shit sandwich.</td></tr><tr><td>Biden</td><td>F</td><td>F</td><td>Kamala was my first choice, but I agree at this moment in time. Joe Biden is the</td></tr><tr><td/><td/><td/><td>right choice. We are lucky that he is willing to continue to serve our country.</td></tr><tr><td/><td/><td/><td>When I voted for Biden/Harris I felt good, I felt hopeful, I know this is the right</td></tr><tr><td/><td/><td/><td>team to recover our country/democracy</td></tr><tr><td>Biden</td><td>A</td><td>F</td><td>@KamalaHarris @JoeBiden Dominate and Annihilate trump, Joe aka 46</td></tr><tr><td/><td/><td/><td>#BidenHarris2020 #WinningTeam #PresidentialDebate #TrumpTaxReturns</td></tr><tr><td/><td/><td/><td>#TrumpHatesOurMilitary #TrumpKnew #TrumpLiedPeopleDied GO JOE</td></tr><tr><td>Biden</td><td>F</td><td>A</td><td>While we are here, this type of BS is what Kamala Harris and Joe Biden call</td></tr><tr><td/><td/><td/><td>\"science\".</td></tr></table>"
},
"TABREF9": {
"text": "Examples of correct and incorrect predictions of favor (F) and against (A) stance in the tweets.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF10": {
"text": "Harris staffers have covid-19. Let's hope at least one of them has been recently sniffed by Creepy Joe. HOF \u00acHOF He's a badass!!! #Trump2020 #Suckit #Winning \u00acHOF HOF",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Pred</td><td>Gold</td><td>Text</td></tr><tr><td>HOF</td><td>HOF</td><td>Two Kamala</td></tr></table>"
},
"TABREF11": {
"text": "Examples of correct and incorrect predictions for hateful and offensive speech in the tweets.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}