{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:47.301744Z"
},
"title": "Distinguishing In-Groups and Onlookers by Language Use",
"authors": [
{
"first": "Joshua",
"middle": [
"R"
],
"last": "Minot",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Milo",
"middle": [
"Z"
],
"last": "Trujillo",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Samuel",
"middle": [
"F"
],
"last": "Rosenblatt",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Guillermo",
"middle": [],
"last": "De",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Anda",
"middle": [],
"last": "J\u00e1uregui",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Emily",
"middle": [],
"last": "Moog",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Briane",
"middle": [],
"last": "Paul",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "V",
"middle": [],
"last": "Samson",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Laurent",
"middle": [],
"last": "H\u00e9bert-Dufresne",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Allison",
"middle": [
"M"
],
"last": "Roth",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Inferring group membership of social media users is of high interest in many domains. Group membership is typically inferred via network interactions with other members, or by the usage of in-group language. However, network information is incomplete when users or groups move between platforms, and ingroup keywords lose significance as public discussion about a group increases. Similarly, using keywords to filter content and users can fail to distinguish between the various groups that discuss a topic-perhaps confounding research on public opinion and narrative trends. We present a classifier intended to distinguish members of groups from users discussing a group based on contextual usage of keywords. We demonstrate the classifier on a sample of community pairs from Reddit and focus on results related to the COVID-19 pandemic.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Inferring group membership of social media users is of high interest in many domains. Group membership is typically inferred via network interactions with other members, or by the usage of in-group language. However, network information is incomplete when users or groups move between platforms, and ingroup keywords lose significance as public discussion about a group increases. Similarly, using keywords to filter content and users can fail to distinguish between the various groups that discuss a topic-perhaps confounding research on public opinion and narrative trends. We present a classifier intended to distinguish members of groups from users discussing a group based on contextual usage of keywords. We demonstrate the classifier on a sample of community pairs from Reddit and focus on results related to the COVID-19 pandemic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Online communities today have unprecedented power to impact the course of disease spread (Prandi and Primiero, 2020; Armitage, 2021) , sway elections (Bovet and Makse, 2019; Persily, 2017) , and manipulate global markets (Anand and Pathak, 2022) . However, studies of online communities are often limited to single platforms due, in part, to the * These authors contributed equally to this work fact that the overlap in users across platforms is never explicitly known or because user networks and user behavior may differ across platforms (Hall et al., 2018; Trujillo et al., 2021; Grange, 2018) . Nevertheless, there are some exceptions (inter alia (Yarchi et al., 2021; Alatawi et al., 2021; Horawalavithana et al., 2019) ) and account mapping is an area of active research (inter alia (Chen et al., 2020) ).",
"cite_spans": [
{
"start": 89,
"end": 116,
"text": "(Prandi and Primiero, 2020;",
"ref_id": "BIBREF27"
},
{
"start": 117,
"end": 132,
"text": "Armitage, 2021)",
"ref_id": "BIBREF6"
},
{
"start": 150,
"end": 173,
"text": "(Bovet and Makse, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 174,
"end": 188,
"text": "Persily, 2017)",
"ref_id": "BIBREF26"
},
{
"start": 221,
"end": 245,
"text": "(Anand and Pathak, 2022)",
"ref_id": "BIBREF4"
},
{
"start": 540,
"end": 559,
"text": "(Hall et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 560,
"end": 582,
"text": "Trujillo et al., 2021;",
"ref_id": "BIBREF32"
},
{
"start": 583,
"end": 596,
"text": "Grange, 2018)",
"ref_id": "BIBREF14"
},
{
"start": 651,
"end": 672,
"text": "(Yarchi et al., 2021;",
"ref_id": "BIBREF37"
},
{
"start": 673,
"end": 694,
"text": "Alatawi et al., 2021;",
"ref_id": "BIBREF1"
},
{
"start": 695,
"end": 724,
"text": "Horawalavithana et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 789,
"end": 808,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A powerful alternative to account mapping is to track language rather than users, which only requires data on the content of the platform and not necessarily their user base. There remain important caveats to this approach, however: 1) shifts in language can be hard to differentiate from shifts in user demographics and 2) language about a group of interest can look very similar to the language of the group itself. This is especially true if in-group vocabulary is used by outsiders when discussing the group, or if the in-group's vocabulary percolates into the general lexicon. An example of such language spread involves the word \"incel\", which was popularized in a specific online community before becoming more widely known.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, we address the second problem of distinguishing in-group members from onlookers engaged in discussion about the in-group, based on language alone. We introduce a group-classifier, which labels users as being in a group or discussing a group. We train our classifier on Reddit, an online forum broken into explicit sub-communities (i.e., \"subreddits\"). We identify pairs of subreddits, where one subreddit focuses on a particular topic (e.g., COVID conspiracies), and a second subreddit of \"onlookers\" discusses the first community or topic. Consistent user participation in a subreddit implies group membership, providing training labels; we filter outlier users who participate in or \"troll\" their chosen subreddit's counterpart. Our classifier attempts to distinguish users from each community based on their usage of topic words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions in this piece are focused on two main points:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We propose a framing for in-group and onlooker discussion communities and discuss the value of differentiating between them in downstream analyses. This point is especially important for future work on cross-platform community activity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We collect a novel data set of in-group and onlooker subreddit pairs and present a baseline classification pipeline to demonstrate the feasibility of separating groups of users accounts based on the content of their posts. We go on to present preliminary results on how this automatic labelling of user accounts may affect downstream analyses relative to the ground truth data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this manuscript is organized as follows: in Section 2 we provide an overview of prior work, mainly in the complimentary spaces of stance detection and counter speech. In Section 3 we outline our methods, including the collection of a novel dataset of subreddit pairs. In Section 4 we present the results from our in-group and onlooker classifier along with the impact of automatic labelling on resulting language distributions. We discuss the implications of our work in Section 5 and concluding remarks in Section 6. Finally, in Section 7 we suggest areas for future work which could build upon our in-group and onlooker framing, improve our classification pipeline, and address broader research questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We classify authors as being \"in a group\", or \"discussing a group\", not necessarily in an adversarial way. This closely resembles stance detection (K\u00fc\u00e7\u00fck and Can, 2020; . Research involving stance detection may be divided into two main categories 1. Predicting the likelihood of a rumor being true (i.e., rumor detection) by examining whether the stance of posts is supporting, refuting, commenting on, or questioning the rumor (Zubiaga et al., 2016 (Zubiaga et al., , 2018 Hardalov et al., 2021) .",
"cite_spans": [
{
"start": 147,
"end": 168,
"text": "(K\u00fc\u00e7\u00fck and Can, 2020;",
"ref_id": "BIBREF21"
},
{
"start": 428,
"end": 449,
"text": "(Zubiaga et al., 2016",
"ref_id": "BIBREF39"
},
{
"start": 450,
"end": 473,
"text": "(Zubiaga et al., , 2018",
"ref_id": "BIBREF38"
},
{
"start": 474,
"end": 496,
"text": "Hardalov et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "2. Assessing whether the stance of a post is \"pro\", \"against\", or \"neither\" with respect to any given subject (Anand et al., 2011; Augenstein et al., 2016; Joshi et al., 2016; Abercrombie and Batista-Navarro, 2018; .",
"cite_spans": [
{
"start": 110,
"end": 130,
"text": "(Anand et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 131,
"end": 155,
"text": "Augenstein et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 156,
"end": 175,
"text": "Joshi et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 176,
"end": 214,
"text": "Abercrombie and Batista-Navarro, 2018;",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "In some cases, manually labelled datasets are used to evaluate the quality of stance detection pipelines (Joseph et al., 2021) or train stance classifiers using supervised learning (M\u00f8nsted and Lehmann, 2022) .",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "(Joseph et al., 2021)",
"ref_id": null
},
{
"start": 181,
"end": 208,
"text": "(M\u00f8nsted and Lehmann, 2022)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "Similar to the latter category of stance detection, topic-dependent argument classification in argument mining also parallels our classification scheme, as it may work to evaluate whether a sentence argues for a topic, argues against a topic, or is not an argument (Mayer et al., 2018; Reimers et al., 2019; Lawrence and Reed, 2020) .",
"cite_spans": [
{
"start": 265,
"end": 285,
"text": "(Mayer et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 286,
"end": 307,
"text": "Reimers et al., 2019;",
"ref_id": null
},
{
"start": 308,
"end": 332,
"text": "Lawrence and Reed, 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "\"Perspective identification\" works to assess an author's point of view, e.g., classifying individuals as \"democrats\" or \"republicans\" based the content of their post (Lin et al., 2006; Wong et al., 2016; Sobhani, 2017; Bhatia and Deepak, 2018) . Our work also relates to the automated identification of \"counter-speech\", in which hateful or uncivil speech is countered in order to establish more civil discourse (Wright et al., 2017; He et al., 2021) .",
"cite_spans": [
{
"start": 166,
"end": 184,
"text": "(Lin et al., 2006;",
"ref_id": "BIBREF23"
},
{
"start": 185,
"end": 203,
"text": "Wong et al., 2016;",
"ref_id": "BIBREF35"
},
{
"start": 204,
"end": 218,
"text": "Sobhani, 2017;",
"ref_id": "BIBREF30"
},
{
"start": 219,
"end": 243,
"text": "Bhatia and Deepak, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 412,
"end": 433,
"text": "(Wright et al., 2017;",
"ref_id": "BIBREF36"
},
{
"start": 434,
"end": 450,
"text": "He et al., 2021)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "Our work is similar to the form of stance detection that evaluates \"pro\", \"anti\", or \"neither\" attitudes, but the problems of stance detection tend to assume that any discussion about a group are adversarial. However, the problem of distinguishing the language about a group from language of the group is much more general, as people discussing an emerging subculture do not necessarily oppose it. For example, onlookers may talk about nonpolitical groups formed around new music scenes, small social movements or communities surrounding specific activities without holding opposing views to these groups. Political or not, identifying these onlookers can be of critical importance when studying a specific subculture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "Reddit partitions content into \"subreddits\": forums dedicated to a particular topic, with individual community guidelines and moderation policies. We identified seven (7) pairs of subreddits where one subreddit was focused on a highly-specific topic and another subreddit was dedicated to discussion about the first community. We selected clearly distinguishable communities that formed pairs of in-group and onlooking group subreddits. For example, r/NoNewNormal is a COVID-conspiracy and anti-vaccination group, while r/CovIdiots is dedicated to discussing anti-vaccination and COVID conspiracy theories (see Fig. 1 for an overview of 2-gram distributions for these subreddits). We selected this pair as our main case study because of the timeliness of the COVID-19 topic and the volume of conversation in each community. Partially owing to the contentious nature of the communities we were interested in, many of the subreddits we examined had previously been banned. Since data from banned subreddits remains available (Baumgartner et al., 2020) , this did not inhibit our study or reproducibility.",
"cite_spans": [
{
"start": 1023,
"end": 1049,
"text": "(Baumgartner et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 611,
"end": 617,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data Selection",
"sec_num": "3.1"
},
{
"text": "Relationships between the primary community and the onlooking community were typically antagonistic. However, this does not mean that the results from standard sentiment analysis would have been able to correctly classify utterances from each group. For example, the r/NoNewNormal community may express negative opinions about vaccines or masking mandates, while r/CovIdiots may express positive sentiment about both topics, but negative sentiment about the opinions held by members of r/NoNewNormal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection",
"sec_num": "3.1"
},
{
"text": "For some of our subreddit pairs, the onlooker subreddit was created specifically to discuss the in-group subreddit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection",
"sec_num": "3.1"
},
{
"text": "For example, r/TheBluePill was created in response to r/TheRedPill. For other pairs, both subreddits discussed the same topic from different viewpoints but were not directly connected. For example, r/ProtectAndServe is a subreddit populated by current and former law enforcement officers, while r/Bad_Cop_No_Donut is a subreddit dedicated to the criticism of law enforcement, but it is not specifically a criticism of r/ProtectAndServe itself. Including both types of subreddit pairs allowed us to measure the effectiveness of our classifier on communities with varying degrees of similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection",
"sec_num": "3.1"
},
{
"text": "The following are qualitative descriptions of each subreddit pair we examined. The size of each subreddit corpus, in terms of users and comments, as well as the mean comment score on each subreddit, can be found in the appendix (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 237,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/NoNewNormal and r/CovIdiots r/NoNewNormal self-described as discussing \"concerns regarding changes in society related to the coronavirus (COVID-19) pandemic, described by some as a 'new normal', and opposition to [those societal changes].\" Most posts focused on perceived government overreach and fear-mongering. Reddit banned the subreddit on September 1st, 2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/CovIdiots is dedicated to \"social shaming\" of covid conspiracy theorists, \"anti-maskers,\" and \"anti-vaxxers.\" r/TheRedPill and r/TheBluePill r/TheRedPill is a \"male dating strategy\" subreddit, commonly associated with extreme misogyny and a broader collection of \"Manosphere\" online communities including incels, men's rights activists, and pick up artists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/TheBluePill is a satirical subreddit targeting content from r/TheRedPill. r/BigMouth and r/BanBigMouth r/BigMouth is an online fan community that discusses the Netflix television series, \"Big Mouth.\" The show often features coming of age topics, including puberty and teen sexuality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/BanBigMouth was a community focused on associating the TV show with pedophilia and child grooming, and petitioning for the show to be discontinued and removed. Reddit banned the subreddit in June, 2021 for promoting hate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/SuperStraight and r/SuperStraightPhobic r/SuperStraight was an anti-trans subreddit that defined \"Super Straight\" as heterosexual individuals who were not attracted to trans people. Reddit banned the subreddit for promoting hate towards marginalized groups in March, 2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/SuperStraightPhobic was an antagonistic subreddit critiquing the users, posts, and intentions of the r/SuperStraight subreddit. It was banned shortly after r/SuperStraight. r/ProtectAndServe and r/Bad_Cop_No_Donut r/ProtectAndServe is self-described as \"a place where the law enforcement professionals of Reddit can communicate with each other and the general public.\" Users who submit documents proving their active law enforcement status have identifying labels next to their usernames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "Bad_Cop_No_Donut is a subreddit for documenting law enforcement abuse of power and misconduct. Most posts are links to news articles, while comments discuss article content and general police behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/LatterDaySaints and r/ExMormon r/LatterDaySaints is an unofficial subreddit for members of the Church of Latter-Day Saints. While non-members of the church are permitted to ask questions and engage in conversation, criticizing church doctrine, policy, or leadership is forbidden, and the subreddit is heavily moderated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/ExMormon is a subreddit for former members of the Mormon church to discuss their experiences. Posts are typically highly critical of the church.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/vegan and r/antivegan r/vegan is a broad vegan community, with topics ranging from cooking tips, to animal cruelty, environmental impacts of meat consumption, and social challenges with veganism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "r/antivegan is ideologically opposed to veganism. Much of the subreddit's content is satirical, or critical discussion about the actions of perceived vegan activists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subreddits Chosen",
"sec_num": "3.2"
},
{
"text": "For each pair of subreddits, we first chose an \"ending date\" for data collection: If either subreddit was banned prior to the start of our study, we used the earliest ban-date as our ending date. Otherwise, we used the date of our data download. We then downloaded all comments made in the subreddit for one year prior to the ending date, using pushshift.io, an archive of all public Reddit posts and comments which is frequently used by researchers (Baumgartner et al., 2020) . We then filtered out comments made by bot users, using a bot list provided by (Trujillo et al., 2021) .",
"cite_spans": [
{
"start": 450,
"end": 476,
"text": "(Baumgartner et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 557,
"end": 580,
"text": "(Trujillo et al., 2021)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.3"
},
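As a concrete illustration of this collection step, the following is a minimal sketch, not the authors' pipeline; the Pushshift endpoint and parameter names ("subreddit", "before", "after", "size") are assumptions about the public comment-search API, and the bot list is treated as a plain set of usernames.

```python
import time
import requests

PUSHSHIFT_URL = "https://api.pushshift.io/reddit/search/comment/"  # assumed public endpoint

def fetch_comments(subreddit: str, start_utc: int, end_utc: int):
    """Page backwards through all comments posted to `subreddit` in [start_utc, end_utc)."""
    comments, before = [], end_utc
    while True:
        resp = requests.get(
            PUSHSHIFT_URL,
            params={"subreddit": subreddit, "after": start_utc,
                    "before": before, "size": 100, "sort": "desc"},
        )
        batch = resp.json().get("data", [])
        if not batch:
            return comments
        comments.extend(batch)
        before = batch[-1]["created_utc"]  # continue from the oldest comment seen so far
        time.sleep(1)  # be polite to the archive

def remove_bots(comments, bot_usernames):
    """Drop comments whose author appears in a known bot list (e.g., Trujillo et al., 2021)."""
    return [c for c in comments if c.get("author") not in bot_usernames]
```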
{
"text": "We anecdotally observed users from some of our selected subreddits \"raiding\" other selected subreddits. For example, users from subreddits opposed to the r/NoNewNormal COVID-conspiracy group sometimes harassed users in r/NoNewNormal, and vice-versa. We did not want these harassmentcomments to bias our text-analysis, so we filtered out all users who had an average comment-score less than unity for their comments in the subreddit. In other words, we only kept comments from users that the community did not strongly disagree with. This did not filter out coordinated attacks, where many members of one community raided another, upvoted their raiding comments, and downvoted the in-community comments. However, this type of attack (often referred to as \"brigading\") is a bannable offense on Reddit, and we did not observe it in our dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.3"
},
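The score-based filter described above amounts to a per-user aggregation; a minimal pandas sketch (with illustrative column names, not the authors' code) might look like:

```python
import pandas as pd

# Toy comment table; in practice one row per downloaded comment.
comments = pd.DataFrame({
    "author":    ["a", "a", "b", "b", "c"],
    "subreddit": ["NoNewNormal"] * 5,
    "score":     [3, 5, -4, 0, 1],
})

# Mean comment score per user within their assigned subreddit.
mean_score = comments.groupby(["subreddit", "author"])["score"].transform("mean")

# Keep comments only from users whose average score is at least 1 ("unity"),
# i.e. users the community did not strongly disagree with.
filtered = comments[mean_score >= 1]
print(filtered["author"].unique())  # users "a" and "c" survive the filter
```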
{
"text": "To compare the n-gram distributions of pairs of subreddits we used rank-turbulence divergence (RTD) (Dodds et al., 2020). We used RTD to both summarize overall divergence and highlight specific n-grams that contributed most to this divergence value. We found RTD to be an effective choice when making more nuanced comparisons between the disjoint distributions of subreddit pairs. It avoids construction of the mixed-distribution found in other divergence measures-such as Jensen-Shannon divergence (JSD)-which may be less effective at highlighting salient terms with the subreddit-scale distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining In-Group Vocabulary",
"sec_num": "3.4"
},
{
"text": "The rank-turbulence divergence between two sets, \u2126 1 and \u2126 2 , is calculated as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining In-Group Vocabulary",
"sec_num": "3.4"
},
{
"text": "D R \u03b1 (\u2126 1 ||\u2126 2 ) = \u03b4D R \u03b1,\u03c4 = \u03b1 + 1 \u03b1 \u03c4 1 r \u03b1 \u03c4,1 \u2212 1 r \u03b1 \u03c4,2 1/(\u03b1+1) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining In-Group Vocabulary",
"sec_num": "3.4"
},
{
"text": "where r \u03c4,s is the rank of element \u03c4 (n-grams in our case) in system s and \u03b1 is a tunable parameter that affects the impact of starting and ending ranks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining In-Group Vocabulary",
"sec_num": "3.4"
},
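The following is a minimal sketch of this computation, a direct transcription of the formula above rather than the reference implementation of Dodds et al. (2020); the handling of n-grams missing from one corpus (tied last rank) and the omission of the normalization constant are simplifying assumptions.

```python
from collections import Counter
from scipy.stats import rankdata

def rank_turbulence_divergence(counts_1, counts_2, alpha=1/3):
    """Rank-turbulence divergence between two n-gram count distributions.

    Returns the total divergence and each n-gram's contribution to it.
    """
    vocab = sorted(set(counts_1) | set(counts_2))
    # Rank 1 = most frequent; ties (including zero counts) get averaged ranks.
    r1 = rankdata([-counts_1.get(w, 0) for w in vocab])
    r2 = rankdata([-counts_2.get(w, 0) for w in vocab])
    prefactor = (alpha + 1) / alpha
    contributions = {
        w: prefactor * abs(1 / ra**alpha - 1 / rb**alpha) ** (1 / (alpha + 1))
        for w, ra, rb in zip(vocab, r1, r2)
    }
    return sum(contributions.values()), contributions

# Toy example comparing two small 1-gram distributions.
total, per_gram = rank_turbulence_divergence(
    Counter("mask mandate lockdown lockdown vaccine".split()),
    Counter("covidiot karen mask stupid stupid".split()),
)
```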
{
"text": "We used a divergence-of-divergence metric (RTD 2 ) to identify n-grams that contributed to disagreement between base-divergence results derived from n-gram distributions. More specifically, we ranked the RTD values calculated from the ranks of the RTD contributions to divergence results for ground truth and predicted distributions (using our classifiers). Said another way, in cases where ngrams had high RTD 2 values, those n-grams would either be over-or under-emphasized in the data re- The central diamond shaped plot shows a rank-rank histogram for 1-grams appearing in each subreddit. The horizontal bar chart on the right shows the individual contribution of each 1-gram to the overall rank-turbulence divergence value (D R 1/3 ). The 3 bars under \"Balances\" represent the total volume of 1-gram occurring in each subreddit, the percentage of all unique words we saw in each subreddit, and the percentage of words that we saw in a subreddit that were unique to that subreddit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining In-Group Vocabulary",
"sec_num": "3.4"
},
{
"text": "sulting from our classification pipeline when compared with the ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining In-Group Vocabulary",
"sec_num": "3.4"
},
{
"text": "We inferred membership of individual users in ingroup or onlooker subreddits using two binary classification models. These models were applied to the entire concatenated comment history of users for a given subreddit. In addition to the data filtering described in Section 3.3, we removed users whose concatenated comment histories contained fewer than 10 1-grams. In order to investigate the effect of comment length on classification performance, we created a second training and evaluation data set-referred to as the \"threshold\" data setwith users whose comment histories contained at least 100 1-grams and who made at least 10 comments on their assigned subreddit. Due to the large class imbalance in most subreddit pairings, we under-sampled the majority class to rebalance the training and testing data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In-group and out-group prediction",
"sec_num": "3.5"
},
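The filtering and rebalancing steps above can be sketched as follows; the user-level field names are illustrative assumptions, not the authors' implementation.

```python
import pandas as pd

def build_user_dataset(users: pd.DataFrame, threshold: bool = False, seed: int = 0):
    """`users` holds one row per account: concatenated 'text', 'n_onegrams',
    'n_comments', and a binary 'label' (in-group vs. onlooker)."""
    # Base data set: drop users with fewer than 10 1-grams in their history.
    users = users[users["n_onegrams"] >= 10]
    if threshold:
        # "Threshold" data set: at least 100 1-grams and at least 10 comments.
        users = users[(users["n_onegrams"] >= 100) & (users["n_comments"] >= 10)]
    # Under-sample the majority class so both labels are equally represented.
    n_minority = users["label"].value_counts().min()
    return (users.groupby("label", group_keys=False)
                 .apply(lambda g: g.sample(n=n_minority, random_state=seed)))
```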
{
"text": "To establish a baseline, we trained a logistic regression model on term frequency-inverse docu-ment frequency (TF-IDF) features. For the logistic regression model, we generated TF-IDF features by selecting 1-grams that appeared in at least 10 documents and at most 95% of total documents. We also removed English stopwords before feeding these features to a logistic regression model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In-group and out-group prediction",
"sec_num": "3.5"
},
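In scikit-learn terms, this baseline corresponds roughly to the sketch below; the document-frequency cut-offs and stopword removal follow the description above, while the remaining hyperparameters (e.g., max_iter) are assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def make_baseline():
    """TF-IDF over 1-grams appearing in >=10 documents and <=95% of documents,
    with English stopwords removed, feeding a logistic regression classifier."""
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 1), min_df=10, max_df=0.95,
                        stop_words="english"),
        LogisticRegression(max_iter=1000),
    )

def train_and_predict(train_docs, train_labels, test_docs):
    """Each document is one user's concatenated comment history."""
    model = make_baseline()
    model.fit(train_docs, train_labels)
    return model.predict(test_docs)
```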
{
"text": "We compared the performance of the logistic regression model with a Longformer-based classifier (Beltagy et al., 2020 ). The Longformer model uses a sparse attention mechanism to address the quadratic memory scaling of the standard transformers (Vaswani et al., 2017 )-in our cases allowing for the consideration of longer documents (comment histories). For the Longformer model, we used the default Transformers library (Wolf et al., 2020) implementation of a sequence classifier with a maximum sequence length of 2,048.",
"cite_spans": [
{
"start": 96,
"end": 117,
"text": "(Beltagy et al., 2020",
"ref_id": "BIBREF9"
},
{
"start": 245,
"end": 266,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF33"
},
{
"start": 421,
"end": 440,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In-group and out-group prediction",
"sec_num": "3.5"
},
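In terms of the Transformers library, the described setup corresponds roughly to the following sketch; the checkpoint name, tokenizer settings, and the omission of the fine-tuning loop are assumptions beyond the stated maximum sequence length of 2,048.

```python
import torch
from transformers import LongformerTokenizerFast, LongformerForSequenceClassification

MODEL_NAME = "allenai/longformer-base-4096"  # assumed checkpoint

tokenizer = LongformerTokenizerFast.from_pretrained(MODEL_NAME)
model = LongformerForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()  # fine-tuning on the labelled user histories is omitted in this sketch

def classify(comment_history: str) -> int:
    """Label one user's concatenated comment history: 0 = in-group, 1 = onlooker."""
    inputs = tokenizer(comment_history, truncation=True, max_length=2048,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))
```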
{
"text": "For all subreddit pairs, we found that both language classifiers performed better than random, with some variation along subreddit size and community characteristics, as in Figs. 4 and 5. The Longformer model performed better in all cases (as indicated by the Matthews correlation coefficient (MCC) in Table 1 ). However, with sufficient data volume, the logistic regression classifier was able to achieve comparable results, especially notable given the reduced model complexity.",
"cite_spans": [],
"ref_spans": [
{
"start": 302,
"end": 309,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Language classifier",
"sec_num": "4.1"
},
{
"text": "For the Longformer model trained and evaluated on r/NoNewNormal and r/CovIdiots, we achieved precision and recall values of approximately 0.75 for both classes Table 5 . For the other subreddits, precision and recall values ranged between approximately 0.65 and 0.9 with near parity between the classes. See Fig. 2 for receiver operator characteristic (ROC) curves for the Longformer model.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 5",
"ref_id": null
},
{
"start": 308,
"end": 314,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Language classifier",
"sec_num": "4.1"
},
{
"text": "The logistic regression classifier offered lower performance but relatively similar results with the added benefit of interpretable feature importance scores. In the case of r/NoNewNormal and r/CovIdiots, we report feature importance for the logistic regression model in Table 3 . The feature importance results provide some insights on how bag-of-words models are capturing community-specific language. For instance, \"media\", \"doomer\", and \"trump\" are language features highly predictive of the r/NoNewNormal subreddit accounts. On the other hand, \"idiots\", \"crocs\", and \"5g\" are language features highly predictive of the r/CovIdiots accounts.",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language classifier",
"sec_num": "4.1"
},
{
"text": "We found that RTD identified salient terms when comparing the 1-gram distributions of r/NoNewNormal and r/CovIdiots. As seen in Fig. 1 , we found that terms relating to specific people and institutions such as \"trump\", \"fda\", and \"fauci\" drove RTD contributions from the r/NoNewNormal distribution. For the same subreddit, we found 1-grams related to vaccines-\"vaccine[s]\", \"dtp\" (Diphtheria-Tetanus-Pertussis), and \"npafp\" (Non-polio Acute Flaccid Paralysis)-which ranked higher than the opposing subreddit. Finally, some 1-grams related to non-pharmaceutical interventions ranked relatively higher in the r/NoNewNormal distribution, including \"lockdown\" and \"passport\". From the r/CovIdiots 1-gram distribution, we saw the eponymous term \"covidiot\" contributing the great-est to RTD followed by insults such as \"stupid\" and \"karen\"-illustrating the insulting critiques that many of the r/CovIdiots posts level at r/NoNewNormal.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 134,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Initial observations",
"sec_num": "4.2.1"
},
{
"text": "The RTD results suggest a few characteristics of each subreddit. Both r/NoNewNormal and r/CovIdiots discussed prominent topics related to the pandemic-as seen by terms such as \"mask\", \"vaccine\", and \"lockdown\" ranking in the top 300 1-grams for each subreddit. The subreddits' focuses constrast each other with r/NoNewNormal appearing more focused on discussion that is critical of pandemic interventions and r/CovIdiots criticizing r/NoNewNormal (as evidenced by a higher degree of insulting language).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial observations",
"sec_num": "4.2.1"
},
{
"text": "Overall RTD values were similar for both the ground truth and predicted distributions (D R 1/3 = 0.286 and 0.274, respectively). In Table 2 we present the top 20 1-grams as highlighted by RTD 2 . We saw fluctuations for terms related to internet memes (e.g., \"gunga\", \"ginga\", and \"boo\"). In other cases, function words like \"he\" and \"be\" are ranked as contributing notably to the RTD 2 resultsthis may be owing to nuanced differences in speech patterns between the two communities that are amplified by the classification and RTD 2 results. For some highly topical 1-grams, such \"trump\", \"covidiot\", and \"influenza\", we found shifts in rank limited to an order of magnitude-in these cases the salient 1-grams contributed more to RTD in the classifier-derived data set, likely owing to the bias of the model.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Effect of classifier on divergence results",
"sec_num": "4.2.2"
},
{
"text": "We expected our classifier to perform better on active users who received praise from a community (as indicated by the voting score on their comments). To confirm this hypothesis, we plotted the likelihood of correctly labeling users that post in r/NoNewNormal compared to their number of comments in the subreddit, total comment-score, and mean comment-score, shown in Fig. 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 370,
"end": 376,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy versus user attributes",
"sec_num": "4.3"
},
{
"text": "Our classifier performed most reliably on users with ten to three hundred comments in the subreddit, and ten to five hundred total karma. Performance decayed for users with over 400 comments, but there were only 520 users in this category out of about 58,000 r/NoNewNormal users. Anecdotally, this small subset of users engaged in longer The classifier trained on r/BigMouth and r/BanBigMouth showed the best performance (AUC = 0.93) while our primary case study-r/NoNewNormal and r/CovIdiots-had an AUC value of 0.83. It is worth noting the variation in sample sizes and as described in Table 1. and more general discussions, and as a result, used language that is more common and more difficult to classify compared to their less active peers.",
"cite_spans": [],
"ref_spans": [
{
"start": 588,
"end": 596,
"text": "Table 1.",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Accuracy versus user attributes",
"sec_num": "4.3"
},
{
"text": "To filter out low-activity users, we re-ran our classifier after pruning accounts with less than under 100 one-grams in their comment history or less than 10 total comment in their associated subreddit. This filtering is discussed in Section 3.5 and labeled \"Threshold\" in Table 1 where we present the classification results. The threshold data generally improved the performance of both the logistic regression and Longformer models.",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Accuracy versus user attributes",
"sec_num": "4.3"
},
{
"text": "The work outlined here is motivated by the challenge of accurately classifying communities that discuss the same topics but are distinct in their exact views. Further, we are motivated by the task of identifying these communities in the absence of interaction data that may allow for the construction of a social graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Our methodology addresses the challenge of analyzing online conversation around contentious topics where there may be polarized communities that share similar linguistic features. For instance, when studying online discourse around a specific topic one approach to collecting relevant content is anchor wording (selecting posts based on the presence of key words defined by a researcher). In the case of r/NoNewNormal and r/CovIdiots, \"vaccine\", \"mask\", and \"covid\" share similar rank values in the 1-gram distributions for each subreddit (55, 37; 24, 28; 51, 58; respectively) . A naive anchor-word selection would capture much of the conversation in each of these communities. However, anchor word selection would fail to disambiguate the dramatically differing views held by the majority of users in each community. This has impacts on down stream analysis such as sentiment analysis, tracking narrative diffusion, and topic modelling.",
"cite_spans": [
{
"start": 539,
"end": 577,
"text": "(55, 37; 24, 28; 51, 58; respectively)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Considering our main motivation was a problem description and initial demonstration of a classification pipeline, we did not extensively explore model architectures or hyperparameters. We included n-gram order in the initial hyperparameter sweep when developing the logistic-regression pipeline, and results suggested that 1-grams were most effective. However, including higher order n-grams is still worth exploring more in-depth, and may have benefits for model interpretabilility and down stream results (e.g., feature importance). Further, we selected the word-embedding model (the Longformer) based mainly on considerations related to maximum sequence length and preliminary performance observations. Additional wordembedding models could be considered-choosing models trained on more recent and/or domain specific data may be especially helpful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "As in stance detection , there are several limitations to the methodology we present. First, our data set covers a limited time frame, and past work has demonstrated that models which are trained on old data sets may perform relatively poorly when fed new data . Additionally, our methodology does not account for the fact that users may change opinions throughout time. For example, a user may initially be a member of a group, but a shift in opinion may cause the user to leave the group but still engage in discussion about said group. Lastly, our classifier is only trained on English posts, and we cannot guarantee the same level of performance across languages. Figure 3 : Likelihood of correctly labeling users in in-group subreddits by user attributes. From left to right, correct labeling versus user comments in the subreddit, correct labeling versus total karma in the subreddit, and correct labeling versus mean karma in the subreddit. In all cases, the classifier performed poorly with low-activity users, better with moderate activity. We have pruned the 10% of users with the highest attributes from this plot, to improve legibility. An unabridged version of the plot is in the appendix, with a more detailed explanation. Plots include only users that commented in the primary \"of\" subreddit. Results from base-LR classifier. ",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 676,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In the present study, we frame the research challenge of classifying in-groups and onlookers based on the linguistic features of social media posts. The classification task is made difficult by the significant intersection of terms shared between the two communities, which may confound classification attempts. We collect a data set of seven (7) subreddit pairs that match the in-group and onlooker-group criteria, focusing our efforts on a case study of pro-and anti-COVID mitigation communities. These subreddits provide an appealing proving ground for group identification tasks, because subreddit participation acts as a noisy label in lieu of ground truth for group identity. We identify salient 1-grams that differentiate each communities' language distributions. Using the full collection of subreddit pairs, we train two classifiers to assign users to communities based on their posts. We demonstrate the feasibility of the classi- As a divergence-of-divergences measurement, RTD 2 , shows disagreement between the divergence results derived from 1-gram distributions of generated with ground truth labels and the distribution generated with our classification pipeline. Highly ranked RTD 2 values highlight the 1-grams that have the greatest difference in rank of contribution to the divergence results for each pairing. For instance, \"trump\" is the 1-gram with the 3 rd highest contribution in groundtruth data, whereas the 1-gram is ranked 8 th in the classifier-generated data. We stemmed the 1-grams prior to calculation of divergence results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "fication scheme with these results. In most cases, our classifier recovers 70% or more of a community's users. From these results, we show how our initial language distribution divergence results may be affected by using data labelled by our classifier. In the case of the COVID subreddits, the true and classifier-generated distributions are qualitatively similar, identifying notable 1-grams in each case. We hope the research questions and combined set of results is motivating for future work that leverages training generalizable classifiers on labelled community data that can then be used in a variety of settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We present a first attempt at in-group classification based on contextual language use, in a challenging environment where both the in-group and onlookers discuss many of the same topics. We believe that classifiers in this domain have important applications for cross-platform group detection, where more reliable labels like consistent usernames and network interactions are unavailable. More powerful classifiers may account for additional text features, including user sentiment, shared topics, stance towards those topics, and language style.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "Longer time-span studies should be wary of semantic drift over time (Schlechtweg et al., 2019) , as well as more specific changes in group language and stance on topics. Models of community language style (Tran and Ostendorf, 2016) could also help identify communities across platforms, as long as platform-specific language style features are identified and controlled for. Table 4 indicates the size of each subreddit, in terms of user count and comment count, after pruning bots and low-karma users as specified in our methodology. It also includes the mean karma (comment score) for remaining comments in each subreddit corpus.",
"cite_spans": [
{
"start": 68,
"end": 94,
"text": "(Schlechtweg et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 205,
"end": 231,
"text": "(Tran and Ostendorf, 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 375,
"end": 382,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "If subreddits in a pair have dramatically different activity levels, such as much longer comments in one subreddit than another, these differences in writing style may correlate with classification difficulty. Figs. 4 and 5 show cumulative distributions of comment length and comment count per user, respectively, to illustrate which subreddits are closer in behavior than others.",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 217,
"text": "Figs. 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Comparison of Subreddit Activity",
"sec_num": null
},
{
"text": "Uniquely Identifying Words Table 3 shows the words that most strongly correlate with membership in r/NoNewNormal and r/CovIdiots. Fig. 1 shows word use divergence between r/NoNewNormaland r/CovIdiotsusing all comments from users in each subreddit. For comparison, Fig. 7 shows the same word use divergence based only on users our classifier predicted as members of each subreddit.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 3",
"ref_id": null
},
{
"start": 130,
"end": 136,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 264,
"end": 270,
"text": "Fig. 7",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Comparison of Subreddit Activity",
"sec_num": null
},
{
"text": "Classifier performance metrics Table 5 shows F1 scores and precision values for the logistic regression and longformer model.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Labeled Language versus Predicted Language",
"sec_num": null
},
{
"text": "Our classifier performs best on accounts with above 10 comments and a minimum comment-karma threshold. However, the classifier cannot reliably label every user in the tail of the distribution. This leads to a misleading visualization, conflating the low-density of users that have high comment counts or karma scores with classifier performance. Therefore, we did not include the tail of each performance graph in Fig. 3 . For posterity, we have included an unabridged version of the graph that includes these misleading tails, in Fig. 6 . Table 3 : Feature importance for logistic regression classifier trained on r/NowNewNormal and r/CovIdiots. The two columns correspond to the text features that are most strongly predictive of each subreddit. et al., 2020) showing the 1-gram rank distributions of predicted users of r/NoNewNormal and r/CovIdiots using our classifier to assign membership. See Fig. 1 for allotaxonograph of actual users. The central diamond shaped plot shows a rank-rank histogram for 1-grams appearing in each subreddit. The horizontal bar chart on the right show the individual contribution of each 1-gram to the overall rank-turbulence divergence value (D R 1/3 ). The 3 bars under \"Balances\" represent the total volume of 1-gram occurring in each subreddit, the percentage of all unique words we see in each subreddit, and the percentage of words that we see in a subreddit that are unique to that subreddit.",
"cite_spans": [
{
"start": 748,
"end": 761,
"text": "et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 414,
"end": 420,
"text": "Fig. 3",
"ref_id": null
},
{
"start": 531,
"end": 537,
"text": "Fig. 6",
"ref_id": "FIGREF5"
},
{
"start": 540,
"end": 547,
"text": "Table 3",
"ref_id": null
},
{
"start": 899,
"end": 905,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Classifier Accuracy versus User Attributes",
"sec_num": null
}
],
"back_matter": [
{
"text": " Table 5 : Data set size and classification performance for logistic regression (LR) and Longformer (LF) models. Subreddit pairs, primary \"of\" community first, \"onlooking\" subreddit second. F1 scores and precision values are calculated using weighted average for the balanced data sets. F1, precision, and recall (not shown) values were all approximately equal for specific models and subreddit pairs in our experiments-partially owing to the balanced datasets. The threshold results refer models trained on a thresholded data set where user comment histories must contain at least 100 1-grams and at least 10 comments. Results excluded due to small sample size are represented with an \"*\".",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Identifying opinion-topics and polarity of parliamentary debate motions",
"authors": [
{
"first": "Gavin",
"middle": [],
"last": "Abercrombie",
"suffix": ""
},
{
"first": "Riza Theresa",
"middle": [],
"last": "Batista-Navarro",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th workshop on computational approaches to subjectivity, sentiment and social media analysis. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gavin Abercrombie and Riza Theresa Batista-Navarro. 2018. Identifying opinion-topics and polarity of par- liamentary debate motions. In Proceedings of the 9th workshop on computational approaches to sub- jectivity, sentiment and social media analysis. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Detecting white supremacist hate speech using domain specific word embedding with deep learning and bert",
"authors": [
{
"first": "",
"middle": [],
"last": "Hind S Alatawi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Areej",
"suffix": ""
},
{
"first": "Kawthar M",
"middle": [],
"last": "Alhothali",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moria",
"suffix": ""
}
],
"year": 2021,
"venue": "IEEE Access",
"volume": "9",
"issue": "",
"pages": "106363--106374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hind S Alatawi, Areej M Alhothali, and Kawthar M Moria. 2021. Detecting white supremacist hate speech using domain specific word embedding with deep learning and bert. IEEE Access, 9:106363- 106374.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Opinions are made to be changed: Temporally adaptive stance classification",
"authors": [
{
"first": "Rabab",
"middle": [],
"last": "Alkhalifa",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Kochkina",
"suffix": ""
},
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Workshop on Open Challenges in Online Social Networks",
"volume": "",
"issue": "",
"pages": "27--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabab Alkhalifa, Elena Kochkina, and Arkaitz Zubiaga. 2021. Opinions are made to be changed: Temporally adaptive stance classification. In Proceedings of the 2021 Workshop on Open Challenges in Online So- cial Networks, pages 27-32.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Capturing stance dynamics in social media: Open challenges and research directions",
"authors": [
{
"first": "Rabab",
"middle": [],
"last": "Alkhalifa",
"suffix": ""
},
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2109.00475"
]
},
"num": null,
"urls": [],
"raw_text": "Rabab Alkhalifa and Arkaitz Zubiaga. 2021. Captur- ing stance dynamics in social media: Open chal- lenges and research directions. arXiv preprint arXiv:2109.00475.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The role of Reddit in the GameStop short squeeze",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Jalaj",
"middle": [],
"last": "Pathak",
"suffix": ""
}
],
"year": 2022,
"venue": "Economics Letters",
"volume": "211",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhinav Anand and Jalaj Pathak. 2022. The role of Reddit in the GameStop short squeeze. Economics Letters, 211:110249.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cats rule and dogs drool!: Classifying stance in online debate",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Jean E Fox",
"middle": [],
"last": "Tree",
"suffix": ""
},
{
"first": "Robeson",
"middle": [],
"last": "Bowmani",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Minor",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011)",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Anand, Marilyn Walker, Rob Abbott, Jean E Fox Tree, Robeson Bowmani, and Michael Minor. 2011. Cats rule and dogs drool!: Classifying stance in online debate. In Proceedings of the 2nd Work- shop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011), pages 1-9.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Online 'anti-vax' campaigns and COVID-19: censorship is not the solution",
"authors": [
{
"first": "R",
"middle": [],
"last": "Armitage",
"suffix": ""
}
],
"year": 2021,
"venue": "Public Health",
"volume": "190",
"issue": "",
"pages": "29--30",
"other_ids": {
"DOI": [
"10.1016/j.puhe.2020.12.005"
]
},
"num": null,
"urls": [],
"raw_text": "R. Armitage. 2021. Online 'anti-vax' campaigns and COVID-19: censorship is not the solution. Public Health, 190:e29-e30.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Andreas Vlachos, and Kalina Bontcheva",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
}
],
"year": 2016,
"venue": "Stance detection with bidirectional conditional encoding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.05464"
]
},
"num": null,
"urls": [],
"raw_text": "Isabelle Augenstein, Tim Rockt\u00e4schel, Andreas Vla- chos, and Kalina Bontcheva. 2016. Stance detec- tion with bidirectional conditional encoding. arXiv preprint arXiv:1606.05464.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The pushshift Reddit dataset",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baumgartner",
"suffix": ""
},
{
"first": "Savvas",
"middle": [],
"last": "Zannettou",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Keegan",
"suffix": ""
},
{
"first": "Megan",
"middle": [],
"last": "Squire",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "14",
"issue": "",
"pages": "830--839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift Reddit dataset. In Proceedings of the Inter- national AAAI Conference on Web and Social Media, volume 14, pages 830-839.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05150"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Topic-specific sentiment analysis can help identify political ideology",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Deepak",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Bhatia and P Deepak. 2018. Topic-specific sen- timent analysis can help identify political ideology. In Proceedings of the 9th Workshop on Computa- tional Approaches to Subjectivity, Sentiment and So- cial Media Analysis, pages 79-84.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Influence of fake news in Twitter during the 2016 US presidential election",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Bovet",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hern\u00e1n",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Makse",
"suffix": ""
}
],
"year": 2019,
"venue": "Nature Communications",
"volume": "10",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41467-018-07761-2"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandre Bovet and Hern\u00e1n A. Makse. 2019. In- fluence of fake news in Twitter during the 2016 US presidential election. Nature Communications, 10(1):7.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multi-level graph convolutional networks for crossplatform anchor link prediction",
"authors": [
{
"first": "Hongxu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hongzhi",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Xiangguo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Gabrys",
"suffix": ""
},
{
"first": "Katarzyna",
"middle": [],
"last": "Musial",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "1503--1511",
"other_ids": {
"DOI": [
"10.1145/3394486.3403201"
]
},
"num": null,
"urls": [],
"raw_text": "Hongxu Chen, Hongzhi Yin, Xiangguo Sun, Tong Chen, Bogdan Gabrys, and Katarzyna Musial. 2020. Multi-level graph convolutional networks for cross- platform anchor link prediction. In Proceedings of the 26th ACM SIGKDD International Confer- ence on Knowledge Discovery & Data Mining, page 1503-1511. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "2020. Allotaxonometry and rank-turbulence divergence: A universal instrument for comparing complex systems",
"authors": [
{
"first": "Peter",
"middle": [
"Sheridan"
],
"last": "Dodds",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"R"
],
"last": "Minot",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"V"
],
"last": "Arnold",
"suffix": ""
},
{
"first": "Thayer",
"middle": [],
"last": "Alshaabi",
"suffix": ""
},
{
"first": "Jane",
"middle": [
"Lydia"
],
"last": "Adams",
"suffix": ""
},
{
"first": "David",
"middle": [
"Rushing"
],
"last": "Dewhurst",
"suffix": ""
},
{
"first": "Tyler",
"middle": [
"J"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Morgan",
"middle": [
"R"
],
"last": "Frank",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Reagan",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Danforth",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.09770"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Sheridan Dodds, Joshua R Minot, Michael V Arnold, Thayer Alshaabi, Jane Lydia Adams, David Rushing Dewhurst, Tyler J Gray, Morgan R Frank, Andrew J Reagan, and Christopher M Dan- forth. 2020. Allotaxonometry and rank-turbulence divergence: A universal instrument for comparing complex systems. arXiv preprint arXiv:2002.09770.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The generativity of social media: Opportunities, challenges, and guidelines for conducting experimental research",
"authors": [
{
"first": "Camille",
"middle": [],
"last": "Grange",
"suffix": ""
}
],
"year": 2018,
"venue": "International Journal of Human-Computer Interaction",
"volume": "34",
"issue": "10",
"pages": "943--959",
"other_ids": {
"DOI": [
"10.1080/10447318.2018.1471573"
]
},
"num": null,
"urls": [],
"raw_text": "Camille Grange. 2018. The generativity of so- cial media: Opportunities, challenges, and guide- lines for conducting experimental research. Inter- national Journal of Human-Computer Interaction, 34(10):943-959.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Editorial of the special issue on following user pathways: Key contributions and future directions in cross-platform social media research",
"authors": [
{
"first": "Margeret",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Athanasios",
"middle": [],
"last": "Mazarakis",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chorley",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Caton",
"suffix": ""
}
],
"year": 2018,
"venue": "International Journal of Human-Computer Interaction",
"volume": "34",
"issue": "10",
"pages": "895--912",
"other_ids": {
"DOI": [
"10.1080/10447318.2018.1471575"
]
},
"num": null,
"urls": [],
"raw_text": "Margeret Hall, Athanasios Mazarakis, Martin Chor- ley, and Simon Caton. 2018. Editorial of the spe- cial issue on following user pathways: Key contri- butions and future directions in cross-platform so- cial media research. International Journal of Hu- man-Computer Interaction, 34(10):895-912.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Preslav Nakov, and Isabelle Augenstein. 2021. A survey on stance detection for mis-and disinformation identification",
"authors": [
{
"first": "Momchil",
"middle": [],
"last": "Hardalov",
"suffix": ""
},
{
"first": "Arnav",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.00242"
]
},
"num": null,
"urls": [],
"raw_text": "Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2021. A survey on stance detec- tion for mis-and disinformation identification. arXiv preprint arXiv:2103.00242.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Caleb",
"middle": [],
"last": "Ziems",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "Naren",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Srijan",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM '21",
"volume": "",
"issue": "",
"pages": "90--94",
"other_ids": {
"DOI": [
"10.1145/3487351.3488324"
]
},
"num": null,
"urls": [],
"raw_text": "Bing He, Caleb Ziems, Sandeep Soni, Naren Ramakr- ishnan, Diyi Yang, and Srijan Kumar. 2021. Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis. In Pro- ceedings of the 2021 IEEE/ACM International Con- ference on Advances in Social Networks Analysis and Mining, ASONAM '21, page 90-94, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Mentions of security vulnerabilities on reddit, twitter and github",
"authors": [
{
"first": "Sameera",
"middle": [],
"last": "Horawalavithana",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Bhattacharjee",
"suffix": ""
},
{
"first": "Renhao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nazim",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"O"
],
"last": "Hall",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Iamnitchi",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE/WIC/ACM International Conference on Web Intelligence",
"volume": "",
"issue": "",
"pages": "200--207",
"other_ids": {
"DOI": [
"10.1145/3350546.3352519"
]
},
"num": null,
"urls": [],
"raw_text": "Sameera Horawalavithana, Abhishek Bhattacharjee, Renhao Liu, Nazim Choudhury, Lawrence O. Hall, and Adriana Iamnitchi. 2019. Mentions of secu- rity vulnerabilities on reddit, twitter and github. In IEEE/WIC/ACM International Conference on Web Intelligence, page 200-207. ACM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "2021. (Mis) alignment Between Stance Expressed in Social Media Data and Public Opinion Surveys",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Shugars",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Gallagher",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Alexi",
"middle": [
"Quintana"
],
"last": "Math\u00e9",
"suffix": ""
},
{
"first": "Zijian",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lazer",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "312--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Joseph, Sarah Shugars, Ryan Gallagher, Jon Green, Alexi Quintana Math\u00e9, Zijian An, and David Lazer. 2021. (Mis) alignment Between Stance Ex- pressed in Social Media Data and Public Opinion Surveys. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 312-324.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Political issue extraction model: A novel hierarchical topic model that uses tweets by political and non-political authors",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Carman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "82--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark Car- man. 2016. Political issue extraction model: A novel hierarchical topic model that uses tweets by political and non-political authors. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analy- sis, pages 82-90.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Stance detection: A survey",
"authors": [
{
"first": "Dilek",
"middle": [],
"last": "K\u00fc\u00e7\u00fck",
"suffix": ""
},
{
"first": "Fazli",
"middle": [],
"last": "Can",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "53",
"issue": "1",
"pages": "1--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilek K\u00fc\u00e7\u00fck and Fazli Can. 2020. Stance detection: A survey. ACM Computing Surveys (CSUR), 53(1):1- 37.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Argument mining: A survey",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Linguistics",
"volume": "45",
"issue": "4",
"pages": "765--818",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lawrence and Chris Reed. 2020. Argument mining: A survey. Computational Linguistics, 45(4):765-818.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Which side are you on? Identifying perspectives at the document and sentence levels",
"authors": [
{
"first": "Wei-Hao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"G"
],
"last": "Hauptmann",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)",
"volume": "",
"issue": "",
"pages": "109--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexander G Hauptmann. 2006. Which side are you on? Identifying perspectives at the document and sentence levels. In Proceedings of the Tenth Confer- ence on Computational Natural Language Learning (CoNLL-X), pages 109-116.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Argument mining on clinical trials",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Mayer",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Lippi",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Torroni",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2018,
"venue": "COMMA",
"volume": "",
"issue": "",
"pages": "137--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Mayer, Elena Cabrio, Marco Lippi, Paolo Tor- roni, and Serena Villata. 2018. Argument mining on clinical trials. In COMMA, pages 137-148.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Characterizing polarization in online vaccine discourse-a large-scale study",
"authors": [
{
"first": "Bjarke",
"middle": [],
"last": "M\u00f8nsted",
"suffix": ""
},
{
"first": "Sune",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2022,
"venue": "PloS one",
"volume": "17",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bjarke M\u00f8nsted and Sune Lehmann. 2022. Charac- terizing polarization in online vaccine discourse-a large-scale study. PloS one, 17(2):e0263746.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The 2016 US Election: Can democracy survive the internet",
"authors": [
{
"first": "Nathaniel",
"middle": [],
"last": "Persily",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of democracy",
"volume": "28",
"issue": "2",
"pages": "63--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel Persily. 2017. The 2016 US Election: Can democracy survive the internet? Journal of democ- racy, 28(2):63-76.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Effects of misinformation diffusion during a pandemic",
"authors": [
{
"first": "Lorenzo",
"middle": [],
"last": "Prandi",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Primiero",
"suffix": ""
}
],
"year": 2020,
"venue": "Applied Network Science",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/s41109-020-00327-6"
]
},
"num": null,
"urls": [],
"raw_text": "Lorenzo Prandi and Giuseppe Primiero. 2020. Effects of misinformation diffusion during a pandemic. Ap- plied Network Science, 5(1):82.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.09821"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings. arXiv preprint arXiv:1906.09821.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A wind of change: Detecting and evaluating lexical semantic change across times and domains",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "H\u00e4tty",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Del"
],
"last": "Tredici",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "732--746",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Schlechtweg, Anna H\u00e4tty, Marco Del Tredici, and Sabine Schulte im Walde. 2019. A wind of change: Detecting and evaluating lexical semantic change across times and domains. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 732-746.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Stance detection and analysis in social media",
"authors": [
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parinaz Sobhani. 2017. Stance detection and anal- ysis in social media. Ph.D. thesis, Universite d'Ottawa/University of Ottawa.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Characterizing the language of online communities and its relation to community reception",
"authors": [
{
"first": "Trang",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1030--1035",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1108"
]
},
"num": null,
"urls": [],
"raw_text": "Trang Tran and Mari Ostendorf. 2016. Characterizing the language of online communities and its relation to community reception. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1030-1035, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "When the echo chamber shatters: Examining the use of community-specific language post-subreddit ban",
"authors": [
{
"first": "Milo",
"middle": [],
"last": "Trujillo",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Rosenblatt",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [],
"last": "de Anda J\u00e1uregui",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Moog",
"suffix": ""
},
{
"first": "Briane",
"middle": [
"Paul",
"V"
],
"last": "Samson",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "H\u00e9bert-Dufresne",
"suffix": ""
},
{
"first": "Allison",
"middle": [
"M"
],
"last": "Roth",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
"volume": "",
"issue": "",
"pages": "164--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milo Trujillo, Sam Rosenblatt, Guillermo de Anda J\u00e1uregui, Emily Moog, Briane Paul V Samson, Laurent H\u00e9bert-Dufresne, and Allison M Roth. 2021. When the echo chamber shatters: Examining the use of community-specific language post-subreddit ban. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 164-178.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Attention is all you need. Advances in neural information processing systems",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information process- ing systems, 30.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [],
"last": "Le Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Quantifying political leaning from tweets, retweets, and retweeters",
"authors": [
{
"first": "Felix Ming Fai",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Chee",
"middle": [],
"last": "Wei Tan",
"suffix": ""
},
{
"first": "Soumya",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Mung",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE transactions on knowledge and data engineering",
"volume": "28",
"issue": "8",
"pages": "2158--2172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Ming Fai Wong, Chee Wei Tan, Soumya Sen, and Mung Chiang. 2016. Quantifying political lean- ing from tweets, retweets, and retweeters. IEEE transactions on knowledge and data engineering, 28(8):2158-2172.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Vectors for counterspeech on Twitter",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
},
{
"first": "Kelly",
"middle": [
"P"
],
"last": "Dillon",
"suffix": ""
},
{
"first": "Haji",
"middle": [
"Mohammad"
],
"last": "Saleem",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Benesch",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "57--62",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3009"
]
},
"num": null,
"urls": [],
"raw_text": "Lucas Wright, Derek Ruths, Kelly P Dillon, Haji Mo- hammad Saleem, and Susan Benesch. 2017. Vec- tors for counterspeech on Twitter. In Proceedings of the First Workshop on Abusive Language Online, pages 57-62, Vancouver, BC, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Political polarization on the digital sphere: A cross-platform, over-time analysis of interactional, positional, and affective polarization on social media",
"authors": [
{
"first": "Moran",
"middle": [],
"last": "Yarchi",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Baden",
"suffix": ""
},
{
"first": "Neta",
"middle": [],
"last": "Kligler-Vilenchik",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "38",
"issue": "",
"pages": "98--139",
"other_ids": {
"DOI": [
"10.1080/10584609.2020.1785067"
]
},
"num": null,
"urls": [],
"raw_text": "Moran Yarchi, Christian Baden, and Neta Kligler- Vilenchik. 2021. Political polarization on the dig- ital sphere: A cross-platform, over-time analysis of interactional, positional, and affective polariza- tion on social media. Political Communication, 38(1-2):98-139.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Discourseaware rumour stance classification in social media using sequential classifiers",
"authors": [
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Kochkina",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Lukasik",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Information Processing & Management",
"volume": "54",
"issue": "2",
"pages": "273--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein. 2018. Discourse- aware rumour stance classification in social media using sequential classifiers. Information Processing & Management, 54(2):273-290.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Analysing how people orient to and spread rumours in social media by looking at conversational threads",
"authors": [
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
},
{
"first": "Geraldine",
"middle": [],
"last": "Wong Sak Hoi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Tolmie",
"suffix": ""
}
],
"year": 2016,
"venue": "PloS one",
"volume": "11",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geral- dine Wong Sak Hoi, and Peter Tolmie. 2016. Analysing how people orient to and spread rumours in social media by looking at conversational threads. PloS one, 11(3):e0150989.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "An allotaxonograph (Dodds et al., 2020) showing the 1-gram rank distributions of r/NoNewNormal and r/CovIdiots along with rank-turbulence divergence results."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Receiver operator characteristic curves for classification models evaluated on the subreddit pairs. For each subreddit pair we trained a binary classifier based on the Longformer language model."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Cumulative distribution of comments made by each user in each examined subreddit pair. Distribution taken after filtering."
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Cumulative distribution of comment length in each examined subreddit pair. Distribution taken after filtering."
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Likelihood of correctly labeling users in in-group subreddits by user attributes. This is the unabridged version ofFig. 3, including unstable long-tail behavior when classifying the small minority of highactivity accounts."
},
"FIGREF6": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "An allotaxonograph (Dodds"
},
"TABREF0": {
"type_str": "table",
"text": "Data",
"content": "<table><tr><td>set size and classification per-formance for logistic regression (LR) and Longformer (LF) models. Subreddit pairs, primary \"of\" community first, \"on-looking\" subreddit second. Matthews correlation coefficient (MCC) refers to performance on the test set. The threshold results refer models trained on a thresholded data set where user comment histories must contain at least 100 1-grams and at least 10 comments. Results excluded due to represented with an \"*\". small sample size are</td><td>Subreddits r/NoNewNormal v. r/Covidiots r/TheRedPill v. r/TheBluePill r/BigMouth v. r/BanBigMouth r/SuperStraight v. r/SuperStraightPhobic r/ProtectAndServe v. r/BadCopNoDonut r/LatterDaySaints v. r/ExMormon r/vegan v. r/antivegan</td><td>MCC Threshold Base Data set size Threshold LR LF LR LF Base 0.41 0.48 0.57 0.60 44185 6778 0.55 0.65 * * 4680 402 0.64 0.80 * * 1394 140 0.35 0.43 * * 3310 584 0.50 0.55 0.65 0.76 41158 6930 0.65 0.72 0.80 0.83 15062 4122 0.49 0.56 0.65 0.72 6896 1692</td></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "Rank-turbulence divergence (RTD) of divergence results from actual and predicted 1-gram distributions.",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}