{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:08.427294Z"
},
"title": "Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning",
"authors": [
{
"first": "Chiyu",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "Deep Learning & Natural Language Processing Group",
"institution": "The University of British Columbia",
"location": {}
},
"email": "chiyuzh@mail.ubc.ca"
},
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": "",
"affiliation": {
"laboratory": "Deep Learning & Natural Language Processing Group",
"institution": "The University of British Columbia",
"location": {}
},
"email": "muhammad.mageed@ubc.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Masked language models (MLMs) are pretrained with a denoising objective that is in a mismatch with the objective of downstream fine-tuning. We propose pragmatic masking and surrogate fine-tuning as two complementing strategies that exploit social cues to drive pre-trained representations toward a broad set of concepts useful for a wide class of social meaning tasks. We test our models on 15 different Twitter datasets for social meaning detection. Our methods achieve 2.34% F 1 over a competitive baseline, while outperforming domain-specific language models pre-trained on large datasets. Our methods also excel in few-shot learning: with only 5% of training data (severely few-shot), our methods enable an impressive 68.54% average F 1. The methods are also language agnostic, as we show in a zero-shot setting involving six datasets from three different languages. 1",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Masked language models (MLMs) are pretrained with a denoising objective that is in a mismatch with the objective of downstream fine-tuning. We propose pragmatic masking and surrogate fine-tuning as two complementing strategies that exploit social cues to drive pre-trained representations toward a broad set of concepts useful for a wide class of social meaning tasks. We test our models on 15 different Twitter datasets for social meaning detection. Our methods achieve 2.34% F 1 over a competitive baseline, while outperforming domain-specific language models pre-trained on large datasets. Our methods also excel in few-shot learning: with only 5% of training data (severely few-shot), our methods enable an impressive 68.54% average F 1. The methods are also language agnostic, as we show in a zero-shot setting involving six datasets from three different languages. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Masked language models (MLMs) such as BERT (Devlin et al., 2019) have revolutionized natural language processing (NLP). These models exploit the idea of self-supervision where sequences of unlabeled text are masked and the model is tasked to reconstruct them. Knowledge acquired during this stage of denoising (called pretraining) can then be transferred to downstream tasks through a second stage (called fine-tuning). Although pre-training is general, does not require labeled data, and is task agnostic, fine-tuning is narrow, requires labeled data, and is task-specific. For a class of tasks \u03c4 , some of which we may not know in the present but which can become desirable in the future, it is unclear how we can bridge the learning objective mismatch between these two stages. In particular, how can we (i) make pre-training 1 Our code is available at: https://github.com/ chiyuzhang94/PMLM-SFT. more tightly related to downstream task learning objective; and (ii) focus model pre-training representation on an all-encompassing range of concepts of general affinity to various downstream tasks?",
"cite_spans": [
{
"start": 43,
"end": 64,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 829,
"end": 830,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We raise these questions in the context of learning a cluster of tasks to which we collectively refer as social meaning. We loosely define social meaning as meaning emerging through human interaction such as on social media. Example social meaning tasks include emotion, irony, and sentiment detection. We propose two main solutions that we hypothesize can bring pre-training and finetuning closer in the context of learning social meaning: First, we propose a particular type of guided masking that prioritizes learning contexts of tokens crucially relevant to social meaning in interactive discourse. Since the type of \"meaning in interaction\" we are interested in is the domain of linguistic pragmatics (Thomas, 2014), we will refer to our proposed masking mechanism as pragmatic masking. We explain pragmatic masking in Section 3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, we propose an additional novel stage of fine-tuning that does not depend on gold labels but instead exploits general data cues possibly relevant to all social meaning tasks. More precisely, we leverage proposition-level user assigned tags for intermediate fine-tuning of pre-trained language models. In the case of Twitter, for example, hashtags naturally assigned by users at the end of posts can carry discriminative power that is by and large relevant to a wide host of tasks. Although cues such as hashtags and emojis have been previously used as surrogate lables before for one task or another, we put them to a broader use that is not focused on a particular (usually narrow) task that learns from a handful of cues. In other words, our goal is to learn extensive concepts carried by tens of thousands of cues. A model endowed with such a knowledge-base of social concepts can then be further fine-tuned on any narrower task in the ordinary way. We refer to this method as surrogate fine-tuning (Section 3.2). Another migration from previous work is that our methods excel not only in the full-data setting but also for few-shot learning, as we will explain below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to evaluate our methods, we present a social meaning benchmark composed of 15 different datasets crawled from previous research sources. We perform an extensive series of methodical experiments directly targeting our proposed methods. Our experiments set new state-of-the-art (SOTA) in the supervised setting across different datasets. Moreover, our experiments reveal a striking capacity of our models in improving downstream task performance in few-shot and severely few-shot settings (i.e., as low as 1% of gold data), and even the zero-shot setting on languages other than English (i.e., as evaluated on six different datasets from three languages in Section 6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarize, we make the following contributions: (1) We propose a novel pragmatic masking strategy that makes use of social media cues akin to improving social meaning detection. (2) We introduce a new effective surrogate fine-tuning method suited to social meaning that exploits the same simple cues as our pragmatic masking strategy. 3We report new SOTA on eight out of 15 supervised datasets in the full-data setting. (4) Our methods are remarkably effective for few-shot and zero-and learning. We now review related work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Masked Language models. Devlin et al. (2019) introduced BERT, a language representation model pre-trained by joint conditioning on both left and right context in all layers with the Transformer encoder (Vaswani et al., 2017) . BERT's pre-training introduces a self-supervised learning objective, i.e., masked language modeling (MLM), to train the Transformer encoder. MLM predicts masked tokens in input sequences exploiting bi-directional context. RoBERTa optimizes BERT performance by removing the next sentence prediction objective and by pre-training on a larger corpus using a bigger batch size. In the last few years, several variants of LMs with different masking methods were proposed. Examples are XL-Net (Yang et al., 2019) and MASS (Song et al., 2019) . To incorporate more domain specific knowledge into LMs, some works introduce knowledgeenabled masking strategies. For example, ; ; Lin et al. (2021) propose to mask tokens of named entities, while Tian et al. 2020and Ke et al. (2020) select sentimentrelated words to mask during pre-training. Gu et al. (2020) and Kawintiranon and Singh (2021) propose selective masking methods to mask the more important tokens for downstream tasks (e.g., sentiment analysis and stance detection). However, these masking strategies depend on external resources and/or annotations (e.g., a lexicon or labeled corpora). Corazza et al. (2020) investigate the utility of hybrid emoji-based masking for enhancing abusive language detection. Previous works, therefore, only focus on one or another particular task (e.g., sentiment, abusive language detection) rather than the type of broad representations we target.",
"cite_spans": [
{
"start": 24,
"end": 44,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 202,
"end": 224,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF46"
},
{
"start": 714,
"end": 733,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF53"
},
{
"start": 743,
"end": 762,
"text": "(Song et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 896,
"end": 913,
"text": "Lin et al. (2021)",
"ref_id": "BIBREF20"
},
{
"start": 982,
"end": 998,
"text": "Ke et al. (2020)",
"ref_id": "BIBREF18"
},
{
"start": 1058,
"end": 1074,
"text": "Gu et al. (2020)",
"ref_id": "BIBREF16"
},
{
"start": 1079,
"end": 1108,
"text": "Kawintiranon and Singh (2021)",
"ref_id": "BIBREF17"
},
{
"start": 1367,
"end": 1388,
"text": "Corazza et al. (2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "Intermediate Fine-Tuning. Although pretrained language models (PLM) have shown significant improvements on NLP tasks, intermediate training of the PLM on one or more data-rich tasks can further improve performance on a target downstream task. Most previous work (e.g., Pruksachatkun et al., 2020; Phang et al., 2020; Chang and Lu, 2021; Poth et al., 2021) ) focus on intermediate fine-tuning on a given goldlabeled dataset related to a downstream target task. Different to these works, our surrogate fine-tuning method is agnostic to narrow downstream tasks and fine-tunes an PLM on large-scale data with tens of thousands of surrogate labels that may be relevant to all social meaning. We now introduce our methods.",
"cite_spans": [
{
"start": 269,
"end": 296,
"text": "Pruksachatkun et al., 2020;",
"ref_id": "BIBREF30"
},
{
"start": 297,
"end": 316,
"text": "Phang et al., 2020;",
"ref_id": "BIBREF30"
},
{
"start": 317,
"end": 336,
"text": "Chang and Lu, 2021;",
"ref_id": "BIBREF8"
},
{
"start": 337,
"end": 355,
"text": "Poth et al., 2021)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "MLMs employ random masking, and so are not guided to learn any particular type of information during pre-training. Several attempts have been",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatic Masking",
"sec_num": "3.1"
},
{
"text": "Table 1 examples (emojis omitted): (1) Just got chased through my house with a bowl of tuna fish. [emoji]ing. [Disgust] (2) USER thanks for this cold you gave me #sarcasm [Sarcastic] (3) USER Awww CUPCAKES SUCK IT UP. SHE LOST GET OVER IT [Offensive]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatic Masking",
"sec_num": "3.1"
},
{
"text": "(2) USER thanks for this cold you gave me #sarcasm [Sarcastic] (3) USER Awww CUPCAKES SUCK IT UP. SHE made to employ task-specific masking where the objective is to predict information relevant to a given downstream task. Task relevant information is usually identified based on world knowledge (e.g., a sentiment lexicon (Gu et al., 2020; Ke et al., 2020) , part-of-speech (POS) tags ) or based on some other type of modeling such as pointwise mutual information (Tian et al., 2020) with supervised data. Although task-specific masking is useful, it is desirable to identify a more general masking strategy that does not depend on external information that may not be available or hard to acquire (e.g., costly annotation). For example, there are no POS taggers for some languages and so methods based on POS tags would not be applicable. Motivated by the fact that random masking is intrinsically sub-optimal (Ke et al., 2020; Kawintiranon and Singh, 2021) and this particular need for a more general and dependency-free masking method, we introduce our novel pragmatic masking mechanism that is suited to a wide range of social meaning tasks. To illustrate, consider the tweet samples in Table 1: In example (1), the emoji \" \" combined with the suffix \"-ing\" in \" ing\" is a clear signal indicating the disgust emotion. In example (2) the emoji \" \" and the hashtag \"#sarcasm\" communicate sarcasm. In example (3) the combination of the emojis \" \" and \" \" accompany 'hard' emotions characteristic of offensive language. We hypothesize that by simply masking cues such as emojis and hashtags, we can bias the model to learn about different shades of social meaning expression. This masking method can be performed in a self-supervised fashion since hashtags and emojis can be automatically identified. We call the resulting language model pragmatically masked language model (PMLM). Specifically, when we choose tokens for masking, we prioritize hashtags and emojis as Figure 1a shows. The pragmatic masking strategy follows several steps: (1) Prag-matic token selection. We randomly select up to 15% of input sequence, giving masking priority to hashtags or emojis. The tokens are selected by whole word masking (i.e., whole hashtag or emoji).",
"cite_spans": [
{
"start": 51,
"end": 62,
"text": "[Sarcastic]",
"ref_id": null
},
{
"start": 322,
"end": 339,
"text": "(Gu et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 340,
"end": 356,
"text": "Ke et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 911,
"end": 928,
"text": "(Ke et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 929,
"end": 958,
"text": "Kawintiranon and Singh, 2021)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 1968,
"end": 1977,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Pragmatic Masking",
"sec_num": "3.1"
},
{
"text": "(2) Regular token selection. If the pragmatic tokens are less than 15%, we then randomly select regular BPE tokens to complete the percentage of masking to the 15%. (3) Masking. This is the same as the RoBERTa MLM objective where we replace 80% of selected tokens with the [MASK] token, 10% with random tokens, and we keep 10% unchanged.",
"cite_spans": [
{
"start": 273,
"end": 279,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatic Masking",
"sec_num": "3.1"
},
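To make the three steps above concrete, here is a minimal, self-contained Python sketch of pragmatic token selection and RoBERTa-style masking. It operates on whitespace-separated words rather than BPE pieces for clarity; the emoji regex and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of pragmatic masking: prioritize hashtags/emojis, then 80/10/10.
import random
import re

# Rough emoji matcher over common Unicode emoji blocks (approximation only).
EMOJI_RE = re.compile(
    "[\U0001F300-\U0001FAFF\U00002600-\U000027BF\U0001F1E6-\U0001F1FF]"
)

def is_pragmatic(token: str) -> bool:
    """A token counts as 'pragmatic' if it is a hashtag or contains an emoji."""
    return token.startswith("#") or bool(EMOJI_RE.search(token))

def select_for_masking(tokens, mask_ratio=0.15, seed=None):
    """Return indices chosen for masking, giving priority to pragmatic tokens."""
    rng = random.Random(seed)
    budget = max(1, int(round(mask_ratio * len(tokens))))
    pragmatic = [i for i, t in enumerate(tokens) if is_pragmatic(t)]
    regular = [i for i, t in enumerate(tokens) if not is_pragmatic(t)]
    rng.shuffle(pragmatic)
    rng.shuffle(regular)
    chosen = pragmatic[:budget]
    chosen += regular[: budget - len(chosen)]   # top up with regular tokens
    return sorted(chosen)

def apply_mask(tokens, indices, rng=None, mask_token="[MASK]", vocab=None):
    """80% -> mask token, 10% -> random token, 10% -> unchanged (RoBERTa-style)."""
    rng = rng or random.Random(0)
    vocab = vocab or tokens
    out = list(tokens)
    for i in indices:
        p = rng.random()
        if p < 0.8:
            out[i] = mask_token
        elif p < 0.9:
            out[i] = rng.choice(vocab)
    return out

tweet = "USER thanks for this cold you gave me #sarcasm".split()
idx = select_for_masking(tweet, seed=1)
print(apply_mask(tweet, idx))
```

In the actual pipeline, the same priority rule would presumably be applied over whole-word groups of BPE tokens before feeding the batch to the MLM head.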
{
"text": "The current transfer learning paradigm of first pretraining then fine-tuning on particular tasks is limited by how much labeled data is available for downstream tasks. In other words, this existing set up works only given large amounts of labeled data. We propose surrogate fine-tuning where we intermediate fine-tune PLMs to predict thousands of example-level cues (i.e., hashtags occurring at the end of tweets) as Figure 1b shows. This method is inspired by previous work that exploited hashtags (Riloff et al., 2013; Pt\u00e1\u010dek et al., 2014; Rajadesingan et al., 2015; Sintsova and Pu, 2016; Abdul-Mageed and Ungar, 2017; Barbieri et al., 2018) or emojis (Wood and Ruder, 2016; Felbo et al., 2017; Wiegand and Ruppenhofer, 2021) as proxy for labels in a number of social meaning tasks. However, instead of identifying a small specific set of hashtags or emojis for a single task and using them to collect a dataset of distant labels, we diverge from the literature in proposing to use data with any hashtag or emoji as a surrogate labeling approach suited for any (or at least most) social meaning task. As explained, we refer to our method as surrogate fine-tuning (SFT).",
"cite_spans": [
{
"start": 499,
"end": 520,
"text": "(Riloff et al., 2013;",
"ref_id": "BIBREF37"
},
{
"start": 521,
"end": 541,
"text": "Pt\u00e1\u010dek et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 542,
"end": 568,
"text": "Rajadesingan et al., 2015;",
"ref_id": "BIBREF35"
},
{
"start": 569,
"end": 591,
"text": "Sintsova and Pu, 2016;",
"ref_id": "BIBREF39"
},
{
"start": 592,
"end": 621,
"text": "Abdul-Mageed and Ungar, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 622,
"end": 644,
"text": "Barbieri et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 655,
"end": 677,
"text": "(Wood and Ruder, 2016;",
"ref_id": "BIBREF52"
},
{
"start": 678,
"end": 697,
"text": "Felbo et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 698,
"end": 728,
"text": "Wiegand and Ruppenhofer, 2021)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 417,
"end": 426,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Surrogate Fine-tuning",
"sec_num": "3.2"
},
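As a rough illustration of surrogate fine-tuning, the sketch below fine-tunes RoBERTa as an ordinary sequence classifier whose classes are frequent end-of-tweet hashtags. The file names and CSV layout (a "text" column and an integer "label" column) are assumptions; the label count (63K) and the hyperparameters (35 epochs, 2e-5 peak learning rate) follow Sections 4.1, 5.2, and Appendix A.

```python
# Hedged sketch of SFT: hashtag prediction as multi-class sequence classification.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=63_000)   # one class per frequent end-of-tweet hashtag

# Placeholder CSV files with columns: "text" (tweet body) and "label" (hashtag id).
ds = load_dataset("csv", data_files={"train": "hashtag_pred_train.csv",
                                     "validation": "hashtag_pred_dev.csv"})

def encode(batch):
    return tokenizer(batch["text"], truncation=True, max_length=64,
                     padding="max_length")

ds = ds.map(encode, batched=True)

args = TrainingArguments(output_dir="sft_hashtag", num_train_epochs=35,
                         per_device_train_batch_size=64, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=ds["train"],
        eval_dataset=ds["validation"]).train()
```

The resulting checkpoint would then be fine-tuned on each downstream social meaning dataset in the ordinary way.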
{
"text": "4.1 Pre-training Data TweetEnglish Dataset. We extract 2.4B English tweets 2 from a larger in-house dataset collected between 2014 and 2020. We lightly normalize tweets by removing usernames and hyperlinks and add white space between emojis to help our model identify individual emojis. We keep all the tweets, retweets, and replies but remove the 'RT USER:' string in front of retweets. To ensure each tweet contains sufficient context for modeling, we filter out tweets shorter than 5 English words (not counting the special tokens hashtag, emoji, USER, URL, and RT). We call this dataset TweetEng. Exploring the distribution of hashtags and emojis within TweetEng, we find that 18.5% of the tweets include at least one hashtag but no emoji, 11.5% have at least one emoji but no hashtag, and 2.2% have both at least one hashtag and at least one emoji. Investigating the hashtag and emoji location, we observe that 7.1% of the tweets use a hashtag as the last term, and that the last term of 6.7% of tweets is an emoji. We will use TweetEng as a general pool of data from which we derive for both our PMLM and SFT methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
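The normalization described above can be approximated with a few regular expressions, as in the sketch below. The patterns are illustrative (the paper removes usernames and hyperlinks for the pre-training data and maps them to USER/URL for the downstream data; the sketch shows the replacement variant), and the emoji pattern only covers common Unicode blocks.

```python
# Hedged sketch of tweet normalization and length filtering.
import re

EMOJI_RE = re.compile("([\U0001F300-\U0001FAFF\U00002600-\U000027BF])")

def normalize(tweet: str):
    tweet = re.sub(r"^RT\s+@\w+:\s*", "", tweet)    # drop the 'RT USER:' prefix
    tweet = re.sub(r"@\w+", "USER", tweet)           # usernames -> USER
    tweet = re.sub(r"https?://\S+", "URL", tweet)    # hyperlinks -> URL
    tweet = EMOJI_RE.sub(r" \1 ", tweet)             # white space around emojis
    tweet = re.sub(r"\s+", " ", tweet).strip()
    # Keep only tweets with >= 5 content words (ignoring special tokens,
    # hashtags, and emojis), mirroring the filter described above.
    content = [w for w in tweet.split()
               if w not in {"USER", "URL", "RT"} and not w.startswith("#")
               and not EMOJI_RE.search(w)]
    return tweet if len(content) >= 5 else None

print(normalize("RT @bob: so happy to see you 😀 #friends http://t.co/x"))
```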
{
"text": "PM Datasets. We extract five different subsets from TweetEng to explore the utility of our proposed PMLM method. Each of these five datasets comprises 150M tweets as follows: Naive. A randomly selected tweet set. Based on the distribution of hashtags and emojis in TweetEng, each sample in Naive still has some likelihood to include one or more hashtags and/or emojis. We are thus still able to perform our PM method on Naive. Naive-Remove. To isolate the utility of using pragmatic cues, we construct a dataset by removing all hashtags and emojis from Naive. Hashtag_any. Tweets with at least one hashtag anywhere but no emojis. Emoji_any. Tweets with at least one emoji anywhere but no hashtags. Hashtag_end. Tweets with a hashtag as the last term but no emojis. Emoji_end. Tweets with an emoji at the end of the tweet but no hashtags. 3 SFT Datasets. We experiment with two SFT settings, one based on hashtags (SFT-H) and another based on emojis (SFT-E). For SFT-H, we utilize the Hashtag_end dataset mentioned above. The dataset includes 5M unique hashtags (all occurring at the end of tweets), but the majority of these are low frequency. We remove any hashtags occurring < 200 times, which gives us a set of 63K hashtags in 126M tweets. We split the tweets into Train (80%), Dev (10%), and Test (10%). For each sample, we use the end hashtag as the sample label. 4 We refer to this resulting dataset as Hashtag_pred. For emoji SFT, we work with the emoji_end dataset. Similar to SFT-H, we remove low-frequence emojis (< 200 times), extract the same number of tweets as Hashtag_pred, and follow the same data splitting method. We acquire a total of 1, 650 unique emojis in final positions, which we assign as class labels and remove them from the original tweet body. We refer to this dataset as Emoji_pred.",
"cite_spans": [
{
"start": 1369,
"end": 1370,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
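A minimal sketch of how Hashtag_pred-style examples could be derived: take the tweet-final hashtag as the surrogate label, strip it from the body, and drop labels occurring fewer than 200 times. The function names are hypothetical; when a tweet ends in several hashtags, the paper keeps the last one, which the final-token rule below also does.

```python
# Hedged sketch of surrogate-label extraction for SFT-H.
from collections import Counter

def split_end_hashtag(tweet: str):
    """If the tweet ends with a hashtag, return (body, surrogate_label)."""
    toks = tweet.split()
    if toks and toks[-1].startswith("#"):
        return " ".join(toks[:-1]), toks[-1].lower()
    return None

def build_hashtag_pred(tweets, min_count=200):
    """Keep only examples whose end-hashtag label occurs at least `min_count` times."""
    pairs = [p for p in (split_end_hashtag(t) for t in tweets) if p]
    counts = Counter(label for _, label in pairs)
    return [(body, label) for body, label in pairs if counts[label] >= min_count]

print(build_hashtag_pred(["USER thanks for this cold you gave me #sarcasm"],
                         min_count=1))
```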
{
"text": "We collect 15 datasets representing eight different social meaning tasks to evaluate our models, as follows: 5 Crisis awareness. We use Crisis Oltea (Olteanu et al., 2014) , a corpus for identifying whether a tweet is related to a given disaster or not. Emotion. We utilize Emo Moham , introduced by Mohammad et al. 2018, for emotion recognition. We use the version adapted in Barbieri et al. (2020) . Hateful and offensive language.",
"cite_spans": [
{
"start": 149,
"end": 171,
"text": "(Olteanu et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 377,
"end": 399,
"text": "Barbieri et al. (2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Benchmark",
"sec_num": "4.2"
},
{
"text": "We use Hate Waseem (Waseem and Hovy, 2016) , Hate David (Davidson et al., 2017) , and Offense Zamp (Zampieri et al., 2019a) . Humor. We use the humor detection datasets Humor Potash (Potash et al., 2017) and Humor Meaney (Meaney et al., 2021) . Irony. We utilize Irony Hee-A and Irony Hee-B from Van Hee et al. (2018).",
"cite_spans": [
{
"start": 19,
"end": 42,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF49"
},
{
"start": 56,
"end": 79,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 99,
"end": 123,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF54"
},
{
"start": 182,
"end": 203,
"text": "(Potash et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 221,
"end": 242,
"text": "(Meaney et al., 2021)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Benchmark",
"sec_num": "4.2"
},
{
"text": "We use four sarcasm datasets from Sarc Riloff (Riloff et al., 2013) , Sarc Ptacek (Pt\u00e1\u010dek et al., 2014) , Sarc Rajad (Rajadesingan et al., 2015), and Sarc Bam (Bamman and Smith, 2015) . Sentiment. We employ the three-way sentiment analysis dataset from Senti Rosen (Rosenthal et al., 2017) . Stance. We use Stance Moham , a stance detection dataset from Mohammad et al. (2016) . The task is to identify the position of a given tweet towards a target of interest.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Riloff et al., 2013)",
"ref_id": "BIBREF37"
},
{
"start": 82,
"end": 103,
"text": "(Pt\u00e1\u010dek et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 159,
"end": 183,
"text": "(Bamman and Smith, 2015)",
"ref_id": "BIBREF2"
},
{
"start": 265,
"end": 289,
"text": "(Rosenthal et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 354,
"end": 376,
"text": "Mohammad et al. (2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sarcasm.",
"sec_num": null
},
{
"text": "We use the Twitter API 6 to crawl datasets which are available only in tweet ID form. We note that we could not download all tweets since some tweets get deleted by users or become inaccessible for some other reason. Since some datasets are old (dating back to 2013), we are only able to retrieve 73% of the tweets on average (i.e., across the different datasets). We normalize each tweet by re-placing the user names and hyperlinks to the special tokens 'USER' and 'URL', respectively. For datasets collected based on hashtags by original authors (i.e., distant supervision), we also remove the seed hashtags from the original tweets. For datasets originally used in cross-validation, we acquire 80% Train, 10% Dev, and 10% Test via random splits. For datasets that had training and test splits but not development data, we split off 10% from training data into Dev. The data splits of each dataset are presented in Table 2 . To test our models under the few-shot setting, we conduct few-shot experiments on varying percentages of the Train set of each task (i.e., 1%, 5%, 10%, 20% . . . 90%). For each of these sizes, we randomly sample three times with replacement (as we report the average of three runs in our experiments) and evaluate each model on the original Dev and Test sets. We also evaluate our models on the zero-shot setting utilizing data from Arabic: Emo Mageed (Abdul-Mageed et al., 2020) , Irony Ghan (Ghanem et al., 2019) ; Italian: Emo Bian (Bianchi et al., 2021) and Hate Bosco (Bosco et al., 2018) ; and Spanish: Emo Moham (Mohammad et al., 2018) and Hate Bas (Basile et al., 2019) .",
"cite_spans": [
{
"start": 1379,
"end": 1406,
"text": "(Abdul-Mageed et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 1420,
"end": 1441,
"text": "(Ghanem et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 1462,
"end": 1484,
"text": "(Bianchi et al., 2021)",
"ref_id": "BIBREF6"
},
{
"start": 1500,
"end": 1520,
"text": "(Bosco et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 1546,
"end": 1569,
"text": "(Mohammad et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 1583,
"end": 1604,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 917,
"end": 924,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Sarcasm.",
"sec_num": null
},
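The few-shot protocol (sampling a given percentage of Train three times with replacement and averaging the three runs) can be sketched as follows; the function and variable names are illustrative.

```python
# Hedged sketch of the few-shot sampling protocol.
import random

def few_shot_splits(train_examples, percent, n_runs=3, seed=42):
    """Draw `n_runs` samples of `percent`% of the Train set, with replacement."""
    rng = random.Random(seed)
    k = max(1, int(len(train_examples) * percent / 100))
    return [[rng.choice(train_examples) for _ in range(k)] for _ in range(n_runs)]

train = list(range(1000))               # stand-in for a task's Train set
for pct in (1, 5, 10, 20):
    runs = few_shot_splits(train, pct)
    print(pct, [len(r) for r in runs])  # three sampled subsets per percentage
```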
{
"text": "For both our experiments on PMLM (Section 5.1) and SFT (Section 5.2), we use the pre-trained English RoBERTa Base model as the initial checkpoint model. We use this model, rather than a larger language model, since we run a large number of experiments and needed to be efficient with GPUs. We use the RoBERTa 7 tokenizer to process each input sequence and pad or truncate the sequence to a maximal length of 64 BPE tokens. We continue training RoBERTa with our proposed methods for five epochs with a batch size of 8, 192 and then fine-tune the further trained models on downstream datasets. We provide details about our hyper-parameters in AppendixA. Our baseline (1) fine-tunes original pre-trained RoBERTa on downstream datsets without any further training. Our baseline (2) fine-tunes a SOTA Transformer-based PLM for English tweets, i.e., BERTweet (Nguyen et al., 2020), on downstream datasets. For PMLM experiments, we provide baseline (3), which further pre-trains RoBERTa on Naive-Remove dataset with the random masking strategy and MLM objectives. We refer to this model as RM-NR. We now present our results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation and Baselines",
"sec_num": "4.3"
},
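For reference, a hedged sketch of the continued pre-training setup described above (roberta-base, sequences truncated to 64 BPE tokens, five epochs, an 8,192 effective batch size, and the 5e-5 peak learning rate reported in Appendix A). DataCollatorForLanguageModeling with random masking stands in for the pragmatic collator of Section 3.1; the file name and the gradient-accumulation split are assumptions.

```python
# Hedged sketch of continued MLM pre-training on tweets.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# One tweet per line; "tweeteng_subset.txt" is a placeholder file name.
ds = load_dataset("text", data_files={"train": "tweeteng_subset.txt"})
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=64),
            batched=True, remove_columns=["text"])

# Random 15% masking as a stand-in; the paper's pragmatic collator would
# prioritize hashtag/emoji tokens instead (see the earlier sketch).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)

args = TrainingArguments(output_dir="pmlm", num_train_epochs=5, learning_rate=5e-5,
                         per_device_train_batch_size=64,
                         gradient_accumulation_steps=128)  # 64 x 128 = 8,192 effective
Trainer(model=model, args=args, train_dataset=ds["train"],
        data_collator=collator).train()
```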
{
"text": "We report performance of our models trained with our PM strategy in Section 5.1, where we investigate two types of pragmatic signals (i.e., hashtag and emoji) and the effect of their locations (anywhere vs. at the end). Section 5.2 shows the results of our SFT method with hashtags and emojis. Moreover, we combine our two proposed methods and compare our models to the SOTA models in Sections 5.3 and 5.4, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "PM on Naive. We further pre-train RoBERTa on the Naive dataset with our pragmatic masking strategy (PM) and compare to a model trained on the same dataset with random masking (RM). As Table 3 shows, PM-N outperforms RM-N with an average improvement of 0.69 macro F 1 points across the 15 tasks. We also observe that PM-N improves over RM-N in 12 out of the 15 tasks, thus reflecting the effectiveness of our PM strategy even when working with a dataset such as Naive where it is not guaranteed (although likely) that a tweet has hashtags and/or emojis. Moreover, RM-N outperforms RM-NR on eight tasks with improvement of 0.12 average F 1 . This indicates that pragmatic cues (i.e., emoji and hashtags) are essential for learning social media data. PM of Hashtags. To study the effect of PM on the controlled setting where we guarantee each sam- ple has at least one hashtag anywhere, we further pre-train RoBERTa on the Hashtag_any dataset with PM (PM-HA in Table 3 ) and compare to a model further pre-trained on the same dataset with the RM (RM-HA). As Table 3 shows, PM-HA does not improve over RM-HA. Rather, PM-HA results are marginally lower than those of RM-HA. We suspect that the degradation is due to confusions when a hashtag is used as a word of a sentence. Thus, we investigate the effectiveness of hashtag location.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 958,
"end": 965,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1055,
"end": 1062,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "PMLM Experiments",
"sec_num": "5.1"
},
{
"text": "Previous studies (Ren et al., 2016 ; Abdul-Mageed and Ungar, 2017) use hashtags as a proxy to label data with social meaning concepts, indicating that hashtags occuring at the end of posts are reliable cues. Hence, we further pre-train RoBERTa on the Hashtag_end dataset with PM and RM, respectively. As Table 3 shows, PM exploiting hashtags in the end (PM-HE) outperforms random masking (RM-HE) with an average improvement of 1.08 F 1 across the 15 tasks. It is noteworthy that PM-HE shows improvements over RM-HE in the majority of tasks (12 tasks), and both of them outperform the baselines (1) and (3). Compared to RM-HA and PM-HA, the results demonstrate the utility of end-location hashtags on training a LM.",
"cite_spans": [
{
"start": 17,
"end": 34,
"text": "(Ren et al., 2016",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 304,
"end": 311,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Effect of Hashtag Location.",
"sec_num": null
},
{
"text": "PM of Emojis. Again, in order to study the impact of PM of emojis under a controlled condition where we guarantee each sample has at least one emoji, we further pre-train RoBERTa on the Emoji_any dataset with PM and RM, respectively. As Table 3 shows, both methods result in sizable im-provements on most of tasks. PM-EA outperforms the random masking method (RM-EA) (macro F 1 =0.38 improvement) and also exceeds the baseline (1), (2), and (3) with 1.52, 0.20, and 1.50 average F 1 , respectively. PM-EA thus obtains the best overall performance (macro F 1 = 77.30) and also achieves the best performance on Crisis Oltea-14 , two irony detection tasks, Offense Zamp , and Sarc Ptacek across all settings of our PM. This indicates that emojis carry important knowledge for social meaning tasks and demonstrates the effectiveness of our PM mechanism to distill and transfer this knowledge to diverse tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Hashtag Location.",
"sec_num": null
},
{
"text": "We analyze whether learning is sensitive to emoji location: we further pre-train RoBERTa on Emoji_end dataset with PM and RM and refer to these two models as PM-EE and RM-EE, respectively. Both models perform better than our baselines (1) and (3), and PM-EE achieves the best performance on four datasets across all settings of our PM. Unlike the case of hashtags, the location of the masked emoji is not sensitive for the learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Emoji Location.",
"sec_num": null
},
{
"text": "Overall, results show the effectiveness of our PMLM method in improving the self-supervised LM. All models trained with PM on emoji data obtain better performance than those pre-trained on hashtag data. It suggests that emoji cues are somewhat more helpful than hashtag cues for this type of guided model pre-training in the context of social meaning tasks. This implies emojis are more relevant to many social meaning tasks than hashtags are. In other words, in addition to them being cues for social meaning, hashtags can also stand for general topical categories to which different social meaning concepts can apply (e.g., #lunch can be accompanied by both happy and disgust emotions).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Emoji Location.",
"sec_num": null
},
{
"text": "We conduct SFT using hashtags and emojis. We continue training the original RoBERTa on the Hashtag_pred and Emoji_pred dataset for 35 epochs and refer to these trained models as SFT-H and SFT-E, respectively. To evaluate SFT-H and SFT-E, we further fine-tune the obtained models on 15 task-specific datasets. As Table 4 shows, SFT-E outperforms the first baseline (i.e., RoBERTa) with 1.16 F 1 scores. Comparing SFT-E and PMLM trained with the same dataset (PM-EE), we observe that the two models perform similarly (76.94 for SFT-E vs. 76.96 for PM-EE). Our proposed SFT-H method is also highly effective. On average, SFT-H achieves 2.19 and 0.87 F 1 improvement over our baseline (1) and (2), respectively. SFT-H also yields sizeable improvements on datasets with smaller training samples, such Irony Hee-B (improvement of 7.84 F 1 ) and Sarc Riloff (improvement of 6.65 F 1 ). Comparing SFT-H to the PMLM model trained with the same dataset (i.e., PM-HE), we observe that SFT-H also outperforms PM-H with 1.38 F 1 . This result indicate that SFT can more effectively utilize the information from tweets with hashtags. Table 4 : Surrogate fine-tuning (SFT). Baselines: RB (RoBERTa) and BTw (BERTweet). SFT-H: SFT with hashtags. SFT-E: SFT with emojis. PragS1: PMLM with Hashtag_end (best hashtag PM condition) followed by SFT-H. PragS2: PMLM with Emoji_any (best emoji PM condition) followed by SFT-E.",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 319,
"text": "Table 4",
"ref_id": null
},
{
"start": 1120,
"end": 1127,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "SFT Experiments",
"sec_num": "5.2"
},
{
"text": "To further improve the PMLM with SFT, we take the best hashtag-based model (i.e., PM-HE in Table 3 ) and fine-tune on Emoji_pred (i.e., SFT-E) for 35 epochs. We refer to this last setting as PM-HE+SFT-E but use the easier alias PragS1 in Table 4 . We observe that PragS1 outperforms both, reaching an average F 1 of 77.43 vs. 75.78 for the baseline (1) and 76.94 for SFT-E. Similarly, we also take the best emoji-based PMLM (i.e., PM-EA in Table 3 ) and fine-tune on Hashtag_pred SFT (i.e., SFT-H) for 35 epochs. This last setting is referred to as PM-EA+SFT-H, but we again use the easier alias PragS2. Our best result is achieved with a combination of PM with emojis and SFT on hashtags (the PragS2 condition). This last model achieves an average F 1 of 78.12 and is 2.34 and 1.02 average points higher than baselines of RoBERTa and BERTweet, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 98,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 238,
"end": 245,
"text": "Table 4",
"ref_id": null
},
{
"start": 440,
"end": 447,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Combining PM and SFT",
"sec_num": "5.3"
},
{
"text": "The purpose of our work is to produce representations effective across all social meaning tasks, rather than a single given task. However, we still compare our best model (i.e., PragS2) on each dataset to the SOTA of that particular dataset and the published results on a Twitter evaluation benchmark (Barbieri et al., 2020) . All our reported results are an average of three runs, and we report using the same respective metric adopted by original authors on each dataset. As Table 5 shows, our model achieves the best performance on eight out of 15 datasets. On average, our models are 0.97 points higher than the closest baseline, i.e., BERTweet. This shows the superiority of our methods, even when compared to models trained simply with MLM with \u223c 3\u00d7 more data (850M tweets for BERTweet vs. only 276M for our best method). We also note that some SOTA models adopt taskspecific approaches and/or require task-specific resources. For example, Bamman and Smith (2015) utilize Stanford sentiment analyzer to identify the sentiment polarity of each word. In addition, taskspecific methods can still be combined with our proposed approaches to improve performance on individual tasks.",
"cite_spans": [
{
"start": 301,
"end": 324,
"text": "(Barbieri et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 946,
"end": 969,
"text": "Bamman and Smith (2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 477,
"end": 484,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Model Comparisons",
"sec_num": "5.4"
},
{
"text": "6 Zero-and Few-Shot Learning",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparisons",
"sec_num": "5.4"
},
{
"text": "Since our methods exploit general cues in the data for pragmatic masking and learn a broad range of social meaning concepts, we hypothesize they should be particularly effective in few-shot learning. To test this hypothesis, we fine-tune our best models (i.e., PragS1 and PragS2) on varying percentages of the Train set of each task as explained in Section 4.2. Figure 2 shows that our two mod- BERTweet (Nguyen et al., 2020) . We compare using the same metrics employed on each dataset. Metrics:",
"cite_spans": [
{
"start": 395,
"end": 425,
"text": "BERTweet (Nguyen et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 362,
"end": 370,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Comparisons",
"sec_num": "5.4"
},
{
"text": "1 : F 1 sarcasm class, M-Rec: macro recall, Avg(a,f): Average F 1 of the against and in-favor classes (three-way dataset). els always achieve better average macro F 1 scores than each of the RoBERTa and BERTweet baselines across all data size settings. Strikingly, our PragS1 and PragS2 outperform RoBERTa with an impressive 11.16 and 10.55 average macro F 1 , respectively, when we fine-tune them on only 1% of the downstream gold data. If we use only 5% of gold data, our PragS1 and PragS2 improve over the RoBERTa baseline with 5.50% and 5.08 points, respectively. This demonstrates that our proposed methods most effectively alleviate the challenge of labeled data even under the severely few-shot setting. In addition, we observe that the domain-specific LM, BERTweet, is outperformed by RoBERTa when labeled training data is severely scarce (\u2264 20%) (although it achieves superior performance when it is fine-tuned on the full dataset). These results suggest that, for the scarce data setting, it may be better to further pre-train and surrogate fine-tune an PLM than pre-train a domainspecific LM from scratch. We provide model performance on each downstream task and various few-shot settings in Section B in Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparisons",
"sec_num": "5.4"
},
{
"text": "Our proposed methods are language agnostic, and may fare well on languages other than English. Although we do not test this claim directly in this work, we do score our English-language best models on six datasets from three other languages (zero-shot setting). We fine-tune our best English model (i.e., PragS2 in Table 4 ) on the English dataset Emo Moham , Irony Hee-A , and Hate David and, then, evaluate on the Test set of emotion, irony, and hate speech datasets from other languages, respectively. We compare these models against the English RoBERTa baseline fine-tuned on the same English data. As Table 6 shows, our models outperform the baseline in the zero-shot setting on five out of six dataset with an average improvement of 5.96 F 1 . These results emphasize the effectiveness of our methods even in the zero-shot setting across different languages and tasks, and motivate future work further extending our methods to other languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 322,
"text": "Table 4",
"ref_id": null
},
{
"start": 606,
"end": 613,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Model Comparisons",
"sec_num": "5.4"
},
{
"text": "To better understand model behavior, we carry out both a qualitative and a quantitative analysis. For the qualitative analysis, we encode all the Dev and Test samples from one emotion downstream task using two PLMs (RoBERTa and BERTweet) and our two best models (i.e., PragS1 and PragS2) 8 . We then use the hidden state of the [CLS] token from the last Transformer encoder layer as the representation of each input. We then map these tweet representation vectors (768 dimensions) to a 2-D space through t-SNE technique (Van der Maaten and Hinton, 2008) and visualize the results. Comparing our models to the original RoBERTa and BERTweet, we observe that the representations from our models give sensible clustering of emotions before fine-tuning on downstream dataset. Recent research (Ethayarajh, 2019; Li et al., 2020; Gao et al., 2021) has identified an anisotropy problem with the sentence embedding from PLMs, i.e., learned representations occupy a narrow cone, which significantly undermines their expressiveness. Hence, several concurrent studies (Gao et al., 2021; Liu et al., 2021a) seek to improve uniformity of PLMs. However, Wang and Liu (2021) reveal a uniformity-tolerance dilemma, where excessive uniformity makes a model intolerant to semantically similar samples, thereby breaking its underlying semantic structure. Following Wang and Liu (2021) , we investigate the uniformity and tolerance of our models. The uniformity metric indicates the embedding distribution in a unit hypersphere, and the tolerance metric is the mean similarities of samples belonging to the same class. Formulas of uniformity and tolerance are defined in Section C in appendix. We calculate these two metrics for each model using development data from our 8 Note that we use these representation models without downstream fine-tuning.",
"cite_spans": [
{
"start": 520,
"end": 553,
"text": "(Van der Maaten and Hinton, 2008)",
"ref_id": "BIBREF44"
},
{
"start": 787,
"end": 805,
"text": "(Ethayarajh, 2019;",
"ref_id": "BIBREF12"
},
{
"start": 806,
"end": 822,
"text": "Li et al., 2020;",
"ref_id": "BIBREF19"
},
{
"start": 823,
"end": 840,
"text": "Gao et al., 2021)",
"ref_id": "BIBREF14"
},
{
"start": 1056,
"end": 1074,
"text": "(Gao et al., 2021;",
"ref_id": "BIBREF14"
},
{
"start": 1075,
"end": 1093,
"text": "Liu et al., 2021a)",
"ref_id": "BIBREF21"
},
{
"start": 1345,
"end": 1364,
"text": "Wang and Liu (2021)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Analyses",
"sec_num": "7"
},
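The qualitative analysis above can be reproduced in outline as follows: encode tweets, take the last-layer hidden state at the first token (the [CLS]/<s> position) as a 768-dimensional sentence representation, and project to 2-D with t-SNE. The model checkpoint and example tweets are placeholders.

```python
# Hedged sketch of [CLS] embedding extraction and t-SNE projection.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.manifold import TSNE

tokenizer = AutoTokenizer.from_pretrained("roberta-base")   # placeholder checkpoint
model = AutoModel.from_pretrained("roberta-base").eval()

tweets = ["so happy today", "this is awful", "what a surprise"]  # placeholder inputs
with torch.no_grad():
    enc = tokenizer(tweets, padding=True, truncation=True, max_length=64,
                    return_tensors="pt")
    # 768-d sentence vectors: last-layer hidden state at the [CLS]/<s> position.
    cls = model(**enc).last_hidden_state[:, 0]

coords = TSNE(n_components=2, perplexity=2, init="random").fit_transform(cls.numpy())
print(coords.shape)   # (n_tweets, 2) points ready for plotting
```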
{
"text": "13 downstream datasets (excluding Crisis Oltea and Stance Moham ). As Table 7 shows, RoBERTa obtains a low uniformity and high tolerance score with its representations are located at a narrow cone where the cosine similarities of data points are extremely high. Results reveal that none of MLMs (i.e., pragmatic masking and random masking models) improves the spatial anisotropy. Nevertheless, surrogate fine-tuning is able to alleviate the anisotropy improving the uniformity. SFT-H achieves best uniformity (at 3.00). Our hypothesis is that fine-tuning on our extremely fine-grained hashtag prediction task forces the model to learn a more uniform representation where hashtag classes are separable. Finally, we observe that our best model, Prag2, makes a balance between uniformity and tolerance (uniformity= 2.36, tolerance= 0.35). Table 7 : Comparison of uniformity and tolerance. For both metrics, higher is better.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 7",
"ref_id": null
},
{
"start": 836,
"end": 843,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Analyses",
"sec_num": "7"
},
{
"text": "We proposed two novel methods for improving transfer learning with PLMs, pragmatic masking and surrogate fine-tuning, and demonstrated the effectiveness of these methods on a wide range of social meaning datasets. Our models exhibit remarkable performance in the few-shot setting and even the severely few-shot setting. Our models also establish new SOTA on eight out of fifteen datasets when compared to tailored, task-specific models with access to external resources. Our proposed methods are also language independent, and show promising performance when applied in zeroshot settings on six datasets from three different languages. In future research, we plan to further test this language independence claim. We hope our methods will inspire new work on improving language models without use of much labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "state of [CLS] token from the last Transformerencoder layer through a non-linear layer to predict. Cross-Entropy calculates the training loss. We then use Adam with a weight decay of 0.01 to optimize the model and fine-tune each task for 20 epochs with early stop (patience = 5 epochs). We fine-tune the peak learning rate in a set of {1e \u2212 5, 5e \u2212 6} and batch size in a set of {8, 32, 64}. We find the learning rate of 5e \u2212 6 performs best across all the tasks. For the downstream tasks whose Train set is smaller than 15, 000 samples, the best mini-batch size is eight. The best batch size of other larger downstream tasks is 64.",
"cite_spans": [
{
"start": 9,
"end": 14,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
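A minimal PyTorch sketch of the downstream classifier described above: the [CLS] hidden state from the last encoder layer passes through a non-linear head, optimized with cross-entropy and Adam-style weight decay (0.01) at the 5e-6 learning rate reported here. The class and attribute names are illustrative, not the authors' code.

```python
# Hedged sketch of the downstream fine-tuning classifier.
import torch
import torch.nn as nn
from transformers import AutoModel

class SocialMeaningClassifier(nn.Module):
    """RoBERTa encoder + non-linear head over the last-layer [CLS]/<s> state."""
    def __init__(self, num_labels: int, plm: str = "roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Dropout(0.1),
                                  nn.Linear(hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, num_labels))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # hidden state at the [CLS]/<s> position
        return self.head(cls)

model = SocialMeaningClassifier(num_labels=3)
# AdamW with weight decay 0.01 and the 5e-6 peak learning rate reported above.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6, weight_decay=0.01)
loss_fn = nn.CrossEntropyLoss()
```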
{
"text": "For fine-tuning BERTweet, we use the hyperparameters identified in Nguyen et al. (2020), i.e., a fixed learning rate of 1e \u2212 5 and a batch size of 32. We use the same hyperparameters to run three times with random seeds for all downstream finetuning (unless otherwise indicated). All downstream task models are fine-tuned on four Nvidia V100 GPUs (32G each). At the end of each epoch, we evaluate the model on the Dev set and identify the model that achieved the highest performance on Dev as our best model. We then test the best model on the Test set. In order to compute the model's overall performance across 15 tasks, we use same evaluation metric (i.e., macro F 1 ) for all tasks. We report the average Test macro F 1 of the best model over three runs. We also average the macro F 1 scores across 15 tasks to present the model's overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Tables B.1, B.2, B.3, and B.4 respectively, present the performance of RoBERTa, BERTweet, PragS1, and PragS2 on all our 15 English downstream datasets and various few-shot settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Few-Shot Experiment",
"sec_num": null
},
{
"text": "Wang and Liu (2021) investigate representation quality measuring the uniformity of an embedding distribution and the tolerance to semantically similar samples. Given a dataset D and an encoder \u03a6, the uniformity metric is based on a gaussian potential kernel and is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Uniformity and Tolerance",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "U nif ormity = log E x i ,x j \u2208D [e t||\u03a6(x i )\u2212\u03a6(x j )|| 2 2 ],",
"eq_num": "(1)"
}
],
"section": "C Uniformity and Tolerance",
"sec_num": null
},
{
"text": "where t = 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Uniformity and Tolerance",
"sec_num": null
},
{
"text": "The tolerance metric measures the mean of similarities of samples belonging to the same class, which defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Uniformity and Tolerance",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T olerance = log E x i ,x j \u2208D [(\u03a6(xi) T \u03a6(xj)) \u2022 I l(x i )=l(x j ) ],",
"eq_num": "(2)"
}
],
"section": "C Uniformity and Tolerance",
"sec_num": null
},
{
"text": "where l(x i ) is the supervised label of sample x i . I l(x i )=l(x j ) is an indicator function, giving the value of 1 for l(x i ) = l(x j ) and the value of 0 for l(x i ) \u0338 = l(x j ). In our experiments, we use gold development samples from 13 our social meaning datasets. Task 1 5 20 30 40 50 60 70 80 90 ",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 319,
"text": "Task 1 5 20 30 40 50 60 70 80 90",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "C Uniformity and Tolerance",
"sec_num": null
},
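A small sketch of how the two metrics can be computed from L2-normalized sentence embeddings. Uniformity follows Eq. (1) with t = 2 as reconstructed above (note that the standard Wang and Isola formulation uses a negative exponent, so the sign convention may differ); tolerance is computed as the mean same-class similarity, matching the prose description of Eq. (2). Inputs are random placeholders.

```python
# Hedged sketch of the uniformity and tolerance metrics.
import torch

def uniformity(emb: torch.Tensor, t: float = 2.0) -> float:
    # Eq. (1): log E[ exp(t * ||Phi(x_i) - Phi(x_j)||_2^2) ] over pairs i != j.
    d2 = torch.cdist(emb, emb).pow(2)
    off_diag = ~torch.eye(len(emb), dtype=torch.bool)
    return torch.log(torch.exp(t * d2[off_diag]).mean()).item()

def tolerance(emb: torch.Tensor, labels: torch.Tensor) -> float:
    # Mean of Phi(x_i)^T Phi(x_j) over pairs with the same gold label (i != j).
    sim = emb @ emb.T
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    same &= ~torch.eye(len(emb), dtype=torch.bool)
    return sim[same].mean().item()

# Random placeholder embeddings (L2-normalized) and labels.
emb = torch.nn.functional.normalize(torch.randn(8, 768), dim=-1)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(uniformity(emb), tolerance(emb, labels))
```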
{
"text": "We select English tweets based on the Twitter language tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We perform an analysis based on two 10M random samples of tweets from Hashtag_any and Emoji_any, respectively. We find that on average there are 1.83 hashtags per tweet in Hashtag_any and 1.88 emojis per tweet in Emoji_any.4 We use the last hashtag as the label if there are more than one hashtag in the end of a tweet. Different from PMLM, SFT is a multi-class single-label classification task. We plan to explore the multi-class multi-label SFT in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To facilitate reference, we give each dataset a name. 6 https://developer.twitter.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For short, we refer to the official released English RoBERTaBase as RoBERTa in the rest of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Compute Canada (CC), 9 , and UBC ARC-Sockeye. 10 Any opinions, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSERC, SSHRC, CFI, CC, or UBC ARC-Sockeye. We thank AbdelRahim ElMadany for help with data preparation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "A Hyper-parameters and Procedure Pragmatic Masking. For pragmatic masking, we use the Adam optimizer with a weight decay of 0.01 (Loshchilov and Hutter, 2019) and a peak learning rate of 5e \u2212 5. The number of the epochs is five. Surrogate Fine-Tuning. For surrogate fine-tuning, we fine-tune RoBERTa on surrogate classification tasks with the same Adam optimizer but use a peak learning rate of 2e \u2212 5.The pre-training and surrogate fine-tuning models are trained on eight Nvidia V100 GPUs (32G each). On average the running time is 24 hours per epoch for PMLMs, 2.5 hours per epoch for SFT models. All the models are implemented by Huggingface Transformers (Wolf et al., 2020) . Downstream Fine-Tuning. We evaluate the further pre-trained models with pragmatic masking and surrogate fine-tuned models on the 15 downstream tasks in Table 2 . We set maximal sequence length as 60 for 13 text classification tasks. For Crisis Oltea and Stance Moham , we append the topic term behind the post content, separate them by [SEP] token, and set maximal sequence length to 70, especially. For all the tasks, we pass the hidden",
"cite_spans": [
{
"start": 129,
"end": 158,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF24"
},
{
"start": 658,
"end": 677,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF51"
},
{
"start": 1016,
"end": 1021,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 832,
"end": 839,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendices",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "EmoNet: Fine-grained emotion detection with gated recurrent neural networks",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abdul",
"suffix": ""
},
{
"first": "-Mageed",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "718--728",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1067"
]
},
"num": null,
"urls": [],
"raw_text": "Muhammad Abdul-Mageed and Lyle Ungar. 2017. EmoNet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 718-728, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "AraNet: A deep learning toolkit for Arabic social media",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "Chiyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Azadeh",
"middle": [],
"last": "Hashemi",
"suffix": ""
},
{
"first": "El Moatez Billah",
"middle": [],
"last": "Nagoudi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Abdul-Mageed, Chiyu Zhang, Azadeh Hashemi, and El Moatez Billah Nagoudi. 2020. AraNet: A deep learning toolkit for Arabic social media. In Proceedings of the 4th Workshop on Open- Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 16-23, Marseille, France. European Language Re- source Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contextualized sarcasm detection on twitter",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Ninth International Conference on Web and Social Media, ICWSM 2015",
"volume": "",
"issue": "",
"pages": "574--577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman and Noah A. Smith. 2015. Contextu- alized sarcasm detection on twitter. In Proceedings of the Ninth International Conference on Web and Social Media, ICWSM 2015, University of Oxford, Oxford, UK, May 26-29, 2015, pages 574-577. AAAI Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "TweetEval: Unified benchmark and comparative evaluation for tweet classification",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Neves",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1644--1650",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.148"
]
},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Jose Camacho-Collados, Luis Es- pinosa Anke, and Leonardo Neves. 2020. TweetEval: Unified benchmark and comparative evaluation for tweet classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1644-1650, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SemEval 2018 task 2: Multilingual emoji prediction",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "24--33",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1003"
]
},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Jose Camacho-Collados, Francesco Ronzano, Luis Espinosa-Anke, Miguel Ballesteros, Valerio Basile, Viviana Patti, and Horacio Saggion. 2018. SemEval 2018 task 2: Multilingual emoji prediction. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 24-33, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "FEEL-IT: emotion and sentiment classification for the italian language",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, WASSA@EACL 2021",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Debora Nozza, and Dirk Hovy. 2021. FEEL-IT: emotion and sentiment classifica- tion for the italian language. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, WASSA@EACL 2021, Online, April 19, 2021, pages 76-83. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Overview of the EVALITA 2018 hate speech detection task",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Maurizio",
"middle": [],
"last": "Tesconi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics",
"volume": "2263",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristina Bosco, Felice Dell'Orletta, Fabio Poletto, Manuela Sanguinetti, and Maurizio Tesconi. 2018. Overview of the EVALITA 2018 hate speech de- tection task. In Proceedings of the Sixth Evalua- tion Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018), Turin, Italy, December 12-13, 2018, volume 2263 of CEUR Workshop Proceedings. CEUR-WS.org.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Rethinking why intermediate-task fine-tuning works",
"authors": [
{
"first": "Ting-Yun",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chi-Jen",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2021",
"volume": "",
"issue": "",
"pages": "706--713",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-emnlp.61"
]
},
"num": null,
"urls": [],
"raw_text": "Ting-Yun Chang and Chi-Jen Lu. 2021. Rethinking why intermediate-task fine-tuning works. In Find- ings of the Association for Computational Linguis- tics: EMNLP 2021, pages 706-713, Punta Cana, Do- minican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hybrid emojibased masked language models for zero-shot abusive language detection",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Corazza",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "943--949",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.84"
]
},
"num": null,
"urls": [],
"raw_text": "Michele Corazza, Stefano Menini, Elena Cabrio, Sara Tonelli, and Serena Villata. 2020. Hybrid emoji- based masked language models for zero-shot abusive language detection. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 943-949, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"W"
],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eleventh International Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "512--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International Conference on Web and Social Media, ICWSM 2017, Montr\u00e9al, Qu\u00e9bec, Canada, May 15-18, 2017, pages 512-515. AAAI Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "55--65",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1006"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh. 2019. How contextual are contextu- alized word representations? Comparing the geom- etry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 55-65, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm",
"authors": [
{
"first": "Bjarke",
"middle": [],
"last": "Felbo",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Iyad",
"middle": [],
"last": "Rahwan",
"suffix": ""
},
{
"first": "Sune",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1615--1625",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1169"
]
},
"num": null,
"urls": [],
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rah- wan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representa- tions for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 1615-1625, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SimCSE: Simple contrastive learning of sentence embeddings",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xingcheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "6894--6910",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.552"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence em- beddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Process- ing, pages 6894-6910, Online and Punta Cana, Do- minican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "IDAT at FIRE2019: overview of the track on irony detection in arabic tweets",
"authors": [
{
"first": "Bilal",
"middle": [],
"last": "Ghanem",
"suffix": ""
},
{
"first": "Jihen",
"middle": [],
"last": "Karoui",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Moriceau",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2019,
"venue": "FIRE '19: Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "10--13",
"other_ids": {
"DOI": [
"10.1145/3368567.3368585"
]
},
"num": null,
"urls": [],
"raw_text": "Bilal Ghanem, Jihen Karoui, Farah Benamara, V\u00e9ronique Moriceau, and Paolo Rosso. 2019. IDAT at FIRE2019: overview of the track on irony de- tection in arabic tweets. In FIRE '19: Forum for Information Retrieval Evaluation, Kolkata, India, De- cember, 2019, pages 10-13. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Train no evil: Selective masking for task-guided pre-training",
"authors": [
{
"first": "Yuxian",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaozhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6966--6974",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.566"
]
},
"num": null,
"urls": [],
"raw_text": "Yuxian Gu, Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu, and Maosong Sun. 2020. Train no evil: Selective masking for task-guided pre-training. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6966-6974, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Knowledge enhanced masked language model for stance detection",
"authors": [
{
"first": "Kornraphop",
"middle": [],
"last": "Kawintiranon",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4725--4735",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.376"
]
},
"num": null,
"urls": [],
"raw_text": "Kornraphop Kawintiranon and Lisa Singh. 2021. Knowledge enhanced masked language model for stance detection. In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 4725-4735, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SentiLARE: Sentiment-aware language representation learning with linguistic knowledge",
"authors": [
{
"first": "Pei",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Haozhe",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Siyang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6975--6988",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.567"
]
},
"num": null,
"urls": [],
"raw_text": "Pei Ke, Haozhe Ji, Siyang Liu, Xiaoyan Zhu, and Min- lie Huang. 2020. SentiLARE: Sentiment-aware lan- guage representation learning with linguistic knowl- edge. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6975-6988, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On the sentence embeddings from pre-trained language models",
"authors": [
{
"first": "Bohan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "9119--9130",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.733"
]
},
"num": null,
"urls": [],
"raw_text": "Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "EntityBERT: Entity-centric masking strategy for model pretraining for the clinical domain",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 20th Workshop on Biomedical Language Processing",
"volume": "",
"issue": "",
"pages": "191--201",
"other_ids": {
"DOI": [
"10.18653/v1/2021.bionlp-1.21"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2021. EntityBERT: Entity-centric masking strategy for model pretrain- ing for the clinical domain. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 191-201, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders",
"authors": [
{
"first": "Fangyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vulic",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "2021",
"issue": "",
"pages": "1442--1459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangyu Liu, Ivan Vulic, Anna Korhonen, and Nigel Collier. 2021a. Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1442-1459. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Crisisbert: A robust transformer for crisis classification and contextual crisis embedding",
"authors": [
{
"first": "Junhua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Trisha",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "Luci\u00ebnne",
"middle": [
"T",
"M"
],
"last": "Blessing",
"suffix": ""
},
{
"first": "Kristin",
"middle": [
"L"
],
"last": "Wood",
"suffix": ""
},
{
"first": "Kwan Hui",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2021,
"venue": "HT '21: 32nd ACM Conference on Hypertext and Social Media",
"volume": "",
"issue": "",
"pages": "133--141",
"other_ids": {
"DOI": [
"10.1145/3465336.3475117"
]
},
"num": null,
"urls": [],
"raw_text": "Junhua Liu, Trisha Singhal, Luci\u00ebnne T. M. Blessing, Kristin L. Wood, and Kwan Hui Lim. 2021b. Crisis- bert: A robust transformer for crisis classification and contextual crisis embedding. In HT '21: 32nd ACM Conference on Hypertext and Social Media, Virtual Event, Ireland, 30 August 2021 -2 September 2021, pages 133-141. ACM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations, ICLR 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenRe- view.net.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semeval 2021 task 7: Hahackathon, detecting and rating humor and offense",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Meaney",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"R"
],
"last": "Wilson",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Chiruzzo",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Magdy",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 15th International Workshop on Semantic Evaluation, SemEval@ACL/IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "105--119",
"other_ids": {
"DOI": [
"10.18653/v1/2021.semeval-1.9"
]
},
"num": null,
"urls": [],
"raw_text": "J. A. Meaney, Steven R. Wilson, Luis Chiruzzo, Adam Lopez, and Walid Magdy. 2021. Semeval 2021 task 7: Hahackathon, detecting and rating humor and offense. In Proceedings of the 15th International Workshop on Semantic Evaluation, SemEval@ACL/IJCNLP 2021, Virtual Event / Bangkok, Thailand, August 5-6, 2021, pages 105-119. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "SemEval-2018 task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1001"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval- 2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1-17, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SemEval-2016 task 6: Detecting stance in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "31--41",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1003"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31- 41, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "BERTweet: A pre-trained language model for English tweets",
"authors": [
{
"first": "Dat Quoc",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Anh Tuan",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "9--14",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.2"
]
},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English tweets. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9-14, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Crisislex: A lexicon for collecting and filtering microblogged communications in crises",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Olteanu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Vieweg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighth International Conference on Weblogs and Social Media, ICWSM 2014",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Sarah Vieweg. 2014. Crisislex: A lexicon for col- lecting and filtering microblogged communications in crises. In Proceedings of the Eighth International Conference on Weblogs and Social Media, ICWSM 2014, Ann Arbor, Michigan, USA, June 1-4, 2014. The AAAI Press.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "English intermediatetask training improves zero-shot cross-lingual transfer too",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Iacer",
"middle": [],
"last": "Calixto",
"suffix": ""
},
{
"first": "Phu Mon",
"middle": [],
"last": "Htut",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "557--575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruk- sachatkun, Haokun Liu, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. English intermediate- task training improves zero-shot cross-lingual trans- fer too. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Compu- tational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 557-575, Suzhou, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "SemEval-2017 task 6: #HashtagWars: Learning a sense of humor",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Potash",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2004"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. SemEval-2017 task 6: #HashtagWars: Learn- ing a sense of humor. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 49-57, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "What to pre-train on? Efficient intermediate task selection",
"authors": [
{
"first": "Clifton",
"middle": [],
"last": "Poth",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10585--10605",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.827"
]
},
"num": null,
"urls": [],
"raw_text": "Clifton Poth, Jonas Pfeiffer, Andreas R\u00fcckl\u00e9, and Iryna Gurevych. 2021. What to pre-train on? Efficient intermediate task selection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10585-10605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work?",
"authors": [
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Phu Mon",
"middle": [],
"last": "Htut",
"suffix": ""
},
{
"first": "Xiaoyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"Yuanzhe"
],
"last": "Pang",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5231--5247",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.467"
]
},
"num": null,
"urls": [],
"raw_text": "Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bow- man. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5231-5247, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Sarcasm detection on Czech and English Twitter",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Pt\u00e1\u010dek",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Hong",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "213--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Pt\u00e1\u010dek, Ivan Habernal, and Jun Hong. 2014. Sar- casm detection on Czech and English Twitter. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Techni- cal Papers, pages 213-223, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sarcasm detection on twitter: A behavioral modeling approach",
"authors": [
{
"first": "Ashwin",
"middle": [],
"last": "Rajadesingan",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, WSDM 2015",
"volume": "",
"issue": "",
"pages": "97--106",
"other_ids": {
"DOI": [
"10.1145/2684822.2685316"
]
},
"num": null,
"urls": [],
"raw_text": "Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm detection on twitter: A behavioral modeling approach. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, WSDM 2015, Shanghai, China, Febru- ary 2-6, 2015, pages 97-106. ACM.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Context-sensitive twitter sentiment classification using neural network",
"authors": [
{
"first": "Yafeng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "215--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yafeng Ren, Yue Zhang, Meishan Zhang, and Donghong Ji. 2016. Context-sensitive twitter sen- timent classification using neural network. In Pro- ceedings of the Thirtieth AAAI Conference on Arti- ficial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 215-221. AAAI Press.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Sarcasm as contrast between a positive sentiment and negative situation",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Surve",
"suffix": ""
},
{
"first": "Lalindra De",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "704--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 704-714, Seattle, Washing- ton, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "SemEval-2017 task 4: Sentiment analysis in Twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "502--518",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2088"
]
},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502- 518, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Dystemo: Distant supervision method for multi-category emotion recognition in tweets",
"authors": [
{
"first": "Valentina",
"middle": [],
"last": "Sintsova",
"suffix": ""
},
{
"first": "Pearl",
"middle": [],
"last": "Pu",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Trans. Intell. Syst. Technol",
"volume": "8",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2912147"
]
},
"num": null,
"urls": [],
"raw_text": "Valentina Sintsova and Pearl Pu. 2016. Dystemo: Dis- tant supervision method for multi-category emotion recognition in tweets. ACM Trans. Intell. Syst. Tech- nol., 8(1).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "MASS: masked sequence to sequence pre-training for language generation",
"authors": [
{
"first": "Kaitao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019",
"volume": "97",
"issue": "",
"pages": "5926--5936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. MASS: masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Ma- chine Learning Research, pages 5926-5936. PMLR.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "ERNIE: enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yu-Kun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: enhanced representation through knowledge integration. CoRR, abs/1904.09223.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Meaning in interaction: An introduction to pragmatics",
"authors": [
{
"first": "Jenny",
"middle": [
"A"
],
"last": "Thomas",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny A Thomas. 2014. Meaning in interaction: An introduction to pragmatics. Routledge.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Can",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bolei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4067--4076",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.374"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Tian, Can Gao, Xinyan Xiao, Hao Liu, Bolei He, Hua Wu, Haifeng Wang, and Feng Wu. 2020. SKEP: Sentiment knowledge enhanced pre-training for sen- timent analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 4067-4076, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Visualizing data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of machine learning research",
"volume": "",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(11).",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "SemEval-2018 task 3: Irony detection in English tweets",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "39--50",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1005"
]
},
"num": null,
"urls": [],
"raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2018. SemEval-2018 task 3: Irony detection in En- glish tweets. In Proceedings of The 12th Interna- tional Workshop on Semantic Evaluation, pages 39- 50, New Orleans, Louisiana. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hula",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Raghavendra",
"middle": [],
"last": "Pappagari",
"suffix": ""
},
{
"first": "R",
"middle": [
"Thomas"
],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Roma",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Yinghui",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Katherin",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shuning",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4465--4476",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1439"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappa- gari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. 2019. Can you tell me how to get past sesame street? sentence-level pretraining beyond language model- ing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4465-4476, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Understanding the behaviour of contrastive loss",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Huaping",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2021,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual",
"volume": "",
"issue": "",
"pages": "2495--2504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng Wang and Huaping Liu. 2021. Understanding the behaviour of contrastive loss. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 2495-2504. Computer Vision Foundation / IEEE.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Exploiting emojis for abusive language detection",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "369--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand and Josef Ruppenhofer. 2021. Ex- ploiting emojis for abusive language detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 369-380, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Emoji as emotion tags for tweets",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Wood",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Emotion and Sentiment Analysis Workshop LREC2016",
"volume": "",
"issue": "",
"pages": "76--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Wood and Sebastian Ruder. 2016. Emoji as emotion tags for tweets. In Proceedings of the Emotion and Sentiment Analysis Workshop LREC2016, Portoro\u017e, Slovenia, pages 76-79.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural In- formation Processing Systems 32: Annual Confer- ence on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5754-5764.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1415--1420",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1144"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (Of-fensEval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. SemEval-2019 task 6: Identifying and cat- egorizing offensive language in social media (Of- fensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75-86, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1441--1451",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1139"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441-1451, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "LIMIT-BERT : Linguistics informed multi-task BERT",
"authors": [
{
"first": "Junru",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Shuailiang",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4450--4461",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.399"
]
},
"num": null,
"urls": [],
"raw_text": "Junru Zhou, Zhuosheng Zhang, Hai Zhao, and Shuail- iang Zhang. 2020. LIMIT-BERT : Linguistics in- formed multi-task BERT. In Findings of the Associ- ation for Computational Linguistics: EMNLP 2020, pages 4450-4461, Online. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Illustration of our proposed pragmatic masking and surrogate fine-tuning methods.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Liu et al. (2021b),\u22c6\u22c6 Waseem and Hovy (2016), \u2020 Davidson et al. (2017), = Meaney et al. (2021), \u2020 \u2020 Van Hee et al. (2018), \u2021 Zampieri et al. (2019b), \u2021 \u2021 Riloff et al. (2013), \u00a7 Pt\u00e1\u010dek et al. (2014), \u00a7 \u00a7 Rajadesingan et al. (2015), \u2225 Bamman and Smith (2015), \u2662 Rosenthal et al. (2017), \u229a Mohammad et al. (2016).",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": ": t-SNE plots of the learned embeddings on Dev and Test sets of Emo Moham . Our learned representations clearly help tease apart the different classes.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"content": "
",
"text": "Samples from our social meaning benchmark.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "",
"text": "",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"content": ": Pragmatic masking results. Baselines: (1) RB: RoBERTa, (2) BTw: BERTweet, (3) RM-NR. Light green |
indicates our models outperforming the baseline (1). |
",
"text": "Bold font indicates best model across all our random and pragmatic masking methods.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF7": {
"content": ": Model comparisons. SOTA: Best performance on each respective dataset. TwE: TweetEval (Barbi-eri et al., 2020) is a benchmark for tweet classification evaluation. BTw: |
",
"text": "",
"html": null,
"type_str": "table",
"num": null
},
"TABREF9": {
"content": "",
"text": "Zero-shot performance. RB: RoBERTa.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}