{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:44.710238Z"
},
"title": "Universal Joy A Data Set and Results for Classifying Emotions Across Languages",
"authors": [
{
"first": "Sotiris",
"middle": [],
"last": "Lamprinidis",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Daniel",
"middle": [],
"last": "Hardt",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "While emotions are universal aspects of human psychology, they are expressed differently across different languages and cultures. We introduce a new data set of over 530k anonymized public Facebook posts across 18 languages, labeled with five different emotions. Using multilingual BERT embeddings, we show that emotions can be reliably inferred both within and across languages. Zero-shot learning produces promising results for lowresource languages. Following established theories of basic emotions, we provide a detailed analysis of the possibilities and limits of crosslingual emotion classification. We find that structural and typological similarity between languages facilitates cross-lingual learning, as well as linguistic diversity of training data. Our results suggest that there are commonalities underlying the expression of emotion in different languages. We publicly release the anonymized data for future research.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "While emotions are universal aspects of human psychology, they are expressed differently across different languages and cultures. We introduce a new data set of over 530k anonymized public Facebook posts across 18 languages, labeled with five different emotions. Using multilingual BERT embeddings, we show that emotions can be reliably inferred both within and across languages. Zero-shot learning produces promising results for lowresource languages. Following established theories of basic emotions, we provide a detailed analysis of the possibilities and limits of crosslingual emotion classification. We find that structural and typological similarity between languages facilitates cross-lingual learning, as well as linguistic diversity of training data. Our results suggest that there are commonalities underlying the expression of emotion in different languages. We publicly release the anonymized data for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Emotions are fundamental to human experience across languages and cultures. The nature of emotions and their linguistic expression is a topic of enduring interest across disciplines such as psychology, linguistics, philosophy, and neuroscience. Emotion researchers have investigated the existence of basic emotions such as anger, fear, disgust, sadness, and happiness (Ekman, 2016) , all of which were already described in the 19th century by Darwin ([1872 Darwin ([ ] 1998 and Wundt (1896) . Furthermore, Ekman (2016) reports a growing consensus concerning the universality of emotions across languages and cultures. Computational linguistics can help shed light on the way in which emotions are expressed in the languages of the world.",
"cite_spans": [
{
"start": 368,
"end": 381,
"text": "(Ekman, 2016)",
"ref_id": "BIBREF6"
},
{
"start": 443,
"end": 456,
"text": "Darwin ([1872",
"ref_id": "BIBREF3"
},
{
"start": 457,
"end": 473,
"text": "Darwin ([ ] 1998",
"ref_id": "BIBREF3"
},
{
"start": 478,
"end": 490,
"text": "Wundt (1896)",
"ref_id": "BIBREF17"
},
{
"start": 506,
"end": 518,
"text": "Ekman (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, most existing research has focused either on English or used very small multilingual data sets. We present a new dataset, Universal Joy (UJ), of over 530,000 anonymized public Facebook posts distributed across 18 languages, labeled with five different emotions: anger, anticipation, fear, joy, and sadness. This dataset represents a substantial advance over prior datasets, both in terms of its size and its linguistic diversity. It provides a strong empirical foundation for exploring basic questions about the nature and expression of emotions across the languages of the world. Figure 3 shows the heatmap of relative emotion distribution for each language. In this paper, we use this dataset to explore multilingual emotion classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 590,
"end": 598,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first perform emotion classification in a monolingual setting, i.e., training and testing on a single language. We then expand to a cross-lingual setting, i.e., where the training data contains other languages in addition to the test data language. Finally, we test how well we can do in a zero-shot learning setting; here the training data does not include any data in the language of the test data -a setting particularly relevant for low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Overall, we find consistent effects of crosslingual learning, which raises several interesting issues: first, it suggests that accurate emotion detection might be possible even for low-resource languages. Accurate models for such languages might be achievable with substantial amounts of training data from high-resource languages like English. More generally, however, we explore why cross-lingual learning works, and what linguistic circumstances support or hamper such learning. We explore three main factors: code-switching, typological closeness of training and test languages, and linguistic diversity in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We hope the richness of this data set opens up exciting future research avenues, and release the models and the complete anonymized dataset at https: //github.com/sotlampr/universal-joy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions 1) We publish a new dataset of over 530k anonymized public Facebook posts in 18 languages, labeled with five basic emotions. 2) We show results for various classification setups, including transfer learning setups like cross-lingual and zero-shot learning. 3) We analyze the sources of cross-lingual learning in depth, including the effect of code-switching, typological closeness, and linguistic diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dataset described here is a substantially reorganized and cleaned version of one previously described, but not released Zimmerman et al. (2015) . It was collected in October 2014 by searching for public Facebook posts with a Facebook \"feelings tag\". We did verify publication with the data protection officer of the main institution. For a Data Statement (Bender and Friedman, 2018) , see Appendix A.",
"cite_spans": [
{
"start": 124,
"end": 147,
"text": "Zimmerman et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 359,
"end": 386,
"text": "(Bender and Friedman, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "We remove any duplicates, and classify each instance's language using three methods: langid, 1 cld3, 2 , and FastText. 3 We keep only instances where at least two of these methods agree. We manually evaluate 200 randomly selected instances labeled deu, fra, eng, ita, and spa, 4 and find the average precision of our method is 0.97(\u00b10.04).",
"cite_spans": [
{
"start": 119,
"end": 120,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
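The following is a minimal sketch (not the authors' code) of the two-of-three language-identification agreement filter described above. Only the langid.py call is shown concretely; detect_cld3 and detect_fasttext are assumed wrappers around the other two detectors named in the footnotes.

```python
# Hedged sketch of the two-of-three language-ID agreement filter.
from collections import Counter
from typing import Callable, List, Optional

import langid  # https://github.com/saffsd/langid.py


def detect_langid(text: str) -> str:
    lang, _score = langid.classify(text)
    return lang


def majority_language(text: str, detectors: List[Callable[[str], str]]) -> Optional[str]:
    """Return the language code if at least two detectors agree, else None."""
    votes = Counter(detector(text) for detector in detectors)
    lang, count = votes.most_common(1)[0]
    return lang if count >= 2 else None


# detect_cld3 and detect_fasttext would wrap the other two detectors analogously:
# kept = [p for p in posts if majority_language(p, [detect_langid, detect_cld3, detect_fasttext])]
```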
{
"text": "To anonymize the data, we remove identifying information by replacing names with the special token [PERSON] . Where possible (Dutch, English, French, German, Portuguese, Spanish, Italian), we use the spacy 5 NER to replace any PERSON entities. For languages without spacy support, we either use the Stanford CoreNLP NER tagger (Manning et al., 2014) and replace PERSON-tagged words (Chinese), or replace all given names and surnames found in Wiktionary for the respective languages (Bengali, Burmese, Hindi, Indonesian, Khmer, Malay, Romanian, Tagalog, Thai, Vietnamese).",
"cite_spans": [
{
"start": 99,
"end": 107,
"text": "[PERSON]",
"ref_id": null
},
{
"start": 327,
"end": 349,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
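A small illustrative sketch of the [PERSON] anonymization step for the spaCy-supported languages; the model name is an example, and any pipeline with an NER component producing PERSON labels would work the same way.

```python
# Hedged sketch: replace spaCy PERSON entities with the [PERSON] token.
# "en_core_web_sm" is an example model (python -m spacy download en_core_web_sm);
# for other supported languages the corresponding model would be loaded instead.
import spacy

nlp = spacy.load("en_core_web_sm")


def anonymize_persons(text: str) -> str:
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ == "PERSON":
            out.append(text[last:ent.start_char])
            out.append("[PERSON]")
            last = ent.end_char
    out.append(text[last:])
    return "".join(out)


# anonymize_persons("Thanks Maria for the lovely evening!")
# -> "Thanks [PERSON] for the lovely evening!"
```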
{
"text": "Finally, we perform some additional preprocessing steps to replace any number with 0, and the Facebook-specific tags \"with [PERSON]\" with the special token [WITH], \"at [LOCATION]\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "with the special token [LOCATION] , and photos, emails, and URLs with the special tokens [PHOTO], [EMAIL] , and [URL], respectively.",
"cite_spans": [
{
"start": 23,
"end": 33,
"text": "[LOCATION]",
"ref_id": null
},
{
"start": 98,
"end": 105,
"text": "[EMAIL]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
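A rough sketch of the remaining preprocessing replacements (numbers, Facebook-specific tags, emails, URLs); the regular expressions are illustrative assumptions, not the authors' exact rules, and photo attachments are assumed to be detected at collection time rather than by pattern matching.

```python
# Illustrative regex-based preprocessing; patterns are assumptions.
import re

REPLACEMENTS = [
    (re.compile(r"with \[PERSON\]"), "[WITH]"),       # Facebook "with ..." tag
    (re.compile(r"at \[LOCATION\]"), "[LOCATION]"),   # Facebook "at ..." tag
    (re.compile(r"https?://\S+"), "[URL]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\d+"), "0"),                        # numbers last, after URLs/emails
]


def preprocess(text: str) -> str:
    for pattern, token in REPLACEMENTS:
        text = pattern.sub(token, text)
    return text
```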
{
"text": "Similar to the approach of Zimmerman et al. (2015) , we map the 27 initial emotion tags into five labels of basic emotions: anger, anticipation, fear, joy, and sadness (see mapping in Appendix Table 10 ). We choose this label set for several reasons: first, it is similar to the lists of basic emotions proposed in the psychological literature (Ekman, 2016; Plutchik, 1994) . Second, each of the five labels is well-represented in the Facebook data, whereas the original tags are highly imbalanced and often rare. Finally, it is similar to the lists of basic emotions used in recent NLP studies, and facilitates comparison with recent work by e.g., Abdul-Mageed and Ungar (2017) .",
"cite_spans": [
{
"start": 27,
"end": 50,
"text": "Zimmerman et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 345,
"end": 358,
"text": "(Ekman, 2016;",
"ref_id": "BIBREF6"
},
{
"start": 359,
"end": 374,
"text": "Plutchik, 1994)",
"ref_id": "BIBREF13"
},
{
"start": 650,
"end": 679,
"text": "Abdul-Mageed and Ungar (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 193,
"end": 202,
"text": "Table 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
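The tag-to-label mapping can be expressed as a simple lookup, shown here for the mapped tags listed in Appendix Table 10; tags without a basic-emotion counterpart are dropped.

```python
# Mapping from Facebook feeling tags to the five basic emotions (Appendix Table 10);
# unmapped tags (e.g. "amused", "proud") are discarded.
TAG_TO_EMOTION = {
    "angry": "anger", "annoyed": "anger", "pissed": "anger",
    "excited": "anticipation", "hopeful": "anticipation", "pumped": "anticipation",
    "scared": "fear", "worried": "fear",
    "fantastic": "joy", "great": "joy", "happy": "joy", "super": "joy", "wonderful": "joy",
    "depressed": "sadness", "disappointed": "sadness", "down": "sadness",
    "heartbroken": "sadness", "sad": "sadness",
}


def map_tag(feeling_tag: str):
    """Return one of the five basic emotions, or None if the tag is dropped."""
    return TAG_TO_EMOTION.get(feeling_tag.lower())
```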
{
"text": "As training languages, we choose all languages with more than 35 samples for the least frequent emotion (i.e., fear). The size and distribution of emotions for each dataset is presented in table 11. Figure 3 . Fear is a very rare emotion in all languages, while joy is the most common. Anticipation, the second most frequent class, is especially prevalent in English . There are also differences in joy (more prevalent in Spanish) and sadness (more prevalent in Portuguese). ",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "We create three versions of the dataset for training purposes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Datasets",
"sec_num": "2.1"
},
{
"text": "\u2022 Small: this version includes the five languages with sufficient training data for the least frequent emotion, fear: namely eng, spa, por, cmn, tgl. This dataset is balanced by language, so that there are 2,947 posts for each language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Datasets",
"sec_num": "2.1"
},
{
"text": "\u2022 Large: this version includes 29,364 posts for each of the three most frequent languages: eng, spa, por. Note that each of these is a superset of the corresponding language in the Small training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Datasets",
"sec_num": "2.1"
},
{
"text": "\u2022 Huge: this version contains 283,853 posts from the single most frequent language, eng.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Datasets",
"sec_num": "2.1"
},
{
"text": "For each training language, we create fixed-size development and test sets, stratified by emotion and following a 70:15:15 ratio with respect to the Small version of the dataset. Thus for eng, spa, por, cmn, tgl, the test and dev sets each consist of 631 posts. The rest of the languages are combined in a separate test set, labeled low-resource. Note that for the purposes of this paper, what we call low-resource languages are the thirteen languages with insufficient training data in our corpus. This includes languages such as German and French, which are not, in general, low-resource languages. The lowresource test set for each of these languages simply consists of all the posts in that language. The low-resource sets will allow us to broadly measure zero-shot performance (see Section 4.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Datasets",
"sec_num": "2.2"
},
{
"text": "We model the task as a series of binary classification problems, one per emotion, similarly to Mohammad et al. (2018) (this allows for the theoretical case where there are multiple emotions in one instance). We use Logistic Regression and Multilingual BERT as classifiers, to probe for various lexical and syntactic properties of the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "We use the scikit-learn 6 implementation to extract TFIDF-weighted bag-of-words (BOW) features, and train Logistic Regression LR models with L2 regularization (C = 1.0) and balanced label loss-weighting. We include BOW features for comparison purposes -in particular, to assess the extent to which cross-lingual effects might arise from code switching or other forms of token overlap across languages. For the LR models we use the same tokenization as in 3.2, from the pre-trained multilingual BERT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression Models",
"sec_num": "3.1"
},
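A minimal scikit-learn sketch of one per-emotion LR baseline as described above (TF-IDF bag of words, L2 regularization with C = 1.0, balanced class weights); feeding the mBERT word-piece tokenizer into TfidfVectorizer is an assumption about how the shared tokenization could be wired in.

```python
# Hedged sketch of a per-emotion TF-IDF + Logistic Regression baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

clf = make_pipeline(
    TfidfVectorizer(tokenizer=tokenizer.tokenize, lowercase=False),
    LogisticRegression(C=1.0, penalty="l2", class_weight="balanced", max_iter=1000),
)

# One binary classifier per emotion, e.g. for "joy":
# clf.fit(train_texts, [int("joy" in labels) for labels in train_labels])
```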
{
"text": "To optimize performance, we use the multilingual BERT (mBERT) model. 7 We follow Devlin et al. (2019) in optimizing the model using a machine with an Intel i9-9940X CPU, 32GB RAM and a NVIDIA Quadro RTX 6000 GPU.",
"cite_spans": [
{
"start": 69,
"end": 70,
"text": "7",
"ref_id": null
},
{
"start": 81,
"end": 101,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "3.2"
},
{
"text": "The loss L is the mean over the individual losses l \u2208 L for each emotion: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "3.2"
},
{
"text": "L = E i=1 L i (y i , x) /E L i (y i ; x) = \u2212w i [y i log(p(Y i | x)) +(1 \u2212 y i ) log(1 \u2212 p(Y i | x)]) p(Y | x) = Sigmoid(h(x)W + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "3.2"
},
{
"text": "where E is the number of emotions e \u2208 E, y \u2208 {0, 1} E is a one-hot vector of the target emotion, x \u2208 X are the input byte-pair pieces, h(x) : X \u2192 R 768 is the mean-pooled output of the BERT [CLS] token for input x, W 768 \u00d7E and b \u2208 R E are learnable parameters, and P (Y |x) is the predicted probability distribution over the emotions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "3.2"
},
{
"text": "We use instance weighting to address the high class imbalance. For each emotion, we weight positive class instances as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "3.2"
},
{
"text": "w_i = N_{e = \u00acE_i} / N_{e = E_i}, i.e., the inverse proportion of negative examples to positive examples, averaged over all languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "3.2"
},
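A compact PyTorch sketch of the classification head and weighted loss described in this section: one sigmoid output per emotion on top of pooled mBERT states, with per-emotion positive-class weights w_i = N_neg / N_pos. Pooling details and hyperparameters are assumptions where the text is silent.

```python
# Hedged sketch of the multi-label mBERT classifier with weighted BCE loss.
import torch
import torch.nn as nn
from transformers import BertModel

NUM_EMOTIONS = 5  # anger, anticipation, fear, joy, sadness


class EmotionClassifier(nn.Module):
    def __init__(self, pos_weight: torch.Tensor):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-multilingual-cased")
        self.head = nn.Linear(self.bert.config.hidden_size, NUM_EMOTIONS)
        # Averages the per-emotion binary losses and applies w_i to positive examples.
        self.loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(1) / mask.sum(1)  # mean over non-padding tokens
        logits = self.head(pooled)
        if labels is None:
            return torch.sigmoid(logits)
        return self.loss_fn(logits, labels.float()), torch.sigmoid(logits)


# pos_weight per emotion: negatives / positives in the training data, e.g.
# model = EmotionClassifier(pos_weight=n_negative / n_positive)  # two tensors of size 5
```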
{
"text": "We linearly increase the learning rate for half an epoch and then linearly decay it until the end of the training. For the monolingual and zero-shot learning task, we select the model with the highest macro-averaged F1-score across all emotions on the target language development set. For the crosslingual task, we choose the model with the best average score on all languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "3.2"
},
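The warmup/decay schedule described above corresponds to the standard linear-warmup, linear-decay helper in the transformers library; the sketch below uses a placeholder model and assumed optimizer settings.

```python
# Hedged sketch: linear warmup for half an epoch, then linear decay to zero.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 5)  # stand-in for the mBERT classifier sketched earlier
steps_per_epoch, num_epochs = 1000, 3  # depend on the training set and batch size

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=steps_per_epoch // 2,            # half an epoch of warmup
    num_training_steps=steps_per_epoch * num_epochs,  # then linear decay
)

# training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```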
{
"text": "The simplest setting is the classification of emotions within one language -that is, the test, development, and training data are all taken from the same language. This provides a strong baseline for cross-lingual work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Classification",
"sec_num": null
},
{
"text": "Cross-lingual Classification In this setting, we test whether knowledge about emotions expressed in one language can be transferred to another language. Here, the training data includes one or more languages in addition to the language of the development and test data. Note that emotion distributions differ between languages. This likely affects performance and could be addressed by stratified resampling. However, that presupposes that all languages exhibit the same emotions to the same degree, which is by no means certain. So while sampling would improve performance, it would distort the \"natural\" distribution, and preclude future analysis of language-specific studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Classification",
"sec_num": null
},
{
"text": "Zero-shot learning Here the training data does not include the language of the development and test sets. We use two versions of zero-shot training: single-language, and multilingual, depending on the number of languages present in the training data. In either case, the size of the training data is the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Classification",
"sec_num": null
},
{
"text": "We treat each emotion as a separate binary task, and compute a macro-average of the F1-scores for each of the six tasks. A random baseline model (table 15 in the Appendix) always predicts the positive class for all emotions, giving an average macro F1-score of around 0.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
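The evaluation metric amounts to averaging the binary F1-scores of the per-emotion tasks; a small sketch:

```python
# Macro-average of per-emotion binary F1-scores.
import numpy as np
from sklearn.metrics import f1_score

EMOTIONS = ["anger", "anticipation", "fear", "joy", "sadness"]


def macro_emotion_f1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """y_true, y_pred: 0/1 arrays of shape (n_samples, n_emotions)."""
    scores = [f1_score(y_true[:, i], y_pred[:, i]) for i in range(len(EMOTIONS))]
    return float(np.mean(scores))
```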
{
"text": "We evaluate our methods on available data sets for emotion classification in English (Abdul-Mageed and Ungar, 2017; Troiano et al., 2019) . We com-pare performance on the EmoNet test data as well as the English test data from ISEAR (Troiano et al., 2019) . In addition, we report performance on the English test data from our Universal Joy dataset. These tests are designed to assess two points: whether our data set is comparable to previously published data, and whether the models we use are performant enough to enable meaningful investigations.",
"cite_spans": [
{
"start": 85,
"end": 115,
"text": "(Abdul-Mageed and Ungar, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 116,
"end": 137,
"text": "Troiano et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 232,
"end": 254,
"text": "(Troiano et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual English Tests",
"sec_num": "4.1"
},
{
"text": "We obtained the EmoNet dataset (Abdul-Mageed and Ungar, 2017) from the authors of the paper. The benchmark shared by the authors contains 80k tweet IDs. However, some of the tweets do not exist anymore, and some contain emotions we are not considering in this work (i.e., disgust). Thus, after removal we were left with a test set of around 40K tweets. We were, therefore, unable to reproduce their full setup. Table 3 shows results on English test data, using the two LR models as well as mBERT on the small, large, and huge training data. For a prediction, we take the output probabilities from EmoNet and use the most probable emotion that is in our set of five emotions. The results show that using more training data in a LR model or any of the mBERT model improves performance across the board and yields competitive results. We are therefore confident that our data collection and model choices produce meaningful results. But does this performance extend to other test languages than English?",
"cite_spans": [
{
"start": 31,
"end": 61,
"text": "(Abdul-Mageed and Ungar, 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 411,
"end": 418,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Monolingual English Tests",
"sec_num": "4.1"
},
{
"text": "We now turn to cross-lingual investigation: table 4 shows results using a variety of training sets with the five test sets from eng, por, spa, cmn, and tgl. We show results for both LR models and mBERT on monolingual, cross-lingual, and zeroshot Universal Joy data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Tests",
"sec_num": "4.2"
},
{
"text": "In addition to the Small, Large, and Huge training sets described above, we test mBERT on some additional training data combinations. First, we divide Small into Indo-European (Small-IE) and non-Indo-European (Small\u223cIE). We also combine the Large and Small training datasets from all languages to test whether more diversity balances out more data. This dataset comprises five languages, but only about half as many instances as Huge English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Tests",
"sec_num": "4.2"
},
{
"text": "In general, mBERT models outperform the LR models. Furthermore, the mBERT models frequently show positive cross-lingual effects; that is, training data improves performance even when it is from a language other than the test language. For example, small-mono for spa is 0.43, while smallall (including all five languages) is 0.45. Small-all on average is 0.53, while small-mono is 0.51. On the other hand, large-mono (0.57) is better than large-all (0.56). Perhaps cross-lingual improvements are easier to obtain when the monolingual model is weaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Tests",
"sec_num": "4.2"
},
{
"text": "Using training data based on language families (IE and \u223cIE, respectively), indicates typological effects (which we explore further in Sections 5.2 and 5.3). Specifically, training on non-Indo-European languages results in higher performance for cmn and tgl (though not the highest overall). Table 5 shows mBERT results for monolingual and zero-shot training. Unsurprisingly, the best results always involve training on the same language as the test language (see diagonal), and more data helps. Table 6 compares mBERT with the LR models on zero-shot. In particular, we compare singlelanguage zero-shot with multilingual. We see that the multilingual scores are consistently higher than the single-language scores, across all models. This provides evidence for the benefit of diversity in the training data. One reason could be a wider range of ways to express emotions. We will investigate this in more detail in Section 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 298,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 495,
"end": 502,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Cross-lingual Tests",
"sec_num": "4.2"
},
{
"text": "We consistently see cross-lingual learning capabilities with the mBERT models. The zeroshot scores are significantly above the random baseline (paired one-sided t-test): zero-shot vs. random: t = 4.08, dof = 24, p < 0.001 monolingual vs. zero-shot: t = 3.17, dof = 24, p = 0.002 cross-lingual vs. monolingual: t = 1.28, dof = 24, p < 0.105. Zero-shot vs. random and mono- lingual vs. zero-shot are significant at Bonferronicorrected significance level a = 0.05/3 = 0.0167. Cross-lingual frequently performs better than monolingual, but with considerable variation. Table 7 shows zero-shot results when testing on the low-resource languages with the Small, Large, Large&Small, and Huge training sets. Small is also split typologically: Small-IE consists only of the Indo-European languages eng, spa, and por, while Small \u223cIE consists of the remaining languages in Small, cmn and tgl. We compare against a monolingual result for each language, using a LR system described in Section 3.1, taking the 10-fold cross-validation average. 8 For Indo-European languages, zero-shot models consistently outperform the monolingual model; furthermore, larger zero-shot models tend to do better, although the linguistically diverse L&S model often does better than the much-larger Huge model (which is only English). The zero-shot models rarely do well with the non-Indo-European languages. This is not surprising, since most of the data in the zero-shot models comes from Indo-European languages. Below we investigate these results in more detail to assess the factors that facilitate cross-lingual learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 565,
"end": 572,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Cross-lingual Tests",
"sec_num": "4.2"
},
{
"text": "Our results show a wide range of cross-lingual effects. In many cases, they are quite substantial, while in other cases we observe no effect. We 8 For languages that have k < 10 instances per any emotion, we do a k-fold cross-validation. believe these differences are due to the linguistic properties of the languages that the models pick up on. Here we examine some of the factors involved in this: code switching, typological closeness, and linguistic diversity. The results can shed light on the similarities and differences in how emotions are expressed in different languages. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Cross-lingual Effects",
"sec_num": "5"
},
{
"text": "A post involves code-switching if it combines multiple languages, as in the following: \"We love you guys ... proud of you..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code Switching",
"sec_num": "5.1"
},
{
"text": "[PHOTO] jongens heel veel succes vanavond ..!!!!\" (the English translation of the Dutch part is \"guys lots of luck tonight\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code Switching",
"sec_num": "5.1"
},
{
"text": "This post is classified as Dutch, but includes text in English. BoW (bag of word) models are therefore well-suited to take advantage of code switching. This post provides information about both Dutch and English, since there are several tokens in each language associated with the labeled emotion, anticipation. A model could learn here that \"love\" is associated with anticipation and use this on English test data, even though the training example is classified as Dutch. However, observe in table 6 that the BoW models (LR) perform poorly in zero-shot learning. They are near or below baseline, except in the Large multilingual case. mBERT, by contrast, is consistently above the baseline and consistently better than the BoW models. This suggests that code-switching is not relevant to the cross-lingual effects we have observed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code Switching",
"sec_num": "5.1"
},
{
"text": "A natural hypothesis, explored by Singh et al. (2019) , is that cross-lingual effects are stronger for typologically close languages; that is, scores are higher when training and test language are closely related. Following Pires et al. (2019) , we compute typological closeness as overlap on selected features from the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013) . These features are particularly relevant to word order of categories. In Figure 2 , we plot for each pair of languages <l1,l2> the number of WALS features 9 shared between l1 and l2 against the zero-shot score obtained when training on l1 and testing on l2. Indeed, more shared features correlate with better performance, especially in mBERT models. Table 8 shows the correlation between performance and several other measures of similarity between languages, including the number of shared bigrams and emoticons, the shared WALS features, and the proximity in a genealogy tree. Compare the striking positive correlation between shared WALS features and performance (0.27) for the mBERT model, while there is no such correlation for the two LR models, corroborating Figure 2 . This suggests that these particular WALS features related to word order, are relevant to abstract features of the mBERT models, while they are irrelevant for the LR models. In contrast, see the flipped correlation of \"lexical\" features like bigrams and emoticons in the two model types. Genealogical proximity has a high correlation with performance in all models, but again is highest for mBERT. All this supports the idea, also discussed in (Pires et al., 2019) , that mBERT models are sensitive to abstract syntactic features that are shared across languages.",
"cite_spans": [
{
"start": 34,
"end": 53,
"text": "Singh et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 224,
"end": 243,
"text": "Pires et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 362,
"end": 390,
"text": "(Dryer and Haspelmath, 2013)",
"ref_id": null
},
{
"start": 1613,
"end": 1633,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 466,
"end": 474,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 743,
"end": 750,
"text": "Table 8",
"ref_id": "TABREF13"
},
{
"start": 1159,
"end": 1167,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Typological Closeness",
"sec_num": "5.2"
},
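A sketch of the typology analysis: count the shared WALS word-order features (81A, 82A, 83A) for each training/test pair and correlate the overlap with zero-shot performance using Spearman's rank correlation. The feature values shown are placeholders, not the actual WALS entries.

```python
# Hedged sketch of the WALS-overlap vs. zero-shot-score correlation.
from scipy.stats import spearmanr

WALS_FEATURES = ("81A", "82A", "83A")  # order of S/O/V, S/V, O/V
WALS = {  # language -> feature values; placeholder values for illustration
    "eng": {"81A": "SVO", "82A": "SV", "83A": "VO"},
    "spa": {"81A": "SVO", "82A": "SV", "83A": "VO"},
}


def shared_features(l1: str, l2: str) -> int:
    return sum(WALS[l1][f] == WALS[l2][f] for f in WALS_FEATURES)


# pairs = [("eng", "spa"), ...]              # (train, test) language pairs
# overlaps = [shared_features(a, b) for a, b in pairs]
# scores = [...]                             # zero-shot macro-F1 per pair
# rho, p_value = spearmanr(overlaps, scores)
```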
{
"text": "We find clear evidence that diversity of training languages improves performance. Table 6 shows a clear multilingual advantage in zero-shot learning; for all three models, the multilingual scores are higher than the monolingual scores. In table 7, the last two lines compare a diverse training set of 102k instances (Large&Small) to the more than twice as large Huge English training data (283k). The average results over the 13 low-resource languages of both are identical.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Linguistic Diversity",
"sec_num": "5.3"
},
{
"text": "Abdul-Mageed and Ungar (2017) collect tweets based on user-inserted hashtags; the resulting dataset, EmoNet, is similar to ours in that it uses a distant supervision approach. Above we presented results based on the EmoNet dataset and model. The SemEval 2018 Task 1 (Baziotis et al., 2018) , involves classification and regression tasks for four emotions: joy, sadness, anger, and fear. The Affect in Tweets Dataset (Mohammad et al., 2018 ) is a small dataset that includes emotions annotated in multiple languages; it does not involve the cross-linguistic investigation of emotion that is central to the present work. Troiano et al. (2019) describe a small, bilingual emotion dataset, with English and German (ISEAR). Wang et al. (2018) describe a bilingual Chinese-English emotion dataset (NLPCC). We provide results on these datasets in table 9 in the Appendix; it's important to note that these datasets differ in important ways from the Facebook data in our dataset.",
"cite_spans": [
{
"start": 266,
"end": 289,
"text": "(Baziotis et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 416,
"end": 438,
"text": "(Mohammad et al., 2018",
"ref_id": "BIBREF11"
},
{
"start": 619,
"end": 640,
"text": "Troiano et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 719,
"end": 737,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We introduce a new data set of over 530,000 anonymized Facebook posts from 18 languages, labeled with five basic emotions. We show that emotions can be reliably identified, both within and across languages, including zero-shot learning. This suggests substantial opportunities for transferring knowledge from high-resource to low-resource languages. In a detailed investigation of the factors supporting cross-lingual learning, we find evidence for the importance of linguistic diversity of training data as well as syntactic and typological similarities between languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "These results provide intriguing evidence of deep commonalities in the linguistic expression of emotion across the languages of the world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Collecting and publishing a data set from social media raises a number of ethical concerns. We have prepared and planned the release of this dataset in close consultation with the Data Protection Officer of the main institution for the publication. The data is completely anonymized, with all identifying information removed. It was collected from pub-lic postings on Facebook, accessed using the standard Facebook API for collection of such postings. Based on these considerations, the Data Protection Officer approved our plan to release the dataset, and certified that it complies with GDPR and other relevant requirements. The actual data collection was performed by graduate assistants as part of their studies, and the process involve no manual annotation. We also provide a data statement, to allow future researchers to assess any inherent bias. The primary aim of our study is academic, to understand the interplay between language and emotion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical considerations",
"sec_num": "8"
},
{
"text": "We follow Bender and Friedman (2018) on providing a Data Statement for our corpus, in order to provide a fuller picture of the possibilities and limitations of the data, and to allow future researchers to spot any biases we might have missed.",
"cite_spans": [
{
"start": 10,
"end": 36,
"text": "Bender and Friedman (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Statement",
"sec_num": null
},
{
"text": "CURATION RATIONALE We use Facebook postings originally collected in 2014; any identifying information of the authors has been removed by anonymization. LANGUAGE VARIETY Eighteen different languages, as identified by language classification, therefore presumably mostly standard. Due to the setting (Facebook posts), some non-standard language is likely. SPEAKER DEMOGRAPHICS Unknown, though gender could be inferred from first names before anonymization. Due to the setting, all authors need to have access to internet, which means a young demographic is likely. SPEECH SITUATION Facebook posts selflabeled with emotions -i.e., short, written, spontaneous texts written synchronously with a broad audience in mind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Statement",
"sec_num": null
},
{
"text": "TEXT CHARACTERISTICS Wide range of topics, but confined broadly to emotional issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Statement",
"sec_num": null
},
{
"text": "Results on additional multilingual datasets are given in table 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Macro-F1 score for the results on other datasets",
"sec_num": null
},
{
"text": "The testing datasets are displayed in table 11. Table 12 gives monolingual and zero-shot results for the two logistic regression models, Uni-LR an BT-LR. Note that zero-shot results are generally at or below baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "C Datasets",
"sec_num": null
},
{
"text": "For extracting the bigram types, we remove all punctuation and split the sentences into tokens using nltk (Loper and Bird, 2002) word tokenizer. We keep bigrams with more than 5 occurences. For extracting the emoticons/punctuation, we keep only Unicode punctuation characters and use nltk TweetTokenizer to split the sentences into tokens. We discard any token consisting of a single character and replace all occurences of more than 3 consecutive punctuation characters with just 3 characters. We keep tokens with more than 5 occurences. Tables 16 through 18 give results for mBERT Small models, separated out for the different emotions. ",
"cite_spans": [
{
"start": 106,
"end": 128,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 539,
"end": 559,
"text": "Tables 16 through 18",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "E Code switching & Emoticons",
"sec_num": null
},
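A sketch of the bigram and emoticon/punctuation extraction described above, using the same NLTK tokenizers; the filtering regexes approximate, rather than reproduce, the exact rules in the text.

```python
# Hedged sketch of bigram and emoticon/punctuation extraction with NLTK
# (requires the nltk "punkt" tokenizer data).
import re
from collections import Counter
from string import punctuation

from nltk.tokenize import TweetTokenizer, word_tokenize

PUNCT_CLASS = f"[{re.escape(punctuation)}]"


def frequent_bigrams(texts, min_count=6):
    counts = Counter()
    for text in texts:
        tokens = word_tokenize(re.sub(PUNCT_CLASS, " ", text))  # strip punctuation
        counts.update(zip(tokens, tokens[1:]))
    return {bg for bg, c in counts.items() if c >= min_count}


def frequent_emoticons(texts, min_count=6):
    tok = TweetTokenizer()
    counts = Counter()
    for text in texts:
        # collapse runs of more than 3 consecutive punctuation characters to 3
        text = re.sub(r"([^\w\s]{3})[^\w\s]+", r"\1", text)
        for t in tok.tokenize(text):
            if len(t) > 1 and not any(ch.isalnum() for ch in t):  # punctuation-only tokens
                counts[t] += 1
    return {t for t, c in counts.items() if c >= min_count}
```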
{
"text": "https://github.com/saffsd/langid.py 2 https://github.com/google/cld3 3 https://fasttext.cc/docs/en/ language-identification.html(Joulin et al., 2016b,a) 4 Here and in what follows, we use standard ISO language codes, which are given in table 1.5 https://spacy.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://scikit-learn.org 7 https://github.com/google-research/ bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We keep features that all of our languages have annotations for: 81A (Order of Subject, Object and Verb), 82A (Order of Subject and Verb), and 83A (Order of Object and Verb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "EmoNet: Fine-grained emotion detection with gated recurrent neural networks",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abdul",
"suffix": ""
},
{
"first": "-Mageed",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "718--728",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1067"
]
},
"num": null,
"urls": [],
"raw_text": "Muhammad Abdul-Mageed and Lyle Ungar. 2017. EmoNet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 718-728, Vancouver, Canada. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "NTUA-SLP at Semeval-2018 task 1: Predicting affective content in tweets with deep attentive RNNs and transfer learning",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Athanasiou",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Chronopoulou",
"suffix": ""
},
{
"first": "Athanasia",
"middle": [],
"last": "Kolovou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "245--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Baziotis, Nikos Athanasiou, Alexandra Chronopoulou, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth Narayanan, and Alexandros Potamianos. 2018. NTUA-SLP at Semeval-2018 task 1: Predicting affective content in tweets with deep attentive RNNs and transfer learning. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018), pages 245--255.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "587--604",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00041"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The expression of the emotions in man and animals",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Darwin",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Darwin. [1872] 1998. The expression of the emotions in man and animals. Oxford University Press, USA. (Original work published 1872).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What scientists who study emotion agree about",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 2016,
"venue": "Perspectives on Psychological Science",
"volume": "11",
"issue": "1",
"pages": "31--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 2016. What scientists who study emotion agree about. Perspectives on Psychological Science, 11(1):31-34.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Matthijs Douze, H\u00e9rve J\u00e9gou, and Tomas Mikolov. 2016a. Fasttext.zip: Compressing text classification models",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.03651"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H\u00e9rve J\u00e9gou, and Tomas Mikolov. 2016a. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.01759"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016b. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Nltk: The natural language toolkit",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Compu- tational Linguistics, pages 63-70.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semeval-2018 task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval- 2018 task 1: Affect in tweets. In Proceedings of the 12th international workshop on semantic evaluation, pages 1-17.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How multilingual is multilingual bert? arXiv preprint",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01502"
]
},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? arXiv preprint arXiv:1906.01502.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The psychology and biology of emotion",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. 1994. The psychology and biology of emotion. HarperCollins College Publishers.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT is not an interlingua and the bias of tokenization",
"authors": [
{
"first": "Jasdeep",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP",
"volume": "",
"issue": "",
"pages": "47--55",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6106"
]
},
"num": null,
"urls": [],
"raw_text": "Jasdeep Singh, Bryan McCann, Richard Socher, and Caiming Xiong. 2019. BERT is not an interlingua and the bias of tokenization. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 47-55, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Crowdsourcing and validating event-focused emotion corpora for German and English",
"authors": [
{
"first": "Enrica",
"middle": [],
"last": "Troiano",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4005--4011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrica Troiano, Sebastian Pad\u00f3, and Roman Klinger. 2019. Crowdsourcing and validating event-focused emotion corpora for German and English. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4005- 4011.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Overview of nlpcc 2018 shared task 1: Emotion detection in codeswitching text",
"authors": [
{
"first": "Zhongqing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qingying",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "CCF International Conference on Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "429--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqing Wang, Shoushan Li, Fan Wu, Qingying Sun, and Guodong Zhou. 2018. Overview of nlpcc 2018 shared task 1: Emotion detection in code- switching text. In CCF International Conference on Natural Language Processing and Chinese Comput- ing, pages 429-433. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Grundriss der Psychologie (Outlines of Psychology)",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wundt",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wundt. 1896. Grundriss der Psychologie (Outlines of Psychology). Leipzig: Engelmann.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Emergence of things felt: harnessing the semantic space of facebook feeling tags",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Zimmerman",
"suffix": ""
},
{
"first": "Mari-Klara",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hardt",
"suffix": ""
},
{
"first": "Ravi",
"middle": [],
"last": "Vatrapu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Thirty Sixth International Conference on Information Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Zimmerman, Mari-Klara Stein, Daniel Hardt, and Ravi Vatrapu. 2015. Emergence of things felt: harnessing the semantic space of facebook feel- ing tags. In Proceedings of the Thirty Sixth Interna- tional Conference on Information Systems.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Heatmap of relative emotion distribution per language.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Number of shared WALS features as a function of zero-shot performance for various models.",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Box plot of emotion ratios over all languages",
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "There is a wide variety in the amount of data per language, ranging from 284,265 posts for English, the most frequent language, to 869 posts for Bengali (see table 1).",
"content": "<table><tr><td colspan=\"3\">ISO Samples Language</td><td>Family</td></tr><tr><td>code</td><td/><td/><td/></tr><tr><td>ben</td><td>869</td><td colspan=\"2\">Bengali Indo-European</td></tr><tr><td>cmn</td><td>4909</td><td>Chinese</td><td>Sino-Tibetan</td></tr><tr><td>deu</td><td>5902</td><td colspan=\"2\">German Indo-European</td></tr><tr><td>eng</td><td>284265</td><td colspan=\"2\">English Indo-European</td></tr><tr><td>fra</td><td>6557</td><td colspan=\"2\">French Indo-European</td></tr><tr><td>hin</td><td>1823</td><td colspan=\"2\">Hindi Indo-European</td></tr><tr><td>ind</td><td colspan=\"3\">6201 Indonesian Austronesian</td></tr><tr><td>ita</td><td>6709</td><td colspan=\"2\">Italian Indo-European</td></tr><tr><td>khm</td><td>977</td><td colspan=\"2\">Khmer Austroasiatic</td></tr><tr><td>mya</td><td>953</td><td>Burmese</td><td>Sino-Tibetan</td></tr><tr><td>nld</td><td>2201</td><td colspan=\"2\">Dutch Indo-European</td></tr><tr><td>por</td><td colspan=\"3\">31326 Portuguese Indo-European</td></tr><tr><td>rom</td><td colspan=\"3\">1940 Romanian Indo-European</td></tr><tr><td>spa</td><td>31326</td><td colspan=\"2\">Spanish Indo-European</td></tr><tr><td>tgl</td><td>4909</td><td colspan=\"2\">Tagalog Austronesian</td></tr><tr><td>tha</td><td>3803</td><td>Thai</td><td>Tai-Kadai</td></tr><tr><td>vie</td><td colspan=\"3\">3956 Vietnamese Austroasiatic</td></tr><tr><td>zsm</td><td>4908</td><td colspan=\"2\">Malay Austronesian</td></tr></table>",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "Languages in Universal Joy data set",
"content": "<table><tr><td>Distribution of Emotions per Language There</td></tr><tr><td>are significant differences in the prevalence of each</td></tr></table>",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Number of samples per language and emotion, for top ten languages, significant outliers at \u03b1 = 0.05 in bold, using \u03c7 2 test \u03c7 2 = 1780054.57, dof = 36, N = 50, p 0.001.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "Model Training UJ EmoNet ISEAR Avg.",
"content": "<table><tr><td colspan=\"3\">EmoNet EmoNet 0.23</td><td>0.47</td><td>0.41 0.37</td></tr><tr><td/><td>Small</td><td>0.45</td><td>0.31</td><td>0.24 0.33</td></tr><tr><td>LR</td><td>Large</td><td>0.52</td><td>0.41</td><td>0.31 0.41</td></tr><tr><td/><td>Huge</td><td>0.52</td><td>0.47</td><td>0.37 0.45</td></tr><tr><td/><td>Small</td><td>0.46</td><td>0.40</td><td>0.48 0.45</td></tr><tr><td>mBERT</td><td>Large</td><td>0.58</td><td>0.48</td><td>0.46 0.51</td></tr><tr><td/><td>Huge</td><td>0.63</td><td>0.55</td><td>0.49 0.56</td></tr></table>",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF7": {
"type_str": "table",
"text": "Macro-F1 results for mono & cross-lingual learning on the Universal Joy data. mBERT and LR models.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF9": {
"type_str": "table",
"text": "mBERT macro-F1 results in mono-lingual setting on two sets of universal joy data. Best result for each training dataset per language in bold.",
"content": "<table><tr><td colspan=\"2\">Dataset Method</td><td>Model</td><td>eng</td><td>por</td><td>spa cmn</td><td>tgl Avg</td></tr><tr><td>Small</td><td>Zero-shot single-lang avg</td><td colspan=\"5\">LR mBERT 0.40 0.36 0.34 0.37 0.30 0.35 0.26 0.24 0.24 0.17 0.22 0.23</td></tr><tr><td/><td>Zero-shot multilingual</td><td colspan=\"5\">LR mBERT 0.47 0.40 0.34 0.39 0.41 0.40 0.28 0.28 0.26 0.16 0.26 0.25</td></tr><tr><td>Large</td><td>Zero-shot single lang avg</td><td colspan=\"4\">LR mBERT 0.42 0.41 0.37 0.30 0.29 0.29</td><td>0.29 0.40</td></tr><tr><td/><td>Zero-shot multilingual</td><td colspan=\"4\">LR mBERT 0.45 0.45 0.39 0.33 0.35 0.31</td><td>0.33 0.43</td></tr></table>",
"num": null,
"html": null
},
"TABREF10": {
"type_str": "table",
"text": "Macro-F1 results for zero-shot learning with Small and Large training sets. Single lang avg. is based on average results of models for each language other than test language. Multilingual involves a single model with the same amount of training data as single-language, but evenly mixed among the different languages. Results below the random baseline in gray.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF11": {
"type_str": "table",
"text": "Data ben deu fra hin ita nld rom Avg ind khm mya tha vie zsm Avg mono 0.26 0.37 0.42 0.36 0.32 0.36 0.35 0.35 0.40 0.27 0.36 0.33 0.39 0.47 0.37 S 0.35 0.38 0.42 0.30 0.37 0.39 0.35 0.37 0.34 0.31 0.21 0.32 0.34 0.34 0.31 L 0.34 0.37 0.42 0.29 0.37 0.40 0.38 0.37 0.35 0.32 0.29 0.33 0.37 0.37 0.34 L&S 0.38 0.38 0.44 0.30 0.38 0.40 0.37 0.38 0.35 0.32 0.28 0.32 0.38 0.37 0.34 Huge en 0.34 0.39 0.43 0.30 0.37 0.42 0.35 0.37 0.36 0.34 0.30 0.32 0.36 0.36 0.34",
"content": "<table><tr><td>Indo-European</td><td>non Indo-European</td></tr></table>",
"num": null,
"html": null
},
"TABREF12": {
"type_str": "table",
"text": "mBERT macro-F1 results for zero-shot learning on low-resource languages in Universal Joy data. Improvements hold mainly for IE languages.",
"content": "<table><tr><td colspan=\"5\">Model Bigrams Emoticons WALS Genealogy</td></tr><tr><td>LR</td><td>0.54</td><td>0.31</td><td>0.01</td><td>0.50</td></tr><tr><td>mBERT</td><td>0.18</td><td>0.41</td><td>0.27</td><td>0.57</td></tr></table>",
"num": null,
"html": null
},
"TABREF13": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Spearman's rank correlation between perfor-</td></tr><tr><td>mance and various language similarity measures. Sig-</td></tr><tr><td>nificant correlations at \u03b1 = 0.05, Bonferroni-corrected</td></tr><tr><td>for each model, in bold.</td></tr></table>",
"num": null,
"html": null
},
"TABREF15": {
"type_str": "table",
"text": "",
"content": "<table><tr><td/><td>: Results on other datasets</td></tr><tr><td colspan=\"2\">Original Emotion Mapped Emotion</td></tr><tr><td>accomplished</td><td/></tr><tr><td>amused</td><td/></tr><tr><td>angry</td><td>anger</td></tr><tr><td>annoyed</td><td>anger</td></tr><tr><td>awesome</td><td/></tr><tr><td>bad</td><td/></tr><tr><td>confident</td><td/></tr><tr><td>confused</td><td/></tr><tr><td>depressed</td><td>sadness</td></tr><tr><td>determined</td><td/></tr><tr><td>disappointed</td><td>sadness</td></tr><tr><td>disgusted</td><td/></tr><tr><td>down</td><td>sadness</td></tr><tr><td>excited</td><td>anticipation</td></tr><tr><td>fantastic</td><td>joy</td></tr><tr><td>great</td><td>joy</td></tr><tr><td>happy</td><td>joy</td></tr><tr><td>heartbroken</td><td>sadness</td></tr><tr><td>hopeful</td><td>anticipation</td></tr><tr><td>pissed</td><td>anger</td></tr><tr><td>proud</td><td/></tr><tr><td>pumped</td><td>anticipation</td></tr><tr><td>sad</td><td>sadness</td></tr><tr><td>scared</td><td>fear</td></tr><tr><td>super</td><td>joy</td></tr><tr><td>wonderful</td><td>joy</td></tr><tr><td>worried</td><td>fear</td></tr></table>",
"num": null,
"html": null
},
"TABREF16": {
"type_str": "table",
"text": "Mapping from Facebook emotion to the 5 basic emotions.",
"content": "<table><tr><td>Dataset</td><td colspan=\"4\">Lang. Anger Anticip. Fear</td><td colspan=\"3\">Joy Sadness Total</td></tr><tr><td/><td>eng</td><td>58</td><td>400</td><td>11</td><td>384</td><td>128</td><td>981</td></tr><tr><td/><td>spa</td><td>56</td><td>228</td><td>5</td><td>538</td><td>154</td><td>981</td></tr><tr><td>UJ Testing</td><td>por cmn</td><td>56 92</td><td>254 218</td><td>7 21</td><td>418 533</td><td>246 117</td><td>981 981</td></tr><tr><td/><td>tgl</td><td>129</td><td>183</td><td>32</td><td>433</td><td>204</td><td>981</td></tr><tr><td/><td>ben</td><td>120</td><td>211</td><td>7</td><td>249</td><td>282</td><td>869</td></tr><tr><td/><td>deu</td><td>425</td><td>1475</td><td colspan=\"2\">8 3388</td><td colspan=\"2\">606 5902</td></tr><tr><td/><td>fra</td><td>382</td><td>1788</td><td colspan=\"2\">22 3222</td><td colspan=\"2\">1143 6557</td></tr><tr><td/><td>hin</td><td>274</td><td>231</td><td>8</td><td>830</td><td colspan=\"2\">480 1823</td></tr><tr><td/><td>ind</td><td>382</td><td>1841</td><td colspan=\"2\">32 3077</td><td colspan=\"2\">869 6201</td></tr><tr><td/><td>ita</td><td>472</td><td>1910</td><td colspan=\"2\">20 3656</td><td colspan=\"2\">651 6709</td></tr><tr><td>UJ Low Resource</td><td>khm</td><td>115</td><td>158</td><td>23</td><td>469</td><td>212</td><td>977</td></tr><tr><td/><td>zsm</td><td>326</td><td>1344</td><td colspan=\"2\">34 2566</td><td colspan=\"2\">638 4908</td></tr><tr><td/><td>mya</td><td>177</td><td>130</td><td>9</td><td>412</td><td>225</td><td>953</td></tr><tr><td/><td>nld</td><td>150</td><td>788</td><td>10</td><td>981</td><td colspan=\"2\">272 2201</td></tr><tr><td/><td>rom</td><td>97</td><td>560</td><td>8</td><td>923</td><td colspan=\"2\">352 1940</td></tr><tr><td/><td>tha</td><td>244</td><td>938</td><td colspan=\"2\">21 2202</td><td colspan=\"2\">398 3803</td></tr><tr><td/><td>vie</td><td>176</td><td>1137</td><td colspan=\"2\">39 1982</td><td colspan=\"2\">622 3956</td></tr></table>",
"num": null,
"html": null
},
"TABREF17": {
"type_str": "table",
"text": "",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">: Testing Datasets</td></tr><tr><td>Model</td><td colspan=\"3\">Dataset Language eng</td><td>por</td><td>spa cmn</td><td>tgl</td></tr><tr><td/><td/><td>eng</td><td colspan=\"3\">0.40 0.30 0.25 0.30 0.29</td></tr><tr><td/><td/><td>por</td><td colspan=\"3\">0.28 0.48 0.31 0.25 0.27</td></tr><tr><td/><td>Small</td><td>spa</td><td colspan=\"3\">0.27 0.38 0.38 0.26 0.25</td></tr><tr><td>Uni-LR</td><td/><td>cmn tgl</td><td colspan=\"3\">0.32 0.25 0.26 0.43 0.28 0.31 0.29 0.26 0.26 0.56</td></tr><tr><td/><td/><td>eng</td><td colspan=\"3\">0.51 0.28 0.27 0.29 0.44</td></tr><tr><td/><td>Large</td><td>por</td><td colspan=\"3\">0.28 0.52 0.35 0.27 0.29</td></tr><tr><td/><td/><td>spa</td><td colspan=\"3\">0.32 0.37 0.47 0.31 0.29</td></tr><tr><td/><td/><td>eng</td><td colspan=\"3\">0.45 0.21 0.22 0.24 0.26</td></tr><tr><td/><td/><td>por</td><td colspan=\"3\">0.25 0.48 0.32 0.21 0.19</td></tr><tr><td/><td>Small</td><td>spa</td><td colspan=\"3\">0.24 0.32 0.41 0.15 0.21</td></tr><tr><td>BT-LR</td><td/><td>cmn tgl</td><td colspan=\"3\">0.29 0.19 0.21 0.46 0.21 0.26 0.22 0.23 0.10 0.59</td></tr><tr><td/><td/><td>eng</td><td colspan=\"3\">0.52 0.23 0.26 0.21 0.41</td></tr><tr><td/><td>Large</td><td>por</td><td colspan=\"3\">0.28 0.50 0.33 0.21 0.29</td></tr><tr><td/><td/><td>spa</td><td colspan=\"3\">0.31 0.36 0.46 0.21 0.28</td></tr></table>",
"num": null,
"html": null
},
"TABREF18": {
"type_str": "table",
"text": "Macro-F1 results in mono-lingual setting for the Logistic Regression models. Results below the baseline in gray.",
"content": "<table><tr><td>Bigram</td><td># Languages</td></tr><tr><td>i love</td><td>14</td></tr><tr><td>love you</td><td>12</td></tr><tr><td>good morning</td><td/></tr><tr><td>of the</td><td>9</td></tr><tr><td>new year</td><td>9</td></tr><tr><td>the best</td><td>8</td></tr><tr><td>this is</td><td>8</td></tr><tr><td>in the</td><td>8</td></tr><tr><td>have a</td><td>8</td></tr><tr><td>thank you</td><td>7</td></tr><tr><td>happy birthday</td><td>7</td></tr><tr><td>so much</td><td>6</td></tr><tr><td>coming soon</td><td>6</td></tr><tr><td>we are</td><td>6</td></tr><tr><td>for the</td><td>6</td></tr><tr><td>i am</td><td>6</td></tr><tr><td>on the</td><td>5</td></tr><tr><td>see you</td><td>5</td></tr><tr><td>happy new</td><td>5</td></tr><tr><td>a nice</td><td>5</td></tr><tr><td>more info</td><td>4</td></tr><tr><td>to be</td><td>4</td></tr><tr><td>make up</td><td>4</td></tr><tr><td>you all</td><td>4</td></tr><tr><td>like share</td><td>4</td></tr></table>",
"num": null,
"html": null
},
"TABREF19": {
"type_str": "table",
"text": "Prevalence of common bigrams between languages",
"content": "<table><tr><td colspan=\"2\">Pattern # Languages</td></tr><tr><td>!!</td><td>18</td></tr><tr><td>!!!</td><td>18</td></tr><tr><td>...</td><td>18</td></tr><tr><td>???</td><td>18</td></tr><tr><td>:)</td><td>18</td></tr><tr><td>..</td><td>18</td></tr><tr><td>:D</td><td>17</td></tr><tr><td>??</td><td>16</td></tr><tr><td>:(</td><td>16</td></tr><tr><td>***</td><td>16</td></tr><tr><td>!.</td><td>16</td></tr><tr><td>;)</td><td>16</td></tr><tr><td>!!!.</td><td>15</td></tr><tr><td>:-)</td><td>15</td></tr><tr><td>:'(</td><td>15</td></tr><tr><td>).</td><td>15</td></tr><tr><td>...!!!</td><td>15</td></tr><tr><td>,,,</td><td>15</td></tr><tr><td>...!</td><td>14</td></tr><tr><td>.(</td><td>14</td></tr><tr><td>\"</td><td>14</td></tr><tr><td>,,</td><td>14</td></tr><tr><td>...#</td><td>13</td></tr><tr><td>!!.</td><td>13</td></tr><tr><td>!...</td><td>13</td></tr></table>",
"num": null,
"html": null
},
"TABREF20": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Prevalence of common punctuation patterns</td></tr><tr><td>between languages</td></tr></table>",
"num": null,
"html": null
},
"TABREF21": {
"type_str": "table",
"text": "F1-scores for random baseline.",
"content": "<table><tr><td colspan=\"3\">language anger anticipation fear</td><td colspan=\"2\">joy sadness avg</td></tr><tr><td>eng</td><td>0.45</td><td colspan=\"2\">0.63 0.15 0.59</td><td>0.49 0.46</td></tr><tr><td>por</td><td>0.39</td><td colspan=\"2\">0.54 0.13 0.66</td><td>0.68 0.48</td></tr><tr><td>spa</td><td>0.25</td><td colspan=\"2\">0.44 0.29 0.71</td><td>0.46 0.43</td></tr><tr><td>cmn</td><td>0.67</td><td colspan=\"2\">0.47 0.86 0.75</td><td>0.61 0.67</td></tr><tr><td>tgl</td><td>0.47</td><td colspan=\"2\">0.45 0.52 0.65</td><td>0.55 0.53</td></tr><tr><td>avg</td><td>0.45</td><td colspan=\"2\">0.51 0.39 0.67</td><td>0.56 0.51</td></tr></table>",
"num": null,
"html": null
},
"TABREF22": {
"type_str": "table",
"text": "F1-scores for mono-lingual classification.",
"content": "<table><tr><td colspan=\"3\">language anger anticipation fear</td><td colspan=\"3\">joy sadness avg</td></tr><tr><td>eng</td><td>0.42</td><td colspan=\"2\">0.65 0.24 0.58</td><td colspan=\"2\">0.49 0.47</td></tr><tr><td>por</td><td>0.28</td><td colspan=\"2\">0.54 0.00 0.61</td><td>0.55</td><td>0.4</td></tr><tr><td>spa</td><td>0.19</td><td colspan=\"2\">0.47 0.00 0.63</td><td colspan=\"2\">0.41 0.34</td></tr><tr><td>cmn</td><td>0.44</td><td colspan=\"2\">0.42 0.00 0.69</td><td colspan=\"2\">0.41 0.39</td></tr><tr><td>tgl</td><td>0.26</td><td colspan=\"2\">0.38 0.43 0.53</td><td colspan=\"2\">0.44 0.41</td></tr><tr><td>avg</td><td>0.32</td><td colspan=\"2\">0.49 0.13 0.61</td><td colspan=\"2\">0.46 0.40</td></tr></table>",
"num": null,
"html": null
},
"TABREF23": {
"type_str": "table",
"text": "F1-scores for zero-shot multi-lingual classification",
"content": "<table><tr><td colspan=\"3\">language anger anticipation fear</td><td colspan=\"2\">joy sadness avg</td></tr><tr><td>eng</td><td>0.43</td><td colspan=\"2\">0.66 0.07 0.56</td><td>0.54 0.45</td></tr><tr><td>por</td><td>0.41</td><td colspan=\"2\">0.53 0.25 0.65</td><td>0.65 0.50</td></tr><tr><td>spa</td><td>0.33</td><td colspan=\"2\">0.49 0.18 0.72</td><td>0.53 0.45</td></tr><tr><td>cmn</td><td>0.63</td><td colspan=\"2\">0.47 0.89 0.77</td><td>0.62 0.67</td></tr><tr><td>tgl</td><td>0.51</td><td colspan=\"2\">0.45 0.65 0.64</td><td>0.57 0.56</td></tr><tr><td>avg</td><td>0.46</td><td colspan=\"2\">0.52 0.41 0.67</td><td>0.58 0.53</td></tr></table>",
"num": null,
"html": null
},
"TABREF24": {
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}
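The table entries in this file keep their cell data as flat HTML strings inside the "content" fields. Below is a minimal sketch of how those strings can be read back into rows using only the Python standard library; the file name 2021.wassa-1.7.json and the pdf_parse -> ref_entries key path are assumptions based on the usual S2ORC layout, not something fixed by the paper, and may need adjusting.

# Minimal sketch: parse one embedded HTML table string from this S2ORC-style JSON.
# Assumed (not prescribed by the paper): the JSON is saved locally as "2021.wassa-1.7.json"
# and the TABREF entries sit under pdf_parse -> ref_entries; adjust the path if the keys differ.
import json
from html.parser import HTMLParser

class RowCollector(HTMLParser):
    """Collect the <td> cell texts of every <tr> in a flat HTML <table> string."""
    def __init__(self):
        super().__init__()
        self.rows = []        # list of rows, each a list of cell strings
        self._row = []
        self._cell = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_cell = True
            self._cell = []

    def handle_endtag(self, tag):
        if tag == "td":
            self._row.append("".join(self._cell).strip())
            self._in_cell = False
        elif tag == "tr":
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._cell.append(data)

with open("2021.wassa-1.7.json", encoding="utf-8") as fh:
    paper = json.load(fh)

# TABREF16 holds the per-language emotion counts; any other TABREF key works the same way.
collector = RowCollector()
collector.feed(paper["pdf_parse"]["ref_entries"]["TABREF16"]["content"])
for row in collector.rows:
    print(row)

The printed rows come out as lists of strings (a header row followed by one row per language), which can then be loaded into whatever tabular structure is convenient.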