{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:06:57.926844Z" }, "title": "XLM-EMO: Multilingual Emotion Prediction in Social Media Text", "authors": [ { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bocconi University", "location": { "addrLine": "Via Sarfatti 25", "settlement": "Milan", "country": "Italy" } }, "email": "f.bianchi@unibocconi.it" }, { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bocconi University", "location": { "addrLine": "Via Sarfatti 25", "settlement": "Milan", "country": "Italy" } }, "email": "debora.nozza@unibocconi.it" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bocconi University", "location": { "addrLine": "Via Sarfatti 25", "settlement": "Milan", "country": "Italy" } }, "email": "dirk.hovy@unibocconi.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Detecting emotion in text allows social and computational scientists to study how people behave and react to online events. However, developing these tools for different languages requires data that is not always available. This paper collects the available emotion detection datasets across 19 languages. We train a multilingual emotion prediction model for social media data, XLM-EMO. The model shows competitive performance in a zero-shot setting, suggesting it is helpful in the context of low-resource languages. We release our model to the community so that interested researchers can directly use it.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Detecting emotion in text allows social and computational scientists to study how people behave and react to online events. However, developing these tools for different languages requires data that is not always available. 
This paper collects the available emotion detection datasets across 19 languages. We train a multilingual emotion prediction model for social media data, XLM-EMO. The model shows competitive performance in a zero-shot setting, suggesting it is helpful in the context of low-resource languages. We release our model to the community so that interested researchers can directly use it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Emotion Detection is an important task for Natural Language Processing and for Affective Computing. Indeed, several resources and models have been proposed (Alm et al., 2005; Abdul-Mageed and Ungar, 2017; Nozza et al., 2017; Xia and Ding, 2019; Demszky et al., 2020 , inter alia) for this task. These models can be used by social and computational scientists (Verma et al., 2020; Kleinberg et al., 2020; Huguet Cabot et al., 2020) to better understand how people react to events through the use of social media. However, these methods often require large training sets that are not always available for low-resource languages. 
Nonetheless, multilingual methods (Wu and Dredze, 2019) have gained traction across the field, showing powerful few-shot and zero-shot capabilities (Bianchi et al., 2021b; Nozza, 2021).", "cite_spans": [ { "start": 156, "end": 174, "text": "(Alm et al., 2005;", "ref_id": "BIBREF2" }, { "start": 175, "end": 204, "text": "Abdul-Mageed and Ungar, 2017;", "ref_id": "BIBREF0" }, { "start": 205, "end": 224, "text": "Nozza et al., 2017;", "ref_id": "BIBREF25" }, { "start": 225, "end": 244, "text": "Xia and Ding, 2019;", "ref_id": null }, { "start": 245, "end": 265, "text": "Demszky et al., 2020", "ref_id": "BIBREF11" }, { "start": 359, "end": 379, "text": "(Verma et al., 2020;", "ref_id": "BIBREF36" }, { "start": 380, "end": 403, "text": "Kleinberg et al., 2020;", "ref_id": "BIBREF19" }, { "start": 404, "end": 430, "text": "Huguet Cabot et al., 2020)", "ref_id": "BIBREF16" }, { "start": 661, "end": 682, "text": "(Wu and Dredze, 2019)", "ref_id": null }, { "start": 771, "end": 794, "text": "(Bianchi et al., 2021b;", "ref_id": "BIBREF6" }, { "start": 795, "end": 807, "text": "Nozza, 2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this short paper, we introduce a new resource: XLM-EMO. XLM-EMO is a model for multilingual emotion prediction on social media data. We collected datasets for emotion detection in 19 different languages and mapped the labels of each dataset to a common set {joy, anger, fear, sadness} that is then used to train the model. 
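The label harmonization described above can be sketched as follows. This is a minimal illustration: the source label names in the mapping (e.g., happy, happiness) are hypothetical stand-ins, not the labels of any specific dataset used in the paper.

```python
# Minimal sketch of mapping heterogeneous dataset labels onto the shared
# {joy, anger, fear, sadness} inventory. The source label names below are
# hypothetical examples, not taken from any specific dataset.
TARGET_LABELS = {"joy", "anger", "fear", "sadness"}

LABEL_MAP = {
    "happy": "joy",       # hypothetical source label
    "happiness": "joy",   # hypothetical source label
    "anger": "anger",
    "fear": "fear",
    "sadness": "sadness",
}

def harmonize(examples):
    """Keep only (text, label) pairs whose label maps into the shared set."""
    out = []
    for text, label in examples:
        mapped = LABEL_MAP.get(label)
        if mapped in TARGET_LABELS:
            out.append((text, mapped))
    return out
```

Messages whose labels fall outside the shared set (e.g., surprise or neutral) are simply dropped, mirroring the filtering described in the Appendix.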
We show that XLM-EMO maintains stable performance across languages and is competitive with language-specific baselines in zero-shot settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We believe that XLM-EMO can be of help to the community, as emotion prediction is becoming an increasingly relevant task in NLP; a multilingual model that can perform zero-shot emotion prediction can benefit the many low-resource languages that still lack a dataset for emotion detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We release XLM-EMO, a multilingual emotion detection model for social media text. XLM-EMO shows competitive zero-shot capabilities on unseen languages. We release the model in two versions, base and large, to adapt to different possible use cases. We make the models 1 and the code to train them freely available as a Python package that can be directly embedded in novel data analytics pipelines. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": null }, { "text": "We surveyed the literature to determine which emotion detection datasets are available and which kinds of emotions they cover. Details on how we process this data can be found in the Appendix; here we give an overview of the transformation pipeline we adopted and of the datasets we included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Related Work", "sec_num": "2" }, { "text": "The datasets we collected and used in this paper are presented in Table 1 with the method of annotation and the linguistic family of each language. 
Figure 1 shows the class distribution.", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 152, "end": 160, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Data and Related Work", "sec_num": "2" }, { "text": "We describe here the general guidelines we used to create this dataset; readers can find details for each dataset in the Appendix. For all datasets we removed the emotions that are not in the set joy, anger, fear, sadness (e.g., Cortiz et al. (2021) , Vasantharajan et al. (2022) , Shome (2021) used the 27 emotions from GoEmotions (Demszky et al., 2020) , of which we collected only the subset matching our emotions). There are some exceptions to the Twitter-only setup: the Tamil dataset of Vasantharajan et al. (2022) contains YouTube comments.", "cite_spans": [ { "start": 238, "end": 258, "text": "Cortiz et al. (2021)", "ref_id": "BIBREF9" }, { "start": 261, "end": 288, "text": "Vasantharajan et al. (2022)", "ref_id": null }, { "start": 340, "end": 362, "text": "(Demszky et al., 2020)", "ref_id": "BIBREF11" }, { "start": 476, "end": 503, "text": "Vasantharajan et al. (2022)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data and Related Work", "sec_num": "2" }, { "text": "Some data was impossible to reconstruct because the tweets no longer exist, and thus only a subset is still available (e.g., Korean (Do and Choi, 2015)). For some languages, we applied undersampling to limit the skewness of the final distribution (e.g., both Shome (2021) and Cortiz et al. (2021) provide tens of thousands of tweets). To simplify reproducibility, we will release the exact data extraction scripts that we used to collect our data.", "cite_spans": [ { "start": 298, "end": 318, "text": "Cortiz et al. 
(2021)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Related Work", "sec_num": "2" }, { "text": "There are papers that we did not include in our research: Vijay et al. (2018) introduce a Hindi dataset that contains Hindi-English code-switched text. However, the Hindi is Romanized, and only a small amount of such data was used to pre-train XLM. Sabri et al. (2021) released a collection of Persian tweets annotated with emotions; however, their data has not been evaluated in a training task, and thus we decided not to include it in our training. We also found a dataset for Japanese Danielewicz-Betz et al. (2015) ; however, the dataset is not publicly available.", "cite_spans": [ { "start": 60, "end": 79, "text": "Vijay et al. (2018)", "ref_id": "BIBREF37" }, { "start": 242, "end": 261, "text": "Sabri et al. (2021)", "ref_id": "BIBREF29" }, { "start": 481, "end": 511, "text": "Danielewicz-Betz et al. (2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Related Work", "sec_num": "2" }, { "text": "French and German data are collected by translating Spanish (Mohammad et al., 2018) tweets with DeepL. 3 For Chinese, we use the messages found in the NLPCC dataset (Wang et al., 2018) . Note that this dataset has some internal code-switching.", "cite_spans": [ { "start": 67, "end": 90, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF22" }, { "start": 173, "end": 192, "text": "(Wang et al., 2018)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Related Work", "sec_num": "2" }, { "text": "The most similar work to ours is that of Lamprinidis et al. (2021) . Lamprinidis et al. (2021) introduce a dataset collected through distant supervision on Facebook, covering 6 main languages for training and a set of 12 other languages that can be used for testing. We will run a comparison with this model in Section 3.3. 
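The undersampling applied above to the largest datasets can be sketched as follows; this is our own illustration, and the per-label cap and seed value are assumptions, not the settings used for any specific dataset in the paper.

```python
import random
from collections import defaultdict

def undersample(examples, cap, seed=42):
    """Keep at most `cap` (text, label) pairs per emotion label.
    The fixed seed makes the sampling reproducible; the cap and seed
    values here are illustrative assumptions."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for example in examples:
        by_label[example[1]].append(example)
    kept = []
    # Iterate labels in sorted order so the output is deterministic.
    for label in sorted(by_label):
        items = by_label[label]
        if len(items) > cap:
            items = rng.sample(items, cap)
        kept.extend(items)
    return kept
```

Because the seed is fixed, re-running the extraction yields the same subset, which matches the reproducibility note in the Appendix.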
", "cite_spans": [ { "start": 45, "end": 70, "text": "Lamprinidis et al. (2021)", "ref_id": "BIBREF20" }, { "start": 73, "end": 98, "text": "Lamprinidis et al. (2021)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Related Work", "sec_num": "2" }, { "text": "We perform three different experiments. The first one is meant to show the performance of XLM-EMO across the different languages. The second one evaluates how well XLM-EMO works on a zero-shot task in which data from one language is held out; we focus on testing three languages: English, Arabic, and Vietnamese. The third evaluation shows the performance of XLM-EMO on additional datasets, different from those used for training, on which we compare our model with other state-of-the-art models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We fine-tune 3 different models: XLM-RoBERTa-base (Conneau et al., 2020) , XLM-RoBERTa-large (Conneau et al., 2020) and Twitter-XLM-RoBERTa (Barbieri et al., 2021) . The first two are trained on data from 100 languages, while the latter is a version of XLM-RoBERTa-base fine-tuned on Twitter data. We use 10% of the data for validation (we evaluate the model every 50 steps and keep the best checkpoint) and 5% for testing. Figure 2 shows the comparison between the three models, averaged over 5 runs with different seeds. These results show that the model maintains stable performance even when trained on data from 19 languages. The overall average Macro-F1s for XLM-RoBERTa-large, XLM-RoBERTa-base and XLM-Twitter-base are 0.86, 0.81 and 0.84. The results also indicate that XLM-RoBERTa-large is the best model; however, XLM-Twitter-base performs better than XLM-RoBERTa-base, probably because it is a Twitter-specific model. 
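The Macro-F1 figures reported above are unweighted averages of per-class F1 over the four emotions. A self-contained sketch of the metric (our own illustration of the standard definition, not the authors' evaluation code):

```python
def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 over the classes present in gold."""
    f1_scores = []
    for cls in sorted(set(gold)):
        # Per-class counts of true positives, false positives, false negatives.
        tp = sum(1 for g, p in zip(gold, pred) if g == cls and p == cls)
        fp = sum(1 for g, p in zip(gold, pred) if g != cls and p == cls)
        fn = sum(1 for g, p in zip(gold, pred) if g == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

Because the average is unweighted, rare emotions such as fear count as much as frequent ones, which is why the metric is informative on the skewed class distribution shown in Figure 1.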
Unfortunately, at the time of writing, a large version of XLM-Twitter does not exist.", "cite_spans": [ { "start": 49, "end": 71, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF8" }, { "start": 91, "end": 113, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF8" }, { "start": 138, "end": 161, "text": "(Barbieri et al., 2021)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 417, "end": 425, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Performance on Test Set", "sec_num": "3.1" }, { "text": "The performance is reliable for all languages but Korean and Filipino, probably because both occur infrequently in the training data. It should also be noted that Chinese and Tamil score only slightly above 0.6 with the large model. Considering these results, we will refer to the fine-tuned XLM-RoBERTa-large as XLM-EMO and use it in the rest of the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on Test Set", "sec_num": "3.1" }, { "text": "We run 3 zero-shot comparisons to show the model's performance on unseen languages. We select Arabic, English, and Vietnamese. Target-language data is split into training and test sets (80/20). A language-specific model is trained (we again select the best checkpoint on a validation set of 10% of the training data). We use language-specific BERT-large models for all three languages. 456 . We also use an XLM-EMO trained on all the languages plus the 80% training data also used for the language-specific model. Results in Table 2 show that XLM-EMO is competitive in the zero-shot setting. Still, language-specific models beat both the zero-shot model and the model with additional training data. 
7 On English data, XLM-EMO Trained seems to show better performance than the language-specific model, but this is probably because some English data is still present in the other languages' datasets.", "cite_spans": [ { "start": 389, "end": 392, "text": "456", "ref_id": null } ], "ref_spans": [ { "start": 528, "end": 535, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Zero-shot Tests", "sec_num": "3.2" }, { "text": "We compare how XLM-EMO (large) behaves on out-of-training data to better understand whether it generalizes to other domains. In this test, we use other models to see how they perform in comparison with XLM-EMO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Available Models", "sec_num": "3.3" }, { "text": "As datasets, we use the MultiEmotion Italian dataset (ME) (Sprugnoli, 2020) , which contains YouTube and Facebook comments annotated with emotions (we collect only the comments with emotions that overlap with ours), and the EmoEvent dataset (EE) in English and Spanish (Plaza del Arco et al., 2020). 8 For both datasets we kept only the texts annotated with one of the labels we also use.", "cite_spans": [ { "start": 58, "end": 75, "text": "(Sprugnoli, 2020)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Available Models", "sec_num": "3.3" }, { "text": "As language-specific competitors (LS-EMO), we use FEEL-IT (Bianchi et al., 2021a) as found on HuggingFace 9 and EmoNet Abdul-Mageed and Ungar (2017) as found on GitHub 10 , respectively. 
In addition, we compare with the multilingual baseline Universal Joy (UJ) (Lamprinidis et al., 2021), using their combi model trained on 6 languages (English, Spanish, Portuguese, Tagalog, Indonesian, and Chinese); note that Italian was not seen by the UJ model during training.", "cite_spans": [ { "start": 76, "end": 99, "text": "(Bianchi et al., 2021a)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Available Models", "sec_num": "3.3" }, { "text": "EmoNet and UJ predict additional emotions. To be as fair as possible, we filter out the missing emotions from the predicted logits so that both models predict only joy, anger, sadness, and fear. The results in Table 3 show that XLM-EMO is the best-performing model.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Comparison with Available Models", "sec_num": "3.3" }, { "text": "Unfortunately, we have not been able to find datasets for emotion detection in any African language. Moreover, automatic translation tools often do not cover African languages, or they do not provide reliable evidence of being able to produce those translations with a sufficient level of quality. We reached out to members of our community to ask whether there was any work we were not aware of, but we did not find any. Further iterations of this resource might focus on those languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "4" }, { "text": "In this short paper, we propose XLM-EMO, a novel resource for emotion detection. The model shows stable performance across 19 languages and is competitive in a zero-shot setting, supporting its usage in low-resource contexts. 
We plan to enrich this model with more languages as new datasets become available, so that we can continually improve these results and offer better methods to the community. 2020-4288, MONICA). Federico Bianchi, Debora Nozza, and Dirk Hovy are members of the MilaNLP group, and the Data and Marketing Insights Unit of the Bocconi Institute for Data Science and Analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "There is still a mismatch between the adoption of the methods we release and our understanding of them. We are releasing a resource for multilingual emotion detection, but any list of language resources runs the risk of being (mis)interpreted as exhaustive, with languages included being regarded as more important than those that are not. We would like to emphatically state that this is not the case here: we tried to include as many languages as possible to allow for a wide comparison and provide a basis for further research. Any omission should not be read as a value judgment. A Training Details", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethical Considerations", "sec_num": null }, { "text": "All the models are trained with the same pipeline. We report the shared parameters in Table 4 . The only difference can be found in the experiments presented in Section 3.2, the zero-shot tests. Since the language-specific datasets contain less data, we reduced the number of steps between evaluations and checkpoints (i.e., we evaluate every 5 steps). 
The loss we use is weighted with respect to the frequency of each label.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "A.1 Parameters", "sec_num": null }, { "text": "This configuration was obtained after several grid-search experiments; we found that one of the parameters that most impacts the training of the large model configurations is the batch size. Models are trained on an Nvidia GeForce RTX 2080 Ti.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Parameters", "sec_num": null }, { "text": "We align our pre-processing to the one described in (Barbieri et al., 2021) , replacing user tags with", "cite_spans": [ { "start": 52, "end": 75, "text": "(Barbieri et al., 2021)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Pre-processing", "sec_num": null }, { "text": "Models can be found at https://huggingface.co/MilaNLProc/ 2 See https://github.com/MilaNLProc/xlm-emo, where we also release other details for replication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We are aware that this process might introduce bias in the model as described by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/bert-large-uncased 5 https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter 6 https://huggingface.co/vinai/phobert-large", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Similar conclusions have been reached by Nozza et al. 
(2020). 8 We could not find another Spanish model to test against this data, since the Spanish emotion recognition model (P\u00e9rez", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/MilaNLProc/feel-it-italian-emotion 10 https://github.com/UBC-NLP/EmoNet", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This project has partially received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR), and by Fondazione Cariplo (grant No.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "@user and links with http. For those datasets that had a different pre-processing (e.g., some datasets used @username to replace user tags) we applied a normalization procedure to align them with our pre-processing. PhoBERT Note that the Vietnamese model requires a particular pre-processing pipeline: as suggested by the authors on their GitHub page, for this specific model we apply segmentation on the Vietnamese text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "In general, when a message is annotated with multiple emotions we remove it from the dataset. When a dataset comes with multiple emotions that could overlap (e.g., joy and enthusiasm), we just select the emotions of our interest and we do not apply any mapping (e.g., treating enthusiasm messages as joy). This is done to avoid bias in the final collection. We will also release our entire processing pipeline (mainly based on data transformations) so that interested researchers can re-run it. 
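The normalization described in A.2 and the annex (user tags replaced with @user, links with http) can be sketched as follows. The exact regular expressions are our assumptions, since the paper only names the replacement targets:

```python
import re

def normalize(text):
    """Replace URLs with 'http' and user mentions with '@user', as in the
    pre-processing described above. The regexes are illustrative assumptions."""
    text = re.sub(r"https?://\S+", "http", text)   # links -> http
    text = re.sub(r"@\w+", "@user", text)          # user tags -> @user
    return text
```

Applying the same function to every dataset also covers the alignment step mentioned above, e.g., datasets that originally used @username end up with the same @user placeholder.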
Note that all the sampling steps were run with a fixed seed so that they are reproducible. Arabic This data comes from the Affect in Tweets dataset (Mohammad et al., 2018) . We combine train, validation and test into a single dataset but drop the emotions that are not covered by our set of emotions. Bengali This dataset contains data from different sources, such as YouTube comments and Facebook posts. We only take the messages with emotions that are part of our set. English This data comes from the Affect in Tweets dataset (Mohammad et al., 2018) . We combine train, validation and test into a single dataset but drop the emotions that are not covered by our set of emotions. Spanish This data comes from the Affect in Tweets dataset (Mohammad et al., 2018) . We combine train, validation and test into a single dataset but drop the emotions that are not covered by our set of emotions. Filipino This is one of the languages with a lower amount of data. The number of tweets in Filipino (Lapitan et al., 2016) was already low in the original work (i.e., 647), and the final number is even lower since we removed the emotions that do not overlap with ours. French For this language, we translated the training data that comes from the Spanish subset of the Affect in Tweets dataset (Mohammad et al., 2018) . German For this language, we translated the training data that comes from the Spanish subset of the Affect in Tweets dataset (Mohammad et al., 2018) . Hindi This dataset comes from a translation of the original GoEmotions dataset (Demszky et al., 2020) . We selected only the emotions we are interested in and removed the others. Since this dataset has been translated with the Google API, we opted to sample only 2000 examples so as not to bias the representation too much. Indonesian We collected this dataset directly from the authors' work (Saputri et al., 2018); we dropped the love emotion and mapped happy to our emotion joy. Italian This dataset comes from the work of Bianchi et al. 
(2021a) ; their labels overlap with ours. Malayalam We were slightly less confident about the quality of the annotations of this dataset, and we thus sampled 200 messages for each emotion. Portuguese This dataset has been collected using a keyword search of terms related to emotions. We focus only on our target emotions and randomly sample a maximum of 1000 tweets. This is done because the keywords used for the emotions are few, and we would like to avoid biasing the actual representation. Romanian This dataset (Ciobotaru and Dinu, 2021) has been collected by scraping Twitter using specific keywords. Five emotions are considered; the additional one is neutral, which we remove. As our data, we used both the training and the validation data released by the authors. Russian We mainly focused on Twitter data, and from the Russian dataset Sboev et al. (2020) we extract only the data that comes from Twitter. We remove the tweets with the neutral label. Tamil The Tamil dataset contains YouTube comments, and we use the training dataset described by the authors. We decided to remove the long tail of messages with more than 30 tokens to make the dataset more consistent with the other datasets. Our labels are a subset of the labels described in the paper, and we take only the messages with those labels. Turkish The Turkish dataset contains 5 emotions, one of which is surprise, which we removed from our datasets. Vietnamese This dataset contains YouTube comments and has been manually annotated. We drop the emotions that are not covered in our dataset. Chinese This dataset comes from the challenge described by (Wang et al., 2018) . It contains Chinese messages, some of which contain English words (it is a code-switching dataset). Korean The Korean dataset contains tweets that we reconstructed using the Twitter API. Since the release of the dataset, most tweets have been deleted or are no longer available for other reasons. The dataset contains the Neutral label, which we filter out. 
The other labels easily map onto ours.", "cite_spans": [ { "start": 662, "end": 685, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF22" }, { "start": 1045, "end": 1068, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF22" }, { "start": 1251, "end": 1274, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF22" }, { "start": 1792, "end": 1815, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF22" }, { "start": 1942, "end": 1965, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF22" }, { "start": 2045, "end": 2067, "text": "(Demszky et al., 2020)", "ref_id": "BIBREF11" }, { "start": 2485, "end": 2507, "text": "Bianchi et al. (2021a)", "ref_id": "BIBREF5" }, { "start": 3005, "end": 3031, "text": "(Ciobotaru and Dinu, 2021)", "ref_id": "BIBREF7" }, { "start": 3339, "end": 3358, "text": "Sboev et al. (2020)", "ref_id": "BIBREF31" }, { "start": 4112, "end": 4131, "text": "(Wang et al., 2018)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "B Dataset Details", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "EmoNet: Fine-grained emotion detection with gated recurrent neural networks", "authors": [ { "first": "Muhammad", "middle": [], "last": "Abdul", "suffix": "" }, { "first": "-Mageed", "middle": [], "last": "", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-1067" ] }, "num": null, "urls": [], "raw_text": "Muhammad Abdul-Mageed and Lyle Ungar. 2017. EmoNet: Fine-grained emotion detection with gated recurrent neural networks. 
In Proceedings of the 55th", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "718--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 718-728, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Emotions from text: Machine learning for text-based emotion prediction", "authors": [ { "first": "Cecilia", "middle": [], "last": "Ovesdotter Alm", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "579--586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: Machine learning for text-based emotion prediction. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 579-586, Vancouver, British Columbia, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "XLM-T: A multilingual language model toolkit for Twitter", "authors": [ { "first": "Francesco", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Espinosa Anke", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.12250" ] }, "num": null, "urls": [], "raw_text": "Francesco Barbieri, Luis Espinosa Anke, and Jose Camacho-Collados. 
2021. XLM-T: A multilingual language model toolkit for Twitter. arXiv preprint arXiv:2104.12250.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "On the gap between adoption and understanding in NLP", "authors": [ { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "3895--3901", "other_ids": { "DOI": [ "10.18653/v1/2021.findings-acl.340" ] }, "num": null, "urls": [], "raw_text": "Federico Bianchi and Dirk Hovy. 2021. On the gap between adoption and understanding in NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3895-3901, Online. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "FEEL-IT: Emotion and sentiment classification for the Italian language", "authors": [ { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "76--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Federico Bianchi, Debora Nozza, and Dirk Hovy. 2021a. FEEL-IT: Emotion and sentiment classification for the Italian language. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 76-83, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Cross-lingual contextualized topic models with zero-shot learning", "authors": [ { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Terragni", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Elisabetta", "middle": [], "last": "Fersini", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "1676--1683", "other_ids": { "DOI": [ "10.18653/v1/2021.eacl-main.143" ] }, "num": null, "urls": [], "raw_text": "Federico Bianchi, Silvia Terragni, Dirk Hovy, Debora Nozza, and Elisabetta Fersini. 2021b. Cross-lingual contextualized topic models with zero-shot learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1676-1683, Online. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "RED: A novel dataset for Romanian emotion detection from tweets", "authors": [ { "first": "Alexandra", "middle": [], "last": "Ciobotaru", "suffix": "" }, { "first": "P", "middle": [], "last": "Liviu", "suffix": "" }, { "first": "", "middle": [], "last": "Dinu", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)", "volume": "", "issue": "", "pages": "291--300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandra Ciobotaru and Liviu P. Dinu. 2021. RED: A novel dataset for Romanian emotion detection from tweets. 
In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 291-300, Held Online. INCOMA Ltd.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A weak supervised dataset of fine-grained emotions in Portuguese", "authors": [ { "first": "Diogo", "middle": [], "last": "Cortiz", "suffix": "" }, { "first": "Jefferson", "middle": [ "O" ], "last": "Silva", "suffix": "" }, { "first": "Newton", "middle": [], "last": "Calegari", "suffix": "" }, { "first": "Ana", "middle": [ "Lu\u00edsa" ], "last": "Freitas", "suffix": "" }, { "first": "Ana", "middle": [ "Ang\u00e9lica" ], "last": "Soares", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Botelho", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Gaudencio R\u00eago", "suffix": "" }, { "first": "Waldir", "middle": [], "last": "Sampaio", "suffix": "" }, { "first": "Paulo", "middle": [ "Sergio" ], "last": "Boggio", "suffix": "" } ], "year": 2021, "venue": "Symposium in Information and Human Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diogo Cortiz, Jefferson O. Silva, Newton Calegari, Ana Lu\u00edsa Freitas, Ana Ang\u00e9lica Soares, Carolina Botelho, Gabriel Gaudencio R\u00eago, Waldir Sampaio, and Paulo Sergio Boggio. 2021. A weak supervised dataset of fine-grained emotions in Portuguese.
Symposium in Information and Human Language Technology.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Creating English and Japanese Twitter corpora for emotion analysis", "authors": [ { "first": "A", "middle": [], "last": "Danielewicz-Betz", "suffix": "" }, { "first": ",", "middle": [], "last": "", "suffix": "" }, { "first": "H", "middle": [], "last": "Kaneda", "suffix": "" }, { "first": "M", "middle": [], "last": "Mozgovoy", "suffix": "" }, { "first": "M", "middle": [], "last": "Purgina", "suffix": "" } ], "year": 2015, "venue": "International Journal of Knowledge Engineering-IACSIT", "volume": "1", "issue": "2", "pages": "120--124", "other_ids": { "DOI": [ "10.7763/ijke.2015.v1.20" ] }, "num": null, "urls": [], "raw_text": "A. Danielewicz-Betz, H. Kaneda, M. Mozgovoy, and M. Purgina. 2015. Creating English and Japanese Twitter corpora for emotion analysis. International Journal of Knowledge Engineering-IACSIT, 1(2):120-124.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "GoEmotions: A dataset of fine-grained emotions", "authors": [ { "first": "Dorottya", "middle": [], "last": "Demszky", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Movshovitz-Attias", "suffix": "" }, { "first": "Jeongwoo", "middle": [], "last": "Ko", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Cowen", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Nemade", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4040--4054", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.372" ] }, "num": null, "urls": [], "raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4040-4054, Online. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Korean Twitter emotion classification using automatically built emotion lexicons and fine-grained features", "authors": [ { "first": "Jin", "middle": [], "last": "Hyo", "suffix": "" }, { "first": "Ho-Jin", "middle": [], "last": "Do", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters", "volume": "", "issue": "", "pages": "142--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hyo Jin Do and Ho-Jin Choi. 2015. Korean Twitter emotion classification using automatically built emotion lexicons and fine-grained features. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters, pages 142-150, Shanghai, China.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Comparison of n-stage latent dirichlet allocation versus other topic modeling methods for emotion analysis", "authors": [ { "first": "Banu", "middle": [], "last": "Zekeriya Anil G\u00fcven", "suffix": "" }, { "first": "Tolgahan", "middle": [], "last": "Diri", "suffix": "" }, { "first": "", "middle": [], "last": "\u00c7akaloglu", "suffix": "" } ], "year": 2020, "venue": "Journal of the Faculty of Engineering and Architecture of Gazi University", "volume": "35", "issue": "4", "pages": "2135--2145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zekeriya Anil G\u00fcven, Banu Diri, and Tolgahan \u00c7akaloglu. 2020. Comparison of n-stage latent dirichlet allocation versus other topic modeling methods for emotion analysis.
Journal of the Faculty of Engineering and Architecture of Gazi University, 35(4):2135-2145.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Emotion recognition for Vietnamese social media text", "authors": [ { "first": "Anh", "middle": [], "last": "Vong", "suffix": "" }, { "first": "Duong", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Danh", "middle": [], "last": "Huynh-Cong Nguyen", "suffix": "" }, { "first": "Linh", "middle": [], "last": "Hoang Nguyen", "suffix": "" }, { "first": "Duc-Vu", "middle": [], "last": "Thi-Van Pham", "suffix": "" }, { "first": "Kiet", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Ngan", "middle": [], "last": "Van Nguyen", "suffix": "" }, { "first": "-Thuy", "middle": [], "last": "Luu", "suffix": "" }, { "first": "", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2019, "venue": "Computational Linguistics - 16th International Conference of the Pacific Association for Computational Linguistics", "volume": "1215", "issue": "", "pages": "319--333", "other_ids": { "DOI": [ "10.1007/978-981-15-6168-9_27" ] }, "num": null, "urls": [], "raw_text": "Vong Anh Ho, Duong Huynh-Cong Nguyen, Danh Hoang Nguyen, Linh Thi-Van Pham, Duc-Vu Nguyen, Kiet Van Nguyen, and Ngan Luu-Thuy Nguyen. 2019. Emotion recognition for Vietnamese social media text. In Computational Linguistics - 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019, Hanoi, Vietnam, October 11-13, 2019, volume 1215 of Communications in Computer and Information Science, pages 319-333.
Springer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "You sound just like your father\" commercial machine translation systems include stylistic biases", "authors": [ { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Tommaso", "middle": [], "last": "Fornaciari", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1686--1690", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.154" ] }, "num": null, "urls": [], "raw_text": "Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. \"You sound just like your father\" commercial machine translation systems include stylistic biases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1686-1690, Online. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Pragmatics behind Politics: Modelling Metaphor, Framing and Emotion in Political Discourse", "authors": [ { "first": "Pere-Llu\u00eds Huguet", "middle": [], "last": "Cabot", "suffix": "" }, { "first": "Verna", "middle": [], "last": "Dankers", "suffix": "" }, { "first": "David", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Agneta", "middle": [], "last": "Fischer", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "4479--4488", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.402" ] }, "num": null, "urls": [], "raw_text": "Pere-Llu\u00eds Huguet Cabot, Verna Dankers, David Abadi, Agneta Fischer, and Ekaterina Shutova. 2020. The Pragmatics behind Politics: Modelling Metaphor, Framing and Emotion in Political Discourse.
In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4479-4488, Online. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Malay-dataset, we gather bahasa malaysia corpus!, semi-supervised emotion dataset", "authors": [ { "first": "Zolkepli", "middle": [], "last": "Husein", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zolkepli Husein. 2018. Malay-dataset, we gather bahasa malaysia corpus!, semi-supervised emotion dataset. https://github.com/", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Mohammed Moshiul Hoque, and Iqbal H. Sarker. 2022. BEmoC: A corpus for identifying emotion in Bengali texts", "authors": [ { "first": "M", "middle": [ "D" ], "last": "", "suffix": "" }, { "first": "Asif", "middle": [], "last": "Iqbal", "suffix": "" }, { "first": "Avishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Omar", "middle": [], "last": "Sharif", "suffix": "" } ], "year": null, "venue": "SN Computer Science", "volume": "3", "issue": "2", "pages": "", "other_ids": { "DOI": [ "10.1007/s42979-022-01028-w" ] }, "num": null, "urls": [], "raw_text": "MD. Asif Iqbal, Avishek Das, Omar Sharif, Mohammed Moshiul Hoque, and Iqbal H. Sarker. 2022. BEmoC: A corpus for identifying emotion in Bengali texts.
SN Computer Science, 3(2):135.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Measuring Emotions in the COVID-19 Real World Worry Dataset", "authors": [ { "first": "Bennett", "middle": [], "last": "Kleinberg", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Van Der", "suffix": "" }, { "first": "Maximilian", "middle": [], "last": "Vegt", "suffix": "" }, { "first": "", "middle": [], "last": "Mozes", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020, Online", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bennett Kleinberg, Isabelle van der Vegt, and Maximilian Mozes. 2020. Measuring Emotions in the COVID-19 Real World Worry Dataset. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020, Online. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Universal joy a data set and results for classifying emotions across languages", "authors": [ { "first": "Sotiris", "middle": [], "last": "Lamprinidis", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hardt", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "62--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sotiris Lamprinidis, Federico Bianchi, Daniel Hardt, and Dirk Hovy. 2021. Universal joy a data set and results for classifying emotions across languages. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 62-75, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Crowdsourcing-based annotation of emotions in Filipino and English tweets", "authors": [ { "first": "Fermin", "middle": [], "last": "Roberto Lapitan", "suffix": "" }, { "first": "Riza", "middle": [ "Theresa" ], "last": "Batista-Navarro", "suffix": "" }, { "first": "Eliezer", "middle": [], "last": "Albacea", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WS-SANLP2016)", "volume": "", "issue": "", "pages": "74--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fermin Roberto Lapitan, Riza Theresa Batista-Navarro, and Eliezer Albacea. 2016. Crowdsourcing-based annotation of emotions in Filipino and English tweets. In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WS-SANLP2016), pages 74-82, Osaka, Japan. The COLING 2016 Organizing Committee.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "SemEval-2018 task 1: Affect in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--17", "other_ids": { "DOI": [ "10.18653/v1/S18-1001" ] }, "num": null, "urls": [], "raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1-17, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Exposing the limits of zero-shot cross-lingual hate speech detection", "authors": [ { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "907--914", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-short.114" ] }, "num": null, "urls": [], "raw_text": "Debora Nozza. 2021. Exposing the limits of zero-shot cross-lingual hate speech detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 907-914, Online. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "What the [MASK]? Making sense of language-specific BERT models", "authors": [ { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.02912" ] }, "num": null, "urls": [], "raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2020. What the [MASK]? Making sense of language-specific BERT models. 
arXiv preprint arXiv:2003.02912.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A multi-view sentiment corpus", "authors": [ { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Elisabetta", "middle": [], "last": "Fersini", "suffix": "" }, { "first": "Enza", "middle": [], "last": "Messina", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "273--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debora Nozza, Elisabetta Fersini, and Enza Messina. 2017. A multi-view sentiment corpus. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 273-280, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "EmoEvent: A multilingual emotion corpus based on different events", "authors": [ { "first": "Flor Miriam Plaza", "middle": [], "last": "Del Arco", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "L", "middle": [], "last": "Alfonso Urena", "suffix": "" }, { "first": "Maite", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "1492--1498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Flor Miriam Plaza del Arco, Carlo Strapparava, L. Alfonso Urena Lopez, and Maite Martin. 2020. EmoEvent: A multilingual emotion corpus based on different events. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1492-1498, Marseille, France.
European Language Resources Association.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "RoBERTuito: a pre-trained language model for social media text in Spanish", "authors": [ { "first": "Juan", "middle": [], "last": "Manuel P\u00e9rez", "suffix": "" }, { "first": "Dami\u00e1n", "middle": [ "A" ], "last": "Furman", "suffix": "" }, { "first": "Laura", "middle": [ "Alonso" ], "last": "Alemany", "suffix": "" }, { "first": "Franco", "middle": [], "last": "Luque", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2111.09453" ] }, "num": null, "urls": [], "raw_text": "Juan Manuel P\u00e9rez, Dami\u00e1n A. Furman, Laura Alonso Alemany, and Franco Luque. 2021a. RoBERTuito: a pre-trained language model for social media text in Spanish. arXiv preprint arXiv:2111.09453.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Python toolkit for sentiment analysis and socialnlp tasks", "authors": [ { "first": "Juan", "middle": [], "last": "Manuel P\u00e9rez", "suffix": "" }, { "first": "Juan", "middle": [ "Carlos" ], "last": "Giudici", "suffix": "" }, { "first": "Franco", "middle": [], "last": "Luque", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2106.09462" ] }, "num": null, "urls": [], "raw_text": "Juan Manuel P\u00e9rez, Juan Carlos Giudici, and Franco Luque. 2021b. pysentimiento: A Python toolkit for sentiment analysis and socialnlp tasks.
arXiv preprint arXiv:2106.09462.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "EmoPars: A collection of 30K emotion-annotated Persian social media texts", "authors": [ { "first": "Nazanin", "middle": [], "last": "Sabri", "suffix": "" }, { "first": "Reyhane", "middle": [], "last": "Akhavan", "suffix": "" }, { "first": "Behnam", "middle": [], "last": "Bahrak", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Student Research Workshop Associated with RANLP 2021", "volume": "", "issue": "", "pages": "167--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nazanin Sabri, Reyhane Akhavan, and Behnam Bahrak. 2021. EmoPars: A collection of 30K emotion-annotated Persian social media texts. In Proceedings of the Student Research Workshop Associated with RANLP 2021, pages 167-173, Online. INCOMA Ltd.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Emotion classification on Indonesian Twitter dataset", "authors": [ { "first": "Rahmad", "middle": [], "last": "Mei Silviana Saputri", "suffix": "" }, { "first": "Mirna", "middle": [], "last": "Mahendra", "suffix": "" }, { "first": "", "middle": [], "last": "Adriani", "suffix": "" } ], "year": 2018, "venue": "2018 International Conference on Asian Language Processing", "volume": "2018", "issue": "", "pages": "90--95", "other_ids": { "DOI": [ "10.1109/IALP.2018.8629262" ] }, "num": null, "urls": [], "raw_text": "Mei Silviana Saputri, Rahmad Mahendra, and Mirna Adriani. 2018. Emotion classification on Indonesian Twitter dataset. In 2018 International Conference on Asian Language Processing, IALP 2018, Bandung, Indonesia, November 15-17, 2018, pages 90-95.
IEEE.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Data-driven model for emotion detection in Russian texts", "authors": [ { "first": "Alexander", "middle": [ "G" ], "last": "Sboev", "suffix": "" }, { "first": "Aleksandr", "middle": [], "last": "Naumov", "suffix": "" }, { "first": "Roman", "middle": [ "B" ], "last": "Rybka", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1016/j.procs.2021.06.075" ] }, "num": null, "urls": [], "raw_text": "Alexander G. Sboev, Aleksandr Naumov, and Roman B. Rybka. 2020. Data-driven model for emotion detection in Russian texts. In Proceedings of the 2020", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Annual International Conference on Brain-Inspired Cognitive Architectures for Artificial Intelligence", "authors": [], "year": null, "venue": "", "volume": "2020", "issue": "", "pages": "637--642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual International Conference on Brain-Inspired Cognitive Architectures for Artificial Intelligence, BICA 2020, volume 190 of Procedia Computer Science, pages 637-642. Elsevier.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "EmoHinD: Fine-grained multilabel emotion recognition from Hindi texts with deep learning", "authors": [], "year": null, "venue": "12th International Conference on Computing Communication and Networking Technologies, ICCCNT 2021", "volume": "", "issue": "", "pages": "1--5", "other_ids": { "DOI": [ "10.1109/ICCCNT51525.2021.9579886" ] }, "num": null, "urls": [], "raw_text": "Debaditya Shome. 2021. EmoHinD: Fine-grained multi-label emotion recognition from Hindi texts with deep learning. In 12th International Conference on Computing Communication and Networking Technologies, ICCCNT 2021, pages 1-5.
IEEE.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "MultiEmotions-It: a new dataset for opinion polarity and emotion analysis for Italian", "authors": [ { "first": "Rachele", "middle": [], "last": "Sprugnoli", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Seventh Italian Conference on Computational Linguistics, CLiC-it 2020", "volume": "2769", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachele Sprugnoli. 2020. MultiEmotions-It: a new dataset for opinion polarity and emotion analysis for Italian. In Proceedings of the Seventh Italian Conference on Computational Linguistics, CLiC-it 2020, Bologna, Italy, March 1-3, 2021, volume 2769 of CEUR Workshop Proceedings. CEUR-WS.org.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Tamilemo: Finegrained emotion detection dataset for tamil", "authors": [ { "first": "Charangan", "middle": [], "last": "Vasantharajan", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Benhur", "suffix": "" }, { "first": "Prasanna", "middle": [], "last": "Kumar Kumarasen", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Ponnusamy", "suffix": "" }, { "first": "Sathiyaraj", "middle": [], "last": "Thangasamy", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2202.04725" ] }, "num": null, "urls": [], "raw_text": "Charangan Vasantharajan, Sean Benhur, Prasanna Kumar Kumarasen, Rahul Ponnusamy, Sathiyaraj Thangasamy, Ruba Priyadharshini, Thenmozhi Durairaj, Kanchana Sivanraju, Anbukkarasi Sampath, Bharathi Raja Chakravarthi, and John Phillip McCrae. 2022. Tamilemo: Finegrained emotion detection dataset for tamil.
arXiv preprint arXiv:2202.04725.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Identifying worry in Twitter: Beyond emotion analysis", "authors": [ { "first": "Reyha", "middle": [], "last": "Verma", "suffix": "" }, { "first": "Jithin", "middle": [], "last": "Christian Von Der Weth", "suffix": "" }, { "first": "Mohan", "middle": [], "last": "Vachery", "suffix": "" }, { "first": "", "middle": [], "last": "Kankanhalli", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science", "volume": "", "issue": "", "pages": "72--82", "other_ids": { "DOI": [ "10.18653/v1/2020.nlpcss-1.9" ] }, "num": null, "urls": [], "raw_text": "Reyha Verma, Christian von der Weth, Jithin Vachery, and Mohan Kankanhalli. 2020. Identifying worry in Twitter: Beyond emotion analysis. In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science, pages 72-82, Online. Association for Computational Linguistics.
Corpus creation and emotion prediction for Hindi-English code-mixed social media text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 128-135, New Orleans, Louisiana, USA. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Overview of NLPCC 2018 shared task 1: Emotion detection in codeswitching text", "authors": [ { "first": "Zhongqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shoushan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Qingying", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-3-319-99501-4_39" ] }, "num": null, "urls": [], "raw_text": "Zhongqing Wang, Shoushan Li, Fan Wu, Qingying Sun, and Guodong Zhou. 2018. Overview of NLPCC 2018 shared task 1: Emotion detection in code-switching text. In Natural Language Processing and", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "The performance (Macro-F1) of the three fine-tuned models across the various languages present in the test set. XLM-RoBERTa-large has the best performance. We averaged over runs with 5 different seeds." }, "TABREF2": { "text": "", "num": null, "content": "
", "type_str": "table", "html": null }, "TABREF3": { "text": "", "num": null, "content": "
Model    ME   EE-EN EE-ES
XLM-EMO  0.62 0.66  0.73
LS-EMO   0.58 0.44  -
UJ-Combi 0.35 0.52  0.51
", "type_str": "table", "html": null }, "TABREF4": { "text": "", "num": null, "content": "
", "type_str": "table", "html": null }, "TABREF6": { "text": "The main parameters we used to run the models. Rui Xia and Zixiang Ding. 2019. Emotion-cause pair extraction: A new task to emotion analysis in texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1003-1012, Florence, Italy. Association for Computational Linguistics.", "num": null, "content": "
Although we train for 5 epochs, note that we use a step-wise evaluation.
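The step-wise evaluation mentioned above can be sketched as a plain training loop that evaluates every few optimization steps, rather than only at epoch boundaries, and keeps the checkpoint with the best validation score. This is an illustrative sketch, not the paper's actual fine-tuning code: the function names, the step counts, and the toy score curve below are all assumptions.

```python
# Minimal sketch of step-wise evaluation: evaluate every `eval_every`
# optimization steps and remember the step with the best validation
# score (e.g., Macro-F1). `train_step` and `evaluate` are stand-ins
# for the real fine-tuning and validation routines.

def stepwise_training(num_steps, eval_every, train_step, evaluate):
    best_score, best_step = float("-inf"), None
    for step in range(1, num_steps + 1):
        train_step(step)              # one optimization step
        if step % eval_every == 0:    # periodic validation
            score = evaluate(step)
            if score > best_score:    # keep the best checkpoint so far
                best_score, best_step = score, step
    return best_step, best_score

# Toy validation curve: the score peaks at step 60 and then degrades,
# which is exactly the case step-wise evaluation is meant to catch.
scores = {20: 0.40, 40: 0.55, 60: 0.62, 80: 0.58, 100: 0.57}
best_step, best_score = stepwise_training(
    num_steps=100, eval_every=20,
    train_step=lambda s: None,
    evaluate=lambda s: scores[s],
)
print(best_step, best_score)  # → 60 0.62
```

With epoch-level evaluation only, the same run would report the (worse) final-step score instead of the step-60 peak.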
Chinese Computing - 7th CCF International Conference, NLPCC 2018, volume 11109 of Lecture Notes in Computer Science, pages 429-433. Springer.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguistics.
", "type_str": "table", "html": null } } } }