{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:39.606788Z" }, "title": "MilaNLP at WASSA 2021: Does BERT Feel Sad When You Cry?", "authors": [ { "first": "Tommaso", "middle": [], "last": "Fornaciari", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bocconi University", "location": { "addrLine": "Via Sarfatti 25", "postCode": "20136", "settlement": "Milan", "country": "Italy" } }, "email": "tommaso.fornaciari@unibocconi.it" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bocconi University", "location": { "addrLine": "Via Sarfatti 25", "postCode": "20136", "settlement": "Milan", "country": "Italy" } }, "email": "f.bianchi@unibocconi.it" }, { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bocconi University", "location": { "addrLine": "Via Sarfatti 25", "postCode": "20136", "settlement": "Milan", "country": "Italy" } }, "email": "debora.nozza@unibocconi.it" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bocconi University", "location": { "addrLine": "Via Sarfatti 25", "postCode": "20136", "settlement": "Milan", "country": "Italy" } }, "email": "dirk.hovy@unibocconi.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The paper describes the MilaNLP team's submission (Bocconi University, Milan) in the WASSA 2021 Shared Task on Empathy Detection and Emotion Classification. We focus on Track 2-Emotion Classification-which consists of predicting the emotion of reactions to English news stories at the essay-level. We test different models based on multi-task and multi-input frameworks. The goal was to better exploit all the correlated information given in the data set. 
We find, though, that empathy as an auxiliary task in multi-task learning and demographic attributes as additional input provide worse performance with respect to single-task learning. While the result is competitive within the competition, our results suggest that emotion and empathy are not related tasks-at least for the purpose of prediction.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The paper describes the MilaNLP team's submission (Bocconi University, Milan) in the WASSA 2021 Shared Task on Empathy Detection and Emotion Classification. We focus on Track 2-Emotion Classification-which consists of predicting the emotion of reactions to English news stories at the essay-level. We test different models based on multi-task and multi-input frameworks. The goal was to better exploit all the correlated information given in the data set. We find, though, that empathy as an auxiliary task in multi-task learning and demographic attributes as additional input provide worse performance with respect to single-task learning. While the result is competitive within the competition, our results suggest that emotion and empathy are not related tasks-at least for the purpose of prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Different researchers have been exploring emotion prediction from text (Abdul-Mageed and Ungar, 2017; Nozza et al., 2017) . The WASSA-2021 shared task (Tafreshi et al., 2021) tackles the prediction of empathy (Track 1 of the challenge) and emotion (Track 2 of the challenge) in text. We, the MilaNLP lab, participated in Track 2 of the challenge. Nozza et al. (2020) show that Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) can provide accurate results for many different languages in different tasks. 
Indeed, we contributed to this year's WASSA workshop with two papers (Lamprinidis et al., 2021; Bianchi et al., 2021) that show that BERT can obtain good results in the emotion prediction task.", "cite_spans": [ { "start": 71, "end": 101, "text": "(Abdul-Mageed and Ungar, 2017;", "ref_id": "BIBREF0" }, { "start": 102, "end": 121, "text": "Nozza et al., 2017)", "ref_id": "BIBREF13" }, { "start": 151, "end": 174, "text": "(Tafreshi et al., 2021)", "ref_id": null }, { "start": 348, "end": 367, "text": "Nozza et al. (2020)", "ref_id": "BIBREF12" }, { "start": 441, "end": 462, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 610, "end": 636, "text": "(Lamprinidis et al., 2021;", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This system paper describes our approach to the emotion prediction shared task. Based on our previous experience with text classification tasks (Rashid et al., 2020; Fornaciari and Hovy, 2019; Uma et al., 2020) , our initial idea was to use BERT, and support emotion prediction by adding an auxiliary task with the empathy score provided in the training set. Using such a Multi-Task Learning (MTL) setup can significantly boost performance on the main task, by exploiting complementary information in the tasks, and by acting as a regularizer (a model that has to predict more than one task is less prone to overfitting to any one of them). However, we unexpectedly find evidence for the opposite: in this setting, using empathy as an auxiliary task in multi-task learning does not help. In fact, adding empathy prediction hurts performance compared to a single-task model. This finding adds to the literature showing that auxiliary tasks in MTL setups need to be related to the main task to help performance (Mart\u00ednez Alonso and Plank, 2017). 
It also indicates that empathy is not directly a contributing factor to emotions, i.e., that there is no strong correlation between the two tasks.", "cite_spans": [ { "start": 144, "end": 165, "text": "(Rashid et al., 2020;", "ref_id": "BIBREF14" }, { "start": 166, "end": 192, "text": "Fornaciari and Hovy, 2019;", "ref_id": "BIBREF5" }, { "start": 193, "end": 210, "text": "Uma et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focused on emotion prediction (Track 2) of reactions to English news stories. The data set is an extended version of the one presented in Buechel et al. (2018) . Each instance corresponds to an empathic reaction to news stories extracted from popular online news platforms. A set of 1860 training documents annotated with seven emotions was given (see Table 2 for the data set size). Each text document is associated with an empathy score ranging from 1 to 7; in Table 1 we show some example texts with the emotion and empathy labels that come from the data set.", "cite_spans": [ { "start": 156, "end": 177, "text": "Buechel et al. (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 368, "end": 375, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 494, "end": 501, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Emotion Empathy it is really diheartening to read about these immigrants from this article who drowned. it makes me feel anxious and upset how the whole ordeal happened. it is a terrible occurrence that this had to happen at the mediterranean sea. thankfully there were some survivors. 
the fact that babies were lost makes it that much more emotional to read all of this", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": null }, { "text": "sadness 5.667", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": null }, { "text": "This is a crazy story with so many facets to it, omg. I mean on one hand, I don't support an eye for an eye. I don't support the death penalty and I don't support blinding someone. BUT on the other hand, this is a country where women really struggle and the justice system is not well developed. ALSO, he blinded a FOUR YEAR OLD GIRL. What the fuck is wrong with this guy. So if this was in America I would not support it, but I don't feel right condemning the actions of an entirely different country for doing what they felt needed to be done.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": null }, { "text": "anger 1 Table 1 : Examples of documents from the data set with their emotion and empathy labels. The first document has relatively high empathy, while the second one has very low empathy. ", "cite_spans": [], "ref_spans": [ { "start": 8, "end": 15, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Text", "sec_num": null }, { "text": "In this section, we describe the different configurations of the system we use for the emotion prediction task. Recall that we focused only on Track 2 of the WASSA Shared Task challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "3" }, { "text": "The data set allows building both Multi-Input (MI) and Multi-Task Learning (MTL) models. We tried both methods, separately and together, and we compare them with a single-input, single-task model. This model, which uses only the text as input to predict emotions, is our Single-Task Learning (STL) baseline. 
We also build three Multi-Input (MI) models that, besides the text, also include gender information (2-input model, MI1), gender and income (3-input model, MI2), and gender, income, and Interpersonal Reactivity Index (IRI) (4-input model, MI3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "3.1" }, { "text": "Given the availability of further dependent variables, we create a Multi-Task Learning (MTL) model that takes the text as its only input and jointly predicts emotions (classification task with categorical cross-entropy), empathy, and distress (regression task) (MTL2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "3.1" }, { "text": "Lastly, we implement an MI-MTL model that exploits text, gender, income, and IRI as input and predicts emotions, empathy, and distress (MI3-MTL2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "3.1" }, { "text": "In all our models, we use the BERT language model (Devlin et al., 2019) . In particular, we use the English bert-large-uncased model, which has 336M parameters. The model comes with its own tokenizer, which we use to extract a word \u00d7 contextual embedding matrix for each text. We use this matrix as input for a single-layer, single-head Transformer, following Vaswani et al. (2017) , which is in charge of detecting emotion-specific patterns. Lastly, a fully connected layer provides the output prediction.", "cite_spans": [ { "start": 50, "end": 71, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 367, "end": 388, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Architectures of the Models", "sec_num": "3.2" }, { "text": "In the MTL models, we have a separate fully connected layer for each task. 
Even though the different tasks concern the prediction of values on a similar scale, we also tried to add a normalization layer, with the aim of keeping the scales similar across tasks. We did not find performance improvements, therefore we report the results without normalization. Table 3 reports Accuracy (Acc), Precision (P), Recall (R), and F1-score (F1) on the development set (significance levels over STL: *: p \u2264 0.05).", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Architectures of the Models", "sec_num": "3.2" }, { "text": "In the MI models, besides the BERT representation, we also use vectors of size 3, 1, and 4 for gender categories, income, and IRI values, respectively. The gender vectors are one-hot encoded; income and IRI values are (column-wise) normalized float values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architectures of the Models", "sec_num": "3.2" }, { "text": "As loss functions, we use cross-entropy for the classification task, and mean squared error for the two regression tasks. We use the Adam optimizer (Kingma and Ba, 2015). We select the models through early stopping, halting training when the loss on the development set decreases by less than 12% for three consecutive epochs. We manually tuned the hyper-parameters: learning rate 0.002, drop-out probability 0.2, and batch size 64.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architectures of the Models", "sec_num": "3.2" }, { "text": "To test the significance of the possible improvements over the STL base model, we use a bootstrap sampling test (S\u00f8gaard et al., 2014) , with 1000 loops and a sample size of 30%. Table 3 presents our results. In many cases (Ruder, 2017 ), using multi-task learning on related tasks can help performance, especially when labeled data is sparse or unbalanced. Intuitively, it would seem that empathy and emotion would make for good candidates. 
However, in our experiments with this combination, we found a negative effect of empathy on emotion prediction. Upon closer inspection, that makes sense: being empathetic towards someone does not necessarily entail a particular emotion.", "cite_spans": [ { "start": 112, "end": 134, "text": "(S\u00f8gaard et al., 2014)", "ref_id": "BIBREF16" }, { "start": 223, "end": 235, "text": "(Ruder, 2017", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 179, "end": 186, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Architectures of the Models", "sec_num": "3.2" }, { "text": "Somewhat surprisingly, neither do demographic attributes. Various prior works have found those factors to help in classification settings (Volkova et al., 2013; Hovy, 2015; Lynn et al., 2017) , especially with MTL (Ruder, 2017; Benton et al., 2017; Li et al., 2018) . However, they do not seem to improve emotion classification here.", "cite_spans": [ { "start": 138, "end": 160, "text": "(Volkova et al., 2013;", "ref_id": "BIBREF20" }, { "start": 161, "end": 172, "text": "Hovy, 2015;", "ref_id": "BIBREF6" }, { "start": 173, "end": 191, "text": "Lynn et al., 2017)", "ref_id": "BIBREF10" }, { "start": 214, "end": 227, "text": "(Ruder, 2017;", "ref_id": "BIBREF15" }, { "start": 228, "end": 248, "text": "Benton et al., 2017;", "ref_id": "BIBREF1" }, { "start": 249, "end": 265, "text": "Li et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Therefore, for the shared task submission we chose the STL model, which showed the highest F-measure on the development set in our experimental conditions. On the test set, we obtained an F1-score of 48.6; with this score our team ranked third in Track 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Our results suggest that combining empathy and emotion prediction is a difficult task. 
Given our low scores in the multi-task setting, we also speculate that the two tasks might not be closely related. Future work should consider better ways to aggregate the information coming from these two tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "The authors are members of the Data and Marketing Insights research unit in the Bocconi Institute for Data Science and Analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "EmoNet: Fine-grained emotion detection with gated recurrent neural networks", "authors": [ { "first": "Muhammad", "middle": [], "last": "Abdul-Mageed", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "718--728", "other_ids": { "DOI": [ "10.18653/v1/P17-1067" ] }, "num": null, "urls": [], "raw_text": "Muhammad Abdul-Mageed and Lyle Ungar. 2017. EmoNet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 718-728, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multitask learning for mental health conditions with limited social media data", "authors": [ { "first": "Adrian", "middle": [], "last": "Benton", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "152--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multitask learning for mental health conditions with limited social media data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 152-162, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Feel-it: Emotion and sentiment classification for the Italian language", "authors": [ { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Federico Bianchi, Debora Nozza, and Dirk Hovy. 2021. Feel-it: Emotion and sentiment classification for the Italian language. In Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Modeling empathy and distress in reaction to news stories", "authors": [ { "first": "Sven", "middle": [], "last": "Buechel", "suffix": "" }, { "first": "Anneke", "middle": [], "last": "Buffone", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Slaff", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4758--4765", "other_ids": { "DOI": [ "10.18653/v1/D18-1507" ] }, "num": null, "urls": [], "raw_text": "Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Ungar, and Jo\u00e3o Sedoc. 2018. Modeling empathy and distress in reaction to news stories. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4758-4765, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Geolocation with Attention-Based Multitask Learning Models", "authors": [ { "first": "Tommaso", "middle": [], "last": "Fornaciari", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (WNUT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tommaso Fornaciari and Dirk Hovy. 2019. Geolocation with Attention-Based Multitask Learning Models. In Proceedings of the 5th Workshop on Noisy User-generated Text (WNUT).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Demographic factors improve classification performance", "authors": [ { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "752--762", "other_ids": { "DOI": [ "10.3115/v1/P15-1073" ] }, "num": null, "urls": [], "raw_text": "Dirk Hovy. 2015. Demographic factors improve classification performance. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 752-762, Beijing, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Universal joy a data set and results for classifying emotions across languages", "authors": [ { "first": "Sotiris", "middle": [], "last": "Lamprinidis", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hardt", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sotiris Lamprinidis, Federico Bianchi, Daniel Hardt, and Dirk Hovy. 2021. Universal joy a data set and results for classifying emotions across languages. In Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Towards robust and privacy-preserving text representations", "authors": [ { "first": "Yitong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "25--30", "other_ids": { "DOI": [ "10.18653/v1/P18-2005" ] }, "num": null, "urls": [], "raw_text": "Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25-30, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Human centered NLP with user-factor adaptation", "authors": [ { "first": "Veronica", "middle": [], "last": "Lynn", "suffix": "" }, { "first": "Youngseo", "middle": [], "last": "Son", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Niranjan", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "H", "middle": [ "Andrew" ], "last": "Schwartz", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1146--1155", "other_ids": { "DOI": [ "10.18653/v1/D17-1119" ] }, "num": null, "urls": [], "raw_text": "Veronica Lynn, Youngseo Son, Vivek Kulkarni, Niranjan Balasubramanian, and H. Andrew Schwartz. 2017. Human centered NLP with user-factor adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1146-1155, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "When is multitask learning effective? semantic sequence prediction under varying data conditions", "authors": [ { "first": "H\u00e9ctor", "middle": [], "last": "Mart\u00ednez Alonso", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "44--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "H\u00e9ctor Mart\u00ednez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic sequence prediction under varying data conditions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 44-53, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "What the [MASK]? making sense of language-specific BERT models", "authors": [ { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.02912" ] }, "num": null, "urls": [], "raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2020. What the [MASK]? making sense of language-specific BERT models. 
arXiv preprint arXiv:2003.02912.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A multi-view sentiment corpus", "authors": [ { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Elisabetta", "middle": [], "last": "Fersini", "suffix": "" }, { "first": "Enza", "middle": [], "last": "Messina", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "273--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debora Nozza, Elisabetta Fersini, and Enza Messina. 2017. A multi-view sentiment corpus. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 273-280, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Helpful or hierarchical? predicting the communicative strategies of chat participants, and their impact on success", "authors": [ { "first": "Farzana", "middle": [], "last": "Rashid", "suffix": "" }, { "first": "Tommaso", "middle": [], "last": "Fornaciari", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Eduardo", "middle": [], "last": "Blanco", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Vega-Redondo", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", "volume": "", "issue": "", "pages": "2366--2371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Farzana Rashid, Tommaso Fornaciari, Dirk Hovy, Eduardo Blanco, and Fernando Vega-Redondo. 2020. Helpful or hierarchical? predicting the communicative strategies of chat participants, and their impact on success. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2366-2371.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An overview of multi-task learning in deep neural networks", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.05098" ] }, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "What's in a p-value in nlp?", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Johannsen", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "H\u00e9ctor Mart\u00ednez", "middle": [], "last": "Alonso", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the eighteenth conference on computational natural language learning", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and H\u00e9ctor Mart\u00ednez Alonso. 2014. What's in a p-value in nlp? In Proceedings of the eighteenth conference on computational natural language learning, pages 1-10.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "
WASSA2021 Shared Task: Predicting Empathy and Emotion in Reaction to News Stories", "authors": [ { "first": "Shabnam", "middle": [], "last": "Tafreshi", "suffix": "" }, { "first": "Orph\u00e9e", "middle": [], "last": "De Clercq", "suffix": "" }, { "first": "Valentin", "middle": [], "last": "Barriere", "suffix": "" }, { "first": "Sven", "middle": [], "last": "Buechel", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Balahur", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shabnam Tafreshi, Orph\u00e9e De Clercq, Valentin Barriere, Sven Buechel, Jo\u00e3o Sedoc, and Alexandra Balahur. 2021. WASSA2021 Shared Task: Predicting Empathy and Emotion in Reaction to News Stories. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A case for soft loss functions", "authors": [ { "first": "Alexandra", "middle": [], "last": "Uma", "suffix": "" }, { "first": "Tommaso", "middle": [], "last": "Fornaciari", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Silviu", "middle": [], "last": "Paun", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Human Computation and Crowdsourcing", "volume": "8", "issue": "", "pages": "173--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandra Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, and Massimo Poesio. 2020. A case for soft loss functions. 
In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 8, pages 173-177.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Exploring demographic language variations to improve multilingual sentiment analysis in social media", "authors": [ { "first": "Svitlana", "middle": [], "last": "Volkova", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1815--1827", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring demographic language variations to improve multilingual sentiment analysis in social media. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1815-1827, Seattle, Washington, USA. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "type_str": "table", "text": "", "content": "", "num": null } } } }