{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:19.064007Z" }, "title": "WASSA Shared Task: Predicting Empathy and Emotion in Reaction to News Stories", "authors": [ { "first": "Shabnam", "middle": [], "last": "Tafreshi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Georgetown University", "location": {} }, "email": "" }, { "first": "Orph\u00e9e", "middle": [], "last": "De Clercq", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ghent University", "location": {} }, "email": "orphee.declercq@ugent.be" }, { "first": "Valentin", "middle": [], "last": "Barriere", "suffix": "", "affiliation": {}, "email": "valentin.barriere@ec.europa.eu" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "jsedoc@stern.nyu.edu" }, { "first": "Sven", "middle": [], "last": "Buechel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Friedrich Schiller University Jena", "location": {} }, "email": "sven.buechel@uni-jena.de" }, { "first": "Alexandra", "middle": [], "last": "Balahur", "suffix": "", "affiliation": {}, "email": "alexandra.balahur@ec.europa.eu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents the results that were obtained from the WASSA 2021 shared task on predicting empathy and emotions. The participants were given access to a dataset comprising empathic reactions to news stories where harm is done to a person, group, or other. These reactions consist of essays, Batson empathic concern, and personal distress scores, and the dataset was further extended with news articles, person-level demographic information (age, gender, ethnicity, income, education level), and personality information. Additionally, emotion labels, namely Ekman's six basic emotions, were added to the essays at both the document and sentence level. Participation was encouraged in two tracks: predicting empathy and predicting emotion categories. In total five teams participated in the shared task. We summarize the methods and resources used by the participating teams.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper presents the results that were obtained from the WASSA 2021 shared task on predicting empathy and emotions. The participants were given access to a dataset comprising empathic reactions to news stories where harm is done to a person, group, or other. These reactions consist of essays, Batson empathic concern, and personal distress scores, and the dataset was further extended with news articles, person-level demographic information (age, gender, ethnicity, income, education level), and personality information. Additionally, emotion labels, namely Ekman's six basic emotions, were added to the essays at both the document and sentence level. Participation was encouraged in two tracks: predicting empathy and predicting emotion categories. In total five teams participated in the shared task. We summarize the methods and resources used by the participating teams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "It is important to be able to analyze empathy and emotion in natural languages. Emotion classification in natural languages has been studied over two decades and many applications successfully used emotion as their major components. 
Empathy utterances can be emotional, therefore, examining emotion in text-based empathy possibly has a major impact on predicting empathy. Analyzing text-based empathy and emotion have different applications; empathy is a crucial component in applications such as empathic AI agents, effective gesturing of robots, and mental health, emotion has natural language applications such as commerce, public health, and disaster management. In this paper, we present the WASSA 2021 Shared Task: Predicting Empathy and Emotion in Reaction to News Stories. This shared task included two individual tasks where teams develop models to predict emotions and empathy in essays in which people expressed their empathy and distress in reaction to news articles in which an individual, group of people or nature was harmed. Additionally, the dataset also included the demographic information of the authors of the essays such as age, gender, ethnicity, income, and education level, and personality information (details of the collection of the dataset is provided in section 3). Optionally, we suggested that the teams could also use emotion labels when modeling empathy to learn more about the impact of emotions on empathy. The shared task consisted of two tracks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Predicting Empathy (EMP): the formulation of this track is to predict the Batson empathic concern (\"feeling for someone\") and personal distress (\"suffering with someone\") using the essay, personality information, demographic information, and emotion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": ": the formulation of this track is to predict emotion tags (sadness, joy, disgust, surprise, anger, or fear) , taken from Ekman's six basic emotions (Ekman, 1971) , plus no-emotion tag for essays. In this setting personality and demographic information as well as empathy and distress scores were also made available and optional to use.", "cite_spans": [ { "start": 59, "end": 108, "text": "(sadness, joy, disgust, surprise, anger, or fear)", "ref_id": null }, { "start": 149, "end": 162, "text": "(Ekman, 1971)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Emotion Label Prediction (EMO)", "sec_num": "2." }, { "text": "For both tasks, an identical train-dev-test split was provided. The dataset consists of essays that were collected from participants, who had read disturbing news articles about a person, a group of people, or painful situations. Empathy, distress, demographic, and personality information was taken from the original work by Buechel et al. (2018) . They used Batson's Empathic Concern -Personal Distress Scale (Batson et al., 1987) , i.e, rating 6 items for empathy (i.e., warm, tender, sympathetic, softhearted, moved, compassionate) and 8 items for distress (i.e., worried, upset, troubled, perturbed, grieved, disturbed, alarmed, distressed) using a 7point scale for each of these items (detailed information can be found in the Appendix section of the original paper). Regarding emotion, all data was annotated with the six basic Ekman emotions (sadness, joy, disgust, surprise, anger, or fear). Five teams participated in this shared task, three participated in both tracks, and each time one additional team participated in either the EMP or EMO track. 
During the evaluation phase, every team was allowed to submit their results until a certain deadline, after which the final submission was taken into consideration for the ranking. The best result for the empathy prediction track was an average Pearson correlation of 0.545 and the best macro F1-score for the emotion track amounted to 55%.", "cite_spans": [ { "start": 326, "end": 347, "text": "Buechel et al. (2018)", "ref_id": "BIBREF3" }, { "start": 360, "end": 432, "text": "Batson's Empathic Concern -Personal Distress Scale (Batson et al., 1987)", "ref_id": null }, { "start": 467, "end": 535, "text": "(i.e., warm, tender, sympathetic, softhearted, moved, compassionate)", "ref_id": null }, { "start": 561, "end": 645, "text": "(i.e., worried, upset, troubled, perturbed, grieved, disturbed, alarmed, distressed)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Emotion Label Prediction (EMO)", "sec_num": "2." }, { "text": "All tasks were designed in CodaLab 1 and the teams were allowed to submit one official result during evaluation phase and several ones during the training phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Label Prediction (EMO)", "sec_num": "2." }, { "text": "In the remainder of this paper we first review related work (Section 2), after which we introduce the dataset used for both tracks (Section 3). The shared task is presented in Section 4 and the official results in Section 5. A discussion of the different systems participating in both tracks is presented in Section 6 and we conclude our work in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Label Prediction (EMO)", "sec_num": "2." }, { "text": "1 Task descriptions, datasets, and results are designed in CodaLab https://competitions.codalab.org/ competitions/28713", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Label Prediction (EMO)", "sec_num": "2." }, { "text": "Emotion has been studied for two decades and a large body of works have provided insights and remarkable findings. In contrast, detecting and predicting empathy and distress in text is a growing field and there is little work on the correlation and relatedness of emotion, empathy, and distress. This shared task is designed to study the modeling of empathy and distress and the correlation among them. In the literature empathy is considered towards negative events, however, recent studies suggest that people's joyful emotions towards positive events can be termed as positive empathy (Morelli et al., 2015) . The psychological theory distinguishes two separate constructs for distress and empathy; distress is a self-focused, negative affective state (suffering with someone), and empathy is a warm, tender, and compassionate state (feeling for someone). To quantify empathy and distress, studies present different approaches, the most popular one is Batson's Empathic Concern -Personal Distress Scale (Batson et al., 1987) , which is used to obtain empathy and distress scores for each essay in this dataset. To annotate emotions in text, classical studies in NLP suggest categorical tagsets, and most studies are focused on basic emotion models that are suggested by psychological emotion models. The most popular ones are the Ekman 6 basic emotions (Ekman, 1971) , the Plutchik 8 basic emotions (Plutchik, 1984) , and 4 basic emotions (Frijda, 1988) . 
We opted for the Ekman emotions, because this model is well adopted in different downstream NLP tasks of which emotion is a component, and it is most suited to the dataset we aim to study in this shared task.", "cite_spans": [ { "start": 588, "end": 610, "text": "(Morelli et al., 2015)", "ref_id": "BIBREF34" }, { "start": 1006, "end": 1027, "text": "(Batson et al., 1987)", "ref_id": "BIBREF1" }, { "start": 1356, "end": 1369, "text": "(Ekman, 1971)", "ref_id": "BIBREF12" }, { "start": 1402, "end": 1418, "text": "(Plutchik, 1984)", "ref_id": "BIBREF37" }, { "start": 1442, "end": 1456, "text": "(Frijda, 1988)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Crowdsourcing annotations have become a popular way to acquire human judgments. Collecting categorical annotations for emotions is among the tasks that has been designed successfully in crowdsourcing platforms (Mohammad and Turney, 2013; Mohammad et al., 2014; Abdul-Mageed and Ungar, 2017; Mohammad et al., 2018; Tafreshi and Diab, 2018; Bostan et al., 2019) . Example of such platforms are Amazon Mechanical Turk or Figure Eight (previously known as Crowdflower). There are several SemEval shared tasks that have successfully been developed for Affect computing and emotion classification (Strapparava and Mihalcea, 2007; Mohammad and Bravo-Marquez, 2017; Mohammad et al., 2018; Chatterjee et al., 2019; Sharma et al., 2020) , in which, several approaches, methods, resources, and features have been developed by the participants. These works mainly focused on supervised machine learning approaches with different ways of designing features (traditional feature engineering) to feature representations using word2vec embedding models (Mikolov et al., 2013) , contextualized word embeddings (Peters et al., 2018) and pretrained language models from transformers (Devlin et al., 2018) .", "cite_spans": [ { "start": 210, "end": 237, "text": "(Mohammad and Turney, 2013;", "ref_id": "BIBREF33" }, { "start": 238, "end": 260, "text": "Mohammad et al., 2014;", "ref_id": "BIBREF27" }, { "start": 261, "end": 290, "text": "Abdul-Mageed and Ungar, 2017;", "ref_id": "BIBREF0" }, { "start": 291, "end": 313, "text": "Mohammad et al., 2018;", "ref_id": "BIBREF25" }, { "start": 314, "end": 338, "text": "Tafreshi and Diab, 2018;", "ref_id": "BIBREF45" }, { "start": 339, "end": 359, "text": "Bostan et al., 2019)", "ref_id": "BIBREF2" }, { "start": 591, "end": 623, "text": "(Strapparava and Mihalcea, 2007;", "ref_id": "BIBREF43" }, { "start": 624, "end": 657, "text": "Mohammad and Bravo-Marquez, 2017;", "ref_id": "BIBREF31" }, { "start": 658, "end": 680, "text": "Mohammad et al., 2018;", "ref_id": "BIBREF25" }, { "start": 681, "end": 705, "text": "Chatterjee et al., 2019;", "ref_id": "BIBREF5" }, { "start": 706, "end": 726, "text": "Sharma et al., 2020)", "ref_id": null }, { "start": 1037, "end": 1059, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF22" }, { "start": 1093, "end": 1114, "text": "(Peters et al., 2018)", "ref_id": "BIBREF36" }, { "start": 1164, "end": 1185, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 418, "end": 430, "text": "Figure Eight", "ref_id": null } ], "eq_spans": [], "section": "Emotions", "sec_num": "2.1" }, { "text": "Prior work on modeling text-based empathy focused on the empathetic concern which is to share others' emotions in the conversations Litvak et al. (2016) ; Fung et al. (2016) ; Xiao et al. (2015 ; Gibson et al. 
(2016) modeled empathy based on the ability of a therapist to adapt to the emotions of their clients; Zhou and Jurgens (2020) quantified empathy in condolences in social media using appraisal theory.", "cite_spans": [ { "start": 132, "end": 152, "text": "Litvak et al. (2016)", "ref_id": "BIBREF20" }, { "start": 155, "end": 173, "text": "Fung et al. (2016)", "ref_id": "BIBREF15" }, { "start": 176, "end": 193, "text": "Xiao et al. (2015", "ref_id": "BIBREF48" }, { "start": 196, "end": 216, "text": "Gibson et al. (2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Empathy and Distress", "sec_num": "2.2" }, { "text": "In this section, we present how the dataset that was used for this shared task has been collected and annotated. The starting point was the dataset as described in (Buechel et al., 2018) . This dataset comprises both news articles and essays, we provide a brief description of both below. 2 News article collection We used the same news articles in Buechel et al. (2018) in which there is major or minor harm inflicted to an individual, group of people, or other by either a person, group of people, political organization, or nature. The stories were specifically selected by Buechel et al. (2018) to evoke varying degrees of empathy among readers.", "cite_spans": [ { "start": 164, "end": 186, "text": "(Buechel et al., 2018)", "ref_id": "BIBREF3" }, { "start": 349, "end": 370, "text": "Buechel et al. (2018)", "ref_id": "BIBREF3" }, { "start": 577, "end": 598, "text": "Buechel et al. (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Data Collection and Annotation", "sec_num": "3" }, { "text": "Essay collection The corpus acquisition was set up as a crowdsourcing task on MTurk.com pointing to a Qualtrics.com questionnaire. The participants completed background measures on demographics and personality and then proceeded to the main part of the survey where they read a random selection of five of the news articles. After reading each of the articles, participants were asked to rate their level of empathy and distress before describing their thoughts and feelings about it in writing. From this initial dataset, the training data was extracted for the shared task. For the development and test dataset, an additional 805 essays were added to the dataset, these were written in response to the same news articles by an additional 161 participants using the same AMT setting as described above. The test and development datasets were both new collections. Since each message is annotated by only one rater, its author, typical measures of inter-rater agreement are not applicable. Instead, we compute split-half reliability (SHR), a standard approach in psychology (Cronbach, 1947) . SHR is computed by splitting the ratings for the individual scale items (e.g., warm, tender, etc. for empathy) of all participants randomly into two groups, averaging the individual item ratings for each group and participant, and then measuring the correlation between both groups. This process is repeated 100 times with random splits, before again averaging the results. 
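To make this procedure concrete, the split-half reliability computation described above can be sketched as follows; this is an illustrative snippet that assumes the item ratings are available as a NumPy array with one row per participant and one column per scale item (names are hypothetical and not part of the released data or code):

```python
import numpy as np

def split_half_reliability(ratings, n_repeats=100, seed=0):
    """Split-half reliability for a (participants x scale items) rating matrix.

    The items (columns) are randomly split into two halves, the item ratings
    are averaged per participant within each half, the two half-scores are
    correlated, and the correlation is averaged over repeated random splits.
    """
    rng = np.random.default_rng(seed)
    n_items = ratings.shape[1]
    correlations = []
    for _ in range(n_repeats):
        perm = rng.permutation(n_items)
        half_a, half_b = perm[: n_items // 2], perm[n_items // 2:]
        mean_a = ratings[:, half_a].mean(axis=1)
        mean_b = ratings[:, half_b].mean(axis=1)
        correlations.append(np.corrcoef(mean_a, mean_b)[0, 1])
    return float(np.mean(correlations))

# e.g., the 6 empathy items (warm, tender, ...) rated on a 7-point scale:
# ratings = np.array(...)  # shape (n_participants, 6)
# print(split_half_reliability(ratings))
```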
Doing so for empathy and distress, we find very high SHR values of r=.875 and .924, for the training set and value of r=.872 and .928 for test+dev set for empathy and distress respectively.", "cite_spans": [ { "start": 1074, "end": 1090, "text": "(Cronbach, 1947)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Data Collection and Annotation", "sec_num": "3" }, { "text": "In a next phase, all essays were further enriched with the 6 basic Ekman emotion labels at the essay level in order to find out whether certain basic emotions are more correlated with empathy and distress. To this purpose the emotion labels were first predicted automatically and then manually verified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Annotation Process", "sec_num": "3.1" }, { "text": "For the automatic prediction two different NN models were applied to generate predictions at the essay level. The models were 1) a Gated RNN with attention mechanism which is trained with multigenre corpus, i.e., news, tweets, blog posts, (Tafreshi, 2021 ) (chapter 5), 2) fine-tuned RoBERTa model on the GoEmotions dataset (Demszky et al., 2020) .", "cite_spans": [ { "start": 239, "end": 254, "text": "(Tafreshi, 2021", "ref_id": "BIBREF44" }, { "start": 324, "end": 346, "text": "(Demszky et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Emotion Annotation Process", "sec_num": "3.1" }, { "text": "For the manual verification another Amazon Mechanical Turk task was set up for which annotators with the highest AMT quality rating were recruited. For each essay the turkers were provided with the two automatically predicted labels. If they did not indicate one of these labels as correct, they had to choose the correct label from a tagset including the 6 basic Ekman emotions (sadness, joy, disgust, surprise, anger, or fear) or assign the label no-emotion. Some instances were ambiguous, which means that neither of the machines' labels nor the two annotators were agreeing on the same tag. We excluded these essays, and a PhD candidate in NLP further annotated these instances and selected the most related tag. Results obtained from this post annotation step completed the annotation procedure, thus, we acquired gold emotion labels for each essay.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Annotation Process", "sec_num": "3.1" }, { "text": "The distribution of the emotion tags per data split split is illustrated in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Emotion Annotation Process", "sec_num": "3.1" }, { "text": "As anticipated, the majority of the essays have the emotion tag sadness. Moreover, we observe an even distribution of the emotion tags disgust, fear, and anger, and a small number of joy, which seems somewhat counter-intuitive given the nature of the essays. After inspecting a small sample of the latter, we found that in these instances the authors of the essays were suggesting actions to improve the situation, in some cases, these essays also contained political views. This could explain the positive emotion that was assigned by the turkers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Annotation Process", "sec_num": "3.1" }, { "text": "We setup both empathy and emotion label predictions in CodaLab. 
We describe each task separately: its objective, the metadata that we provided, the data splits, the resources, and the evaluation metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared Task", "sec_num": "4" }, { "text": "The goal of this task is to predict the Batson empathic concern (\"feeling for someone\") and personal distress (\"suffering with someone\") scores from the essay. Each score is a real value between 0 and 7, and participants were expected to predict both scores for each essay. We provided personality and demographic information for each essay as well as emotion labels. The demographic information consists of gender, education, race, age, and income. To encode personality information, the Big Five personality traits, also known as the OCEAN model (Gosling et al., 2003) , and the Interpersonal Reactivity Index (Davis, 1980) were provided. The OCEAN model identifies five factors: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. The Interpersonal Reactivity Index (IRI) is a measurement tool for the multi-dimensional assessment of empathy; its four subscales are Perspective Taking, Fantasy, Empathic Concern, and Personal Distress.", "cite_spans": [ { "start": 600, "end": 622, "text": "(Gosling et al., 2003)", "ref_id": "BIBREF17" }, { "start": 662, "end": 675, "text": "(Davis, 1980)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Empathy Prediction (EMP)", "sec_num": "4.1" }, { "text": "Both personality and demographic information were provided as real values from 0 to 7. In addition, we provided an emotion label from Ekman's six basic emotions (sadness, joy, disgust, surprise, anger, or fear) for each essay.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy Prediction (EMP)", "sec_num": "4.1" }, { "text": "The goal of this task is to predict an emotion label for each essay. The same set of metadata described in Section 4.1 was also provided for each essay in this task, and participants could optionally use this information as features to predict emotion labels. Table 2 presents the train, development, and test splits. Participants were able to add the development set to the training set and submit systems trained on both. Training and development sets with gold labels for empathy, distress, demographic, and personality information were available at the beginning of the competition. For the first two weeks of the competition automatic emotion labels were provided, after which the gold-labeled emotions were made available. The test set was released to the participants at the beginning of the evaluation period.", "cite_spans": [], "ref_spans": [ { "start": 353, "end": 360, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Emotion Label Prediction (EMO)", "sec_num": "4.2" }, { "text": "Participants were allowed to use any lexical resources (e.g., emotion or empathy dictionaries) of their choice, any training data besides the data we provided, and any off-the-shelf emotion or empathy models they could access. We did not impose any restrictions in this shared task, nor did we suggest any baseline models. 
The results generated by the participants are thus the first results for this task setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources and Systems Restrictions", "sec_num": "4.4" }, { "text": "The organizers published an evaluation script that calculates the Pearson correlation for the predictions of the empathy prediction task, and precision, recall, and F1 for each emotion class, as well as the micro and macro averages, for the emotion label prediction task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Evaluation", "sec_num": "4.5" }, { "text": "The Pearson coefficient measures the linear correlation between two variables and ranges from -1 (perfectly inversely correlated) to 1 (perfectly correlated); a score of 0 indicates no correlation. The official competition metric for the empathy prediction task (EMP) is the average of the two Pearson correlations (empathy and distress). The official competition metric for the emotion track is the macro F1-score, i.e., the unweighted average of the per-class F1-scores (each class F1 being the harmonic mean of precision and recall). Table 3 shows the main results of the track on empathy (Emp) and distress (Dis) prediction. Four teams submitted results and the best-scoring system was that of PVG (averaged r = .545). If we examine the results for empathy and distress prediction separately, we observe that for empathy team WASSA@IITK scored best (r = .558), whereas for distress PVG obtained the best result (r = .574). To compare, the best-performing system in Buechel et al. (2018) , a CNN, obtained r = .404 for empathy and r = .444 for distress; note that those results were achieved on the training set only, using ten-fold cross-validation.", "cite_spans": [ { "start": 871, "end": 892, "text": "Buechel et al. (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 459, "end": 466, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Systems Evaluation", "sec_num": "4.5" }, { "text": "In Table 4 the absolute differences between the predicted and gold empathy and distress scores of the best-performing system (PVG) are presented. It can be observed that the largest share of predicted Batson empathic concern and distress instances differs by no more than one point from the gold scores, i.e. 39% and 45%, respectively. For both labels the maximum difference amounts to 4-5 points, and this occurs in only a very few cases: 3 instances for empathy and 1 instance for distress. Table 4: Absolute difference in score between predicted and gold for both the empathy and distress scores of the best-performing system (expressed in number of instances and percentages).", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Team", "sec_num": null }, { "text": "The results of the track on emotion label prediction are presented in Table 5 . Four teams submitted results and the best-scoring system was that of WASSA@IITK (indicated in bold, 55% macro F1), largely outperforming the runner-up, Team Phoenix (50%). 
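For reference, the two official metrics (the averaged Pearson correlation for the EMP track and the macro F1-score for the EMO track) can be computed as in the following sketch; this is an illustrative reimplementation assuming SciPy and scikit-learn, not the organizers' released evaluation script:

```python
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

def emp_track_score(gold_empathy, pred_empathy, gold_distress, pred_distress):
    """Official EMP metric: average of the empathy and distress Pearson correlations."""
    r_empathy, _ = pearsonr(gold_empathy, pred_empathy)
    r_distress, _ = pearsonr(gold_distress, pred_distress)
    return (r_empathy + r_distress) / 2

def emo_track_score(gold_labels, pred_labels):
    """Official EMO metric: macro-averaged F1 over the seven possible labels."""
    return f1_score(gold_labels, pred_labels, average="macro")

# Toy usage (not actual shared-task data):
# emp_track_score([1.0, 4.2, 6.5], [1.5, 3.9, 6.0], [2.0, 5.0, 6.0], [2.2, 4.1, 5.5])
# emo_track_score(["sadness", "anger", "no-emotion"], ["sadness", "sadness", "no-emotion"])
```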
Table 5 : Results of the teams participating in the EMO track (macro-averaged precision (P), recall (R), F1score (F1) and accuracy (Acc)).", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 5", "ref_id": null }, { "start": 253, "end": 260, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Emotion Label Prediction (EMO)", "sec_num": "5.2" }, { "text": "Given that the labels in the datasets were not equally distributed (see Table 1 ), we also have a look at the accuracy, which equals the micro-averaged F-1 score. Again, the result by WASSA@IITK outperforms the second team, 62% versus 59%, though with a less outspoken margin.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Team", "sec_num": null }, { "text": "To get more insight we also provide a breakdown of the macro-averaged results by emotion class in Table 6 . As expected sadness and especially anger are predicted with the highest performance by most systems. For anger, the F1-score ranges from 59% to 77%, even though this label was not the most frequently occurring one in the training and development data (Table 1 ). In the same vein, the classification of disgust also seems better than expected given its limited number of training instances. For all emotion labels team WASSA@IITK outperforms the others, except for the label fear, though the difference is only marginal (45 instead of 44% F1). Please recall that besides the 6 Ekman emotions label, the systems could also predict a seventh category \"no-emotion\". The macro F1 scores for predicting this particular label were 67, 64, 63, and 40%, respectively.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 6", "ref_id": "TABREF8" }, { "start": 359, "end": 367, "text": "(Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Team", "sec_num": null }, { "text": "We had a closer look at those instances that were predicted with a difference in score of between 4 and 5 by the best-performing system, you can find the actual essays in Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy prediction", "sec_num": "5.3.1" }, { "text": "For empathy there were three such instances: in the first one (essay 1) the gold score was 7 and the predicted one 2.470, which is actually a pretty strange error as this describes a really typical high empathy -high distress essay. For the other two instances, the predicted empathy scores were very high (5.428 and 5.272) compared to the gold one (1). In essay 2 the low empathy score seems obvious for a human reader, but aside from \"whining\" there are few markers without deeper understanding, making this a challenging example for automatic prediction. Moreover, we observe that a few questions are being raised by the author in the essay, and questions are often associated with high empathy. Upon first inspection we would have expected higher empathy given the text in essay 3. Initially, we thought this was a bad annotation, but upon second reading it seems to be a rare case of very low empathy and high distress.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy prediction", "sec_num": "5.3.1" }, { "text": "For distress there was one instance with a high discrepancy between the predicted (5.347) and gold (1.25) score. If we consider essay 4 we observe that there is no self-focus language at all. So a low distress score does make sense here. 
Nonetheless this is not a typical low distress response since there is some empathy expressed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy prediction", "sec_num": "5.3.1" }, { "text": "Considering essays 3 and 4 we can state that these exhibit high distress/low empathy and vice versa low distress/ mild empathy. It is possible that models have difficulty in scenarios where there is empathy with a lack of distress and vice versa. Table 7 presents the confusion matrix of the topperforming team on the test data. It can be observed that the top three occurring labels in the training data, sadness (Sa) -anger (A) -no-emotion (No)are accurately classified most frequently and that anger and fear are most often confused with sadness, whereas the same goes for sadness being classified as anger.", "cite_spans": [], "ref_spans": [ { "start": 247, "end": 254, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Empathy prediction", "sec_num": "5.3.1" }, { "text": "Assigning an emotion label at the document level is not a trivial task as certain sentences within an essay may exhibit different emotions or sentiment. In Appendix B we present for every possible label a first essay (i) which was correctly classified by all four participating systems and a second one (ii) where most systems assigned the wrong label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion label prediction", "sec_num": "5.3.2" }, { "text": "Looking at the correctly classified essays, we observe that in these essays many emotional words and phrases are being used and that there is not much discrepancy of emotions between the sentences. The same cannot be said for the erroneously classified essays, there we clearly observe that often many emotions are being presented within the same essay.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion label prediction", "sec_num": "5.3.2" }, { "text": "In the meantime all essays have also been labeled with emotions at the sentence level using the same annotation procedure as described in Section 3, this dataset will also be made available for research purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion label prediction", "sec_num": "5.3.2" }, { "text": "A total of 5 teams participated in the shared tasks with 3 teams participating in both tracks. In this section, we provide a summary of the machine learning models, features, resources, and lexicons that were used by the teams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy and Emotion Systems", "sec_num": "6" }, { "text": "All systems follow supervised machine learning models for empathy prediction and emotion classification (Table 8 ). Most teams built systems using pre-trained transformer language models, which were fine-tuned or from which features from different layers were extracted. Linear Regression and logistic regression with feature engineering and the CNN model were proposed by one team.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "(Table 8", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Machine Learning Architectures", "sec_num": "6.1" }, { "text": "Detection and classification of emotion in text is challenging because marking textual emotional cues is difficult. 
Emotion model performance has consistently improved when lexical features (e.g., emotion, sentiment, or subjectivity scores), emotion-specific embeddings, or additional emotional datasets were used to enrich the representation of emotion (Mohammad et al., 2018) . Similarly, predicting text-based empathy is challenging, and lexical features and external resources likewise have an impact on empathy model performance. As such, it is quite common to use different resources and to design different features in emotion and empathy models. As part of the dataset provided to the teams, we included personality, demographic, and categorical emotion information as additional features for both the emotion and empathy tasks. Teams were allowed to use any external resources or design any features of their choice and use them in their models. Table 9 summarizes the features and extra resources that the teams used to build their models.", "cite_spans": [ { "start": 320, "end": 343, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 939, "end": 947, "text": "Table 9", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Features and Resources", "sec_num": "6.2" }, { "text": "The presence of emotion and empathy words is among the first cues that a piece of text is emotional or empathic; it is therefore beneficial to use emotion/empathy lexicons to extract those words and create features. Table 10 summarizes the lexicons that were employed by the different teams.", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 223, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Lexicons", "sec_num": "6.3" }, { "text": "PVG The best-performing system for empathy prediction was that of PVG. The team developed a multi-task, multi-view system. To design the multiple views, the team used the information provided in the dataset in the form of an empathy bin, a feature that separates essays with high empathy from those with low empathy based on a threshold empathy score (this threshold is 4). In the multi-task model, the tasks are predicting empathy scores and classifying empathy and emotion. The primary task is to predict empathy; the emotion and empathy classifications are treated as auxiliary (secondary) tasks. The machine learning algorithm has a neural network architecture that consists of an embedding layer, a max-pooling layer, and a fully connected layer. To represent contextual features, they used RoBERTa-base. Furthermore, the demographic and personality information was concatenated and used as features. For the distress system, in addition to the previously mentioned feature representations, the NRC emotion intensity (Mohammad, 2018a) and NRC VAD lexicons were also used.", "cite_spans": [ { "start": 988, "end": 1005, "text": "(Mohammad, 2018a)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Top three systems in EMP track", "sec_num": "6.4" }, { "text": "EmpNa The team developed a linear regression model, representing the text as n-grams and adding a set of characteristics extracted from a handcrafted set of lexicons (AFINN, QWN, SenticNet, etc.). The lexical n-gram features consisted of unigrams, bigrams, and trigrams, and a frequency threshold was defined to select the most frequent words (80% for empathy and 70% for distress). These lexical features were concatenated with the scores extracted from the different lexicons, plus the personality and demographic information that was provided in this shared task, as extra features (a schematic sketch of this kind of feature construction is shown below). 
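The following is a minimal, illustrative sketch of this kind of feature construction, using a toy lexicon and toy data rather than the team's actual resources, and assuming scikit-learn and NumPy are available:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression

# Hypothetical toy lexicon; the team used resources such as AFINN, QWN, and SenticNet.
TOY_LEXICON = {"sad": -2.0, "angry": -3.0, "warm": 2.0, "compassionate": 3.0}

def lexicon_score(text):
    # Sum of the lexicon scores of the words occurring in the essay.
    return sum(TOY_LEXICON.get(token, 0.0) for token in text.lower().split())

def build_features(essays, extra_info, vectorizer=None):
    # Uni- to tri-gram counts (the team's frequency-based word selection is omitted here).
    if vectorizer is None:
        vectorizer = CountVectorizer(ngram_range=(1, 3))
        ngrams = vectorizer.fit_transform(essays).toarray()
    else:
        ngrams = vectorizer.transform(essays).toarray()
    lexicon_scores = np.array([[lexicon_score(essay)] for essay in essays])
    # Concatenate n-gram counts, lexicon scores, and demographic/personality values.
    return np.hstack([ngrams, lexicon_scores, np.array(extra_info)]), vectorizer

essays = ["I feel so sad and angry about this story.", "A warm and compassionate response."]
extra_info = [[30, 3.5], [45, 5.0]]            # e.g., age plus one personality score
X, vec = build_features(essays, extra_info)
model = LinearRegression().fit(X, [6.1, 2.3])  # toy empathy scores
```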
They used this feature to represent contextual features per essay as information for a linear regression model to predict empathy and distress. They selected two baseline model: a CNN model as described in (Buechel et al., 2018) and a model relying solely on n-grams. Their results suggest that combining all features (lexical, demographic, and personality features) yields the best result.", "cite_spans": [ { "start": 188, "end": 216, "text": "(AFINN, QWN, SenticNet, etc)", "ref_id": null }, { "start": 809, "end": 831, "text": "(Buechel et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Top three systems in EMP track", "sec_num": "6.4" }, { "text": "WASSA@IITK This team built a multi-task model using a transformer architecture, then they fine-tuned this model for empathy and distress with the Mean Squared Error loss function. In their multi-task model, they jointly learned empathy and distress. The pre-trained language model they used was the ELECTRA large model (Clark et al., 2020) with two dense layers on top of it, one responsible for Empathy and another for Distress. MSE loss was used, adding the loss for Empathy and Distress and jointly training the architecture end to end on that total loss. The same approach was tried out with the RoBERTa model . Finally, they built an Ensemble model consisting of multi-task RoBERTA and Vanilla ALBERTA. For distress prediction they used an ensemble of two models, both being multi-task ELECTRA models (Clark et al., 2020) with different performance on the dev set.", "cite_spans": [ { "start": 319, "end": 339, "text": "(Clark et al., 2020)", "ref_id": "BIBREF7" }, { "start": 806, "end": 826, "text": "(Clark et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Top three systems in EMP track", "sec_num": "6.4" }, { "text": "WASSA@IITK The best performing system in emotion classification was WASSA@IITK. The team developed multiple systems by fine-tuning several pre-trained transformer language models on the dataset that was provided for the shared task, which they augmented with the GoEmotions dataset (Demszky et al., 2020) . The transformer models that were employed were ELECTRA base and large (Clark et al., 2020) , and RoBERTA base and large . Eventually, the models' outputs were averaged or summed into ensembles and the results of these ensemble models were used for the shared task. The best-performing system was an ensemble model consisting of a combination of two ELECTRA base and one ELECTRA large.", "cite_spans": [ { "start": 282, "end": 304, "text": "(Demszky et al., 2020)", "ref_id": "BIBREF10" }, { "start": 377, "end": 397, "text": "(Clark et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Top three systems in EMO track", "sec_num": "6.5" }, { "text": "Empathic or Emotion Lexicons Lexicons # of team Emp System Emo System NRC EmoLex (Mohammad and Turney, 2010) 1 NRC intensity (Mohammad, 2018c) 1 NRC valence (Mohammad, 2018b) 1 Opinion Lexicon (Hu and Liu, 2004) 1 AFINN (Nielsen, 2011) 1 General Inquirer lexicon (Inquirer, 1966) 1 Sentiment140 Lexicon 1 +/-Effect Lexicon (Choi and Wiebe, 2014) 1 QWN (San Vicente et al., 2014) 1 Twitter (Speriosu et al., 2011) 1 SenticNet (Cambria et al., 2010) 1 Affective rating (Warriner et al., 2013) 1 Empath Lexicon (Fast et al., 2016) 1 Table 10 : Empathic or Emotion Lexicons that are used by different teams. 
We listed all the lexicons that teams reported in their results.", "cite_spans": [ { "start": 81, "end": 108, "text": "(Mohammad and Turney, 2010)", "ref_id": "BIBREF26" }, { "start": 125, "end": 142, "text": "(Mohammad, 2018c)", "ref_id": "BIBREF30" }, { "start": 157, "end": 174, "text": "(Mohammad, 2018b)", "ref_id": "BIBREF29" }, { "start": 193, "end": 211, "text": "(Hu and Liu, 2004)", "ref_id": "BIBREF18" }, { "start": 263, "end": 279, "text": "(Inquirer, 1966)", "ref_id": "BIBREF19" }, { "start": 323, "end": 345, "text": "(Choi and Wiebe, 2014)", "ref_id": "BIBREF6" }, { "start": 352, "end": 378, "text": "(San Vicente et al., 2014)", "ref_id": "BIBREF39" }, { "start": 389, "end": 412, "text": "(Speriosu et al., 2011)", "ref_id": "BIBREF42" }, { "start": 425, "end": 447, "text": "(Cambria et al., 2010)", "ref_id": "BIBREF4" }, { "start": 467, "end": 490, "text": "(Warriner et al., 2013)", "ref_id": "BIBREF46" }, { "start": 508, "end": 527, "text": "(Fast et al., 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 530, "end": 538, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Top three systems in EMO track", "sec_num": "6.5" }, { "text": "Team Phoenix This team fine-tuned a T5 or Textto-Text Transfer Transformer model (Raffel et al., 2019) using the emotion recognition dataset (Saravia et al., 2018) to predict categorical emotion labels. They used features extracted ([CLS] token) from transformer models such as BERT base, ALBERT-base-v2, Pegasus-xsum, and T5-base, however, fine-tuning yielded the best result by a large margin.", "cite_spans": [ { "start": 81, "end": 102, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF38" }, { "start": 141, "end": 163, "text": "(Saravia et al., 2018)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Top three systems in EMO track", "sec_num": "6.5" }, { "text": "MilaNLP Several multi-inputs models were constructed by this team, a combination of essay text, demographic, and personality information, and a number of multi-task learning models, where they learned categorical emotion as one task and empathy and distress as another task. The model architecture was NN, where contextualized features were extracted from BERT large-uncased. The best model was a two inputs model which was the combination of contextualized features and gender. The worst results they reported were the output of the multi-task model with four inputs: contextual features, and gender, income, and Interpersonal Reactivity Index (personality information).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top three systems in EMO track", "sec_num": "6.5" }, { "text": "In this paper we presented the shared task on empathy and emotion prediction of essays that were written in response to news stories to which five teams participated. Based on the analysis of the systems we can conclude that fine-tuning a transformer language model or relying on features extracted from transformer models along with jointly learning related tasks can lead to a robust modeling of empathy, distress, and emotion. Despite the strenght of these strong contextualized features, we also observed that task-specific lexical features extracted from emotion, sentiment, opinion, and empathy lexicons can still create a significant impact on empathy, distress, and emotion models. 
Furthermore, the top-performing emotion models used external datasets to further fine-tune the language models, which indicates that data augmentation is important when modeling emotion, even if the text genre is different from the genre of the task at hand. Finally, using demographic and personality information as features revealed a significant impact on empathy, distress, and emotion models. Particularly, joint modeling of distress and empathy coupled with those features yielded the best results for most of the top-ranked systems that were developed as part of this shared task. Essay 2: Can you tell we live in the age of Me! Me! Me! Now we have obese and trans people whining that their special needs are not being met. Are medical device companies supposed to design machines extra large for the few morbidly obese people in the world? Won't that make them more expensive and make them take up more space and raise costs for everyone? Should doctors be expected to learn even more than the incredible amount they already have to learn just for morbidly obese patients? Same thing goes for the \"trans\" patients. We seem to be living in a world where the small minority of people with special circumstances want the world to cater to them at the expense of everyone else's time, effort and money. (Gold Emp: 1, Predicted Emp: 5.428)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Essay 3: I understand that businesses need to worry about profits. But It really angers me when governments and companies throw away lives in order to protect their bottom line. When people riot and chaos breaks out, it is always for a reason. It is up to the government and our police forces to protect the everyday citizens, not take their lives to protect their own. It angers me so much, all the needless violence and lives lost for no good reason. (Gold Emp: 1, Predicted Emp: 5.272) Essay 4: This article was about the crisis in Syria that is currently going on. Families are struggling with no end in sight. It's horrible conditions over there and impossible to get themselves out. Elderly people who have been retired and worked for so long, are faced with the horrible scenario of fighting for every little bit of resources they can find. Younger families don't have a supply of anything to fall back on. They fearful they will die at any moment. (Gold Dis: 1.25, Predicted Dis: 5.347)", "cite_spans": [ { "start": 453, "end": 466, "text": "(Gold Emp: 1,", "ref_id": null }, { "start": 467, "end": 488, "text": "Predicted Emp: 5.272)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Below examples are shown of essays that received one of the seven labels and for each label we present one essay that was correctly classified by all teams (i) and one that was misclassified by most systems (ii). This is discussed in closer detail in Section 5.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Examples Track II (EMO)", "sec_num": null }, { "text": "Joy: (i) Connecting with people is just always good for me personally, It's a matter of finding people with similar desires. That person who hasn't seen you in six month may be your \"true friend\" in an abstract sense, and even be very loyal and dependable in an emergency. But if you're bored on the weekends and want someone to hang out with you regularly ... just go find some more friends. 
Don't try to guilt your old friends that are busy or have different interests into changing their social habits to match yours. You can have old friends and new friends. We just have so much in common in what we can do and I just really think that's awesome. (ii) I believe we all have someone in our lives suffering from PTSD, whether we know it or not. I know it takes quite a lot of courage and strength (and persistence) to get PTSD diagnosed and treated. Please know that you're loved and supported. Any way I can help get the word I will, I just need the messaging. (Predicted as: sadness, sadness, noemo, surprise).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Examples Track II (EMO)", "sec_num": null }, { "text": "Sadness: (i) Hello Friend, i am writing to you as regards an article i read and i will also like to let you how i felt about the article. I was really sad and gutted by what transpired in the article. It was about an inmate with the name Richardson. Richardson normally stay alone in his cell room, but on this day another inmate was brought to him to start living with Richardson in his cell room the nickname of the Cell room mate was The prophet which has previous record of assaulting about 20 other inmate. This also lead to the assault of Richardson which really mad me sad. (ii) Hey man, I just read this article about smokers and cancer and stuff and I think you should have a look. I know you like to smoke but I think you should try to cut back a bit. I don't want you to end up with cancer man. The risk is really high and I care about you dude. I think we're too young to have to start worrying about cancer and death and stuff man. (Predicted: fear, anger, no-emo, no-emo) .", "cite_spans": [ { "start": 945, "end": 985, "text": "(Predicted: fear, anger, no-emo, no-emo)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "B Examples Track II (EMO)", "sec_num": null }, { "text": "Disgust: (i) This is kinda of disgusting that the Royal Carribean workers were taunting a passenger for being gay. However, is that any reason for the passenger to kill himself? Either way, the Cruise line is at fault and should be sued by the dead guys husband because they didn't do what they could in order ot save the man once he went overboard. (ii) You know, our city has this odour to it sometimes. When I was a kid, we would be congested in traffic just trying to get to the beach for hours on end, burning up in 80 degree heat in a tiny beater car, but yeah, you don't feel choked. You definitely feel like the city isn't cleanly. I have better scents from my socks sometimes. (Predicted: no-emo, sadness, anger, sadness).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Examples Track II (EMO)", "sec_num": null }, { "text": "Fear: (i) So there are these flesh-eating bacteria that kills 25 percent of people. You can be enjoying your day and you can go some time without knowing what is attacking you. We have to be more vigilant with our bodies and get tested in order to prevent such things to happen to us. It could be scary thinking you're okay but you can be under attack by such a bacteria. (ii) I just read an article concerning the repatriation of somalia's by the Kenyan government. Apparently thee a quite of few somalian refugees who fled their country and Kenya is attempting to repatriate them. It sounds like a very significant and challenging undertaking requiring tremendous amounts of resources. 
Hopefully the efforts will be successful and the families involved won't be adversely affected. (Predicted:no-emo, joy, no-emo, anger).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Examples Track II (EMO)", "sec_num": null }, { "text": "Anger: (i) We need more training for police. Police shouldn't be getting killed in the line of duty. It's not fair to their families because people are stupid and can't follow the law. People need to stop being so selfish and we need to make it less easy to obtain guns if people didn't have such easy access to them there wouldn't be so many deaths overall. (ii) If only the republican party could get their act together. I'm not a republican, but some of this article really tells the tale of how republican are trying to deal with the currant president and the lack of confidence practically most of what female republican voters are feeling concerning everything that's happening today. You need to read this, a lot of this article is really interesting! (Predicted: joy, surprise, no-emo, sadness).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Examples Track II (EMO)", "sec_num": null }, { "text": "Surprise: (i) The article is so shocking. I had heard a little about it before but I had no idea that it was so drastic. And now I am not surprised about how the weather has been so screwy for the past few years. It doesn't seem like there is anything that we can do about it though. So I feel kind of helpless about that. (ii) Is this what we have come to beaten a boy for stealing food something that he really needed he must of been hungry why else would he steal it something should be done in cases like this and the people that did it need the same jungle justice to happen to them also where is people hearts when they do things like this. (Predicted: anger, anger, no-emo, disgust) .", "cite_spans": [ { "start": 647, "end": 689, "text": "(Predicted: anger, anger, no-emo, disgust)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "B Examples Track II (EMO)", "sec_num": null }, { "text": "No-emo: (i) More todlers and preschooler are over dozing on opiods as shown in a recentt research. the analyzed data show that kids admittted in hospitals for opiods poisoning and it focused on 13000 records of patients aged beween 1 and 19. Possible increase on prescribed pain killers show reail sales of the drug increased by four times. (ii) I think as a parent you will find this very interesting. There is a study from Denmark that people who take the pill . That can be concerning for people that take the pill for health reasons and also to keep from having unnecessary pregnancy. I think that is really a cause from concern. I think that the pill is something that a lot of people take but if they know about the side effect of it with the depression they may not want to take it. I think that it can cause concern because of the things that will happen after they do not take the pill. There is a risk of pregnancy and that can cause issues down the road. I think that we need to research it further and see how we can turn it around and make it positive. I think you should really read this and tell me what you think about it. (Predicted: fear, surprise, no-emo, sadness).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Examples Track II (EMO)", "sec_num": null }, { "text": "For more details we refer the reader to the original paper ofBuechel et al. 
(2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partially supported by the Amazon AWS Cloud Credits for Research program, Jo\u00e3o Sedoc's Microsoft Dissertation Grant and JRC's Exploratory Research Activity. Given the short time period in which the shared task was organized, we want to thank everyone who participated in this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Emonet: Fine-grained emotion detection with gated recurrent neural networks", "authors": [ { "first": "Muhammad", "middle": [], "last": "Abdul", "suffix": "" }, { "first": "-Mageed", "middle": [], "last": "", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "718--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muhammad Abdul-Mageed and Lyle Ungar. 2017. Emonet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 718-728.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences", "authors": [ { "first": "Jim", "middle": [], "last": "Daniel Batson", "suffix": "" }, { "first": "Patricia", "middle": [ "A" ], "last": "Fultz", "suffix": "" }, { "first": "", "middle": [], "last": "Schoenrade", "suffix": "" } ], "year": 1987, "venue": "Journal of personality", "volume": "55", "issue": "1", "pages": "19--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "C Daniel Batson, Jim Fultz, and Patricia A Schoenrade. 1987. Distress and empathy: Two qualitatively dis- tinct vicarious emotions with different motivational consequences. Journal of personality, 55(1):19-39.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Goodnewseveryone: A corpus of news headlines annotated with emotions, semantic roles, and reader perception", "authors": [ { "first": "Laura", "middle": [], "last": "Bostan", "suffix": "" }, { "first": "Evgeny", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.03184" ] }, "num": null, "urls": [], "raw_text": "Laura Bostan, Evgeny Kim, and Roman Klinger. 2019. Goodnewseveryone: A corpus of news headlines an- notated with emotions, semantic roles, and reader perception. 
arXiv preprint arXiv:1912.03184.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Modeling empathy and distress in reaction to news stories", "authors": [ { "first": "Sven", "middle": [], "last": "Buechel", "suffix": "" }, { "first": "Anneke", "middle": [], "last": "Buffone", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Slaff", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4758--4765", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Un- gar, and Jo\u00e3o Sedoc. 2018. Modeling empathy and distress in reaction to news stories. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4758-4765.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Senticnet: A publicly available semantic resource for opinion mining", "authors": [ { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Hussain", "suffix": "" } ], "year": 2010, "venue": "AAAI fall symposium: commonsense knowledge", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik Cambria, Robert Speer, Catherine Havasi, and Amir Hussain. 2010. Senticnet: A publicly available semantic resource for opinion mining. In AAAI fall symposium: commonsense knowledge, volume 10. Citeseer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "SemEval-2019 task 3: EmoContext contextual emotion detection in text", "authors": [ { "first": "Ankush", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Kedhar", "middle": [], "last": "Nath Narahari", "suffix": "" }, { "first": "Meghana", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Puneet", "middle": [], "last": "Agrawal", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "39--48", "other_ids": { "DOI": [ "10.18653/v1/S19-2005" ] }, "num": null, "urls": [], "raw_text": "Ankush Chatterjee, Kedhar Nath Narahari, Meghana Joshi, and Puneet Agrawal. 2019. SemEval-2019 task 3: EmoContext contextual emotion detection in text. In Proceedings of the 13th International Work- shop on Semantic Evaluation, pages 39-48, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "+/-effectwordnet: Sense-level lexicon acquisition for opinion inference", "authors": [ { "first": "Yoonjung", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1181--1191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoonjung Choi and Janyce Wiebe. 2014. +/- effectwordnet: Sense-level lexicon acquisition for opinion inference. 
In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1181-1191.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Pre-training transformers as energy-based cloze models", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Pre-training trans- formers as energy-based cloze models. In EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Test \"reliability\": Its meaning and determination", "authors": [ { "first": "J", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Cronbach", "suffix": "" } ], "year": 1947, "venue": "Psychometrika", "volume": "12", "issue": "1", "pages": "1--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee J Cronbach. 1947. Test \"reliability\": Its meaning and determination. Psychometrika, 12(1):1-16.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Interpersonal Reactivity Index", "authors": [ { "first": "H", "middle": [], "last": "Mark", "suffix": "" }, { "first": "", "middle": [], "last": "Davis", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark H Davis. 1980. Interpersonal Reactivity Index. Edwin Mellen Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Goemotions: A dataset of fine-grained emotions", "authors": [ { "first": "Dorottya", "middle": [], "last": "Demszky", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Movshovitz-Attias", "suffix": "" }, { "first": "Jeongwoo", "middle": [], "last": "Ko", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Cowen", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Nemade", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00547" ] }, "num": null, "urls": [], "raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeong- woo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. Goemotions: A dataset of fine-grained emotions. arXiv preprint arXiv:2005.00547.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Universals and cultural differences in facial expressions of emotion", "authors": [ { "first": "Paul", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1971, "venue": "Nebraska symposium on motivation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Ekman. 1971. Universals and cultural differences in facial expressions of emotion. In Nebraska sym- posium on motivation. University of Nebraska Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Empath: Understanding topic signals in largescale text", "authors": [ { "first": "Ethan", "middle": [], "last": "Fast", "suffix": "" }, { "first": "Binbin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Michael S", "middle": [], "last": "Bernstein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 CHI conference on human factors in computing systems", "volume": "", "issue": "", "pages": "4647--4657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ethan Fast, Binbin Chen, and Michael S Bernstein. 2016. Empath: Understanding topic signals in large- scale text. In Proceedings of the 2016 CHI confer- ence on human factors in computing systems, pages 4647-4657.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The laws of emotion", "authors": [ { "first": "H", "middle": [], "last": "Nico", "suffix": "" }, { "first": "", "middle": [], "last": "Frijda", "suffix": "" } ], "year": 1988, "venue": "American psychologist", "volume": "43", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nico H Frijda. 1988. The laws of emotion. American psychologist, 43(5):349.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards empathetic human-robot interactions", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Bertero", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Anik", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Ricky", "middle": [], "last": "Ho Yin Chan", "suffix": "" }, { "first": "Farhad", "middle": [], "last": "Bin Siddique", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Ruixi", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2016, "venue": "International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "173--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascale Fung, Dario Bertero, Yan Wan, Anik Dey, Ricky Ho Yin Chan, Farhad Bin Siddique, Yang Yang, Chien-Sheng Wu, and Ruixi Lin. 2016. To- wards empathetic human-robot interactions. In In- ternational Conference on Intelligent Text Process- ing and Computational Linguistics, pages 173-193. 
Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A deep learning approach to modeling empathy in addiction counseling", "authors": [ { "first": "James", "middle": [], "last": "Gibson", "suffix": "" }, { "first": "Dogan", "middle": [], "last": "Can", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "E", "middle": [], "last": "Zac", "suffix": "" }, { "first": "", "middle": [], "last": "Imel", "suffix": "" }, { "first": "C", "middle": [], "last": "David", "suffix": "" }, { "first": "Panayiotis", "middle": [], "last": "Atkins", "suffix": "" }, { "first": "Shrikanth", "middle": [], "last": "Georgiou", "suffix": "" }, { "first": "", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2016, "venue": "Commitment", "volume": "111", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Gibson, Dogan Can, Bo Xiao, Zac E Imel, David C Atkins, Panayiotis Georgiou, and Shrikanth Narayanan. 2016. A deep learning approach to mod- eling empathy in addiction counseling. Commit- ment, 111:21.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A very brief measure of the big-five personality domains", "authors": [ { "first": "D", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Gosling", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Rentfrow", "suffix": "" }, { "first": "", "middle": [], "last": "Williams B Swann", "suffix": "" } ], "year": 2003, "venue": "Journal of Research in Personality", "volume": "37", "issue": "", "pages": "504--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel D Gosling, Peter J Rentfrow, and Williams B Swann Jr. 2003. A very brief measure of the big-five personality domains. Journal of Research in Person- ality, 37:504-528.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Mining and summarizing customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "168--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A computer approach to content analysis", "authors": [ { "first": "General", "middle": [], "last": "Inquirer", "suffix": "" } ], "year": 1966, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "General Inquirer. 1966. A computer approach to con- tent analysis. 
Cambridge, MA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Social and linguistic behavior and its correlation to trait empathy", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Jahna", "middle": [], "last": "Otterbacher", "suffix": "" }, { "first": "Chee", "middle": [], "last": "Siang Ang", "suffix": "" }, { "first": "David", "middle": [], "last": "Atkins", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media (PEOPLES)", "volume": "", "issue": "", "pages": "128--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak, Jahna Otterbacher, Chee Siang Ang, and David Atkins. 2016. Social and linguistic be- havior and its correlation to trait empathy. In Pro- ceedings of the Workshop on Computational Model- ing of People's Opinions, Personality, and Emotions in Social Media (PEOPLES), pages 128-137.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Obtaining reliable human ratings of valence, arousal, and dominance for", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "", "volume": "20", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P18-1017" ] }, "num": null, "urls": [], "raw_text": "Saif Mohammad. 2018a. 
Obtaining reliable human rat- ings of valence, arousal, and dominance for 20,000", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "English words", "authors": [], "year": null, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "174--184", "other_ids": { "DOI": [ "10.18653/v1/P18-1017" ] }, "num": null, "urls": [], "raw_text": "English words. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 174- 184, Melbourne, Australia. Association for Compu- tational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Semeval-2018 task 1: Affect in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 12th international workshop on semantic evaluation", "volume": "", "issue": "", "pages": "1--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval- 2018 task 1: Affect in tweets. In Proceedings of the 12th international workshop on semantic evaluation, pages 1-17.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 workshop on computational approaches to analysis and generation of emotion in text", "volume": "", "issue": "", "pages": "26--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using me- chanical turk to create an emotion lexicon. In Pro- ceedings of the NAACL HLT 2010 workshop on com- putational approaches to analysis and generation of emotion in text, pages 26-34.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semantic role labeling of emotions in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "32--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Xiaodan Zhu, and Joel Martin. 2014. Semantic role labeling of emotions in tweets. 
In Pro- ceedings of the 5th Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 32-41.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 english words", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The Annual Conference of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad. 2018b. Obtaining reliable hu- man ratings of valence, arousal, and dominance for 20,000 english words. In Proceedings of The An- nual Conference of the Association for Computa- tional Linguistics (ACL), Melbourne, Australia.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Word affect intensities", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad. 2018c. Word affect intensities. In Proceedings of the 11th Edition of the Language Re- sources and Evaluation Conference (LREC-2018), Miyazaki, Japan.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Wassa-2017 shared task on emotion intensity", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "", "middle": [], "last": "Bravo-Marquez", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.03700" ] }, "num": null, "urls": [], "raw_text": "Saif M Mohammad and Felipe Bravo-Marquez. 2017. Wassa-2017 shared task on emotion intensity. arXiv preprint arXiv:1708.03700.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Nrc-canada: Building the stateof-the-art in sentiment analysis of tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1308.6242" ] }, "num": null, "urls": [], "raw_text": "Saif M Mohammad, Svetlana Kiritchenko, and Xiao- dan Zhu. 2013. Nrc-canada: Building the state- of-the-art in sentiment analysis of tweets. arXiv preprint arXiv:1308.6242.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Crowdsourcing a word-emotion association lexicon", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Peter", "middle": [ "D" ], "last": "Mohammad", "suffix": "" }, { "first": "", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2013, "venue": "Computational Intelligence", "volume": "", "issue": "", "pages": "436--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowd- sourcing a word-emotion association lexicon. 
Com- putational Intelligence, pages 436-465.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The emerging study of positive empathy. Social and Personality Psychology Compass", "authors": [ { "first": "A", "middle": [], "last": "Sylvia", "suffix": "" }, { "first": "", "middle": [], "last": "Morelli", "suffix": "" }, { "first": "D", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Jamil", "middle": [], "last": "Lieberman", "suffix": "" }, { "first": "", "middle": [], "last": "Zaki", "suffix": "" } ], "year": 2015, "venue": "", "volume": "9", "issue": "", "pages": "57--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sylvia A Morelli, Matthew D Lieberman, and Jamil Zaki. 2015. The emerging study of positive empa- thy. Social and Personality Psychology Compass, 9(2):57-68.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A new anew: Evaluation of a word list for sentiment analysis in microblogs", "authors": [ { "first": "", "middle": [], "last": "Finn \u00c5rup Nielsen", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1103.2903" ] }, "num": null, "urls": [], "raw_text": "Finn \u00c5rup Nielsen. 2011. A new anew: Evaluation of a word list for sentiment analysis in microblogs. arXiv preprint arXiv:1103.2903.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proc. of NAACL.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Emotions: A general psychoevolutionary theory. Approaches to emotion", "authors": [ { "first": "Robert", "middle": [], "last": "Plutchik", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "197--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Plutchik. 1984. Emotions: A general psy- choevolutionary theory. 
Approaches to emotion, 1984:197-219.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.10683" ] }, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Simple, robust and (almost) unsupervised generation of polarity lexicons for multiple languages", "authors": [ { "first": "Rodrigo", "middle": [], "last": "Inaki San Vicente", "suffix": "" }, { "first": "German", "middle": [], "last": "Agerri", "suffix": "" }, { "first": "", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "88--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Inaki San Vicente, Rodrigo Agerri, and German Rigau. 2014. Simple, robust and (almost) unsupervised gen- eration of polarity lexicons for multiple languages. In Proceedings of the 14th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, pages 88-97.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Carer: Contextualized affect representations for emotion recognition", "authors": [ { "first": "Elvis", "middle": [], "last": "Saravia", "suffix": "" }, { "first": "Hsien-Chi Toby", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yen-Hao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Junlin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yi-Shin", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3687--3697", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. Carer: Con- textualized affect representations for emotion recog- nition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3687-3697.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Viswanath Pulabaigari, and Bjorn Gamback. 2020. Semeval-2020 task 8: Memotion analysisthe visuo-lingual metaphor! 
arXiv preprint", "authors": [ { "first": "Chhavi", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Deepesh", "middle": [], "last": "Bhageria", "suffix": "" }, { "first": "William", "middle": [], "last": "Scott", "suffix": "" }, { "first": "Pykl", "middle": [], "last": "Srinivas", "suffix": "" }, { "first": "Amitava", "middle": [], "last": "Das", "suffix": "" }, { "first": "Tanmoy", "middle": [], "last": "Chakraborty", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.03781" ] }, "num": null, "urls": [], "raw_text": "Chhavi Sharma, Deepesh Bhageria, William Scott, Srinivas PYKL, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Bjorn Gamback. 2020. Semeval-2020 task 8: Memotion analysis- the visuo-lingual metaphor! arXiv preprint arXiv:2008.03781.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Twitter polarity classification with label propagation over lexical links and the follower graph", "authors": [ { "first": "Michael", "middle": [], "last": "Speriosu", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Sudan", "suffix": "" }, { "first": "Sid", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the First workshop on Unsupervised Learning in NLP", "volume": "", "issue": "", "pages": "53--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Speriosu, Nikita Sudan, Sid Upadhyay, and Ja- son Baldridge. 2011. Twitter polarity classification with label propagation over lexical links and the fol- lower graph. In Proceedings of the First workshop on Unsupervised Learning in NLP, pages 53-63.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Semeval-2007 task 14: Affective text", "authors": [ { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", "volume": "", "issue": "", "pages": "70--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlo Strapparava and Rada Mihalcea. 2007. Semeval- 2007 task 14: Affective text. In Proceedings of the Fourth International Workshop on Semantic Evalua- tions (SemEval-2007), pages 70-74.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Cross-Genre, Cross-Lingual, and Low-Resource Emotion Classification", "authors": [ { "first": "Shabnam", "middle": [], "last": "Tafreshi", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shabnam Tafreshi. 2021. Cross-Genre, Cross-Lingual, and Low-Resource Emotion Classification. Ph.D. thesis, The George Washington University.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Sentence and clause level emotion annotation, detection, and classification in a multi-genre corpus", "authors": [ { "first": "Shabnam", "middle": [], "last": "Tafreshi", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shabnam Tafreshi and Mona Diab. 2018. 
Sentence and clause level emotion annotation, detection, and clas- sification in a multi-genre corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Norms of valence, arousal, and dominance for 13,915 english lemmas. Behavior research methods", "authors": [ { "first": "Amy", "middle": [ "Beth" ], "last": "Warriner", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Kuperman", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brysbaert", "suffix": "" } ], "year": 2013, "venue": "", "volume": "45", "issue": "", "pages": "1191--1207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amy Beth Warriner, Victor Kuperman, and Marc Brys- baert. 2013. Norms of valence, arousal, and dom- inance for 13,915 english lemmas. Behavior re- search methods, 45(4):1191-1207.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "A technology prototype system for rating therapist empathy from audio recordings in addiction counseling", "authors": [ { "first": "Bo", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Chewei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "E", "middle": [], "last": "Zac", "suffix": "" }, { "first": "", "middle": [], "last": "Imel", "suffix": "" }, { "first": "C", "middle": [], "last": "David", "suffix": "" }, { "first": "Panayiotis", "middle": [], "last": "Atkins", "suffix": "" }, { "first": "Shrikanth S", "middle": [], "last": "Georgiou", "suffix": "" }, { "first": "", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2016, "venue": "PeerJ Computer Science", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Xiao, Chewei Huang, Zac E Imel, David C Atkins, Panayiotis Georgiou, and Shrikanth S Narayanan. 2016. A technology prototype system for rating ther- apist empathy from audio recordings in addiction counseling. PeerJ Computer Science, 2:e59.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "rate my therapist\": automated detection of empathy in drug and alcohol counseling via speech and language processing", "authors": [ { "first": "Bo", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "E", "middle": [], "last": "Zac", "suffix": "" }, { "first": "", "middle": [], "last": "Imel", "suffix": "" }, { "first": "G", "middle": [], "last": "Panayiotis", "suffix": "" }, { "first": "", "middle": [], "last": "Georgiou", "suffix": "" }, { "first": "C", "middle": [], "last": "David", "suffix": "" }, { "first": "Shrikanth S", "middle": [], "last": "Atkins", "suffix": "" }, { "first": "", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2015, "venue": "PloS one", "volume": "10", "issue": "12", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Xiao, Zac E Imel, Panayiotis G Georgiou, David C Atkins, and Shrikanth S Narayanan. 2015. \" rate my therapist\": automated detection of empathy in drug and alcohol counseling via speech and language pro- cessing. PloS one, 10(12):e0143055.", "links": null } }, "ref_entries": { "TABREF0": { "text": "Distribution of emotion labels in the datasets.", "type_str": "table", "num": null, "html": null, "content": "
        joy   sadness   disgust   fear   anger   surprise   no-emo
Train    82       647       149    194     349        164      275
Dev      14        98        12     31      76         14       25
Test     33       177        28     70     122         40       55
Total   129       922       189    295     547        218      355
Dataset Split
Train Dev Test Total
1860 270 525 2655
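As a quick consistency check, the per-label counts and the split sizes in the two tables above agree with each other. The sketch below (illustrative only; the numbers are copied from the tables) re-derives the split sizes and the per-label totals:

```python
# Illustrative sanity check of the label-distribution and split tables above.
# Label order: joy, sadness, disgust, fear, anger, surprise, no-emo.
counts = {
    "train": [82, 647, 149, 194, 349, 164, 275],
    "dev":   [14,  98,  12,  31,  76,  14,  25],
    "test":  [33, 177,  28,  70, 122,  40,  55],
}

for split, row in counts.items():
    print(split, sum(row))      # train 1860, dev 270, test 525

label_totals = [sum(col) for col in zip(*counts.values())]
print(label_totals)             # [129, 922, 189, 295, 547, 218, 355]
print(sum(label_totals))        # 2655 essays in total
```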
" }, "TABREF1": { "text": "Train, dev and test set splits.", "type_str": "table", "num": null, "html": null, "content": "
5 Results and Discussion
5.1 Empathy Prediction (EMP)
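The EMP track is evaluated with the Pearson correlation between predicted and gold empathy and distress scores (see the table caption below). A minimal, illustrative sketch of this measure; all values are hypothetical:

```python
# Illustrative only: Pearson correlation between gold and predicted empathy scores,
# the evaluation measure reported for the EMP track. The arrays below are made up.
import numpy as np

gold_empathy = np.array([7.0, 2.5, 4.3, 6.1, 1.0])   # hypothetical gold Batson empathy scores
pred_empathy = np.array([5.8, 3.1, 4.0, 5.5, 2.2])   # hypothetical system predictions

r = np.corrcoef(gold_empathy, pred_empathy)[0, 1]
print(f"Pearson r = {r:.3f}")
```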
" }, "TABREF3": { "text": "Results of the teams participating in the EMP track (Pearson correlations).", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF5": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
" }, "TABREF7": { "text": "IITK 53 46 72 55 63 70 42 52 46 42 44 74 80 77 36 50 42", "type_str": "table", "num": null, "html": null, "content": "
                      Joy         Sadness     Disgust     Fear        Anger       Surprise
Team                  P  R  F1    P  R  F1    P  R  F1    P  R  F1    P  R  F1    P  R  F1
WASSA@Team Phoenix    35 45 40    67 37 48    57 38 45    52 40 45    67 83 74    48 33 39
MilaNLP               34 38 36    77 31 44    58 38 46    34 48 40    73 81 76    48 33 39
EmpNa                 23 28 25    31 25 28    28 30 29    30 28 29    60 59 59    10 15 12
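The per-class precision, recall, and F1 values above can be summarized by macro averaging, i.e. an unweighted mean over the emotion classes. A minimal sketch using the WASSA@Team Phoenix row from the table, assuming a plain unweighted average over the six emotions shown:

```python
# Illustrative: macro-averaging the per-class P/R/F1 values from the table above
# (WASSA@Team Phoenix row; order: joy, sadness, disgust, fear, anger, surprise).
per_class = {
    "joy":      (35, 45, 40),
    "sadness":  (67, 37, 48),
    "disgust":  (57, 38, 45),
    "fear":     (52, 40, 45),
    "anger":    (67, 83, 74),
    "surprise": (48, 33, 39),
}

macro_p  = sum(p for p, _, _ in per_class.values()) / len(per_class)
macro_r  = sum(r for _, r, _ in per_class.values()) / len(per_class)
macro_f1 = sum(f for _, _, f in per_class.values()) / len(per_class)
print(macro_p, macro_r, macro_f1)   # approx. 54.3, 46.0, 48.5 for this row
```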
" }, "TABREF8": { "text": "Breakdown EMO labels (MACRO)", "type_str": "table", "num": null, "html": null, "content": "
Predicted EMO labels
       No   J   Sa   D   F   A   Su
Gold EMO labels:
No     23
J      4 18 2
Sa     7 0 141 7 6
D      3 0 4 14 0 0 8
F      7 3 14 4 29 4 1 5 13 9 10 2 2 3 0 7 0 5 8
A      5 1 13 12 1 81 9
Su     1 1 8 1 2 6 21
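The matrix above contrasts gold EMO labels (rows) with predicted labels (columns). A minimal sketch of how such a confusion matrix can be built from two label lists; the gold and predicted labels below are hypothetical:

```python
# Illustrative: building a gold-vs-predicted confusion matrix over the seven EMO labels.
from collections import Counter

LABELS = ["no-emo", "joy", "sadness", "disgust", "fear", "anger", "surprise"]

# Hypothetical gold and predicted labels for a handful of essays.
gold = ["sadness", "anger", "sadness", "no-emo", "fear"]
pred = ["sadness", "sadness", "anger", "no-emo", "sadness"]

pair_counts = Counter(zip(gold, pred))
matrix = [[pair_counts[(g, p)] for p in LABELS] for g in LABELS]

for label, row in zip(LABELS, matrix):
    print(f"{label:>8}", row)
```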
" }, "TABREF9": { "text": "", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF11": { "text": "Machine learning algorithms used by the different teams. We listed all the models that teams reported in their results.", "type_str": "table", "num": null, "html": null, "content": "
Features and Resources
Features                              # of teams   Emp System   Emo System
n-gram                                1
Transformer embeddings                1
[CLS] token from Transformer model    2
Word embedding (fasttext)             1
Affect/emotion/empathy lexicons       1
Personality information               3
Demographic information               3
External dataset                      1
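Several of the listed features are contextual sentence representations taken from pretrained Transformers. A minimal sketch of extracting a [CLS] vector for an essay (Hugging Face transformers assumed; the model name and the example essay are placeholders, not a specific team's configuration):

```python
# Illustrative: obtaining a [CLS] representation for an essay with a pretrained
# Transformer, one of the feature types listed above. Model name and input text
# are placeholders only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

essay = "Reading about the flood victims was heartbreaking."
inputs = tokenizer(essay, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

cls_vector = outputs.last_hidden_state[:, 0, :]   # embedding of the [CLS] token
print(cls_vector.shape)                           # e.g. torch.Size([1, 768])
```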
" }, "TABREF12": { "text": "Features and resources that are used by different teams. We listed all the features and resources that teams reported in their results.", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF13": { "text": "Naitian Zhou and David Jurgens. 2020. Condolences and empathy in online communities. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 609-626.", "type_str": "table", "num": null, "html": null, "content": "
Appendices
A Examples Track I (EMP)
Below, we show four example essays that were assigned an erroneous empathy or distress score by the best-performing system. This is discussed in Section 5.3.
Essay 1: This just totally breaks my heart. I'm not one to get emotional you know that. But reading about kids in the foster care system and how messed up they come out its just heart breaking. Kids that no one cared enough about to change their ways is what it is. It's heartbreaking. Why have kids if this is the kind of parent you are going to be? Kids didn't have a shot straight from the start. (Gold Emp: 7, Predicted Emp: 2.470)
" } } } }