{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:45.977818Z"
},
"title": "WASSA 2022 Shared Task: Predicting Empathy, Emotion and Personality in Reaction to News Stories",
"authors": [
{
"first": "Valentin",
"middle": [],
"last": "Barriere",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Shabnam",
"middle": [],
"last": "Tafreshi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Sawsan",
"middle": [],
"last": "Alqahtani",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the results that were obtained from WASSA 2022 shared task on predicting empathy, emotion, and personality in reaction to news stories. Participants were given access to a dataset comprising empathic reactions to news stories where harm is done to a person, group, or other. These reactions consist of essays and Batson's empathic concern and personal distress scores. The dataset was further extended in WASSA 2021 shared task to include news articles, person-level demographic information (e.g. age, gender), personality information, and Ekman's six basic emotions at essay level Participation was encouraged in four tracks: predicting empathy and distress scores, predicting emotion categories, predicting personality and predicting interpersonal reactivity. In total, 14 teams participated in the shared task. We summarize the methods and resources used by the participating teams.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the results that were obtained from WASSA 2022 shared task on predicting empathy, emotion, and personality in reaction to news stories. Participants were given access to a dataset comprising empathic reactions to news stories where harm is done to a person, group, or other. These reactions consist of essays and Batson's empathic concern and personal distress scores. The dataset was further extended in WASSA 2021 shared task to include news articles, person-level demographic information (e.g. age, gender), personality information, and Ekman's six basic emotions at essay level Participation was encouraged in four tracks: predicting empathy and distress scores, predicting emotion categories, predicting personality and predicting interpersonal reactivity. In total, 14 teams participated in the shared task. We summarize the methods and resources used by the participating teams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Emotion and empathy prediction and analysis, in its broader perspective, has been an active research area in the last two decades, with growing volume of studies that provide insightful findings and resources. Emotion classification in natural languages has been studied over two decades and many applications successfully used emotion as their major components. Empathy utterances can be emotional, therefore, examining emotion in textbased empathy possibly has a major impact on predicting empathy. Analyzing text-based empathy and emotion have different applications; empathy is a crucial component in applications such as empathic AI agents, effective gesturing of robots, and mental health, emotion has natural language applications such as commerce, public health, and disaster management.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the progress, improvements can be made to develop or further enhance the prediction and detection of emotions and psychological constructs in natural texts including empathy, distress, and personality. In this paper, we present the WASSA 2022 Shared Task: Predicting Empathy and Emotion in Reaction to News Stories. We used the same dataset provided by which is an extension of (Buechel et al., 2018) 's dataset that includes news articles that express harm to an entity (e.g. individual, group of people, nature). Each of these news articles is associated with essays in which authors expressed their empathy and distress in reactions to these news articles. Each assay is annotated for empathy and distress, and supplemented with personality traits and demographic information of the authors (age, gender, ethnicity, income, and education level) (Refer to Section 3 for more details).",
"cite_spans": [
{
"start": 386,
"end": 408,
"text": "(Buechel et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given this dataset as input, the shared task consists of four tracks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Predicting Empathy (EMP): Participants develop models to predict, for each essay, em-pathy and distress scores quantified with the Batson's empathic concern (\"feeling for someone\") and personal distress (\"suffering with someone\") (Batson et al., 1987) . 1 2. Emotion Label Prediction (EMO): Participants develop models to predict, for each essay, a categorical emotion tag from the following Ekman's six basic emotions (sadness, joy, disgust, surprise, anger, or fear) (Ekman, 1971) , as well as no-emotion tag.",
"cite_spans": [
{
"start": 233,
"end": 254,
"text": "(Batson et al., 1987)",
"ref_id": "BIBREF3"
},
{
"start": 472,
"end": 485,
"text": "(Ekman, 1971)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Personality Prediction (PER): Participants develop models to predict, for each essay, Big Five (OCEAN) personality traits (conscientiousness, openness, extraversion, agreeableness, emotional stability) (John et al., 1999) 4. Interpersonal Reactivity Index (IRI; Davis, 1980) : Participants develop models to predict, for each essay, interpersonal reactivity (perspective taking, personal distress (pd), fantasy, empathic concern).",
"cite_spans": [
{
"start": 205,
"end": 224,
"text": "(John et al., 1999)",
"ref_id": "BIBREF22"
},
{
"start": 265,
"end": 277,
"text": "Davis, 1980)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "14 teams participated in this shared task: 10 teams submitted results to EMP, 14 teams to EMO, 2 teams to IRI, and 2 teams to PER tracks. All task descriptions, datasets, and results were designed in CodaLab 2 and the teams were allowed to submit one official result during evaluation phase and several ones during the training phase. The best result for the empathy prediction was an average Pearson correlation of 0.541 and for distress was 0.547 and the best macro F1-score for the emotion track amounted to 69.8%. The best result for personality was an average Pearson correlation of 0.230 and for IRI was 0.255.WASSA 2022 shared task provide the second generated results for emotion and empathy (EMP and EMO tracks) and contribute with additional two new tracks (IRI and PER).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remainder of this paper, we first review related work (Section 2), after which we introduce the dataset used for both tracks (Section 3). The shared task is presented in Section 4 and the official results in Section 5. A discussion of the different systems participating in both tracks is presented in Section 6 and we conclude our work in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 Distress is a self-focused and negative affective state (suffering with someone) while empathy is a warm, tender, and compassionate state (feeling for someone).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 https://competitions.codalab.org/ competitions/28713",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We provide related work for each track: emotion predictions (Section 2.1), empathy and distress (Section 2.2), personality prediction, and interpersonal reactivity prediction (Section 2.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Emotion classification has been studied thoroughly in terms of modeling, resources, and features as part of SemEval shared tasks for Affect computing and emotion classification (Strapparava and Mihalcea, 2007; Mohammad and Bravo-Marquez, 2017; Mohammad et al., 2018; Chatterjee et al., 2019; Sharma et al., 2020b) . Emotion detection models can predict, per input, one emotion class or multilabel emotion classes for naturally co-occurring emotion classes in the same essay (Alhuzali and Ananiadou, 2021; Rajabi et al., 2020) . Most emotion prediction models are learned in a supervised manner with feature engineering or continuous representation learned through pretrained language models (Peters et al., 2018; Devlin et al., 2018) . Acheampong et al. (2020) ; Murthy and Kumar (2021); Nandwani and Verma (2021); Acheampong et al. (2021) survey state-of-the-art emotion detection techniques and resources and discuss open issues in this area.",
"cite_spans": [
{
"start": 177,
"end": 209,
"text": "(Strapparava and Mihalcea, 2007;",
"ref_id": "BIBREF38"
},
{
"start": 210,
"end": 243,
"text": "Mohammad and Bravo-Marquez, 2017;",
"ref_id": "BIBREF27"
},
{
"start": 244,
"end": 266,
"text": "Mohammad et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 267,
"end": 291,
"text": "Chatterjee et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 292,
"end": 313,
"text": "Sharma et al., 2020b)",
"ref_id": null
},
{
"start": 474,
"end": 504,
"text": "(Alhuzali and Ananiadou, 2021;",
"ref_id": "BIBREF2"
},
{
"start": 505,
"end": 525,
"text": "Rajabi et al., 2020)",
"ref_id": "BIBREF34"
},
{
"start": 691,
"end": 712,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 713,
"end": 733,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 736,
"end": 760,
"text": "Acheampong et al. (2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Prediction",
"sec_num": "2.1"
},
{
"text": "Prior work on modeling text-based empathy focused on the empathic concern which is to share others' emotions in the conversations (Litvak et al., 2016; Fung et al., 2016) . For instance, Xiao et al. (2015 ; Gibson et al. (2016) modeled empathy based on the ability of a therapist to adapt to the emotions of their clients; Zhou and Jurgens (2020) quantified empathy in condolences in social media using appraisal theory; Sharma et al. (2020a) developed a model based on fine-tuning contextualized language models to predict empathy specific to mental health in text-based platforms. Guda et al. (2021) additionally utilized demographic information (e.g. education, income, age) when fine-tuning contextualized language modeling for empathy and distress prediction.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Litvak et al., 2016;",
"ref_id": "BIBREF23"
},
{
"start": 152,
"end": 170,
"text": "Fung et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 187,
"end": 204,
"text": "Xiao et al. (2015",
"ref_id": "BIBREF44"
},
{
"start": 207,
"end": 227,
"text": "Gibson et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 421,
"end": 442,
"text": "Sharma et al. (2020a)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy and Distress",
"sec_num": "2.2"
},
{
"text": "Vora et al. 2020; Beck and Jackson (2022) survey and analyze personality prediction models, theories, and techniques. Ji et al. (2020) review such models specifically to detect suicidal behavior. Developing personality detection models range from feature engineering methods (Bharadwaj et al., 2018; Tadesse et al., 2018) to deep learning techniques (Yang et al., 2021; Ren et al., 2021) . Yang et al. (2021) developed a transformer based model to predict users' personality based on Myers-Briggs Type Indicator (Myers et al., 1985, MBTI; ) personality trait theory given multiple posts of the user instead of predicting personality for a single post. Ren et al. (2021) utilized deep learning techniques to develop a multi-label personality prediction and sentiment analysis model based on MBTI and Big 5 datasets.",
"cite_spans": [
{
"start": 18,
"end": 41,
"text": "Beck and Jackson (2022)",
"ref_id": "BIBREF4"
},
{
"start": 118,
"end": 134,
"text": "Ji et al. (2020)",
"ref_id": "BIBREF21"
},
{
"start": 275,
"end": 299,
"text": "(Bharadwaj et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 300,
"end": 321,
"text": "Tadesse et al., 2018)",
"ref_id": "BIBREF39"
},
{
"start": 350,
"end": 369,
"text": "(Yang et al., 2021;",
"ref_id": null
},
{
"start": 370,
"end": 387,
"text": "Ren et al., 2021)",
"ref_id": "BIBREF35"
},
{
"start": 390,
"end": 408,
"text": "Yang et al. (2021)",
"ref_id": null
},
{
"start": 512,
"end": 538,
"text": "(Myers et al., 1985, MBTI;",
"ref_id": null
},
{
"start": 539,
"end": 540,
"text": ")",
"ref_id": null
},
{
"start": 652,
"end": 669,
"text": "Ren et al. (2021)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Personality and Interpersonal Reactivity Prediction",
"sec_num": "2.3"
},
{
"text": "We used the same dataset provided in WASSA 2021 shared task . Table 1 represents the train, development, and test splits. We first briefly present how the initial/original dataset were collected and annotated in Section 3.1. We discuss the additional emotion annotation and make the dataset suitable for this shared task in Section 3.2. In Section 3.3, we discuss the annotation process and data statistics of PER and IRI tasks. ",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "The starting point was the dataset provided by (Buechel et al., 2018) which comprises of news articles, each is associated with essays produced by several participants in reaction to reading disturbing news about a person, group of people, or situations. We used this dataset as a training dataset in this shared task. 3",
"cite_spans": [
{
"start": 47,
"end": 69,
"text": "(Buechel et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Initial Dataset",
"sec_num": "3.1"
},
{
"text": "We used the same news articles (418 total) provided by Buechel et al. (2018) in which there is major or minor harm inflicted to an individual, group of people, or other by either a person, group of people, political organization, or nature. The stories were specifically selected to evoke varying degrees of empathy among readers.",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "Buechel et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "News article collection:",
"sec_num": null
},
{
"text": "Essay collection: The corpus acquisition was set up as a crowdsourcing task on MTurk.com pointing to a Qualtrics.com questionnaire. The participants completed background measures on demographics and personality and then proceeded to the main part of the survey where they read a random selection of five of the news articles. After reading each of the articles, participants were asked to rate their level of empathy and distress before describing their thoughts and feelings about it in writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "News article collection:",
"sec_num": null
},
{
"text": "As part of the efforts made by WASSA 2021 shared task , the dataset described in Section 3.1 was further augmented with development and testing datasets and enriched with emotion labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation and Enrichment",
"sec_num": "3.2"
},
{
"text": "These datasets were created following the same approach described in (Buechel et al., 2018) : 805 essays were written in response to the same news articles as (Buechel et al., 2018) by 161 participants and same Amazon Mechanical Turk qualifications as well as survey interface including Qualtrics.",
"cite_spans": [
{
"start": 69,
"end": 91,
"text": "(Buechel et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 159,
"end": 181,
"text": "(Buechel et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation and Enrichment",
"sec_num": "3.2"
},
{
"text": "Emotion Annotation: To extract emotion tags, WASSA 2021 shared task further enriched each essay with the 6 basic Ekman emotion labels in order to find out whether certain basic emotions are more correlated with empathy and distress. Emotion labels were first predicted automatically and then manually verified. For the automatic prediction, two different neural network models were applied to generate predictions at the essay level: 1) a Gated RNN with attention mechanism which is trained with multigenre corpus, i.e., news, tweets, blog posts, (Tafreshi, 2021, Thesis Chapter 5) , 2) fine-tuned RoBERTa model (Liu et al., 2019) on the GoEmotions dataset (Demszky et al., 2020) . For the manual verification another Amazon Mechanical Turk task was set up for which annotators with the Masters qualification (highest AMT quality rating) were recruited. 4 The distribution of the emotion tags per data split split is illustrated in Table 2 . As can be observed, the distribution of emotion tags is imbalanced. The majority of the essays have the emotion tag sadness, followed by anger, and subsequently an even distribution of the emotion tags disgust, fear and surprise and lastly joy. 5",
"cite_spans": [
{
"start": 547,
"end": 581,
"text": "(Tafreshi, 2021, Thesis Chapter 5)",
"ref_id": null
},
{
"start": 612,
"end": 630,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 657,
"end": 679,
"text": "(Demszky et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 854,
"end": 855,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 932,
"end": 939,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Augmentation and Enrichment",
"sec_num": "3.2"
},
{
"text": "As part of the original data collection of Buechel et al. (2018) the Big 5 personality traits 6 (PER) and Interpersonal Reactivity Index (IRI) were collected at the beginning of the Qualtrics questionnaire. The train, dev, and test splits are the same as the other tasks.",
"cite_spans": [
{
"start": 43,
"end": 64,
"text": "Buechel et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PER and IRI Annotation Process",
"sec_num": "3.3"
},
{
"text": "We setup all four tracks in CodaLab (https://competitions.codalab.org/ competitions/28713).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task",
"sec_num": "4"
},
{
"text": "We describe each task separately (objectives and metadata) in Section 4.1 and then describe dataset, resources, and evaluation metrics in Section 4.2. Note that the first two tracks are the same as offered by WASSA 2022 shared task while the last two tracks (PER and IRI) are new contributions of this shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task",
"sec_num": "4"
},
{
"text": "Track 1 -Empathy Prediction (EMP): The formulation of this task is to predict, for each essay, Batson's empathic concern (\"feeling for someone\") and personal distress (\"suffering with someone\") scores (Batson et al., 1987) . Participants are expected to develop models that predict the empathy score for each essay. Both empathy and distress scores are real-values between 0 and 7. Empathy score is an average of 7-point scale ratings, representing each of the following states (warm, tender, sympathetic, softhearted, moved, compassionate); distress score is an average of 7-point scale ratings, representing each of the the following states (worried, upset, troubled, perturbed, grieved, disturbed, alarmed, distressed). We made personality, demographic information, and emotion labels available for each essay and optional for use.",
"cite_spans": [
{
"start": 201,
"end": 222,
"text": "(Batson et al., 1987)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tracks",
"sec_num": "4.1"
},
{
"text": "The formulation of this task is to predict, for each essay, an emotion label from the following Ekman's six basic emotions (sadness, joy, disgust, surprise, anger, or fear) (Ekman, 1971) , as well as 5 At first, joy emotion tag seems somewhat counter-intuitive given the nature of the essays. However, explains that the position emotion that was assigned by the crowd workers could be attributed to the observation that authors of the essays were suggesting actions to hope to improve the situation and possibly contained political views.",
"cite_spans": [
{
"start": 173,
"end": 186,
"text": "(Ekman, 1971)",
"ref_id": "BIBREF13"
},
{
"start": 200,
"end": 201,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Track 2 -Emotion Label Prediction (EMO):",
"sec_num": null
},
{
"text": "6 Buechel et al. (2018) used the Ten Item Personality Inventory (TIPI; Gosling et al., 2003a) .",
"cite_spans": [
{
"start": 2,
"end": 23,
"text": "Buechel et al. (2018)",
"ref_id": "BIBREF7"
},
{
"start": 64,
"end": 70,
"text": "(TIPI;",
"ref_id": null
},
{
"start": 71,
"end": 93,
"text": "Gosling et al., 2003a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Track 2 -Emotion Label Prediction (EMO):",
"sec_num": null
},
{
"text": "no-emotion tag. 7 The same set of metadata that we described above were also provided for each essay in this task. Participants optionally could use this information as features to predict emotion labels.",
"cite_spans": [
{
"start": 16,
"end": 17,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Track 2 -Emotion Label Prediction (EMO):",
"sec_num": null
},
{
"text": "To code personality information, the Big 5 personality traits were provided, also known as the OCEAN model (Gosling et al., 2003b) . In the OCEAN model, the theory identifies five factors (openness to experience, conscientiousness, extraversion, agreeableness and neuroticism 8 ).",
"cite_spans": [
{
"start": 107,
"end": 130,
"text": "(Gosling et al., 2003b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Track 3 -Personality Prediction (PER):",
"sec_num": null
},
{
"text": "We use the Interpersonal Reactivity Index (Davis, 1980, IRI; ) . IRI is a measurement tool for the multi-dimensional assessment of empathy. The four subscales are: Perspective Taking, Fantasy, Empathic Concern and Personal Distress.",
"cite_spans": [
{
"start": 42,
"end": 60,
"text": "(Davis, 1980, IRI;",
"ref_id": null
},
{
"start": 61,
"end": 62,
"text": ")",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Track 4 -Interpersonal Reactivity Index Prediction (IRI):",
"sec_num": null
},
{
"text": "Dataset: Participants were provided the dataset described in 3. Participants were allowed to add the development set to the training set and submit systems trained on both. The test set was made available to the participants at the beginning of the evaluation period.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
{
"text": "Resources and Systems Restrictions Participants were allowed to use any lexical resources (e.g., emotion or empathy dictionaries) of their choice, any additional training data, or any offthe-shelf emotion or empathy models. We did not put any restriction in this shared task nor did we suggest any baseline model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
{
"text": "The organizers published an evaluation script that calculates Pearson correlation for the predictions of the empathy, personality and IRI prediction tasks and precision, recall, and F1 measure for each emotion class as well as the micro and macro average for the emotion label prediction task. Pearson coefficient is the linear correlations between two variables, and it produces scores from -1 (perfectly inversely correlated) to 1 (perfectly correlated). A score of 0 indicates no joy sadness disgust fear anger surprise no-emo Train 82 647 149 194 349 164 275 Dev 14 98 12 31 76 14 25 Test 33 177 28 70 122 40 55 Total 129 922 189 295 547 218 355 Table 2 : Distribution of emotion labels in the datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 530,
"end": 688,
"text": "Train 82 647 149 194 349 164 275 Dev 14 98 12 31 76 14 25 Test 33 177 28 70 122 40 55 Total 129 922 189 295 547 218 355 Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Systems Evaluation:",
"sec_num": null
},
{
"text": "correlation. The official competition metric for the empathy prediction task (EMP) is the average of the two Pearson correlations. The official competition metric for the emotion evaluation is the macro F1-score, which is the harmonic mean between precision and recall. The official competition metric for the personality (resp. IRI prediction) task PER (resp. IRI) is the average of the Pearson correlations of the 5 (resp. 4) variables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Evaluation:",
"sec_num": null
},
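{
"text": "To make the official metrics concrete, the following minimal sketch (with placeholder arrays rather than actual submission files, and not the organizers' evaluation script) computes the averaged Pearson correlation used for the EMP track and the macro F1-score used for the EMO track.\n\nimport numpy as np\nfrom scipy.stats import pearsonr\nfrom sklearn.metrics import f1_score\n\n# Toy gold/predicted values standing in for a real submission.\ngold_emp = np.array([5.2, 1.0, 6.5, 3.3])\npred_emp = np.array([4.8, 2.1, 6.0, 3.9])\ngold_dis = np.array([4.0, 1.5, 6.8, 2.2])\npred_dis = np.array([3.5, 2.0, 6.1, 2.9])\n\n# EMP track: average of the Pearson correlations for empathy and distress.\nemp_score = (pearsonr(gold_emp, pred_emp)[0] + pearsonr(gold_dis, pred_dis)[0]) / 2\n\n# EMO track: macro F1 over the seven labels (six Ekman emotions plus no-emotion).\ngold_emo = ['sadness', 'anger', 'joy', 'sadness']\npred_emo = ['sadness', 'sadness', 'joy', 'sadness']\nemo_score = f1_score(gold_emo, pred_emo, average='macro')\n\nprint(round(emp_score, 3), round(emo_score, 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Evaluation:",
"sec_num": null
},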
{
"text": "5 Results and Discussion Table 3 shows the main results of the track on empathy (Emp) and distress (Dis) prediction. 10 teams submitted results and the best scoring system is bunny_gg team (averaged r = .540). If we examine the results for the empathy and distress prediction separately, we observe that for empathy, team SINAI scored best (r = .541), whereas for distress chenyueg obtained the best result (r = .547). Comparison with previous results: In (Buechel et al., 2018) , the best-performing system obtained r=.404 for empathy and r=.444 for distress. These results were achieved only on the training set using ten-fold cross validation experiments which is not comparable to the results in this shared task. In WASSA 2021 , the best scor-ing system was PVG team (averaged r = .545). If we examine the results for the empathy and distress prediction separately, we observe that for empathy, team WASSA@IITK scored best (r = .558), whereas for distress PVG obtained the best result (r = .574).",
"cite_spans": [
{
"start": 456,
"end": 478,
"text": "(Buechel et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Systems Evaluation:",
"sec_num": null
},
{
"text": "Absolute difference between gold and predicted labels: Table 4 presents the absolute difference between the predicted and gold empathy and distress scores by the best-performing systems (SINAI for empathy and chenyueg for distress). It can be observed that the majority of predicted Batson emphatic concern and distress instances only differ in between zero or one point from the gold scores, i.e. 66% and 62%, respectively. For both labels the maximum difference amounts to 4-5 points and this in only a very few cases, no instances for empathy and 5 instance for distress. : Absolute difference in score between predicted and gold for both the empathy and distress scores of the best-performing system (expressed in number of instances and percentagewise).",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Team",
"sec_num": null
},
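{
"text": "The error binning reported in Table 4 can be reproduced with a short sketch; the gold and predicted arrays below are placeholders, not the actual test-set scores of the submitted systems.\n\nimport numpy as np\n\n# Placeholder arrays standing in for gold and predicted empathy scores.\ngold = np.array([7.0, 1.0, 6.0, 3.5, 4.2])\npred = np.array([3.65, 3.67, 2.47, 3.9, 4.0])\n\n# Bin the absolute errors into one-point intervals, as in Table 4.\ndiff = np.abs(gold - pred)\ncounts, edges = np.histogram(diff, bins=[0, 1, 2, 3, 4, 5])\nfor low, high, c in zip(edges[:-1], edges[1:], counts):\n    print(f'diff in [{low}, {high}): {c} instances ({100 * c / len(diff):.0f}%)')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Team",
"sec_num": null
},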
{
"text": "Table 5 presents the results for 13 teams for emotion prediction models. The best performing system in terms of Macro F1 (69.8%) as well as accuracy (75.4%) is LingJing which is significantly higher than remaining emotion prediction models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Label Prediction (EMO)",
"sec_num": "5.2"
},
{
"text": "To get more insight we also provide a breakdown of the macro-averaged results by emotion class in Table 6 . Correlated with label frequency in the dataset, sadness and anger are predicted with the highest performance by most systems. Remaining emotion labels have reasonable performance score given its limited number of training instances. In the breakdown for all emotion labels, the emotion model submitted by team LingJing outperforms remaining submitted models. Table 5 : Results of the teams participating in the EMO track (macro-averaged precision (P), recall (R), F1-score (F1) and accuracy (Acc)).",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 6",
"ref_id": null
},
{
"start": 467,
"end": 474,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Emotion Label Prediction (EMO)",
"sec_num": "5.2"
},
{
"text": "The results of the tracks on personality and IRI predictions are presented in Table 7 . Two teams submitted results and the best scoring system is the one of LingJing. For the PER task, it is interesting to note that the score of the second participant (IITP) is in general lower due to a negative correlation on the agreeableness, while the first team succeeded into performing well on this trait. They both performed similarly on consciousness and extroversion. For the IRI task, both the participants obtained good results for the empathic concern, nevertheless only the best performing team succeeded into performing well on perspective taking, personal distress and fantasy.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Personality and Interpersonal Reactivity Prediction (PER/IRI)",
"sec_num": "5.3"
},
{
"text": "We had a closer look at those instances that were predicted with a difference in score of between 4 and 5 by the best-performing system, you can find the actual essays in Appendix A. We discuss about 3 instances: in the first one (essay 1) the gold score was 7 and the predicted one 3.65, which is actually a pretty strange error as this describes a really typical high empathy -high distress essay. This essay has mild level distress which the model has predicted very well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy prediction",
"sec_num": "5.4.1"
},
{
"text": "For empathy there was one instance with a high discrepancy between the predicted (2.47) and gold (6) score. If we consider essay 3 we observe that there is no self-focus language at all. So a low empathy score does make sense here. Nonetheless this is not a typical low empathy response since there is some distress expressed. Same for essay 2, the difference between empathy and distress in gold label is high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy prediction",
"sec_num": "5.4.1"
},
{
"text": "Considering essays 2 and 3 we can state that these exhibit high distress/low empathy and vice versa low distress/ mild empathy. It is possible that models have difficulty in scenarios where there is empathy with a lack of distress and vice versa. Table 8 presents the confusion matrix of the topperforming team on the test data. It can be observed that the top three occurring labels in the training data, sadness (Sa) -anger (A) -no-emotion (No)are accurately classified most frequently and that anger and fear are most often confused with sadness, whereas the same goes for sadness being classified as anger.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Empathy prediction",
"sec_num": "5.4.1"
},
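{
"text": "The confusion matrix in Table 8 can be computed directly from the gold and predicted label sequences; the following is a generic sketch with placeholder label lists, not the analysis code used for the shared task.\n\nfrom sklearn.metrics import confusion_matrix\n\nLABELS = ['sadness', 'joy', 'disgust', 'surprise', 'anger', 'fear', 'no-emotion']\n\n# Placeholder gold and predicted labels for a handful of test essays.\ngold = ['sadness', 'anger', 'fear', 'no-emotion', 'sadness']\npred = ['sadness', 'sadness', 'sadness', 'no-emotion', 'anger']\n\n# Rows correspond to gold labels and columns to predictions, in the order of LABELS.\nprint(confusion_matrix(gold, pred, labels=LABELS))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy prediction",
"sec_num": "5.4.1"
},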
{
"text": "Assigning an emotion label at the document level is not a trivial task as certain sentences within an essay may exhibit different emotions or sentiment. In Appendix B we present for some labels one essay which was correctly/incorrectly classified by best performer system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion label prediction",
"sec_num": "5.4.2"
},
{
"text": "Looking at the correctly classified essays, we observe that in these essays many emotional words and phrases are being used and that there is not much discrepancy of emotions between the sentences. The same cannot be said for the erroneously classified essays, there we clearly observe that often many emotions are being presented within the same essay.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion label prediction",
"sec_num": "5.4.2"
},
{
"text": "In the meantime all essays have also been labeled with emotions at the sentence level using the same annotation procedure as described in Section 3, this dataset will also be made available for research purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion label prediction",
"sec_num": "5.4.2"
},
{
"text": "Surprisingly, we found out that the best scoring team system was predicting at the essay-level, and not using the fact that a writer wrote 5 different essays in order to aggregate at the writer-level. Tak and .331 (see last line Table 7 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 204,
"text": "Tak",
"ref_id": null
},
{
"start": 229,
"end": 236,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Personality and IRI prediction",
"sec_num": "5.4.3"
},
{
"text": "We looked over the writers that were the most difficult to tag for the winning team system, and they were outliers for both the tasks. For the PER task, this user has a very low values on conscientiousness and openness: 1.5 and 1.5, compared to 5.6 and 5 in average. For the IRI task, it seems that there is an issue with the labels. The personal distress score of the user is 1, which is the lowest of the dataset, and does not necessarily represent how the user is reacting at every essay. We also noticed that the winning system has low standard deviation when compared to the ones from the gold standards, for this reason it struggles to predict outliers and move not far away from the mean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Personality and IRI prediction",
"sec_num": "5.4.3"
},
{
"text": "A total of 14 teams participated in the shared tasks with 10 teams participating in both EMP and EMO and 2 participated in all tracks. In this section, we provide a summary of the machine learning models, features, resources, and lexicons that were used by the teams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Submitted Systems",
"sec_num": "6"
},
{
"text": "All systems follow supervised machine learning models for empathy prediction and emotion classification (Table 9 ). Most teams built systems using pre-trained transformer language models, which were fine-tuned or from which features from different layers were extracted. CNN model were proposed by one team. Data augmentation methods and continuing to pre-training transformer model is proposed by one team. One team proposed a prompt-based architecture to integrate the metadata of the writer.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "(Table 9",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Machine Learning Architectures",
"sec_num": "6.1"
},
{
"text": "Detection and classification of emotion in text is challenging because marking textual emotional cues is difficult. Emotion model performance has been always improved when lexical features (e.g., emotion, sentiment, subjectivity, etc.), emotionspecific embedding, or different emotional datasets were augmented and used (Mohammad et al., 2018) to represent an emotion. Similar to emotion, predicting text-based empathy is challenging as well, and using lexical features, and external resources have an impact on empathy model performance. As such, it is quite common to use different resources and design different features in emotion and empathy models. As part of the dataset we provided to teams, we include personality, demographic, and categorical emotions as additional features for both emotion and empathy tasks. Teams were allowed to use any external resources or design any features of their choice and use them in their models. Table 10 summarizes the features and extra resources that teams used to build their models.",
"cite_spans": [
{
"start": 320,
"end": 343,
"text": "(Mohammad et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 939,
"end": 948,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Features and Resources",
"sec_num": "6.2"
},
{
"text": "The presence of emotion and empathic words are the first cues for a piece of text to be emotional or empathic, therefore, it is beneficial to use emotion/empathy lexicons to extract those words and create features. Table 11 summarizes the lexicons that were employed by the different teams.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Table 11",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Lexicons",
"sec_num": "6.3"
},
{
"text": "IUCL the team who ranked first in empathy track developed a transformer model using RoBERTa. They tuned RoBERTa model with the training set that is provided in this shared-task. They used demographic and personality features values and group them into different categories and add to each category a unique phrase. For example, the added sentence for \"age of 25\" is \"Age is 25, young adult.\", and the added sentence for \"income of 150,000\" is \"Income is 150000, high income, rich\". They represent each essay context with different input size and concatenated the context with the demographic and personality features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top three systems in EMP track",
"sec_num": "6.4"
},
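{
"text": "A minimal sketch of this kind of input construction is shown below; the category cut-offs and exact wording are illustrative assumptions, not the phrases used by the IUCL team.\n\ndef verbalize_metadata(age: int, income: int) -> str:\n    # Map numeric metadata to short natural-language phrases (illustrative cut-offs).\n    age_group = 'young adult' if age < 36 else 'middle-aged' if age < 60 else 'senior'\n    income_group = 'low income' if income < 40000 else 'middle income' if income < 100000 else 'high income, rich'\n    return f'Age is {age}, {age_group}. Income is {income}, {income_group}.'\n\ndef build_input(essay: str, age: int, income: int) -> str:\n    # Concatenate the essay with the verbalized metadata before tokenization.\n    return essay + ' ' + verbalize_metadata(age, income)\n\nprint(build_input('I felt terrible for the victims described in the article.', 25, 150000))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top three systems in EMP track",
"sec_num": "6.4"
},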
{
"text": "SINAI The team developed Ensemble of Supervised and Zero-Shot Learning Models using Transformer Multi-output Regression and Emotion Analysis. For empathy and distress they built a Trans-former multi-output regression model to predict empathy and distress and some transformer models for emotion which eventually using them both in an ensemble manner with a fine-tune RoBERTa model. IUCL-2 the same team won the 3 place too. They used different hyperparameters while tuning RoBERTa model. They represent each sentence with higher input size and different learning rate and based on the empirical results it seems that increasing input size can impact the model performance in detecting empathy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top three systems in EMP track",
"sec_num": "6.4"
},
{
"text": "WENGSYX the team who ranked first developed a model by continuing on fine-tuning the pre-trained DeBERTa (He et al., 2020) by an opensource dataset collected by (\u00d6hman et al., 2020) . Then they fine-tuned this model with the dataset that is provided in this study. Then they further used data augmentation methods (random and balanced) augmentation using GoEmotions: A Dataset of Fine-Grained Emotions (Demszky et al., 2020) . Further they used Child-tuning Training (Xu et al., 2021) to continue fine-tuning DeBERTa. Finally, they used late fusion method (Colneri\u010d and Dem\u0161ar, 2018) with Bagging Prediction (Breiman, 1996) during prediction of emotion.",
"cite_spans": [
{
"start": 105,
"end": 122,
"text": "(He et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 161,
"end": 181,
"text": "(\u00d6hman et al., 2020)",
"ref_id": null
},
{
"start": 402,
"end": 424,
"text": "(Demszky et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 467,
"end": 484,
"text": "(Xu et al., 2021)",
"ref_id": "BIBREF35"
},
{
"start": 556,
"end": 583,
"text": "(Colneri\u010d and Dem\u0161ar, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 608,
"end": 623,
"text": "(Breiman, 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Team rank 1 and 3 systems in EMO track",
"sec_num": "6.5"
},
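{
"text": "As an illustration of the bagging-style late fusion step described above (a generic sketch under assumed array shapes, not the team's actual code), the class probabilities of the individual models can simply be averaged before taking the argmax.\n\nimport numpy as np\n\nLABELS = ['sadness', 'joy', 'disgust', 'surprise', 'anger', 'fear', 'no-emotion']\n\ndef bagging_predict(prob_list):\n    # prob_list holds one (n_essays, n_labels) probability matrix per bagged model.\n    avg = np.mean(np.stack(prob_list, axis=0), axis=0)\n    return [LABELS[i] for i in avg.argmax(axis=1)]\n\n# Three toy models scoring two essays over the seven labels.\nrng = np.random.default_rng(0)\nprobs = [rng.dirichlet(np.ones(len(LABELS)), size=2) for _ in range(3)]\nprint(bagging_predict(probs))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Team rank 1 and 3 systems in EMO track",
"sec_num": "6.5"
},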
{
"text": "himanshu.1007 the team developed an ensemble approach. First model is fine-tuning RoBERTa on GoEmotions: A Dataset of Fine-Grained Emotions (Demszky et al., 2020) , then fine-tuning BART model to get the best representation for essay-based text, then fine-tuning RoBERTa with the dataset that is provided for this shared-task. The authors empirical results suggests that all three steps in the training is necessary to reach the best performance, and how BART can capture the contextual features in multiple sentences.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "(Demszky et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Team rank 1 and 3 systems in EMO track",
"sec_num": "6.5"
},
{
"text": "The two approaches proposed by the participants were very different. The IITP team proposed a system that is not using at all neither the essay nor the news article texts. They employed demographic information such as gender, race, education, age, and income to train support vector machine systems. The features used as input were selected regarding the task and variable to predict. For example, only the age was used as input feature to predict conscientiousness and agreeableness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PER and IRI Systems",
"sec_num": "6.6"
},
{
"text": "Machine Learning Algorithms ML Algorithm # of team Emp System Emo System RoBERTa-large 3 \u2713 bert-base-go-emotion 1 The best performing system for both the tasks was the one proposed by LingJing team. They employed intensively all the meta-data available and integrated them inside a DeBERTa-v3-large model in a textual form: \"A female, with fourth grade education, third race, 22 and income of 100000\". They proceeded to a data augmentation technique using random punctuation, used an ensemble method using the bagging algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PER and IRI Systems",
"sec_num": "6.6"
},
{
"text": "\u2713 distil-BERT-uncased-emotion 1 \u2713 NLI 1 \u2713 \u2713 GPT-3 1 \u2713 \u2713 Vanilla RoBERTa 1 \u2713 RoBERTa 4 \u2713 \u2713 GlobalMaxPooling 1 \u2713 \u2713 BART-large 1 \u2713 \u2713 Bert-base-uncased 1 \u2713 \u2713 Longformer-base-4096 1 \u2713 \u2713 DeBERTA 1 \u2713",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PER and IRI Systems",
"sec_num": "6.6"
},
{
"text": "In this paper we presented the shared task on empathy and emotion prediction of essays that were written in response to news stories to which five teams participated. Based on the analysis of the systems we can conclude that fine-tuning a transformer language model or relying on features extracted from transformer models along with jointly learning related tasks can lead to a robust modeling of empathy, distress, and emotion. Despite the strength of these strong contextualized features, we also observed that task-specific lexical features extracted from emotion and sentiment lexicons can still create a significant impact on empathy, distress, and emotion models. Furthermore, the topperforming emotion models used external datasets to further fine-tune the language models, which indicates that data augmentation is important when modeling emotion, even if the text genre is different from the genre of the task at hand. Finally, using demographic and personality information as features revealed a significant impact on empathy, distress, and emotion models. Particularly, joint modeling of distress and empathy coupled with those features yielded the best results for most of the top-ranked systems that were developed as part of this shared task. Below examples are shown of four essays that received an erroneous empathy or distress label by the best-performing system. This is discussed in Section 5.4. Essay 1: even though it was a old article from the archives i still think it was horrible that those officers tortured that man like that. attacking his private parts with flashlights, arms, elbows and pretty everything else you can think of. thats horrible that we live in a world that would allow these type of actions to take place. (Gold Emp: 7, Predicted Emp: 3.65)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Essay 2: I understand that businesses need to worry about profits. But It really angers me when governments and companies throw away lives in order to protect their bottom line. When people riot and chaos breaks out, it is always for a reason. It is up to the government and our police forces to protect the everyday citizens, not take their lives to protect their own. It angers me so much, all the needless violence and lives lost for no good reason. (Gold Emp: 1, Predicted Emp: 3.67)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Essay 3: As a person who grew up around large birds and knows how temperamental they can be, I was really curious where the story was going to go. It made me laugh that the officers were able to catch the runaway so easily without any humans or birds getting hurt when I'm sure the thought of trying made them more than a little nervous. The world needs more nice stories like this and I hope the emu got a stern talking to when it got home. (Gold Emp: 6, Predicted Emp: 2.47)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Below examples are shown of essays that received one of the seven labels and for each label we present one essay that was correctly classified by all teams (i) and one that was misclassified by most systems (ii). This is discussed in closer detail in Section 5.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Examples Track II (EMO)",
"sec_num": null
},
{
"text": "Joy: (i) Hello friend i will like to tell you that India to ratify Paris climate deal in October -India, one of the world's largest greenhouse gas emitters, will ratify the Paris global climate agreement pact next month, Prime Minister Narendra Modi has said. CO2 emissions are believed to be the driving force behind climate change. The Paris deal is the world's first comprehensive climate agreement. It will only come into force legally after it is ratified by at least 55 countries, which between them produce 55% of global carbon emissions. (Predicted as: neutral, Gold: joy). (ii) \"I like this article. It's about how the woman still gave birth to her child, even though it was a c-section. It seems as though some mothers look down upon those who have had to have c-sections because they didn't physically push the child out. Some consider it \"\"easier\"\" but the effects of a c-section and the scarring shows how difficult it is.\" (Predicted as: joy, Gold: joy) Sadness: (i) I read an article about civilian causalities in Afghanistan. It is alleged that US forces struck a make shift doctors with out borders hospital. There was heavy fighting and confusion during the event. There were other civilian casualties. I feel it is unfortunate. I feel wars create much pain for non involved people. I wish people would get along and respect human life. (Predicted: sadness, Gold: Sadness). (ii) I don't get why people want to blow us up. Why people want to intentionally harm others. They don't know these people. It's hard to feel for the one blowing up people. People are just trying to live their lives and go about their business. Suddenly your whole world changes and any innocence you had left is gone. You are harmed in ways that can;t be imagined until they manifest later. I hate that people have to endure this. (Predicted: Anger, Gold: Sadness).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Examples Track II (EMO)",
"sec_num": null
},
{
"text": "Disgust: (i) seems like paris is getting worse and worse every year. ever since they brought in all those refugees i believe the crime rates has risen and risen. things are getting out of control. where are the police? why is nothing being done to stop the rise in crime? even celebs are getting robbed or attacked in public. this is getting insane. it keeps getting worse also. (Predicted: anger, Gold: Disgust). (ii) Have you seen this? I am so tired of these stories! Something needs to be done about this already! How many more women will come forward with these stories before action is finally taken to get these monsters put away for good? Every single day I read about another story like this and I am sickened that this is continuing to happen. (Predicted: Disgust, Gold: Disgust).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Examples Track II (EMO)",
"sec_num": null
},
{
"text": "Fear: (i) scientists have been studying the zika virus for some time now and still, don't know much about it. it is a big threat to humans everywhere though. zika is mainly carried by mosquitos and contact with an infected mosquito will give you the virus. however, you can get it from having sex with someone that has the virus even if they are not showing symptoms yet. that is horrible. (Predicted:fear, Gold: fear). (ii) April I just read a very interesting article concerning climate change. It is hard for me to believe that there are still deniers out there on climate change. Especially when 375 top scientists and 30 prize winners all state with certainty humans are the cause. If we do not take action now we are going to leave a Horrible planet for our kids, grand kids and their kids. This is something that we need to address on a daily basis. (Predicted: anger, Gold: fear).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Examples Track II (EMO)",
"sec_num": null
},
{
"text": "Anger: (i) Keith is a person who is willing to save the Albatross from house mice. Those animals are getting killed because of those rodents and he is doing whatever he can in order to prolong their lives. He does not celebrate birthdays and chooses to place bait traps on the island in order to kill as many rodents that he can. (Predicted: no-emotion, Gold: anger). (ii) he horror of what we have done is beyond the comprehension of most Americans. People are being treated like animals by our own soldiers. If any one goes in innocent and good, they will come out damaged and insane or nearly so. It destroys good people with conscience ( of which there are few) that work in these areas. This has been going on for decades, and the evil is off the charts. The only way that this gets fixed is if the people are identified as torturers, sought, hunted down, and burned at the stake. Psychopaths run the nation and are drawn to the military and police. As, horrible as it is, good people will have to remove these damaged individual or they WILL suffer under their boots. (Predicted: anger, Gold: anger).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Examples Track II (EMO)",
"sec_num": null
},
{
"text": "Surprise: (i) I think it's silly that this is even a debate. This homeless dude hopped over a fence and attacked a security guard, the security guard defended himself despite getting stabbed. The fact that this guy hasn't already been charged with attempted murder is asinine, and I'm surprised this is even a chance he may get off. The security guard did what he should have done and defended him-self and the property. (Predicted: anger, Gold: Surprise). (ii) The article is so shocking. I had heard a little about it before but I had no idea that it was so drastic. And now I am not surprised about how the weather has been so screwy for the past few years. It doesn't seem like there is anything that we can do about it though. So I feel kind of helpless about that. (Predicted: surprise, Gold: Surprise) No-emo: (i) Hello friend I will like to let you know Leonard Cohen Died In His Sleep After A Fall, Manager Says -Songwriter and poet Leonard Cohen died in his sleep after a fall in his Los Angeles home in the middle of the night, his manager has said. \"The death was sudden, unexpected, and peaceful,\" his manager Robert Kory said in a statement published on the Cohencentric website. Cohen, music's man of letters whose songs fused religious imagery with themes of redemption and sexual desire, died on Nov. 7, He was 82 when he died. (Predicted: no-emotion, Gold: no-emotion). (ii) What do you think, would you bring an 11 year old to a game? There's a chance of something like this happening, although I'm sure it was unintentional that it hit the kid. I guess it seems like this is a case where the one outlier makes the news, and probably the other 10000 kids at the game were completely fine, or at all the other games this same day. I'm now subject to a 1000 character limit, so even though my email is finished I have to keep typing. I don't usually write such long emails to friends, I would probably talk to them instead if it was this volume of information. Or wait maybe that's a maximum and I can just click next. (Predicted: fear, Gold: no-emotion).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Examples Track II (EMO)",
"sec_num": null
},
{
"text": "Below an example of 3 essays from a user with a very low conscientiousness and openness scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Examples Track III (PER)",
"sec_num": null
},
{
"text": "Essay 1: The pressure we put on our entertainers is unreal. I don't know how most of them manage to make it through alive. We idolize them, and yet also criticize them so much that they are nearly pushed to their breaking. For their status we loathe them, love them, and tell them what they have to be for us. I think I would still choose to be a celebrity, if I could, but it doesn't seem as easy as people imply.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Examples Track III (PER)",
"sec_num": null
},
{
"text": "Essay 2: It's incredibly sad that this happens. While we do need to move to more environmentally sound methods of producing energy, it sucks that innocent birds are caught in the path of this progress. I hope we learn new ways to deter them from flying into them, and can better protect the world, while we try to counter our damage to it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Examples Track III (PER)",
"sec_num": null
},
{
"text": "Below an example of 3 essays from a user with a very low personal distress score of 1/5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Examples Track IV (IRI)",
"sec_num": null
},
{
"text": "Essay 1 (pd predicted: 2.79: This just totally breaks my heart. I'm not one to get emotional you know that. But reading about kids in the foster care system and how messed up they come out its just heart breaking. Kids that no one cared enough about to change their ways is what it is. It's heartbreaking. Why have kids if this is the kind of parent you are going to be? Kids didn't have a shot straight from the start.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Examples Track IV (IRI)",
"sec_num": null
},
{
"text": "Essay 2 (pd predicted: 2.81): We need more training for police. Police shouldn't be getting killed in the line of duty. It's not fair to their families because people are stupid and can't follow the law. People need to stop being so selfish and we need to make it less easy to obtain guns if people didn't have such easy access to them there wouldn't be so many deaths overall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Examples Track IV (IRI)",
"sec_num": null
},
{
"text": "We refer the readers to the original paper(Buechel et al., 2018) for more details about the collection of news articles and essays.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We refer the readers to for more details about emotion annotation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Psychological emotion modeling suggested different categorical labeling schemes including the Ekman 6 basic emotions(Ekman, 1971), the Plutchik 8 basic emotions(Plutchik, 1984), and 4 basic emotions(Frijda, 1988). We opted for the Ekman emotions since it is well adopted in different emotionbased downstream NLP tasks and mostly suited to the dataset we aim to study in this shared task.8 Here the neuroticism has been reverse coded as emotional stability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
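A minimal sketch of the reverse coding mentioned in the footnote above. The 1-7 response scale and the helper name are assumptions for illustration only; they are not stated in this appendix.

```python
def reverse_code(score: float, scale_min: float = 1.0, scale_max: float = 7.0) -> float:
    """Reverse-code a trait score on a bounded scale, e.g. map neuroticism
    to emotional stability. The 1-7 default scale is an assumption,
    not taken from the paper."""
    return (scale_min + scale_max) - score

# Example: a neuroticism score of 2 on an assumed 1-7 scale becomes a stability score of 6.
assert reverse_code(2.0) == 6.0
```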
],
"back_matter": [
{
"text": "Empathic or Emotion Lexicons Lexicons # of team Emp System Emo System NRC EmoLex (Mohammad and Turney, 2010) 1 \u2713 Table 11 : Empathic or Emotion Lexicons that are used by different teams. We listed all the lexicons that teams reported in their results.",
"cite_spans": [
{
"start": 81,
"end": 108,
"text": "(Mohammad and Turney, 2010)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 113,
"end": 121,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Transformer models for textbased emotion detection: a review of bert-based approaches",
"authors": [
{
"first": "Francisca",
"middle": [
"Adoma"
],
"last": "Acheampong",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Nunoo-Mensah",
"suffix": ""
},
{
"first": "Wenyu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "Artificial Intelligence Review",
"volume": "54",
"issue": "8",
"pages": "5789--5829",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisca Adoma Acheampong, Henry Nunoo-Mensah, and Wenyu Chen. 2021. Transformer models for text- based emotion detection: a review of bert-based ap- proaches. Artificial Intelligence Review, 54(8):5789- 5829.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Text-based emotion detection: Advances, challenges, and opportunities",
"authors": [
{
"first": "Francisca",
"middle": [
"Adoma"
],
"last": "Acheampong",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wenyu",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Nunoo-Mensah",
"suffix": ""
}
],
"year": 2020,
"venue": "Engineering Reports",
"volume": "2",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisca Adoma Acheampong, Chen Wenyu, and Henry Nunoo-Mensah. 2020. Text-based emotion detection: Advances, challenges, and opportunities. Engineering Reports, 2(7):e12189.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Spanemo: Casting multi-label emotion classification as span-prediction",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Alhuzali",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.10038"
]
},
"num": null,
"urls": [],
"raw_text": "Hassan Alhuzali and Sophia Ananiadou. 2021. Spanemo: Casting multi-label emotion classi- fication as span-prediction. arXiv preprint arXiv:2101.10038.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences",
"authors": [
{
"first": "Jim",
"middle": [],
"last": "Daniel Batson",
"suffix": ""
},
{
"first": "Patricia",
"middle": [
"A"
],
"last": "Fultz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schoenrade",
"suffix": ""
}
],
"year": 1987,
"venue": "Journal of personality",
"volume": "55",
"issue": "1",
"pages": "19--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Daniel Batson, Jim Fultz, and Patricia A Schoenrade. 1987. Distress and empathy: Two qualitatively dis- tinct vicarious emotions with different motivational consequences. Journal of personality, 55(1):19-39.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A megaanalysis of personality prediction: Robustness and boundary conditions",
"authors": [
{
"first": "D",
"middle": [],
"last": "Emorie",
"suffix": ""
},
{
"first": "Joshua J",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jackson",
"suffix": ""
}
],
"year": 2022,
"venue": "Journal of Personality and Social Psychology",
"volume": "122",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emorie D Beck and Joshua J Jackson. 2022. A mega- analysis of personality prediction: Robustness and boundary conditions. Journal of Personality and Social Psychology, 122(3):523.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Persona traits identification based on myers-briggs type indicator (mbti)-a text classification approach",
"authors": [
{
"first": "Srilakshmi",
"middle": [],
"last": "Bharadwaj",
"suffix": ""
},
{
"first": "Srinidhi",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Ramamoorthy",
"middle": [],
"last": "Srinath",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 international conference on advances in computing, communications and informatics (ICACCI)",
"volume": "",
"issue": "",
"pages": "1076--1082",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srilakshmi Bharadwaj, Srinidhi Sridhar, Rahul Choud- hary, and Ramamoorthy Srinath. 2018. Persona traits identification based on myers-briggs type indicator (mbti)-a text classification approach. In 2018 in- ternational conference on advances in computing, communications and informatics (ICACCI), pages 1076-1082. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bagging predictors",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 1996,
"venue": "Machine learning",
"volume": "24",
"issue": "2",
"pages": "123--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 1996. Bagging predictors. Machine learning, 24(2):123-140.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modeling empathy and distress in reaction to news stories",
"authors": [
{
"first": "Sven",
"middle": [],
"last": "Buechel",
"suffix": ""
},
{
"first": "Anneke",
"middle": [],
"last": "Buffone",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Slaff",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4758--4765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Un- gar, and Jo\u00e3o Sedoc. 2018. Modeling empathy and distress in reaction to news stories. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4758-4765.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "SemEval-2019 task 3: EmoContext contextual emotion detection in text",
"authors": [
{
"first": "Ankush",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Kedhar",
"middle": [],
"last": "Nath Narahari",
"suffix": ""
},
{
"first": "Meghana",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Puneet",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2005"
]
},
"num": null,
"urls": [],
"raw_text": "Ankush Chatterjee, Kedhar Nath Narahari, Meghana Joshi, and Puneet Agrawal. 2019. SemEval-2019 task 3: EmoContext contextual emotion detection in text. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 39-48, Minneapo- lis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Emotion recognition on twitter: Comparative study and training a unison model",
"authors": [
{
"first": "Niko",
"middle": [],
"last": "Colneri\u010d",
"suffix": ""
},
{
"first": "Janez",
"middle": [],
"last": "Dem\u0161ar",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE transactions on affective computing",
"volume": "11",
"issue": "3",
"pages": "433--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niko Colneri\u010d and Janez Dem\u0161ar. 2018. Emotion recog- nition on twitter: Comparative study and training a unison model. IEEE transactions on affective com- puting, 11(3):433-446.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Interpersonal Reactivity Index",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark H Davis. 1980. Interpersonal Reactivity Index. Edwin Mellen Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Goemotions: A dataset of fine-grained emotions",
"authors": [
{
"first": "Dorottya",
"middle": [],
"last": "Demszky",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Movshovitz-Attias",
"suffix": ""
},
{
"first": "Jeongwoo",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Cowen",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Nemade",
"suffix": ""
},
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00547"
]
},
"num": null,
"urls": [],
"raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. Goemotions: A dataset of fine-grained emo- tions. arXiv preprint arXiv:2005.00547.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Universals and cultural differences in facial expressions of emotion. In Nebraska symposium on motivation",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1971. Universals and cultural differences in facial expressions of emotion. In Nebraska sympo- sium on motivation. University of Nebraska Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The laws of emotion",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nico",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frijda",
"suffix": ""
}
],
"year": 1988,
"venue": "American psychologist",
"volume": "43",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nico H Frijda. 1988. The laws of emotion. American psychologist, 43(5):349.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards empathetic human-robot interactions",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Bertero",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Anik",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "Ricky",
"middle": [],
"last": "Ho Yin Chan",
"suffix": ""
},
{
"first": "Farhad",
"middle": [],
"last": "Bin Siddique",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ruixi",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "173--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung, Dario Bertero, Yan Wan, Anik Dey, Ricky Ho Yin Chan, Farhad Bin Siddique, Yang Yang, Chien-Sheng Wu, and Ruixi Lin. 2016. Towards empathetic human-robot interactions. In Interna- tional Conference on Intelligent Text Processing and Computational Linguistics, pages 173-193. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A deep learning approach to modeling empathy in addiction counseling",
"authors": [
{
"first": "James",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Dogan",
"middle": [],
"last": "Can",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Zac",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Imel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Atkins",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Georgiou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2016,
"venue": "Commitment",
"volume": "111",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Gibson, Dogan Can, Bo Xiao, Zac E Imel, David C Atkins, Panayiotis Georgiou, and Shrikanth Narayanan. 2016. A deep learning approach to mod- eling empathy in addiction counseling. Commitment, 111:21.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A very brief measure of the bigfive personality domains",
"authors": [
{
"first": "D",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gosling",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rentfrow",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "William B Swann",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Research in personality",
"volume": "37",
"issue": "6",
"pages": "504--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel D Gosling, Peter J Rentfrow, and William B Swann Jr. 2003a. A very brief measure of the big- five personality domains. Journal of Research in personality, 37(6):504-528.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A very brief measure of the bigfive personality domains",
"authors": [
{
"first": "D",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gosling",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rentfrow",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Williams B Swann",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Research in Personality",
"volume": "37",
"issue": "",
"pages": "504--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel D Gosling, Peter J Rentfrow, and Williams B Swann Jr. 2003b. A very brief measure of the big- five personality domains. Journal of Research in Personality, 37:504-528.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Empathbert: A bert-based framework for demographic-aware empathy prediction",
"authors": [
{
"first": "Aparna",
"middle": [],
"last": "Bhanu Prakash Reddy Guda",
"suffix": ""
},
{
"first": "Niyati",
"middle": [],
"last": "Garimella",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chhaya",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2102.00272"
]
},
"num": null,
"urls": [],
"raw_text": "Bhanu Prakash Reddy Guda, Aparna Garimella, and Niyati Chhaya. 2021. Empathbert: A bert-based framework for demographic-aware empathy predic- tion. arXiv preprint arXiv:2102.00272.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deberta: Decoding-enhanced bert with disentangled attention",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.03654"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Suicidal ideation detection: A review of machine learning methods and applications",
"authors": [
{
"first": "Shaoxiong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Shirui",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Xue",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Computational Social Systems",
"volume": "8",
"issue": "1",
"pages": "214--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaoxiong Ji, Shirui Pan, Xue Li, Erik Cambria, Guodong Long, and Zi Huang. 2020. Suicidal ideation detection: A review of machine learning methods and applications. IEEE Transactions on Computational Social Systems, 8(1):214-226.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The Big-Five trait taxonomy: History, measurement, and theoretical perspectives",
"authors": [
{
"first": "Sanjay",
"middle": [],
"last": "Oliver P John",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Srivastava",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver P John, Sanjay Srivastava, et al. 1999. The Big- Five trait taxonomy: History, measurement, and the- oretical perspectives, volume 2. University of Cali- fornia Berkeley.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Social and linguistic behavior and its correlation to trait empathy",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Litvak",
"suffix": ""
},
{
"first": "Jahna",
"middle": [],
"last": "Otterbacher",
"suffix": ""
},
{
"first": "Chee",
"middle": [],
"last": "Siang Ang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Atkins",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media (PEOPLES)",
"volume": "",
"issue": "",
"pages": "128--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Litvak, Jahna Otterbacher, Chee Siang Ang, and David Atkins. 2016. Social and linguistic behavior and its correlation to trait empathy. In Proceedings of the Workshop on Computational Modeling of Peo- ple's Opinions, Personality, and Emotions in Social Media (PEOPLES), pages 128-137.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semeval-2018 task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval- 2018 task 1: Affect in tweets. In Proceedings of the 12th international workshop on semantic evaluation, pages 1-17.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 workshop on computational approaches to analysis and generation of emotion in text",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using me- chanical turk to create an emotion lexicon. In Pro- ceedings of the NAACL HLT 2010 workshop on com- putational approaches to analysis and generation of emotion in text, pages 26-34.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Wassa-2017 shared task on emotion intensity",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.03700"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Felipe Bravo-Marquez. 2017. Wassa-2017 shared task on emotion intensity. arXiv preprint arXiv:1708.03700.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A review of different approaches for detecting emotion from text",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ashritha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Murthy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2021,
"venue": "IOP Conference Series: Materials Science and Engineering",
"volume": "1110",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashritha R Murthy and KM Anil Kumar. 2021. A re- view of different approaches for detecting emotion from text. In IOP Conference Series: Materials Sci- ence and Engineering, volume 1110, page 012009. IOP Publishing.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Manual, a guide to the development and use of the Myers-Briggs type indicator",
"authors": [
{
"first": "Isabel",
"middle": [
"Briggs"
],
"last": "Myers",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"H"
],
"last": "Mccaulley",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Most",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabel Briggs Myers, Mary H McCaulley, and Robert Most. 1985. Manual, a guide to the development and use of the Myers-Briggs type indicator. consulting psychologists press.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A review on sentiment analysis and emotion detection from text. Social Network Analysis and Mining",
"authors": [
{
"first": "Pansy",
"middle": [],
"last": "Nandwani",
"suffix": ""
},
{
"first": "Rupali",
"middle": [],
"last": "Verma",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "11",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pansy Nandwani and Rupali Verma. 2021. A review on sentiment analysis and emotion detection from text. Social Network Analysis and Mining, 11(1):1-19.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Xed: A multilingual dataset for sentiment analysis and emotion detection",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "\u00d6hman",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "P\u00e0mies",
"suffix": ""
},
{
"first": "Kaisla",
"middle": [],
"last": "Kajava",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.01612"
]
},
"num": null,
"urls": [],
"raw_text": "Emily \u00d6hman, Marc P\u00e0mies, Kaisla Kajava, and J\u00f6rg Tiedemann. 2020. Xed: A multilingual dataset for sentiment analysis and emotion detection. arXiv preprint arXiv:2011.01612.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proc. of NAACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Emotions: A general psychoevolutionary theory. Approaches to emotion",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "197--219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. 1984. Emotions: A general psychoevo- lutionary theory. Approaches to emotion, 1984:197- 219.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A multi-channel bilstm-cnn model for multilabel emotion classification of informal text",
"authors": [
{
"first": "Zahra",
"middle": [],
"last": "Rajabi",
"suffix": ""
},
{
"first": "Amarda",
"middle": [],
"last": "Shehu",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 IEEE 14th International Conference on Semantic Computing (ICSC)",
"volume": "",
"issue": "",
"pages": "303--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zahra Rajabi, Amarda Shehu, and Ozlem Uzuner. 2020. A multi-channel bilstm-cnn model for multilabel emotion classification of informal text. In 2020 IEEE 14th International Conference on Semantic Comput- ing (ICSC), pages 303-306. IEEE.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A sentiment-aware deep learning approach for personality detection from text. Information Processing & Management",
"authors": [
{
"first": "Zhancheng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiaolei",
"middle": [],
"last": "Diao",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "58",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhancheng Ren, Qiang Shen, Xiaolei Diao, and Hao Xu. 2021. A sentiment-aware deep learning approach for personality detection from text. Information Process- ing & Management, 58(3):102532.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A computational approach to understanding empathy expressed in text-based mental health support",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Adam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Atkins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Althoff",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.08441"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Sharma, Adam S Miner, David C Atkins, and Tim Althoff. 2020a. A computational approach to un- derstanding empathy expressed in text-based mental health support. arXiv preprint arXiv:2009.08441.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Viswanath Pulabaigari, and Bjorn Gamback. 2020b. Semeval-2020 task 8: Memotion analysis-the visuolingual metaphor! arXiv preprint",
"authors": [
{
"first": "Chhavi",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Deepesh",
"middle": [],
"last": "Bhageria",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Pykl",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.03781"
]
},
"num": null,
"urls": [],
"raw_text": "Chhavi Sharma, Deepesh Bhageria, William Scott, Srinivas PYKL, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Bjorn Gamback. 2020b. Semeval-2020 task 8: Memotion analysis-the visuo- lingual metaphor! arXiv preprint arXiv:2008.03781.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Semeval-2007 task 14: Affective text",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "70--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Rada Mihalcea. 2007. Semeval- 2007 task 14: Affective text. In Proceedings of the Fourth International Workshop on Semantic Evalua- tions (SemEval-2007), pages 70-74.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Personality predictions based on user behavior on the facebook social media platform",
"authors": [
{
"first": "Hongfei",
"middle": [],
"last": "Michael M Tadesse",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "61959--61969",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael M Tadesse, Hongfei Lin, Bo Xu, and Liang Yang. 2018. Personality predictions based on user be- havior on the facebook social media platform. IEEE Access, 6:61959-61969.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Cross-Genre, Cross-Lingual, and Low-Resource Emotion Classification",
"authors": [
{
"first": "Shabnam",
"middle": [],
"last": "Tafreshi",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shabnam Tafreshi. 2021. Cross-Genre, Cross-Lingual, and Low-Resource Emotion Classification. Ph.D. thesis, The George Washington University.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "WASSA 2021 shared task: Predicting empathy and emotion in reaction to news stories",
"authors": [
{
"first": "Shabnam",
"middle": [],
"last": "Tafreshi",
"suffix": ""
},
{
"first": "Orphee",
"middle": [],
"last": "De Clercq",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Barriere",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Buechel",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Balahur",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "92--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shabnam Tafreshi, Orphee De Clercq, Valentin Barriere, Sven Buechel, Jo\u00e3o Sedoc, and Alexandra Balahur. 2021. WASSA 2021 shared task: Predicting empathy and emotion in reaction to news stories. In Proceed- ings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 92-104, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Personality prediction from social media text: An overview",
"authors": [
{
"first": "Hetal",
"middle": [],
"last": "Vora",
"suffix": ""
},
{
"first": "Mamta",
"middle": [],
"last": "Bhamare",
"suffix": ""
},
{
"first": "Dr K Ashok",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Int. J. Eng. Res",
"volume": "9",
"issue": "05",
"pages": "352--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hetal Vora, Mamta Bhamare, and Dr K Ashok Kumar. 2020. Personality prediction from social media text: An overview. Int. J. Eng. Res, 9(05):352-357.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A technology prototype system for rating therapist empathy from audio recordings in addiction counseling",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Chewei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Zac",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Imel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Atkins",
"suffix": ""
},
{
"first": "Shrikanth S",
"middle": [],
"last": "Georgiou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2016,
"venue": "PeerJ Computer Science",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Xiao, Chewei Huang, Zac E Imel, David C Atkins, Panayiotis Georgiou, and Shrikanth S Narayanan. 2016. A technology prototype system for rating ther- apist empathy from audio recordings in addiction counseling. PeerJ Computer Science, 2:e59.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "rate my therapist\": automated detection of empathy in drug and alcohol counseling via speech and language processing",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Zac",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Imel",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Panayiotis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Georgiou",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Shrikanth S",
"middle": [],
"last": "Atkins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2015,
"venue": "PloS one",
"volume": "10",
"issue": "12",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Xiao, Zac E Imel, Panayiotis G Georgiou, David C Atkins, and Shrikanth S Narayanan. 2015. \" rate my therapist\": automated detection of empathy in drug and alcohol counseling via speech and language processing. PloS one, 10(12):e0143055.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Songfang Huang, and Fei Huang. 2021. Raise a child in large language model: Towards effective and generalizable fine-tuning",
"authors": [
{
"first": "Runxin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Fuli",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chuanqi",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2109.05687"
]
},
"num": null,
"urls": [],
"raw_text": "Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, and Fei Huang. 2021. Raise a child in large language model: To- wards effective and generalizable fine-tuning. arXiv preprint arXiv:2109.05687.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Train, dev and test set splits.",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Results of the teams participating in the EMP track (Pearson correlations).",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table><tr><td/><td>Joy</td><td>Sadness</td><td>Disgust</td><td>Fear</td><td>Anger</td><td>Surprise</td></tr><tr><td>Team</td><td colspan=\"6\">P R F1 P R F1 P R F1 P R F1 P R F1 P R F1</td></tr><tr><td>LingJing CAISA</td><td colspan=\"6\">82 61 71 90 82 86 82 50 62 64 77 70 72 88 79 62 62 62 72 55 62 78 79 79 57 43 49 66 59 62 66 74 70 46 55 50</td></tr><tr><td colspan=\"7\">himanshu.1007 62 70 66 76 84 80 43 36 39 63 53 57 69 67 68 45 57 51 chenyueg 58 45 51 78 77 78 31 46 37 65 56 60 63 73 68 55 45 49</td></tr><tr><td>SURREY-CTS-NLP</td><td colspan=\"6\">73 58 64 70 86 77 38 36 37 62 54 58 69 62 66 48 57 52</td></tr><tr><td>SINAI mantis</td><td colspan=\"6\">65 45 54 74 82 78 53 36 43 69 47 56 64 71 67 47 47 48 70 48 57 71 79 75 50 21 30 62 57 59 60 72 65 49 50 49</td></tr><tr><td>blueyellow</td><td colspan=\"6\">74 52 61 68 80 74 36 32 34 56 50 53 69 67 68 42 53 47</td></tr><tr><td>bunny_gg</td><td colspan=\"6\">66 58 61 69 79 74 20 36 25 65 47 55 69 61 64 55 55 55</td></tr><tr><td>shantpat</td><td colspan=\"6\">61 42 50 75 81 78 31 39 35 65 43 52 69 65 67 41 45 43</td></tr><tr><td>PHG</td><td colspan=\"6\">71 45 56 71 84 77 31 39 34 62 43 51 70 57 62 41 60 49</td></tr><tr><td>IITP-</td><td colspan=\"6\">60 64 62 66 75 70 35 46 40 53 46 49 67 57 62 41 45 43</td></tr><tr><td>AINLPML</td><td/><td/><td/><td/><td/></tr><tr><td>PVG AI Club</td><td colspan=\"6\">44 33 38 72 79 75 24 32 27 55 40 46 61 53 57 37 47 41</td></tr></table>",
"type_str": "table",
"text": "ing the average mean of LingJing predictions on each user allow to increase the Pearson's correlations for PER and IRI from .230 and .255 to .306",
"num": null
},
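The caption fragment above describes averaging essay-level predictions per user before scoring. A minimal sketch of that aggregation, assuming pandas/scipy and hypothetical column names ('user_id', 'prediction', 'gold'); this is an illustration, not the organizers' evaluation code.

```python
import pandas as pd
from scipy.stats import pearsonr

def user_level_pearson(df: pd.DataFrame) -> float:
    """Average essay-level predictions per user, then compute Pearson's r
    against the gold user-level trait score (constant across a user's essays)."""
    per_user = df.groupby("user_id").agg(
        pred_mean=("prediction", "mean"),
        gold=("gold", "first"),  # gold trait repeats on every essay of a user
    )
    r, _ = pearsonr(per_user["pred_mean"], per_user["gold"])
    return r

# Toy example with three users, two essays each (made-up values).
toy = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3],
    "prediction": [3.1, 2.9, 4.2, 4.0, 1.5, 1.7],
    "gold": [3.0, 3.0, 4.5, 4.5, 1.0, 1.0],
})
print(round(user_level_pearson(toy), 3))
```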
"TABREF7": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">LingJing IITP</td><td>.165 .134</td><td>.337 .092</td><td colspan=\"2\">.098 .102 -.176 .246</td><td>.305 .230 .086 .047</td><td>.139 .039</td><td>.245 .377 .257 .255 .004 .011 .252 .076</td></tr><tr><td/><td colspan=\"2\">Aggreg (Org.)</td><td>.207</td><td>.506</td><td>.123</td><td>.310</td><td>.383 .306</td><td>.166</td><td>.29</td><td>.495 .374 .331</td></tr><tr><td/><td/><td colspan=\"5\">Predicted EMO labels A D F J No Sa Su</td><td/><td/></tr><tr><td>Gold EMO labels</td><td>A D F J No Sa Su</td><td colspan=\"5\">107 11 14 3 6 0 54 1 1 1 0 1 20 0 0 2 7 0 7 1 30 2 1 2 8 8 0 18 0 5 146 4 1 2 3 4 8 0 2 0 2 3 25 5 0 4 0 6 0</td><td/><td/></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF8": {
"html": null,
"content": "<table><tr><td>: Confusion matrix best performing team on EMO for the following labels: Anger (A), Disgust (D), Fear (F), Joy (J), Sadness (Sa), Surprise (Su), no emo-tion (No).</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF9": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Features and Resources</td><td/><td/></tr><tr><td>Features</td><td colspan=\"3\"># of team Emp System Emo System</td></tr><tr><td>Emotion-Enriched Word Embedding Transformer embeddings [CLS] token from Transformer model Affect/emotion/empathy lexicons Personality information Demographic infromation External dataset</td><td>1 1 2 1 8 8 8</td><td>\u2713 \u2713 \u2713 \u2713</td><td>\u2713 \u2713 \u2713 \u2713 \u2713 \u2713</td></tr></table>",
"type_str": "table",
"text": "Machine learning algorithms used by the different teams. We listed all the models that teams reported in their results.",
"num": null
},
"TABREF10": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Features and resources that are used by different teams. We listed all the features and resources that teams reported in their results.",
"num": null
},
"TABREF11": {
"html": null,
"content": "<table><tr><td>Appendices</td></tr><tr><td>A Examples Track I (EMP)</td></tr></table>",
"type_str": "table",
"text": "Feifan Yang, Xiaojun Quan, Yunyi Yang, and Jianxing Yu. 2021. Multi-document transformer for personality detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14221-14229. Naitian Zhou and David Jurgens. 2020. Condolence and empathy in online communities. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"num": null
}
}
}
}