{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:48.066859Z"
},
"title": "PVG at WASSA 2021: A Multi-Input, Multi-Task, Transformer-Based Architecture for Empathy and Distress Prediction",
"authors": [
{
"first": "Atharva",
"middle": [],
"last": "Kulkarni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Sunanda",
"middle": [],
"last": "Somwase",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Shivam",
"middle": [],
"last": "Rajput",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Manisha",
"middle": [],
"last": "Marathe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Active research pertaining to the affective phenomenon of empathy and distress is invaluable for improving human-machine interaction. Predicting intensities of such complex emotions from textual data is difficult, as these constructs are deeply rooted in the psychological theory. Consequently, for better prediction, it becomes imperative to take into account ancillary factors such as the psychological test scores, demographic features, underlying latent primitive emotions, along with the text's undertone and its psychological complexity. This paper proffers team PVG's solution to the WASSA 2021 Shared Task on Predicting Empathy and Emotion in Reaction to News Stories. Leveraging the textual data, demographic features, psychological test score, and the intrinsic interdependencies of primitive emotions and empathy, we propose a multi-input, multi-task framework for the task of empathy score prediction. Here, the empathy score prediction is considered the primary task, while emotion and empathy classification are considered secondary auxiliary tasks. For the distress score prediction task, the system is further boosted by the addition of lexical features. Our submission ranked 1 st based on the average correlation (0.545) as well as the distress correlation (0.574), and 2 nd for the empathy Pearson correlation (0.517).",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Active research pertaining to the affective phenomenon of empathy and distress is invaluable for improving human-machine interaction. Predicting intensities of such complex emotions from textual data is difficult, as these constructs are deeply rooted in the psychological theory. Consequently, for better prediction, it becomes imperative to take into account ancillary factors such as the psychological test scores, demographic features, underlying latent primitive emotions, along with the text's undertone and its psychological complexity. This paper proffers team PVG's solution to the WASSA 2021 Shared Task on Predicting Empathy and Emotion in Reaction to News Stories. Leveraging the textual data, demographic features, psychological test score, and the intrinsic interdependencies of primitive emotions and empathy, we propose a multi-input, multi-task framework for the task of empathy score prediction. Here, the empathy score prediction is considered the primary task, while emotion and empathy classification are considered secondary auxiliary tasks. For the distress score prediction task, the system is further boosted by the addition of lexical features. Our submission ranked 1 st based on the average correlation (0.545) as well as the distress correlation (0.574), and 2 nd for the empathy Pearson correlation (0.517).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, substantial progress has been made in the NLP domain, with sentiment analysis and emotion identification at its core. The advent of attention-based models and complex deep learning architectures has led to substantial headways in sentiment/ emotion classification and their intensity prediction. However, the studies addressing the prediction of affective phenomenons of empathy and distress have been relatively limited. Factors such as lack of large-scale quality labeled datasets, the weak notion of the constructs themselves, and inter-disciplinary dependencies have hindered the progress. The WASSA 2021 Shared Task on Predicting Empathy and Emotion in Reaction to News Stories (Tafreshi et al., 2021 ) provides a quality, gold-standard dataset of the empathic reactions to news stories to predict Batson's empathic concern and personal distress scores.",
"cite_spans": [
{
"start": 700,
"end": 722,
"text": "(Tafreshi et al., 2021",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Empathy, as defined by Davis (1983) , is considered as \"reactions of one individual to the observed experiences of another.\" It is more succinctly summarized by Levenson and Ruef (1992) in three key components, \"(a) knowing what another person is feeling (cognitive), (b) feeling what another person is feeling (emotional), and (c) responding compassionately to another person's distress (behavioral).\" Distress, on the other hand, as delineated by Dowling (2018) , is \"a strong aversive and self-oriented response to the suffering of others, accompanied by the desire to withdraw from a situation in order to protect oneself from excessive negative feelings.\" Empathy and distress are multifaceted interactional processes that are not always self-evident and often depend on the text's undertone. Moreover, along with the textual data, multiple psychological and demographic features also play a vital role in determining these complex emotions. Evidence by Fabi et al. (2019) suggests that empathy and distress are not independent of the basic emotions (happiness, sadness, disgust, fear, surprise, and anger) the subject feels during a given scenario. This appositeness of the primitive emotions with empathy and distress can be aptly exploited using a multi-task learning approach.",
"cite_spans": [
{
"start": 23,
"end": 35,
"text": "Davis (1983)",
"ref_id": "BIBREF5"
},
{
"start": 161,
"end": 185,
"text": "Levenson and Ruef (1992)",
"ref_id": "BIBREF16"
},
{
"start": 449,
"end": 463,
"text": "Dowling (2018)",
"ref_id": "BIBREF7"
},
{
"start": 959,
"end": 977,
"text": "Fabi et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multi-task learning has led to successes in many applications of NLP such as machine translation (McCann et al., 2017) , speech recognition (Ar\u0131k et al., 2017) , representation learning (Hashimoto et al., 2017) , semantic parsing (Peng et al., 2017) , and information retrieval (Liu et al., 2015) to name a few. NLP literature (Standley et al., 2020) suggests that under specific circumstances and with well-crafted tasks, multi-task learning frameworks can often aid models to achieve state-of-the-art performances. Standley et al. (2020) further asserts that seemingly related tasks can often have similar underlying dynamics. With the same intuition, Deep et al. (2020) designed a multi-task learning model for sentiment classification and their corresponding intensity predictions. Building on these findings, we propose a multi-input, multitask, transformer-based architecture for the prediction of empathy and distress scores. The multiinput nature of the framework aggregates information from textual, categorical, and numeric data to generate robust representations for the regression task at hand. Exploiting the latent interdependencies between primitive emotions and empathy/ distress, we formulate the multi-task learning problem as a combination of classification and regression. The model simultaneously classifies the text into its correct basic emotion, detects if it exhibits high empathy/ distress, and accordingly predicts its appropriate empathy/ distress intensity score according to Batson's scale (Batson et al., 1987) . This multi-input, multi-task learning paradigm is further bolstered with the addition of NRC Emotion Intensity Lexicons (Mohammad, 2018b) ; NRC Valence, Arousal, and Dominance Lexicons (Mohammad, 2018a) ; and relevant features from Empath (Fast et al., 2016) . Moreover, our proposed models have less than 110k trainable parameters and are still able to achieve relatively high Pearson's correlation of 0.517 and 0.574 for empathy and distress, respectively, and 0.545 for average correlation, outperforming other teams.",
"cite_spans": [
{
"start": 97,
"end": 118,
"text": "(McCann et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 140,
"end": 159,
"text": "(Ar\u0131k et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 186,
"end": 210,
"text": "(Hashimoto et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 230,
"end": 249,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 278,
"end": 296,
"text": "(Liu et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 327,
"end": 350,
"text": "(Standley et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 517,
"end": 539,
"text": "Standley et al. (2020)",
"ref_id": "BIBREF29"
},
{
"start": 1520,
"end": 1541,
"text": "(Batson et al., 1987)",
"ref_id": "BIBREF3"
},
{
"start": 1664,
"end": 1681,
"text": "(Mohammad, 2018b)",
"ref_id": "BIBREF23"
},
{
"start": 1729,
"end": 1746,
"text": "(Mohammad, 2018a)",
"ref_id": "BIBREF21"
},
{
"start": 1783,
"end": 1802,
"text": "(Fast et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Over the last few years, earnest endeavours have been made in the NLP community to analyze empathy and distress. Earlier work in empathy mostly addressed the presence or absence of empathy in spoken dialogue (Gibson et al., 2015; Alam et al., 2016; Fung et al., 2016; P\u00e9rez-Rosas et al., 2017; Alam et al., 2018) . For text-based empathy prediction, Buechel et al. (2018) laid a firm foundation for predicting Batson's empathic concern and personal distress scores in reaction to news articles. They present the first publicly available gold-standard dataset for text-based empathy and distress prediction. Sharma et al. (2020) plated a computational approach for understanding empathy in text-based health support. They developed a multi-task RoBERTa-based bi-encoder paradigm for identifying empathy in conversations and extracting rationales underlying its predictions. Wagner (2020) analysed the linguistic undertones for empathy present in avid fiction readers. Computational work done for predicting distress is relatively modest. Shapira et al. (2020) analysed textual data to examine associations between linguistic features and clients distress during psychotherapy. They combined linguistic features like positive and negative emotion words with psychological measures like Outcome Questionnaire-45 (Lambert et al., 2004) and Outcome Rating Scale (Miller, 2003) . Zhou and Jurgens (2020) studied the affiliation between distress, condolence, and empathy in online support groups using nested regression models.",
"cite_spans": [
{
"start": 208,
"end": 229,
"text": "(Gibson et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 230,
"end": 248,
"text": "Alam et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 249,
"end": 267,
"text": "Fung et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 268,
"end": 293,
"text": "P\u00e9rez-Rosas et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 294,
"end": 312,
"text": "Alam et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 350,
"end": 371,
"text": "Buechel et al. (2018)",
"ref_id": "BIBREF4"
},
{
"start": 607,
"end": 627,
"text": "Sharma et al. (2020)",
"ref_id": "BIBREF28"
},
{
"start": 1037,
"end": 1058,
"text": "Shapira et al. (2020)",
"ref_id": "BIBREF27"
},
{
"start": 1292,
"end": 1331,
"text": "Questionnaire-45 (Lambert et al., 2004)",
"ref_id": null
},
{
"start": 1357,
"end": 1371,
"text": "(Miller, 2003)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The WASSA 2021 Shared Task (Tafreshi et al., 2021) provides an extended dataset to the one compiled by Buechel et al. (2018) . The dataset has a total of 14 features spanning textual, categorical, and numeric data types. Essays represent the subject's empathic reactions to news stories he/she has read. The demographic features of gender, race, education, and the essay's gold-standard emotion label cover the categorical input features. The numeric features include the subject's age and income, followed by personality traits scores (conscientiousness, openness, extraversion, agreeableness, stability) and interpersonal reactivity index (IRI) scores (fantasy, perspective taking, empathetic concern, personal distress). The train-development-test split of the dataset is illustrated in Table 1 .",
"cite_spans": [
{
"start": 27,
"end": 50,
"text": "(Tafreshi et al., 2021)",
"ref_id": null
},
{
"start": 103,
"end": 124,
"text": "Buechel et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 790,
"end": 797,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Description",
"sec_num": "3"
},
{
"text": "In this section, we posit two multi-task learning frameworks for empathy and distress score prediction. They are elucidated in detail as follows: Figure 1 : System architecture for empathy score prediction. The Dense layers in red have an 2 kernel regularization applied to them.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Methodology",
"sec_num": "4"
},
{
"text": "Figure 1 depicts the system architecture for empathy score prediction. We formulate the task of empathy score prediction as a multi-input, multitask learning problem. The proposed multi-task learning framework aims to leverage the empathy bin 1 and the text's emotion to predict its empathy score. Here, the empathy score prediction is treated as the primary task, whereas emotion and empathy classification are considered secondary auxiliary tasks. The multi-input, multi-task nature of the framework efficiently fuses the diverse set of information (textual, categorical, and numeric) provided in the dataset to generate robust representations for empathy score prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy Score Prediction",
"sec_num": "4.1"
},
{
"text": "For the task of empathy and emotion classification, we make use of the pre-trained RoBERTa base model (Liu et al., 2019) . The contextualized representations generated by RoBERTa help extract the context-aware information and capture the undertone of the text better than standard deep learning models. For each word in the essay, we extract the default pre-trained embeddings from the last hidden layer of RoBERTa. The 768-dimensional word 1 empathy bin is a feature given in the training dataset, where its value is 1 if empathy score greater than or equal to 4.0, and 0 if empathy score is less than 4.0. Thus, an essay exhibits high empathy if its empathy bin is 1 and exhibits low empathy if its empathy bin is 0. The same analogy is true for distress bin. Figure 2 : System architecture for distress score prediction. The Dense layers in red have an 2 kernel regularization applied to them.",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 762,
"end": 770,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Empathy Score Prediction",
"sec_num": "4.1"
},
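The essay-representation step described above can be sketched as follows. This is a minimal illustration assuming the HuggingFace transformers and PyTorch packages; the paper does not name its implementation libraries, so the exact calls and the masked mean-pooling are our assumptions.

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

# Pre-trained RoBERTa-base; its weights are kept frozen during training (Section 5.2).
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
roberta = RobertaModel.from_pretrained("roberta-base")
roberta.eval()

def essay_embedding(essay: str) -> torch.Tensor:
    """Average the 768-dim last-hidden-layer word embeddings into one essay vector."""
    inputs = tokenizer(essay, max_length=200, truncation=True,
                       padding="max_length", return_tensors="pt")
    with torch.no_grad():
        outputs = roberta(**inputs)
    hidden = outputs.last_hidden_state              # shape (1, 200, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)   # ignore padded positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # shape (1, 768)
```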
{
"text": "embeddings are averaged to generate essay-level representations, followed by a hidden layer (128 units) for dimensionality reduction. The model further branches into two parallel fully-connected layers (16 units each), which form the task-specific layers for empathy and emotion classification, respectively. Let T 1 and T 2 denote the generated task-specific representations for empathy and emotion classification, respectively, of dimension d 1 . Finally, a classification layer (binary classification for empathy and multi-class classification for emotion) is added to each of the task-specific layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy Score Prediction",
"sec_num": "4.1"
},
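A sketch of the shared text branch and the two auxiliary classification heads, using the layer sizes quoted above (128 and 16 units). We use tf.keras for illustration since the paper does not state its framework; the number of emotion classes, the head activations, and the placement of the ℓ2 regularizer are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

NUM_EMOTIONS = 7  # assumed number of emotion labels in the dataset

# Input: the 768-dim averaged RoBERTa essay embedding.
essay_input = tf.keras.Input(shape=(768,), name="essay_embedding")

# Shared 128-unit layer for dimensionality reduction (regularizer placement is illustrative).
shared = layers.Dense(128, activation="tanh",
                      kernel_regularizer=regularizers.l2(5e-4))(essay_input)

# Task-specific representations T_1 (empathy) and T_2 (emotion), 16 units each.
t1 = layers.Dense(16, activation="tanh", name="T1_empathy")(shared)
t2 = layers.Dense(16, activation="tanh", name="T2_emotion")(shared)

# Auxiliary heads: binary empathy-bin classifier and multi-class emotion classifier.
empathy_bin_out = layers.Dense(1, activation="sigmoid", name="empathy_bin")(t1)
emotion_out = layers.Dense(NUM_EMOTIONS, activation="softmax", name="emotion")(t2)
```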
{
"text": "To incorporate the demographic information encoded in the categorical variables, we make use of entity embeddings (Guo and Berkhahn, 2016) . Formally, entity embeddings are domain-specific multi-dimensional representations of categorical variables, automatically learned by a neural network when trained on a particular task. For each of the demographic features (gender, education, race, and age 2 ), 3-dimensional entity embeddings are generated. All the resultant embeddings are flattened, concatenated, and passed through two fullyconnected layers (32 and 16 units, respectively), generating a layer having relevant information from each categorical input. Let this representation be denoted as C of dimension d 2 .",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "(Guo and Berkhahn, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy Score Prediction",
"sec_num": "4.1"
},
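A possible realization of the entity-embedding branch for the categorical features, again in tf.keras; the categorical inputs are assumed to be integer-encoded, and the category cardinalities below are placeholders rather than values from the dataset documentation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def categorical_branch(cardinalities):
    """3-dim entity embeddings per categorical feature, flattened, concatenated,
    and compressed through 32- and 16-unit fully-connected layers (representation C)."""
    inputs, embedded = [], []
    for name, n_categories in cardinalities.items():
        inp = tf.keras.Input(shape=(1,), name=name)
        emb = layers.Embedding(input_dim=n_categories, output_dim=3)(inp)
        inputs.append(inp)
        embedded.append(layers.Flatten()(emb))
    x = layers.Concatenate()(embedded)
    x = layers.Dense(32, activation="tanh")(x)
    C = layers.Dense(16, activation="tanh", name="C_categorical")(x)
    return inputs, C

# Illustrative cardinalities only.
cat_inputs, C = categorical_branch({"gender": 3, "education": 7, "race": 6, "age_bin": 4})
```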
{
"text": "The numeric inputs of the personality and the Interpersonal Reactivity Index (IRI) scores are incorporated in the model by passing them individually through a single hidden layer (8 units). The results are concatenated and further passed on to a fully-connected layer (32 units) to generate their combined representations. Let this representation be denoted as N of dimension d 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy Score Prediction",
"sec_num": "4.1"
},
{
"text": "The task-specific layers for empathy and emotion classification and the representations generated by the final hidden layer of the entity embeddings and the numeric psychological score inputs are concatenated as given in equation 1 to generate the final representation F 1 for empathy score prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy Score Prediction",
"sec_num": "4.1"
},
{
"text": "F 1 = [T 1 ; T 2 ; C; N ] \u2208 R d 1 +d 2 +d 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy Score Prediction",
"sec_num": "4.1"
},
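Continuing the tf.keras sketch from the snippets above (reusing essay_input, t1, t2, C, cat_inputs, empathy_bin_out, and emotion_out), the numeric branch and the concatenation of equation 1 might look as follows. Grouping the personality traits and IRI sub-scores into two vectors, and the linear regression head, are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Numeric branch: personality and IRI scores through 8-unit layers, then a joint
# 32-unit layer (representation N).
personality_in = tf.keras.Input(shape=(5,), name="personality")  # 5 trait scores
iri_in = tf.keras.Input(shape=(4,), name="iri")                  # 4 IRI sub-scales
p = layers.Dense(8, activation="tanh")(personality_in)
q = layers.Dense(8, activation="tanh")(iri_in)
N = layers.Dense(32, activation="tanh", name="N_numeric")(layers.Concatenate()([p, q]))

# F_1 = [T_1; T_2; C; N] -> 16-unit hidden layer -> empathy score regressor.
f1 = layers.Concatenate(name="F1")([t1, t2, C, N])
f1 = layers.Dense(16, activation="tanh")(f1)
empathy_score_out = layers.Dense(1, name="empathy_score")(f1)

model = tf.keras.Model(
    inputs=[essay_input] + cat_inputs + [personality_in, iri_in],
    outputs=[empathy_score_out, empathy_bin_out, emotion_out])
```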
{
"text": "It is further passed to another hidden layer (16 units). Thus, this layer contains the compressed information from different knowledge views of the input data. A final regression layer is added for the empathy score prediction task. The multi-input, multi-task model is trained end-to-end with the objective loss calculated as the sum of the loss for each of the three tasks. Figure 2 depicts the system architecture for distress score prediction. The distress score prediction model is the same as that of the empathy score prediction model, but with the addition of handcrafted lexical features. We use 6 NRC Emotion Intensity Lexicons (Mohammad, 2018b) ; 2 NRC Valence, Arousal, and Dominance Lexicons (Mohammad, 2018a); and 15 features from Empath (Fast et al., 2016) as stated in Table 2 . These features are chosen as they exhibit a high Pearson correlation with the training data. For each essay, the respective NRC and Empath score is calculated as the sum of each word's score in the essay. The NRC lexicons and Empath features are passed to a single hidden layer (8 and 16 units for NRC and Empath, respectively.) independently before concatenation. The resultant representation is further passed through another hidden layer (48 units). Let this representation be denoted by L of dimension d 4 . It is concatenated with the final layer representations from other inputs to generate the final representation F 2 for distress score prediction, as given by equation 2.",
"cite_spans": [
{
"start": 638,
"end": 655,
"text": "(Mohammad, 2018b)",
"ref_id": "BIBREF23"
},
{
"start": 752,
"end": 771,
"text": "(Fast et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 376,
"end": 384,
"text": "Figure 2",
"ref_id": null
},
{
"start": 785,
"end": 792,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Empathy Score Prediction",
"sec_num": "4.1"
},
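The per-essay lexicon features can be computed as word-score sums, as described above. The sketch below assumes the empath package's Empath().analyze interface and a simple word-to-score file layout for the NRC lexicons; the actual NRC file format and the selected categories from Table 2 are not reproduced here.

```python
from collections import defaultdict
from empath import Empath

def load_nrc_lexicon(path):
    """Load an NRC intensity lexicon into a word -> score dict.
    A two-column 'word<TAB>score' layout is assumed for illustration."""
    scores = defaultdict(float)
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, score = line.rstrip("\n").split("\t")[:2]
            scores[word] = float(score)
    return scores

def nrc_score(essay, lexicon):
    """Per-essay NRC feature: the sum of each word's score in the essay."""
    return sum(lexicon.get(word, 0.0) for word in essay.lower().split())

# Empath category scores; normalize=True yields proportions, which can then be
# converted to percentages as described in Section 5.1.
empath = Empath()
def empath_features(essay, categories):
    return empath.analyze(essay, categories=categories, normalize=True)
```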
{
"text": "F 2 = [T 1 ; T 2 ; C; N ; L] \u2208 R d 1 +d 2 +d 3 +d 4 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distress Score Prediction",
"sec_num": "4.2"
},
{
"text": "5 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distress Score Prediction",
"sec_num": "4.2"
},
{
"text": "We applied standard text cleaning steps for each essay in the dataset, such as removing the punctuations, special characters, digits, single characters, multiple spaces, and accented words. The essays are further normalized by removing wordplay, replacing acronyms with full forms, and expanding contractions 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "5.1"
},
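A rough sketch of the cleaning step, assuming simple regular expressions; accent stripping approximates the removal of accented words, and contraction expansion (footnote 3, pycontractions) is not shown.

```python
import re
import unicodedata

def clean_essay(text: str) -> str:
    """Remove punctuation, special characters, digits, single characters,
    multiple spaces, and accented characters before tokenization."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    text = re.sub(r"[^a-zA-Z\s]", " ", text)    # punctuation, special characters, digits
    text = re.sub(r"\b[a-zA-Z]\b", " ", text)   # single characters
    text = re.sub(r"\s+", " ", text).strip()    # multiple spaces
    return text
```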
{
"text": "Each essay is tokenized and padded to a maximum length of 200 tokens. Longer essays are truncated. Each Empath feature is converted into its percentage value. For distress score prediction, the numeric features, NRC lexicon scores, and Empath features are standardized by removing the mean and scaling to unit variance before being passed to the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "5.1"
},
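Tokenization/padding to 200 tokens and the standardization of numeric and lexical features could be done as below; the RoBERTa tokenizer and scikit-learn's StandardScaler are assumptions about the toolchain, and fitting the scaler on the training split only is our reading of standard practice.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

def prepare_inputs(train_essays, dev_essays, numeric_train, numeric_dev):
    """Tokenize and pad essays to 200 tokens; standardize numeric features
    to zero mean and unit variance."""
    enc_train = tokenizer(train_essays, max_length=200, padding="max_length",
                          truncation=True, return_tensors="np")
    enc_dev = tokenizer(dev_essays, max_length=200, padding="max_length",
                        truncation=True, return_tensors="np")
    scaler = StandardScaler()
    num_train = scaler.fit_transform(np.asarray(numeric_train, dtype=float))
    num_dev = scaler.transform(np.asarray(numeric_dev, dtype=float))
    return enc_train, enc_dev, num_train, num_dev
```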
{
"text": "Given the small amount of data, the weights of the RoBERTa layers were freezed and not updated during the training. The multi-task model is trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1 \u00d7 10 \u22123 and a batch size of 32. We used Hyperbolic Tangent (tanh) activation for all the hidden layers as it performed better than ReLU activation and its other variants. The model is trained for 200 epochs with early stopping applied if the validation loss does not improve after 20 epochs. Furthermore, the learning rate is reduced by a factor of 0.2 if validation loss does not decline after ten successive epochs. The model with the best validation loss is selected. 2 kernel regularizer of 5 \u00d7 10 \u22124 applied to certain hidden layers as shown in Figures 1 and 2 . A dropout of probability 0.2 is applied after the average pooling of contextual RoBERTa embeddings. Our code is available at our GitHub repository. 4",
"cite_spans": [],
"ref_spans": [
{
"start": 782,
"end": 797,
"text": "Figures 1 and 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Setting and Training Environment",
"sec_num": "5.2"
},
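The training configuration above maps onto tf.keras roughly as follows, continuing the earlier model sketch. The individual loss functions (MSE for regression, cross-entropy for the classification heads) and the dictionary-style inputs/targets are assumptions; only the optimizer, learning-rate schedule, early stopping, epochs, and batch size are taken from the text.

```python
import tensorflow as tf

# Sum of the three task losses (equal weights), Adam with lr = 1e-3.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss={"empathy_score": "mse",                        # primary regression task
          "empathy_bin": "binary_crossentropy",          # auxiliary empathy bin
          "emotion": "sparse_categorical_crossentropy"}, # auxiliary emotion label
    loss_weights={"empathy_score": 1.0, "empathy_bin": 1.0, "emotion": 1.0})

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                     restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=10),
]

# train_inputs/train_targets and dev_inputs/dev_targets are placeholder dicts
# keyed by the input and output layer names defined earlier.
model.fit(train_inputs, train_targets,
          validation_data=(dev_inputs, dev_targets),
          epochs=200, batch_size=32, callbacks=callbacks)
```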
{
"text": "The performance comparison of the various models on the validation data is reported in Table 3 . As illustrated in the Table 3 , we compare the performances of a RoBERTa based regression model, a multi-input model (text + categorical + numeric); a multi-input (text + categorical + numeric), multitask model; and a multi-input (text + categorical + numeric), multi-task model with added lexical features (NRC + Empath). The systems submitted by our team are highlighted in Table 3 . It is evident from Table 3 that the addition of categorical and numeric input features leads to an appreciable improvement in the models performance. This further attests that the demographic features and the psychological scores contribute valuable information for predicting scores of complex emotions like empathy and distress. The performance is further improved by adopting a multi-task approach, thus, reinstating our belief in the interdependencies between the primitive emotions and the complex emotions of empathy and distress. The addition of lexical features, however, leads to an improvement for distress prediction but a decrease in performance for empathy prediction. This may be explained by the theories in the affective neuroscience literature, wherein empathy is considered a neocortex emotion, describing it as an emotional, cognitive, and behavioral process. Thus, ones empathic quotient is reflected in how one expresses ones feelings and the undertone of it, rather than the mere words one uses. Moreover, as stated by Sedoc et al. (2020) in their work, there exist no clear set of lexicons that can accurately distinguish empathy from selffocused distress. Another reason for the decrease in the performance for empathy score prediction may be attributed to the fact that the Empath features for empathy, as reported in Table 2 lack some interpretability from a human perspective. Empath features such as domestic work, party, celebration, leisure, and air travel are not innately empathic categories and perhaps show high correlation due to corpus-based topical bias. The Empath features for distress, on the other hand, seem quite relevant and thus, might explain the increase in performance for distress prediction. We encourage further research in this direction.",
"cite_spans": [
{
"start": 1524,
"end": 1543,
"text": "Sedoc et al. (2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 119,
"end": 126,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 473,
"end": 480,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 502,
"end": 509,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 1826,
"end": 1833,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
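Since all comparisons in Table 3 use Pearson's correlation, a minimal evaluation helper with SciPy (an assumed dependency) is shown for completeness.

```python
from scipy.stats import pearsonr

def pearson_score(y_true, y_pred):
    """Pearson correlation (and p-value) between gold and predicted scores."""
    r, p_value = pearsonr(y_true, y_pred)
    return r, p_value
```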
{
"text": "In this work, we propose a multi-input, multi-task, transformer-based architecture to predict Batson's empathic concern and personal distress scores. Leveraging the dependency between the basic emotions and empathy/ distress, as well as incorporating textual, categorical, and numeric data, our proposed model generates robust representations for the regression task at hand. The addition of certain lexical features further improves the model's performance for distress score prediction. Our submission ranked 1 st based on average correlation (0.545) as well as distress correlation (0.574), and 2 nd for empathy Pearson correlation (0.517) on the test data. As for the future work of this research, a weighted loss scheme could be employed to enhance the results. From a psychological and linguistic standpoint, features such as part-of-speech tags, syntax parse tree, and the text's subjectivity and polarity scores could also be exploited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "In the dataset age is given as a numeric input. We split it into intervals of below 25, 26-40, 41-60, and 61 and above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pypi.org/project/ pycontractions/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ mr-atharva-kulkarni/ WASSA-2021-Shared-Task",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Can we detect speakers' empathy?: A real-life case study",
"authors": [
{
"first": "F",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Danieli",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2016,
"venue": "7th IEEE International Conference on Cognitive Infocommunications (CogInfo-Com)",
"volume": "",
"issue": "",
"pages": "59--000064",
"other_ids": {
"DOI": [
"10.1109/CogInfoCom.2016.7804525"
]
},
"num": null,
"urls": [],
"raw_text": "F. Alam, M. Danieli, and G. Riccardi. 2016. Can we detect speakers' empathy?: A real-life case study. In 2016 7th IEEE International Confer- ence on Cognitive Infocommunications (CogInfo- Com), pages 000059-000064.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Annotating and modeling empathy in spoken conversations",
"authors": [
{
"first": "Firoj",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Morena",
"middle": [],
"last": "Danieli",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2018,
"venue": "Comput. Speech Lang",
"volume": "50",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.csl.2017.12.003"
]
},
"num": null,
"urls": [],
"raw_text": "Firoj Alam, Morena Danieli, and Giuseppe Riccardi. 2018. Annotating and modeling empathy in spoken conversations. Comput. Speech Lang., 50(C):4061.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Deep voice: Real-time neural text-to-speech",
"authors": [
{
"first": "",
"middle": [],
"last": "Sercan\u00f6",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Ar\u0131k",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Chrzanowski",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Coates",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Diamos",
"suffix": ""
},
{
"first": "Yongguo",
"middle": [],
"last": "Gibiansky",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Shubho",
"middle": [],
"last": "Raiman",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Sengupta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shoeybi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "195--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sercan\u00d6. Ar\u0131k, Mike Chrzanowski, Adam Coates, Gre- gory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, Shubho Sengupta, and Mohammad Shoeybi. 2017. Deep voice: Real-time neural text-to-speech. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 195-204, In- ternational Convention Centre, Sydney, Australia. PMLR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences",
"authors": [
{
"first": "Jim",
"middle": [],
"last": "Daniel Batson",
"suffix": ""
},
{
"first": "Patricia",
"middle": [
"A"
],
"last": "Fultz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schoenrade",
"suffix": ""
}
],
"year": 1987,
"venue": "Journal of personality",
"volume": "55",
"issue": "1",
"pages": "19--39",
"other_ids": {
"DOI": [
"10.1111/j.1467-6494.1987.tb00426.x"
]
},
"num": null,
"urls": [],
"raw_text": "C Daniel Batson, Jim Fultz, and Patricia A Schoenrade. 1987. Distress and empathy: Two qualitatively dis- tinct vicarious emotions with different motivational consequences. Journal of personality, 55(1):19-39.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Modeling empathy and distress in reaction to news stories",
"authors": [
{
"first": "Sven",
"middle": [],
"last": "Buechel",
"suffix": ""
},
{
"first": "Anneke",
"middle": [],
"last": "Buffone",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Slaff",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4758--4765",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1507"
]
},
"num": null,
"urls": [],
"raw_text": "Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Un- gar, and Jo\u00e3o Sedoc. 2018. Modeling empathy and distress in reaction to news stories. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4758-4765, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The effects of dispositional empathy on emotional reactions and helping: A multidimensional approach",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 1983,
"venue": "Journal of personality",
"volume": "51",
"issue": "2",
"pages": "167--184",
"other_ids": {
"DOI": [
"10.1111/j.1467-6494.1983.tb00860.x"
]
},
"num": null,
"urls": [],
"raw_text": "Mark H Davis. 1983. The effects of dispositional empathy on emotional reactions and helping: A multidimensional approach. Journal of personality, 51(2):167-184.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Related tasks can share! a multi-task framework for affective language",
"authors": [
{
"first": "Md",
"middle": [
"Shad"
],
"last": "Kumar Shikhar Deep",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.02154"
]
},
"num": null,
"urls": [],
"raw_text": "Kumar Shikhar Deep, Md Shad Akhtar, Asif Ekbal, and Pushpak Bhattacharyya. 2020. Related tasks can share! a multi-task framework for affective lan- guage. arXiv preprint arXiv:2002.02154.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Compassion does not fatigue! The Canadian Veterinary Journal",
"authors": [
{
"first": "Trisha",
"middle": [],
"last": "Dowling",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "59",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trisha Dowling. 2018. Compassion does not fatigue! The Canadian Veterinary Journal, 59(7):749.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Empathic concern and personal distress depend on situational but not dispositional factors",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Fabi",
"suffix": ""
},
{
"first": "Lydia",
"middle": [
"Anna"
],
"last": "Weber",
"suffix": ""
},
{
"first": "Hartmut",
"middle": [],
"last": "Leuthold",
"suffix": ""
}
],
"year": 2019,
"venue": "PLoS ONE",
"volume": "14",
"issue": "11",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0225102"
]
},
"num": null,
"urls": [],
"raw_text": "Sarah Fabi, Lydia Anna Weber, and Hartmut Leuthold. 2019. Empathic concern and personal distress de- pend on situational but not dispositional factors. PLoS ONE, 14(11):e0225102.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Empath: Understanding topic signals in largescale text",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Fast",
"suffix": ""
},
{
"first": "Binbin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"S"
],
"last": "Bernstein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2858036.2858535"
]
},
"num": null,
"urls": [],
"raw_text": "Ethan Fast, Binbin Chen, and Michael S. Bernstein. 2016. Empath: Understanding topic signals in large- scale text. In Proceedings of the 2016 CHI Confer- ence on Human Factors in Computing Systems, CHI '16, page 46474657, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Zara the Supergirl: An empathetic personality recognition system",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Anik",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "Ruixi",
"middle": [],
"last": "Farhad Bin Siddique",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ho Yin Ricky",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "87--91",
"other_ids": {
"DOI": [
"10.18653/v1/N16-3018"
]
},
"num": null,
"urls": [],
"raw_text": "Pascale Fung, Anik Dey, Farhad Bin Siddique, Ruixi Lin, Yang Yang, Yan Wan, and Ho Yin Ricky Chan. 2016. Zara the Supergirl: An empathetic personal- ity recognition system. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demon- strations, pages 87-91, San Diego, California. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Predicting therapist empathy in motivational interviews using language features inspired by psycholinguistic norms",
"authors": [
{
"first": "James",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Malandrakis",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Romero",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Shrikanth S",
"middle": [],
"last": "Atkins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth annual conference of the international speech communication association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Gibson, Nikolaos Malandrakis, Francisco Romero, David C Atkins, and Shrikanth S Narayanan. 2015. Predicting therapist empathy in motivational interviews using language features in- spired by psycholinguistic norms. In Sixteenth an- nual conference of the international speech commu- nication association.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Entity embeddings of categorical variables",
"authors": [
{
"first": "Cheng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Berkhahn",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng Guo and Felix Berkhahn. 2016. Entity embed- dings of categorical variables.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A joint many-task model: Growing a neural network for multiple NLP tasks",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1923--1933",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1206"
]
},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsu- ruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923-1933, Copenhagen, Denmark. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The outcome questionnaire-45. The use of psychological testing for treatment planning and outcomes assessment: Instruments for adults",
"authors": [
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Ann",
"middle": [
"T"
],
"last": "Lambert",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"M"
],
"last": "Gregersen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Burlingame",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J Lambert, Ann T Gregersen, and Gary M Burlingame. 2004. The outcome questionnaire-45. The use of psychological testing for treatment plan- ning and outcomes assessment: Instruments for adults.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Empathy: a physiological substrate",
"authors": [
{
"first": "W",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Anna",
"middle": [
"M"
],
"last": "Levenson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ruef",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of personality and social psychology",
"volume": "63",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1037/0022-3514.63.2.234"
]
},
"num": null,
"urls": [],
"raw_text": "Robert W Levenson and Anna M Ruef. 1992. Empathy: a physiological substrate. Journal of personality and social psychology, 63(2):234.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "912--921",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1092"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-yi Wang. 2015. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 912-921, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learned in translation: Contextualized word vectors",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Proceedings of the 31st International Conference on Neural Informa- tion Processing Systems, NIPS'17, page 62976308, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The outcome rating scale: A preliminary study of the reliability, validity, and feasibility of a brief visual analog measure",
"authors": [
{
"first": "D",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of brief Therapy",
"volume": "2",
"issue": "2",
"pages": "91--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott D Miller. 2003. The outcome rating scale: A pre- liminary study of the reliability, validity, and feasi- bility of a brief visual analog measure. Journal of brief Therapy, 2(2):91-100.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Obtaining reliable human ratings of valence, arousal, and dominance for",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad. 2018a. Obtaining reliable human rat- ings of valence, arousal, and dominance for 20,000",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "English words",
"authors": [],
"year": null,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "174--184",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1017"
]
},
"num": null,
"urls": [],
"raw_text": "English words. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 174- 184, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Word affect intensities",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad. 2018b. Word affect intensities. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep multitask learning for semantic dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2037--2048",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1186"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Sam Thomson, and Noah A. Smith. 2017. Deep multitask learning for semantic dependency parsing. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2037-2048, Van- couver, Canada. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Understanding and predicting empathic behavior in counseling therapy",
"authors": [
{
"first": "Ver\u00f3nica",
"middle": [],
"last": "P\u00e9rez-Rosas",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Resnicow",
"suffix": ""
},
{
"first": "Satinder",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "An",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1426--1435",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1131"
]
},
"num": null,
"urls": [],
"raw_text": "Ver\u00f3nica P\u00e9rez-Rosas, Rada Mihalcea, Kenneth Resni- cow, Satinder Singh, and Lawrence An. 2017. Un- derstanding and predicting empathic behavior in counseling therapy. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426- 1435, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning word ratings for empathy and distress from documentlevel user responses",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Buechel",
"suffix": ""
},
{
"first": "Yehonathan",
"middle": [],
"last": "Nachmany",
"suffix": ""
},
{
"first": "Anneke",
"middle": [],
"last": "Buffone",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1664--1673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Sedoc, Sven Buechel, Yehonathan Nachmany, An- neke Buffone, and Lyle Ungar. 2020. Learning word ratings for empathy and distress from document- level user responses. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 1664-1673, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Using computerized text analysis to examine associations between linguistic features and clients distress during psychotherapy",
"authors": [
{
"first": "Natalie",
"middle": [],
"last": "Shapira",
"suffix": ""
},
{
"first": "Gal",
"middle": [],
"last": "Lazarus",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Gilboa-Schechtman",
"suffix": ""
},
{
"first": "Rivka",
"middle": [],
"last": "Tuval-Mashiach",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Juravski",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Atzil-Slonim",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of counseling psychology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://content.apa.org/doi/10.1037/cou0000440"
]
},
"num": null,
"urls": [],
"raw_text": "Natalie Shapira, Gal Lazarus, Yoav Goldberg, Eva Gilboa-Schechtman, Rivka Tuval-Mashiach, Daniel Juravski, and Dana Atzil-Slonim. 2020. Using com- puterized text analysis to examine associations be- tween linguistic features and clients distress during psychotherapy. Journal of counseling psychology.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A computational approach to understanding empathy expressed in text-based mental health support",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Miner",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Atkins",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Althoff",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5263--5276",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.425"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to un- derstanding empathy expressed in text-based men- tal health support. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263-5276, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Which tasks should be learned together in multi-task learning?",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Standley",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zamir",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Leonidas",
"middle": [],
"last": "Guibas",
"suffix": ""
},
{
"first": "Jitendra",
"middle": [],
"last": "Malik",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Savarese",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 37th International Conference on Machine Learning",
"volume": "119",
"issue": "",
"pages": "9120--9132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Standley, Amir Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. 2020. Which tasks should be learned together in multi-task learning? In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 9120-9132. PMLR.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Jo\u00e3o Sedoc, and Alexandra Balahur. 2021. WASSA2021 Shared Task: Predicting Empathy and Emotion in Reaction to News Stories",
"authors": [
{
"first": "Shabnam",
"middle": [],
"last": "Tafreshi",
"suffix": ""
},
{
"first": "Orph\u00e9e",
"middle": [],
"last": "De Clercq",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Barriere",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Buechel",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shabnam Tafreshi, Orph\u00e9e De Clercq, Valentin Bar- riere, Sven Buechel, Jo\u00e3o Sedoc, and Alexandra Bal- ahur. 2021. WASSA2021 Shared Task: Predicting Empathy and Emotion in Reaction to News Stories. In Proceedings of the Eleventh Workshop on Compu- tational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "It's not what you say, it's how you say it: a linguistic analysis of empathy in fiction readers",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.7282/t3-3v3h-9345"
]
},
"num": null,
"urls": [],
"raw_text": "Rachel Wagner. 2020. It's not what you say, it's how you say it: a linguistic analysis of empathy in fiction readers. Ph.D. thesis, Rutgers University-Camden Graduate School.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Condolence and empathy in online communities",
"authors": [
{
"first": "Naitian",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "609--626",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.45"
]
},
"num": null,
"urls": [],
"raw_text": "Naitian Zhou and David Jurgens. 2020. Condolence and empathy in online communities. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 609- 626, Online. Association for Computational Linguis- tics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Data distribution."
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Selected NRC and Empath features and their pearson's correlation with training data's empathy and distress score."
},
"TABREF7": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Performance comparison of various models as per Pearson's correlation (p < 0.05)."
}
}
}
}