{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:01.911537Z" }, "title": "Team Phoenix at WASSA 2021: Emotion Analysis on News Stories with Pre-Trained Language Models", "authors": [ { "first": "Yash", "middle": [], "last": "Butala", "suffix": "", "affiliation": {}, "email": "yashbutala@iitkgp.ac.in" }, { "first": "Kanishk", "middle": [], "last": "Singh", "suffix": "", "affiliation": {}, "email": "kanishksingh@iitkgp.ac.in" }, { "first": "Adarsh", "middle": [], "last": "Kumar", "suffix": "", "affiliation": {}, "email": "adarshkumar712@iitkgp.ac.in" }, { "first": "Shrey", "middle": [], "last": "Shrivastava", "suffix": "", "affiliation": {}, "email": "shrivastava.shrey@iitkgp.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Emotion is fundamental to humanity. The ability to perceive, understand and respond to social interactions in a human-like manner is one of the most desired capabilities in artificial agents, particularly in social-media bots. Over the past few years, computational understanding and detection of emotional aspects in language have been vital in advancing human-computer interaction. The WASSA Shared Task 2021 released a dataset of newsstories across two tracks, Track-1 for Empathy and Distress Prediction and Track-2 for Multi-Dimension Emotion prediction at the essaylevel. We describe our system entry for the WASSA 2021 Shared Task (for both Track-1 and Track-2), where we leveraged the information from Pre-trained language models for Track specific Tasks. Our proposed models achieved an Average Pearson Score of 0.417, and a Macro-F1 Score of 0.502 in Track 1 and Track 2, respectively. In the Shared Task leaderboard, we secured 4 th rank in Track 1 and 2 nd rank in Track 2.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Emotion is fundamental to humanity. The ability to perceive, understand and respond to social interactions in a human-like manner is one of the most desired capabilities in artificial agents, particularly in social-media bots. Over the past few years, computational understanding and detection of emotional aspects in language have been vital in advancing human-computer interaction. The WASSA Shared Task 2021 released a dataset of newsstories across two tracks, Track-1 for Empathy and Distress Prediction and Track-2 for Multi-Dimension Emotion prediction at the essaylevel. We describe our system entry for the WASSA 2021 Shared Task (for both Track-1 and Track-2), where we leveraged the information from Pre-trained language models for Track specific Tasks. Our proposed models achieved an Average Pearson Score of 0.417, and a Macro-F1 Score of 0.502 in Track 1 and Track 2, respectively. In the Shared Task leaderboard, we secured 4 th rank in Track 1 and 2 nd rank in Track 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sentiment analysis over texts has been a widely researched area in NLP. The number of papers published in sentiment analysis related domains has increased from 37 papers in 2000 to 6996 in 2016 (M\u00e4ntyl\u00e4 et al., 2016 . Sentiment analysis is a trending research topic, possibly due to its applications that automatically collect and analyze a large corpus of opinions with text mining tools. 
From the conventional task of predicting polarity as positive, negative, or neutral, researchers are now increasingly focused on more sophisticated tasks such as emotion recognition, aspect-level sentiment analysis, and intensity prediction.", "cite_spans": [ { "start": 189, "end": 193, "text": "2016", "ref_id": "BIBREF18" }, { "start": 194, "end": 215, "text": "(M\u00e4ntyl\u00e4 et al., 2016", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, researchers have started exploring more sophisticated models of human emotion on a larger scale. Several datasets and corpora have been curated in this domain, such as (Mohammad et al., 2018), (Alm and Sproat, 2005), and larger datasets like (Demszky et al., 2020). (Buechel et al., 2018) presented an interesting computational work distinguishing between multiple forms of empathy, empathic concern, and personal distress. This data of empathic concern and personal distress, along with multi-dimension emotion labelling of news stories across seven classes (namely sadness, fear, neutral, anger, disgust, joy, and surprise), has been released as a Shared Task (Tafreshi et al., 2021) in the WASSA 2021 Workshop as two tracks 1 .", "cite_spans": [ { "start": 177, "end": 200, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF19" }, { "start": 203, "end": 225, "text": "(Alm and Sproat, 2005)", "ref_id": "BIBREF1" }, { "start": 253, "end": 275, "text": "(Demszky et al., 2020)", "ref_id": "BIBREF5" }, { "start": 278, "end": 300, "text": "(Buechel et al., 2018)", "ref_id": "BIBREF3" }, { "start": 666, "end": 689, "text": "(Tafreshi et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe our system entry for both tracks of the WASSA 2021 Shared Task. The primary contributions of this paper are as follows: Track 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We demonstrate the efficacy of multi-tasking through parameter sharing, which further strengthens the belief that empathic concern and personal distress are correlated. \u2022 We amalgamate information from sentence embeddings with normalized additional information to predict empathic concern and personal distress using regression. Track 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We provide a comparative analysis of generation modelling against classification modelling for the task of emotion prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Incremental Fine-Tuning approach (Section 4.2) on pre-trained models for a small-sized dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 We illustrate the efficiency of Task Specific", "sec_num": null }, { "text": "Pre-trained language models have proved to be a breakthrough in analyzing a person's emotional state. We now briefly describe some of these highly influential works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Over the past few years, pre-trained language models have progressed greatly in learning contextualized representations. The Transformer (Vaswani et al., 2017), first proposed for machine translation, has enabled faster learning of complex representations of text.
GPT (Radford et al., 2018), BERT (Devlin et al., 2018), RoBERTa, and XLNet all leverage the transformer architecture along with statistical tokenizers. ELECTRA (Clark et al., 2020), a recent generator-discriminator-based pre-training approach, offers competitive performance despite requiring less compute. Domain-specific language models also lead to a significant performance gain (Beltagy et al., 2019).", "cite_spans": [ { "start": 133, "end": 155, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 266, "end": 288, "text": "(Radford et al., 2018)", "ref_id": "BIBREF21" }, { "start": 296, "end": 317, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" }, { "start": 417, "end": 437, "text": "(Clark et al., 2020)", "ref_id": "BIBREF4" }, { "start": 642, "end": 663, "text": "Beltagy et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained Language Models", "sec_num": "2.1" }, { "text": "Emotion recognition through facial expressions and speech data has been the subject of extensive study in the past. (Tarnowski et al., 2017) presents an approach for the recognition of seven emotional states based on facial expressions. (Yoon et al., 2018) utilizes a novel deep dual recurrent encoder model to obtain a better understanding of speech data using text data and audio signals simultaneously. For text, various approaches have been proposed for emotion recognition. Deshmukh and Kirange (2012) proposed an SVM-based approach for predicting opinions on news headlines. The paper by Acheampong et al. (2020) analyses the efficacy of utilizing transformer encoders for detecting emotions. Kant et al. (2018) demonstrates the practical efficiency of large pre-trained language models for multi-emotion sentiment classification.", "cite_spans": [ { "start": 116, "end": 140, "text": "(Tarnowski et al., 2017)", "ref_id": "BIBREF27" }, { "start": 233, "end": 252, "text": "(Yoon et al., 2018)", "ref_id": "BIBREF32" }, { "start": 475, "end": 502, "text": "Deshmukh and Kirange (2012)", "ref_id": "BIBREF6" }, { "start": 577, "end": 601, "text": "Acheampong et al. (2020)", "ref_id": "BIBREF0" }, { "start": 688, "end": 706, "text": "Kant et al. (2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Emotion Recognition", "sec_num": "2.2" }, { "text": "Empathy and distress are core components of a person's emotional state, and there has been a growing interest in computational approaches to model them. Considering language variations across different regions, empathy and distress can also vary with demographics (Lin et al., 2018; Loveys et al., 2018), and recently (Guda et al., 2021) proposed a demographic-aware empathy modelling framework using BERT and demographic features.", "cite_spans": [ { "start": 264, "end": 282, "text": "(Lin et al., 2018;", "ref_id": "BIBREF15" }, { "start": 283, "end": 303, "text": "Loveys et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Computation of Empathy", "sec_num": "2.3" }, { "text": "Understanding empathy and distress is crucial for analyzing mental health and providing aid.
Recently, (Sharma et al., 2020) explored language models for identifying empathetic conversations in mental health support systems.", "cite_spans": [ { "start": 103, "end": 124, "text": "(Sharma et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Computation of Empathy", "sec_num": "2.3" }, { "text": "Our experiments' data consists of the emotion labels for news stories released as part of the WASSA 2021 shared task. The dataset provided (Buechel et al., 2018) contains essays of 300-800 characters in length and Batson empathic concern and personal distress scores, along with additional demographic and personality information.", "cite_spans": [ { "start": 137, "end": 159, "text": "(Buechel et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Task and Dataset Description", "sec_num": "3" }, { "text": "The training corpus of the WASSA 2021 shared task consists of 1860 training pairs covering seven emotion labels, namely sadness, fear, neutral, anger, disgust, joy, and surprise. The dataset also includes person-level demographic information (age, gender, ethnicity, income, education level) and personality information. We normalized this information before using it in our model for Track 1 and excluded it from the Track 2 model. The objective of Track 1 is to predict the Batson empathic concern and personal distress scores using the essay and any of the additional information, so as to maximize the Pearson correlation between the predicted labels and the gold-standard labels. The task can formally be described as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Dataset Description", "sec_num": "3" }, { "text": "Empathic concern and personal distress prediction: Given a paragraph t and additional information I, learn a model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Dataset Description", "sec_num": "3" }, { "text": "g(t, I) \u2192 (x, y), where x \u2208 R^+ and y \u2208 R^+.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Dataset Description", "sec_num": "3" }, { "text": "Track 2 is formulated as an essay-level multi-dimension emotion prediction task. It is defined formally as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Dataset Description", "sec_num": "3" }, { "text": "Emotion prediction: Given a paragraph t, the classification task aims to learn a model g(t) \u2192 {l_1, l_2, ..., l_k}, where each l_i is a label \u2208 {sadness, anger, ...}. The system architecture for empathy and distress prediction is shown in Figure 1 . The approach is primarily based on fine-tuning pre-trained language models for downstream tasks. We enforce hard parameter sharing through the concatenation of BERT embeddings fine-tuned separately for empathy prediction and distress prediction. These separately fine-tuned BERT embeddings are then concatenated with the scaled demographic and personality features given in the dataset before being fed to the regression models. This parameter-shared multi-task framework allows the use of the same model, loss function, and hyperparameters for both empathy prediction and distress prediction.", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 245, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Task and Dataset Description", "sec_num": "3" },
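The paper gives no reference implementation of this head; the following is a minimal sketch (not from the paper) of the hard-parameter-sharing regression head described above, assuming 768-dimensional BERT [CLS] embeddings and a hypothetical count of ten scaled demographic/personality features. All module names and dimensions are our assumptions.

```python
import torch
import torch.nn as nn

class EmpathyDistressHead(nn.Module):
    """Sketch: concatenate the two separately fine-tuned BERT [CLS]
    embeddings with the scaled demographic/personality features, then
    regress jointly to (empathy, distress) scores."""

    def __init__(self, bert_dim: int = 768, n_extra: int = 10):
        super().__init__()
        # 2 * bert_dim: one embedding from the empathy-tuned BERT and one
        # from the distress-tuned BERT (this concatenation is where the
        # hard parameter sharing happens).
        self.regressor = nn.Sequential(
            nn.Linear(2 * bert_dim + n_extra, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, 2),  # outputs: (empathy score, distress score)
        )

    def forward(self, emb_empathy, emb_distress, extra_feats):
        x = torch.cat([emb_empathy, emb_distress, extra_feats], dim=-1)
        return self.regressor(x)

# Shape check with random tensors; real inputs come from the two encoders.
head = EmpathyDistressHead()
out = head(torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 2])
```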
{ "text": "$MSE(y_{true}, y_{pred}) = \\frac{1}{n} \\sum_{i=1}^{n} (y_{true,i} - y_{pred,i})^2$, $r = \\frac{\\sum_{i=1}^{n} (y_{true,i} - \\bar{y}_{true})(y_{pred,i} - \\bar{y}_{pred})}{\\sqrt{\\sum_{i=1}^{n} (y_{true,i} - \\bar{y}_{true})^2} \\sqrt{\\sum_{i=1}^{n} (y_{pred,i} - \\bar{y}_{pred})^2}}$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy and Distress Prediction Model", "sec_num": "4.1" }, { "text": "where $y_{true}$ denotes the gold-standard scores and $y_{pred}$ the predicted scores for empathy and distress. The final Pearson correlation score used for evaluation was $r_{avg} = \\frac{r_{empathy} + r_{distress}}{2}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empathy and Distress Prediction Model", "sec_num": "4.1" }, { "text": "Our proposed approach for emotion prediction is shown in Figure 2 . The approach is primarily based on the T5 model (Raffel et al., 2019) for the conditional generation of emotion labels. Before being fed into the network, the emotion prediction task is cast as follows: the essay text is given as input, and the model is trained to generate the target emotion labels as text. This allows the use of the same model, loss function, and hyperparameters for emotion prediction as in other text-generation tasks. More formally, the modeling of the task can be described as: $p(x|c) = \\prod_{i=1}^{2} p(x_i \\mid x_{<i}, c)$", "cite_spans": [ { "start": 278, "end": 299, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 223, "end": 231, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Emotion Label Generation Model", "sec_num": "4.2" }, { "type_str": "table", "text": "Composition of Training and Development dataset", "num": null },
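As an illustration of the generation cast described in Section 4.2, here is a minimal sketch using HuggingFace's T5. The "emotion:" prompt prefix, the checkpoint name, and the decoding settings are our assumptions, not details taken from the paper.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# The essay is the conditioning text c; the emotion label is produced as
# plain text, so the usual seq2seq cross-entropy loss applies unchanged.
essay = "I felt terrible reading about the families displaced by the flood."
inputs = tokenizer("emotion: " + essay, return_tensors="pt",
                   truncation=True, max_length=512)

# Fine-tuning step: the target is simply the label string.
labels = tokenizer("sadness", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss

# Inference: decode the generated tokens back into a label.
pred_ids = model.generate(**inputs, max_length=4)
print(tokenizer.decode(pred_ids[0], skip_special_tokens=True))
```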
"TABREF3": { "html": null, "content": "shows the model performance, discussed in the development dataset section (used as a validation set), for Track 1. Our final approach, which is described in Section 4.1, outperforms the other approaches by an appreciable margin on the development set, but it failed on empathy prediction in the latest test-set submission on CodaLab due to an erroneous submission from us. During the post-evaluation phase, however, the model performed better on the test set than the erroneous submission.", "type_str": "table", "text": "", "num": null }, "TABREF4": { "html": null, "content": "
: Performance of our system on the development dataset of Track 1. Our final submission approach for Track 1 is marked with *.
Predictor  Distress  Empathy  Average
MLP        0.476     0.358    0.417
", "type_str": "table", "text": "", "num": null },
"TABREF5": { "html": null, "content": ": Performance of our final submission on the held-out test dataset of Track 1. Our final submission secured 4th rank on the Shared Task Leaderboard.", "type_str": "table", "text": "", "num": null }, "TABREF6": { "html": null, "content": "
Metric           Result
Macro F1 Score   0.502
Micro F1 Score   0.594
Accuracy         0.594
Macro Precision  0.550
Micro Precision  0.594
Macro Recall     0.483
Micro Recall     0.594
", "type_str": "table", "text": "Macro F1 score for Emotion Prediction on Development set. Model descriptions are provided in Section 6. Our final submission approach for Track 2 is marked with *.", "num": null } } } }