{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:06:54.778089Z" }, "title": "Continuing Pre-trained Model with Multiple Training Strategies for Emotional Classification", "authors": [ { "first": "Bin", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hunan University", "location": {} }, "email": "" }, { "first": "Yixuan", "middle": [], "last": "Weng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hunan University", "location": {} }, "email": "wengsyx@gmail.com" }, { "first": "Qiya", "middle": [], "last": "Song", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hunan University", "location": {} }, "email": "" }, { "first": "Shutao", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hunan University", "location": {} }, "email": "shutao_li@hnu.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Emotion is the essential attribute of human beings. Perceiving and understanding emotions in a human-like manner is the most central part of developing emotional intelligence. This paper describes the contribution of the LingJing team's method to the Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA) 2022 shared task on Emotion Classification. The participants are required to predict seven emotions from empathic responses to news or stories that caused harm to individuals, groups, or others. This paper describes the continual pre-training method for the masked language model (MLM) to enhance the DeBERTa pre-trained language model. Several training strategies are designed to further improve the final downstream performance including the data augmentation with the supervised transfer, child-tuning training, and the late fusion method. Extensive experiments on the emotional classification dataset show that the proposed method outperforms other state-of-the-art methods, demonstrating our method's effectiveness. Moreover, our submission ranked Top-1 with all metrics in the evaluation phase for the Emotion Classification task.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Emotion is the essential attribute of human beings. Perceiving and understanding emotions in a human-like manner is the most central part of developing emotional intelligence. This paper describes the contribution of the LingJing team's method to the Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA) 2022 shared task on Emotion Classification. The participants are required to predict seven emotions from empathic responses to news or stories that caused harm to individuals, groups, or others. This paper describes the continual pre-training method for the masked language model (MLM) to enhance the DeBERTa pre-trained language model. Several training strategies are designed to further improve the final downstream performance including the data augmentation with the supervised transfer, child-tuning training, and the late fusion method. Extensive experiments on the emotional classification dataset show that the proposed method outperforms other state-of-the-art methods, demonstrating our method's effectiveness. 
Moreover, our submission ranked Top-1 with all metrics in the evaluation phase for the Emotion Classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Emotion is an important component of human daily communication. However, with the growing interest in human-computer interfaces, machines still lag in possessing and perceiving emotions. Understanding human emotional states in dialogue is crucial for building natural human-machine interaction, which aims to generate appropriate responses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Emotion classification (EMO) in the text is concentrated on projecting words, sentences, and documents to a set of emotions according to psychological models proposed by (Ekman, 1992) , which is an interdisciplinary field of study that span psychology and computer science. This task has evolved * These authors contribute equally to this work. from a purely research-oriented topic to play a role in various applications, including mental health assessment, intelligent agents, social media mining (Calvo et al., 2017; Rambocas and Pacheco, 2018) . Therefore, emotion classification has become a hot topic in the field of natural language processing (NLP), and lots of research efforts have been devoted to its development.", "cite_spans": [ { "start": 170, "end": 183, "text": "(Ekman, 1992)", "ref_id": "BIBREF11" }, { "start": 499, "end": 519, "text": "(Calvo et al., 2017;", "ref_id": "BIBREF4" }, { "start": 520, "end": 547, "text": "Rambocas and Pacheco, 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With the rapid development of artificial intelligence technology, especially deep learning, researchers have made substantial progress on EMO tasks over the past few decades. Before the era of deep learning, traditional EMO methods not only ignore the order of occurrence of words in written text, but are also limited by fixed input sizes. However, obtaining contextual relations between words from the sequence texts plays a crucial role in understanding the complete meaning of sentences. With the popularity of data-driven techniques, deep learning based methods improve the shortcomings of traditional methods and achieve superior EMO performance (Ran et al., 2018; Rajabi et al., 2020; Nandwani and Verma, 2021) .", "cite_spans": [ { "start": 652, "end": 670, "text": "(Ran et al., 2018;", "ref_id": "BIBREF34" }, { "start": 671, "end": 691, "text": "Rajabi et al., 2020;", "ref_id": "BIBREF32" }, { "start": 692, "end": 717, "text": "Nandwani and Verma, 2021)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More recently, the transformer self-attention architecture based (Vaswani et al., 2017) pre-trained models have been successfully applied for learning language representations by exploiting large amounts of unlabeled data. These models mainly include BERT (Devlin et al., 2018a) , OpenAI GPT (Radford et al., 2018) , RoBERTa (Liu et al., 2019) . These architectures show superior performance when fine-tuning different downstream tasks, including machine translation (Imamura and Sumita, 2019 ), text classification (Sun et al., 2019) , emotion classification (Luo and Wang, 2019) and question answering (Garg et al., 2020) . 
Recent works have shown that transformer-based pre-trained methods can achieve state-of-the-art performance in EMO tasks (Acheampong et al., 2021; Luo and Wang, 2019) . Motivated by this, we adopt the DeBERTa model (He et al., 2020) with continual pre-training method for the masked language model (MLM) (Devlin et al., 2018b) in this Track 2 to improve final downstream performance. More feasible training strategies are designed to improve the final results further. In this paper, we describe our work for Track 2 of the WASSA Shared Task 2022, addressing the issue of emotion classification.", "cite_spans": [ { "start": 65, "end": 87, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF37" }, { "start": 256, "end": 278, "text": "(Devlin et al., 2018a)", "ref_id": "BIBREF8" }, { "start": 292, "end": 314, "text": "(Radford et al., 2018)", "ref_id": "BIBREF30" }, { "start": 325, "end": 343, "text": "(Liu et al., 2019)", "ref_id": "BIBREF22" }, { "start": 467, "end": 492, "text": "(Imamura and Sumita, 2019", "ref_id": "BIBREF18" }, { "start": 516, "end": 534, "text": "(Sun et al., 2019)", "ref_id": "BIBREF36" }, { "start": 560, "end": 580, "text": "(Luo and Wang, 2019)", "ref_id": "BIBREF24" }, { "start": 604, "end": 623, "text": "(Garg et al., 2020)", "ref_id": "BIBREF13" }, { "start": 747, "end": 772, "text": "(Acheampong et al., 2021;", "ref_id": "BIBREF0" }, { "start": 773, "end": 792, "text": "Luo and Wang, 2019)", "ref_id": "BIBREF24" }, { "start": 841, "end": 858, "text": "(He et al., 2020)", "ref_id": "BIBREF16" }, { "start": 930, "end": 952, "text": "(Devlin et al., 2018b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we will elaborate on the main methods for Track 2 of the WASSA 2022 Shared Task. More details about the training strategies are detailed at the end of this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "It is a wise choice for further continual pre-training (Gururangan et al., 2020) to enhance the pre-trained model, i.e., DeBERTa model (He et al., 2020) . It will be helpful to alleviate the task and domain discrepancy between the upstream and the downstream tasks (Qiu et al., 2020) . As a result, we adopt the continual pre-training method for the masked language model (MLM) (Devlin et al., 2018b) in this Track 2 to directly improve final downstream performance. The available datasets are chosen from the open-source resources (Demszky et al., 2020; \u00d6hman et al., 2020) . The optimization function is written as follows", "cite_spans": [ { "start": 135, "end": 152, "text": "(He et al., 2020)", "ref_id": "BIBREF16" }, { "start": 265, "end": 283, "text": "(Qiu et al., 2020)", "ref_id": "BIBREF29" }, { "start": 378, "end": 400, "text": "(Devlin et al., 2018b)", "ref_id": "BIBREF9" }, { "start": 532, "end": 554, "text": "(Demszky et al., 2020;", "ref_id": "BIBREF7" }, { "start": 555, "end": 574, "text": "\u00d6hman et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Continuing Pre-training", "sec_num": "2.1" }, { "text": "max \u03b8 log p \u03b8 (X |X) = max \u03b8 i\u2208C log p \u03b8 xi = xi |X (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuing Pre-training", "sec_num": "2.1" }, { "text": "where the C is the index set of the masked tokens in the sequence. 
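For illustration, a minimal sketch of this continual MLM pre-training step is given below, assuming the Hugging Face transformers and datasets interfaces; the corpus file name, masking probability, and training hyper-parameters are placeholders rather than values reported here, and the default collator follows the masking convention described next.

```python
# Illustrative sketch of continual MLM pre-training on unlabeled emotion text.
# The corpus file and hyper-parameters below are assumptions, not reported values.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "microsoft/deberta-v2-xxlarge"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabeled in-domain text, one example per line (hypothetical file name).
corpus = load_dataset("text", data_files={"train": "emotion_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Randomly selects tokens to corrupt and builds the MLM labels of Eq. (1).
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="deberta-emo-mlm",
                           per_device_train_batch_size=1,
                           learning_rate=1e-6,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator)
trainer.train()
```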
We adopt the implementation of the original paper (Devlin et al., 2018b) to keep 10% of the masked tokens unchanged, another 10% replaced with randomly picked tokens and the rest replaced with the [MASK] token.", "cite_spans": [ { "start": 117, "end": 139, "text": "(Devlin et al., 2018b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Continuing Pre-training", "sec_num": "2.1" }, { "text": "Track 2 is a classic emotion classification task, where seven emotional labels are required to be classified. We adopt the DeBERTa-v2 (He et al., 2020) model with continuing pre-training method for processing this classification task, where the main method structure is shown in Figure 1 . The given sentence is separated into tokens and then sent to the pre-trained language model (PLM) as the input. To obtain the complete meaning of the whole sentence, we take the output embedding of each token to be averaged by the averaged pooling layer. The seven-categories task is designed by passing the averaged encoding into the fully connected layer with dropout.", "cite_spans": [ { "start": 134, "end": 151, "text": "(He et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 279, "end": 287, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Emotion Classification with DeBERTa Model", "sec_num": "2.2" }, { "text": "We introduce some training strategies used in the Track 2 emotional classification, where the data augmentation with supervised transfer, childtuning training, and late fusion will be introduced in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Strategies", "sec_num": "2.3" }, { "text": "When fine-tuning on the English emotional classification datasets, we shall transfer the supervised knowledge into the Track 2 emotional task from the other datasets. Specifically, inspired by the work (Kulkarni et al., 2021) , we adopt the data augmentation strategies with Random Augmentation (RA) and Balanced Augmentation (BA), where the GoEmotions (Demszky et al., 2020) and the XED dataset (\u00d6hman et al., 2020) are adopted for implementation. It provides more useful knowledge transferred from the same resources to the downstream task (Durrani et al., 2021) . As a result, the continuing pre-trained DeBERTa model fine-tuned on these similar datasets in English may achieve better results.", "cite_spans": [ { "start": 202, "end": 225, "text": "(Kulkarni et al., 2021)", "ref_id": "BIBREF20" }, { "start": 353, "end": 375, "text": "(Demszky et al., 2020)", "ref_id": "BIBREF7" }, { "start": 396, "end": 416, "text": "(\u00d6hman et al., 2020)", "ref_id": null }, { "start": 542, "end": 564, "text": "(Durrani et al., 2021)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation with Supervised Transfer", "sec_num": "2.3.1" }, { "text": "The efficient Child-tuning (Xu et al., 2021) method is used for fine-tuning the DeBEATa model, where the parameters of the Child network are updated with the gradients mask. For the Track 2 task, the task-independent algorithm is used. In the phase of the fine-tuning, the gradient masks are obtained by Bernoulli distribution (Chen and Liu, 1997) sampling from in each step of iterative update, which is equivalent to randomly dividing a part of the network parameters when updating. 
The equation of the above steps is shown as follows", "cite_spans": [ { "start": 27, "end": 44, "text": "(Xu et al., 2021)", "ref_id": null }, { "start": 337, "end": 347, "text": "Liu, 1997)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Child-tuning Training", "sec_num": "2.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w t+1 = w t \u2212 \u03b7 \u2202L (w t ) \u2202w t M t M t \u223c Bernoulli (p F )", "eq_num": "(2)" } ], "section": "Child-tuning Training", "sec_num": "2.3.2" }, { "text": "where the notation represents the dot production, p F is the partial network parameter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Child-tuning Training", "sec_num": "2.3.2" }, { "text": "Due to the complementary performance between different emotion prediction models (Colneri\u010d and Dem\u0161ar, 2018) , we design the late fusion method with the Bagging algorithm (Breiman, 1996) to vote on the results of the various models. The Bagging algorithm is used during the prediction, which can effectively reduce the variance of the final prediction by bridging the prediction bias of different models, augmenting the overall generalization ability of the system.", "cite_spans": [ { "start": 81, "end": 108, "text": "(Colneri\u010d and Dem\u0161ar, 2018)", "ref_id": "BIBREF6" }, { "start": 171, "end": 186, "text": "(Breiman, 1996)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Late Fusion", "sec_num": "2.4" }, { "text": "This section will subsequently present emotion dataset, our experimental models, experimental settings, control of variables experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "3" }, { "text": "Computational detection and understanding of empathy is an important factor in advancing humancomputer interaction (Liu, 2015) . Buechel et al. (2018) presented the first publicly available gold standard for the text-based empathy prediction 1 . Two researchers collected articles from news websites. After that, they asked the participants to read the article. Moreover, participants were asked to rate their level of urgency and distress before describing their ideas and feelings about it in writing. Each participant rating 6 items for empathy (e.g., warm,tender, moved) and 8 items for distress (e.g., trou-bled, disturbed, alarmed) using a 7-point scale for each of those. The final data set has 1860 samples in total. The author obtains their gold scores by averaging the submissions from different participants.", "cite_spans": [ { "start": 115, "end": 126, "text": "(Liu, 2015)", "ref_id": "BIBREF21" }, { "start": 129, "end": 150, "text": "Buechel et al. (2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3.1" }, { "text": "1 Data and code are available at: https://github. com/wwbp/empathic_reactions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3.1" }, { "text": "We train the model using the Pytorch 2 (Paszke et al., 2019) on the NVIDIA A100 GPU and use the hugging-face 3 (Wolf et al., 2020) framework. For all uninitialized layers, We set the dimension of all the hidden layers in the model as 1024. The AdamW (Loshchilov and Hutter, 2018) optimizer which is a fixed version of Adam (Kingma and Ba, 2014) with weight decay, and set \u03b2 1 to 0.9, \u03b2 2 to 0.99 for the optimizer. 
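For illustration, a minimal sketch of applying the task-independent child-tuning mask of Eq. (2) on top of this AdamW configuration is shown below; the mask probability p_F, the model, and the data loader are assumptions, and the sketch follows Eq. (2) literally without rescaling the surviving gradients.

```python
# Illustrative sketch of task-independent child-tuning (Eq. (2)).
# p_F, `model` and `train_loader` are assumptions for this example.
import torch

p_F = 0.3  # hypothetical probability of keeping a gradient entry
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, betas=(0.9, 0.99))

for batch in train_loader:
    loss = model(**batch).loss
    loss.backward()
    # Sample a fresh Bernoulli mask at every step and zero the gradients
    # that fall outside the randomly drawn child network.
    with torch.no_grad():
        for param in model.parameters():
            if param.grad is not None:
                mask = torch.bernoulli(torch.full_like(param.grad, p_F))
                param.grad.mul_(mask)
    optimizer.step()
    optimizer.zero_grad()
```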
We set the learning rate to 1e-6 with the warm-up (He et al., 2016) . The batch size is 1. We set the maximum length of 512, and delete the excess. Linear decay of learning rate and gradient clipping is set to 1e-6. Dropout (Srivastava et al., 2014) of 0.1 is applied to prevent over-fitting. All experiments select the best parameters in the valid set. Finally, we report the score of the best model (valid set) in the test set.", "cite_spans": [ { "start": 39, "end": 60, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF28" }, { "start": 111, "end": 130, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF38" }, { "start": 250, "end": 279, "text": "(Loshchilov and Hutter, 2018)", "ref_id": "BIBREF23" }, { "start": 465, "end": 482, "text": "(He et al., 2016)", "ref_id": "BIBREF15" }, { "start": 639, "end": 664, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "3.2" }, { "text": "We use the DeBERTa-v2-xxl (He et al., 2021) as our pre-trained model, and fine-tune the model. The DeBERTa 4 model comes with 48 layers and a hidden size of 1536. The total parameters are 1.5B, and it is trained with 160GB raw data. We spent three weeks on this continuing pre-training step.", "cite_spans": [ { "start": 26, "end": 43, "text": "(He et al., 2021)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "3.2" }, { "text": "We compare our methods with Baseline methods on the datasets (Buechel et al., 2018) . Results of comparative methods are reported on website 5 . IITK@WASSA (Mundra et al., 2021) fine-tuned the ELECTRA model with ensemble method. The [CLS] token was passed through a single linear layer to produce a vector of size 7, representing class probabilities. Moreover, they save the snapshots with the best validation scores.", "cite_spans": [ { "start": 61, "end": 83, "text": "(Buechel et al., 2018)", "ref_id": "BIBREF2" }, { "start": 156, "end": 177, "text": "(Mundra et al., 2021)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baseline Methods", "sec_num": "3.3" }, { "text": "Phoenix's approach (Butala et al., 2021) is primarily based on T5 Model (Raffel et al., 2020) or conditional generation of emotion labels. Hence before feeding into the network, the emotion prediction task is cast as feeding the essay text as input and training it to generate target emotion labels as text. This allows for the use of the same model, loss function, and hyper-parameters for the task of emotion prediction as is done in other Text Generation tasks.", "cite_spans": [ { "start": 19, "end": 40, "text": "(Butala et al., 2021)", "ref_id": "BIBREF3" }, { "start": 72, "end": 93, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baseline Methods", "sec_num": "3.3" }, { "text": "Micro Given the availability of further dependent variables (Fornaciari et al., 2021) , create a Multi-Task Learning (MTL) model that takes the text as only input and jointly predicts emotions (classification task with categorical cross-entropy), empathy, and distress (regression task) (MTL2). 
They implemented a MIMTL model with text, gender, income, and IRI as input to predict emotions, empathy, and distress (MI3-MTL2).", "cite_spans": [ { "start": 60, "end": 85, "text": "(Fornaciari et al., 2021)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Macro F1", "sec_num": null }, { "text": "The experiment results of the various methods on the evaluation dataset are displayed in Table 1 . As presented in Table 1 , our method achieves the best results in all evaluation metrics. Compared with the method from team IITK@WASSA that was the Top-1 last year, the adopted method gets a 0.110 increase of Macro F1, 0.137 increase of Macro Precision, 0.095 increase of Macro Recall, and 0.099 increase of Micro F1, Micro Precision, Micro Precision, and Accuracy. From this, we conclude that the proposed method outperforms the previous state-of-the-art method by an appreciable margin. It demonstrates the effectiveness of our method.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 115, "end": 122, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4" }, { "text": "The Results of Top-5 teams participating in the EMO track for the post-evaluation are shown in Table 2 . The results from our proposed method greatly exceed the second team in the different evaluation metrics. Compared with the method from the second team, our method gains a 0.126 increase of Macro F1, 0.141 increase of Macro Precision, 0.124 increase of Macro Recall, and 0.108 increase of Micro F1, Micro Precision, Micro Precision, and Accuracy. The proposed method obtains the state-of-the-art performance from the perspective of emotion classification and achieves substantial improvements over other methods.", "cite_spans": [], "ref_spans": [ { "start": 95, "end": 102, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4" }, { "text": "As for the ablation study part, we implement different ablation settings to show the effectiveness of the proposed method. As shown in Table 3 , the PLM model contributes a lot for the emotional classification. The continuing pre-training can further improve the emotion classification on the three metrics based on the original pre-trained language model. Other experimental results also demonstrate that the training strategies are important for better results. More concretely, the proposed supervised transfer, child-tuning, and late fusion methods help improve the final results.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4" }, { "text": "This paper illustrates our contributions to the WASSA shared work on Emotion Classification. We use the DeBERTa pre-trained language model enhanced by the continual pre-training method (MLM) and some training strategies to improve the EMO performance. During the evaluation phase, our submission achieves Top-1 on all metrics for the Emotion Classification task. 
In the future, we will explore more efficient pre-training methods to further improve the final results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://pytorch.org 3 https://github.com/huggingface/ transformers 4 microsoft/deberta-v2-xxlarge 5 https://competitions.codalab.org/ competitions/28713#results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the National Key R&D Program of China (2018YFB1305200), the National Natural Science Fund of China (62171183).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Transformer models for textbased emotion detection: a review of bert-based approaches", "authors": [ { "first": "Francisca", "middle": [ "Adoma" ], "last": "Acheampong", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Nunoo-Mensah", "suffix": "" }, { "first": "Wenyu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "Artificial Intelligence Review", "volume": "54", "issue": "8", "pages": "5789--5829", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francisca Adoma Acheampong, Henry Nunoo-Mensah, and Wenyu Chen. 2021. Transformer models for text- based emotion detection: a review of bert-based ap- proaches. Artificial Intelligence Review, 54(8):5789- 5829.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bagging predictors", "authors": [ { "first": "Leo", "middle": [], "last": "Breiman", "suffix": "" } ], "year": 1996, "venue": "Machine learning", "volume": "24", "issue": "2", "pages": "123--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leo Breiman. 1996. Bagging predictors. Machine learning, 24(2):123-140.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Modeling empathy and distress in reaction to news stories. arXiv: Computation and Language", "authors": [ { "first": "Sven", "middle": [], "last": "Buechel", "suffix": "" }, { "first": "Anneke", "middle": [], "last": "Buffone", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Slaff", "suffix": "" }, { "first": "Lyle", "middle": [ "H" ], "last": "Ungar", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sven Buechel, Anneke Buffone, Barry Slaff, Lyle H. Ungar, and Jo\u00e3o Sedoc. 2018. Modeling empathy and distress in reaction to news stories. arXiv: Com- putation and Language.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Team phoenix at wassa 2021: Emotion analysis on news stories with pre-trained language models. arXiv: Computation and Language", "authors": [ { "first": "Yash", "middle": [], "last": "Butala", "suffix": "" }, { "first": "Kanishk", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Adarsh", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Shrey", "middle": [], "last": "Shrivastava", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Butala, Kanishk Singh, Adarsh Kumar, and Shrey Shrivastava. 2021. Team phoenix at wassa 2021: Emotion analysis on news stories with pre-trained lan- guage models. 
arXiv: Computation and Language.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Natural language processing in mental health applications using non-clinical texts", "authors": [ { "first": "A", "middle": [], "last": "Rafael", "suffix": "" }, { "first": "", "middle": [], "last": "Calvo", "suffix": "" }, { "first": "N", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Milne", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Sazzad Hussain", "suffix": "" }, { "first": "", "middle": [], "last": "Christensen", "suffix": "" } ], "year": 2017, "venue": "Natural Language Engineering", "volume": "23", "issue": "5", "pages": "649--685", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rafael A Calvo, David N Milne, M Sazzad Hussain, and Helen Christensen. 2017. Natural language process- ing in mental health applications using non-clinical texts. Natural Language Engineering, 23(5):649- 685.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical applications of the poisson-binomial and conditional bernoulli distributions", "authors": [ { "first": "X", "middle": [], "last": "Sean", "suffix": "" }, { "first": "Jun S", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1997, "venue": "Statistica Sinica", "volume": "", "issue": "", "pages": "875--892", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sean X Chen and Jun S Liu. 1997. Statistical ap- plications of the poisson-binomial and conditional bernoulli distributions. Statistica Sinica, pages 875- 892.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Emotion recognition on twitter: Comparative study and training a unison model", "authors": [ { "first": "Niko", "middle": [], "last": "Colneri\u010d", "suffix": "" }, { "first": "Janez", "middle": [], "last": "Dem\u0161ar", "suffix": "" } ], "year": 2018, "venue": "IEEE transactions on affective computing", "volume": "11", "issue": "3", "pages": "433--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niko Colneri\u010d and Janez Dem\u0161ar. 2018. Emotion recog- nition on twitter: Comparative study and training a unison model. IEEE transactions on affective com- puting, 11(3):433-446.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Goemotions: A dataset of fine-grained emotions", "authors": [ { "first": "Dorottya", "middle": [], "last": "Demszky", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Movshovitz-Attias", "suffix": "" }, { "first": "Jeongwoo", "middle": [], "last": "Ko", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Cowen", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Nemade", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4040--4054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. Goemotions: A dataset of fine-grained emo- tions. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4040-4054.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018a. Bert: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018b. Bert: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "How transfer learning impacts linguistic knowledge in deep nlp models?", "authors": [ { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "4947--4957", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nadir Durrani, Hassan Sajjad, and Fahim Dalvi. 2021. How transfer learning impacts linguistic knowledge in deep nlp models? In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4947-4957.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An argument for basic emotions", "authors": [ { "first": "Paul", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1992, "venue": "Cognition & emotion", "volume": "6", "issue": "3-4", "pages": "169--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169-200.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Milanlp @ wassa: Does bert feel sad when you cry", "authors": [ { "first": "Tommaso", "middle": [], "last": "Fornaciari", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tommaso Fornaciari, Federico Bianchi, Debora Nozza, and Dirk Hovy. 2021. 
Milanlp @ wassa: Does bert feel sad when you cry?", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection", "authors": [ { "first": "Siddhant", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Thuy", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "7780--7788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. Tanda: Transfer and adapt pre-trained trans- former models for answer sentence selection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7780-7788.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8342--8360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "770--778", "other_ids": { "DOI": [ "10.1109/CVPR.2016.90" ] }, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recogni- tion. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deberta: Decoding-enhanced bert with disentangled attention", "authors": [ { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.03654" ] }, "num": null, "urls": [], "raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. 
arXiv preprint arXiv:2006.03654.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deberta: Decoding-enhanced bert with disentangled attention", "authors": [ { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Recycling a pre-trained bert encoder for neural machine translation", "authors": [ { "first": "Kenji", "middle": [], "last": "Imamura", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "23--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Imamura and Eiichiro Sumita. 2019. Recycling a pre-trained bert encoder for neural machine transla- tion. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 23-31.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv: Learning.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Pvg at wassa 2021: A multi-input, multi-task, transformer-based architecture for empathy and distress prediction", "authors": [ { "first": "Atharva", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Sunanda", "middle": [], "last": "Somwase", "suffix": "" }, { "first": "Shivam", "middle": [], "last": "Rajput", "suffix": "" }, { "first": "Manisha", "middle": [], "last": "Marathe", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.03296" ] }, "num": null, "urls": [], "raw_text": "Atharva Kulkarni, Sunanda Somwase, Shivam Rajput, and Manisha Marathe. 2021. Pvg at wassa 2021: A multi-input, multi-task, transformer-based archi- tecture for empathy and distress prediction. arXiv preprint arXiv:2103.03296.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sentiment analysis: Mining opinions, sentiments, and emotions", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Liu. 2015. 
Sentiment analysis: Mining opinions, sentiments, and emotions.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Decoupled weight decay regularization", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Confer- ence on Learning Representations.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Emotionx-hsu: Adopting pre-trained bert for emotion classification", "authors": [ { "first": "Linkai", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.09669" ] }, "num": null, "urls": [], "raw_text": "Linkai Luo and Yue Wang. 2019. Emotionx-hsu: Adopt- ing pre-trained bert for emotion classification. arXiv preprint arXiv:1907.09669.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Wassa@iitk at wassa 2021: Multi-task learning and transformer finetuning for emotion classification and empathy prediction. arXiv: Computation and Language", "authors": [ { "first": "Jay", "middle": [], "last": "Mundra", "suffix": "" }, { "first": "Rohan", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Sagnik", "middle": [], "last": "Mukherjee", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay Mundra, Rohan Gupta, and Sagnik Mukherjee. 2021. Wassa@iitk at wassa 2021: Multi-task learning and transformer finetuning for emotion classification and empathy prediction. arXiv: Computation and Lan- guage.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A review on sentiment analysis and emotion detection from text. 
Social Network Analysis and Mining", "authors": [ { "first": "Pansy", "middle": [], "last": "Nandwani", "suffix": "" }, { "first": "Rupali", "middle": [], "last": "Verma", "suffix": "" } ], "year": 2021, "venue": "", "volume": "11", "issue": "", "pages": "1--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pansy Nandwani and Rupali Verma. 2021. A review on sentiment analysis and emotion detection from text. Social Network Analysis and Mining, 11(1):1-19.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Xed: A multilingual dataset for sentiment analysis and emotion detection", "authors": [ { "first": "Emily", "middle": [], "last": "\u00d6hman", "suffix": "" }, { "first": "Marc", "middle": [], "last": "P\u00e0mies", "suffix": "" }, { "first": "Kaisla", "middle": [], "last": "Kajava", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "6542--6552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily \u00d6hman, Marc P\u00e0mies, Kaisla Kajava, and J\u00f6rg Tiedemann. 2020. Xed: A multilingual dataset for sentiment analysis and emotion detection. In Pro- ceedings of the 28th International Conference on Computational Linguistics, pages 6542-6552.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning li- brary. In Advances in Neural Information Processing Systems, volume 32. 
Curran Associates, Inc.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Pre-trained models for natural language processing: A survey", "authors": [ { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Tianxiang", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yige", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yunfan", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "Science China Technological Sciences", "volume": "63", "issue": "10", "pages": "1872--1897", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872- 1897.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. Journal of Machine Learning Research, 21:1- 67.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A multi-channel bilstm-cnn model for multilabel emotion classification of informal text", "authors": [ { "first": "Zahra", "middle": [], "last": "Rajabi", "suffix": "" }, { "first": "Amarda", "middle": [], "last": "Shehu", "suffix": "" }, { "first": "Ozlem", "middle": [], "last": "Uzuner", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE 14th International Conference on Semantic Computing (ICSC)", "volume": "", "issue": "", "pages": "303--306", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zahra Rajabi, Amarda Shehu, and Ozlem Uzuner. 2020. A multi-channel bilstm-cnn model for multilabel emotion classification of informal text. In 2020 IEEE 14th International Conference on Semantic Comput- ing (ICSC), pages 303-306. 
IEEE.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Online sentiment analysis in marketing research: a review", "authors": [ { "first": "Meena", "middle": [], "last": "Rambocas", "suffix": "" }, { "first": "G", "middle": [], "last": "Barney", "suffix": "" }, { "first": "", "middle": [], "last": "Pacheco", "suffix": "" } ], "year": 2018, "venue": "Journal of Research in Interactive Marketing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meena Rambocas and Barney G Pacheco. 2018. Online sentiment analysis in marketing research: a review. Journal of Research in Interactive Marketing.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Text emotion analysis: A survey", "authors": [ { "first": "Li", "middle": [], "last": "Ran", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Hailun", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Weiping", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Dan", "suffix": "" } ], "year": 2018, "venue": "Journal of Computer Research and Development", "volume": "55", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Ran, Lin Zheng, Lin Hailun, Wang Weiping, and Meng Dan. 2018. Text emotion analysis: A sur- vey. Journal of Computer Research and Develop- ment, 55(1):30.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15:1929-1958.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "How to fine-tune bert for text classification?", "authors": [ { "first": "Chi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Yige", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "China national conference on Chinese computational linguistics", "volume": "", "issue": "", "pages": "194--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China national conference on Chinese computational linguistics, pages 194-206. 
Springer.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transform- ers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Songfang Huang, and Fei Huang. 2021. Raise a child in large language model: Towards effective and generalizable fine-tuning", "authors": [ { "first": "Runxin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chuanqi", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2109.05687" ] }, "num": null, "urls": [], "raw_text": "Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, and Fei Huang. 2021. Raise a child in large language model: To- wards effective and generalizable fine-tuning. arXiv preprint arXiv:2109.05687.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Main structure of the method in Track 2." }, "TABREF2": { "html": null, "num": null, "content": "
Team Name | Macro F1 | Micro F1 | Accuracy | Macro Precision | Macro Recall | Micro Precision | Micro Recall
mantis | 0.548 | 0.632 | 0.632 | 0.594 | 0.528 | 0.632 | 0.632
SURREY-CTS-NLP | 0.548 | 0.634 | 0.634 | 0.576 | 0.532 | 0.634 | 0.634
SINAI | 0.553 | 0.636 | 0.636 | 0.589 | 0.535 | 0.636 | 0.636
IUCL | 0.572 | 0.646 | 0.646 | 0.599 | 0.555 | 0.646 | 0.646
Ours | 0.698 | 0.754 | 0.754 | 0.740 | 0.679 | 0.754 | 0.754
", "type_str": "table", "text": "Comparison with state-of-the-art methods. The best results are in bold." }, "TABREF3": { "html": null, "num": null, "content": "
Methods | Macro F1 | Micro F1 | Accuracy
Ours | 0.698 | 0.754 | 0.754
w/o continuing pre-training | 0.639 | 0.678 | 0.678
w/o supervised transfer | 0.652 | 0.696 | 0.696
w/o child-tuning | 0.656 | 0.699 | 0.699
w/o late fusion | 0.664 | 0.664 | 0.664
w/o PLM | 0.254 | 0.312 | 0.312
", "type_str": "table", "text": "Results of the Top-5 teams participating in the EMO track for the post-evaluation. The best results are in bold." }, "TABREF4": { "html": null, "num": null, "content": "", "type_str": "table", "text": "Results of the ablation study." } } } }