{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:31.587383Z"
},
"title": "IUCL at WASSA 2022 Shared Task: A Text-Only Approach to Empathy and Emotion Detection",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indiana University",
"location": {}
},
"email": ""
},
{
"first": "Yingnan",
"middle": [],
"last": "Ju",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indiana University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Our system, IUCL, participated in the WASSA 2022 Shared Task on Empathy Detection and Emotion Classification. Our main goal in building this system is to investigate how the use of demographic attributes influences performance. Our results show that our text-only systems perform very competitively, ranking first in the empathy detection task, reaching an average Pearson correlation of 0.54, and second in the emotion classification task, reaching a Macro-F of 0.572. Our systems that use both text and demographic data are less competitive.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Our system, IUCL, participated in the WASSA 2022 Shared Task on Empathy Detection and Emotion Classification. Our main goal in building this system is to investigate how the use of demographic attributes influences performance. Our results show that our text-only systems perform very competitively, ranking first in the empathy detection task, reaching an average Pearson correlation of 0.54, and second in the emotion classification task, reaching a Macro-F of 0.572. Our systems that use both text and demographic data are less competitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Emotion classification has become increasingly important due to the large-scale deployment of artificial emotional intelligence. In various aspects of our lives, these systems now play a crucial role. For example, customer care solutions are now gradually shifting to a hybrid mode where an AI will try to solve the problem first, and only when it fails, will a human intervene. The WASSA 2022 Shared Task covers four different tasks on Empathy Detection, Emotion Classification, Personality Prediction, and Interpersonal Reactivity Index Prediction. We participated in task 1 on Empathy Detection and task 2 on Emotion Classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the existing emotion classification tasks are restricted to only using signals such as video, audio, or text, but seldom using demographic data, partly because such information is often not available. However, using demographic information also raises ethical concerns. In the current shared task, additional demographic information was made available, thus implicitly inviting participants to investigate the interaction between empathy, emotion, and demographic information. In this work, we will compare two different systems, one using demographic data and one that does not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our text-only system performs very competitively. In the evaluation, we ranked first in the empathy detection task and second in the emotion classification task 1 . Adding demographic information to the systems makes them less competitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is structured as follows: In section 2, we will discuss the related work on emotion classification. In section 3, we will present our two systems and discuss their differences. We will also discuss the challenges we encountered and how we addressed them. In section 4, we will present the evaluation results of our systems and the performance of our other systems. We will also discuss the implications of these results. In section 5 we will conclude and discuss future research efforts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though empathy detection is relatively new, a considerable amount of work has been carried out in the related areas of emotion detection (e.g. Acheampong et al., 2020; Canales and Mart\u00ednez-Barco, 2014) , sentiment analysis (e.g. Pestian et al., 2012; Kiritchenko et al., 2014) , and stance detection (e.g. K\u00fc\u00e7\u00fck and Can, 2020; AlDayel and Magdy, 2021; Liu et al., 2016) .",
"cite_spans": [
{
"start": 143,
"end": 167,
"text": "Acheampong et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 168,
"end": 201,
"text": "Canales and Mart\u00ednez-Barco, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 229,
"end": 250,
"text": "Pestian et al., 2012;",
"ref_id": "BIBREF16"
},
{
"start": 251,
"end": 276,
"text": "Kiritchenko et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 306,
"end": 326,
"text": "K\u00fc\u00e7\u00fck and Can, 2020;",
"ref_id": "BIBREF11"
},
{
"start": 327,
"end": 351,
"text": "AlDayel and Magdy, 2021;",
"ref_id": "BIBREF1"
},
{
"start": 352,
"end": 369,
"text": "Liu et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "After initial success using SVMs (e.g. Mullen and Collier, 2004) , BERT and other transformerbased models (Devlin et al., 2019; Liu et al., 2019) have become the mainstream architecture for handling these related tasks (e.g. Hoang et al., 2019; Liao et al., 2021) .",
"cite_spans": [
{
"start": 39,
"end": 64,
"text": "Mullen and Collier, 2004)",
"ref_id": "BIBREF15"
},
{
"start": 106,
"end": 127,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 128,
"end": 145,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 225,
"end": 244,
"text": "Hoang et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 245,
"end": 263,
"text": "Liao et al., 2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While most data sets use Twitter feed, the current task uses essays as data points, which are considerably longer than tweets, and thus necessitates procedures to mitigate problems arising from the length of the input sequence. In such settings, transformerbased models have evolved to handle longer input sequences by strategic truncating (Sun et al., 2019; Ding et al., 2020) , either taking the front, the end, or the middle part of the text or using a sliding window method.",
"cite_spans": [
{
"start": 340,
"end": 358,
"text": "(Sun et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 359,
"end": 377,
"text": "Ding et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Additionally, packages such as the one by Gu and Budhkar (2021) provide us with methods and implementations to incorporate categorical and numerical features. Categorical and numerical features can be treated as additional tokens, or they can be treated as a different modality and handled by co-attention (Tsai et al., 2019) .",
"cite_spans": [
{
"start": 306,
"end": 325,
"text": "(Tsai et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section we will describe our systems and how we approach the empathy prediction and emotion classification tasks with two different systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We use RoBERTa large as the base model for both empathy prediction and emotion classification tasks (Liu et al., 2019) . RoBERTa extends BERT by changing key hyper-parameters, such as much larger mini-batches and higher learning rates, removing the next-sentence pre-training objective, and using a byte-level Byte-Pair Encoding (BPE) (Sennrich et al., 2016) ) as the tokenizer. We finetuned the model on the training data of the shared task, and created two different fine-tuned models, a regression model for empathy and distress detection, and a classification model for emotion classification respectively. For the regression task, the regression model consists of a transformer model topped by a fully-connected layer. A single output neuron predicts the target in the fully-connected layer.",
"cite_spans": [
{
"start": 100,
"end": 118,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 335,
"end": 358,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.1"
},
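{
"text": "To make this architecture concrete, here is a minimal sketch of such a regression head, assuming the Hugging Face transformers and PyTorch APIs; the class name, CLS-token pooling, and MSE loss are illustrative assumptions rather than our exact training setup:\n\nimport torch\nimport torch.nn as nn\nfrom transformers import AutoModel, AutoTokenizer\n\nclass RobertaRegressor(nn.Module):\n    # RoBERTa encoder topped by a fully-connected layer whose single\n    # output neuron predicts the continuous empathy or distress score.\n    def __init__(self, model_name='roberta-large'):\n        super().__init__()\n        self.encoder = AutoModel.from_pretrained(model_name)\n        self.head = nn.Linear(self.encoder.config.hidden_size, 1)\n\n    def forward(self, input_ids, attention_mask):\n        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)\n        cls = out.last_hidden_state[:, 0]  # representation of the <s> token\n        return self.head(cls).squeeze(-1)  # one score per essay\n\ntokenizer = AutoTokenizer.from_pretrained('roberta-large')\nmodel = RobertaRegressor()\nbatch = tokenizer(['I feel for the people in the article.'], truncation=True,\n                  max_length=128, padding=True, return_tensors='pt')\nscores = model(batch['input_ids'], batch['attention_mask'])\nloss = nn.MSELoss()(scores, torch.tensor([5.2]))  # gold score on the 7-point scale",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.1"
},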
{
"text": "Since empathy prediction and personal distress level are combined into the same task, we developed one unified model that addressed both tasks. The architecture of the model remains the same while different training set can be used to fine-tune the model for the two tasks. This system obtained the best performance across both tasks. Details of the configurations for the models are listed in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 401,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "3.1"
},
{
"text": "One of the challenges in this task is handling long sequences. Most widely used data sets in the areas of emotion detection consist of collections of tweets as data points. This data set consists of essays, which are considerably longer than tweets. The essays are between 300 and 800 characters, with an average of 450 in the training set. Because of their quadratically increasing memory and time consumption, the transformer-based models are incapable of processing long texts (Ding et al., 2020) .",
"cite_spans": [
{
"start": 480,
"end": 499,
"text": "(Ding et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT for Long Sequences",
"sec_num": "3.2"
},
{
"text": "The results based on this strategy were higher than when using more complex hierarchical approaches that chunk the article, process the chunks, and assemble the results. However, in our task, our experiments show that cutting text (either from the beginning or the middle of the text) always results in lower scores than using the whole text. Another method of dealing with long sequences is to change the maximum sequence length that the model can receive. Our experiments for the second task show that the model with the maximum sequence length of 512 reaches the highest scores. In the empathy and distress prediction task, the best model uses 128 as the maximum sequence length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT for Long Sequences",
"sec_num": "3.2"
},
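{
"text": "A minimal sketch of these truncation strategies, assuming the Hugging Face tokenizer API; the helper and the strategy labels are ours, for illustration only:\n\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('roberta-large')\n\ndef truncate_ids(text, max_len=512, strategy='front'):\n    # Keep the front, the end, or the middle of an over-long essay,\n    # mirroring the truncation options discussed above.\n    ids = tokenizer.encode(text, add_special_tokens=False)\n    if len(ids) <= max_len:\n        return ids\n    if strategy == 'front':\n        return ids[:max_len]\n    if strategy == 'end':\n        return ids[-max_len:]\n    start = (len(ids) - max_len) // 2  # 'middle'\n    return ids[start:start + max_len]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT for Long Sequences",
"sec_num": "3.2"
},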
{
"text": "The data set also includes person-level demographic information including age (19-71), gender (1-5), ethnicity (1-6), income (0-1,000,000), and education level (2-7)). In some of our experiments, we added this demographic information to the text. Our goal was to determine whether such information was useful for the tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demographic Attributes as Features",
"sec_num": "3.3"
},
{
"text": "Since adding numerical or categorical information to a transformer-based model is a non-trivial task, we decided to follow Gu and Budhkar (2021) and group continuous values into bins and, in addition to the value, represent each bin with a unique word in a plain narrative sentence. For example, the added sentence for \"age of 25\" is \"Age is 25, young adult.\", and the added sentence for \"income of 150,000\" is \"Income is 150000, high income, rich\". Since the demographic information for education level, gender, and ethnicity is represented by numbers, and no explanation was provided, we had to guess the scale for education level, assuming that a higher number corresponds to a higher level. For gender and ethnicity, we used neutral words and unique proper nouns, not related to gender or ethnicity, i.e., chemical elements for gender and planets for ethnicity. For example, the added sentences for \"gender of 1 and ethnicity of 2\" are \"Gender is gender one, hydrogen. Ethnicity is ethnicity two, Venus.\". In theory, this would allow us to test whether there are correlations between certain gender/ethnicity categories and empathy/emotion, without accessing the gender and ethnicity biases inherent in RoBERTa (Bhardwaj et al., 2021; Bartl et al., 2020) However, in practice, the small size of the training data does not allow meaningful conclusions.",
"cite_spans": [
{
"start": 123,
"end": 144,
"text": "Gu and Budhkar (2021)",
"ref_id": "BIBREF8"
},
{
"start": 1215,
"end": 1238,
"text": "(Bhardwaj et al., 2021;",
"ref_id": null
},
{
"start": 1239,
"end": 1258,
"text": "Bartl et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Demographic Attributes as Features",
"sec_num": "3.3"
},
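{
"text": "A minimal sketch of this verbalization; the age and income bin boundaries shown are illustrative assumptions, not our exact thresholds:\n\ndef demographics_to_text(age, gender, ethnicity, income):\n    # Bin continuous values and verbalize the categorical codes with\n    # neutral proper nouns: chemical elements for gender, planets for ethnicity.\n    ELEMENTS = ['hydrogen', 'helium', 'lithium', 'beryllium', 'boron']\n    PLANETS = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn']\n    NUMBERS = ['one', 'two', 'three', 'four', 'five', 'six']\n    age_bin = 'young adult' if age < 36 else 'middle-aged' if age < 56 else 'senior'\n    income_bin = ('low income' if income < 40000 else\n                  'middle income' if income < 100000 else 'high income, rich')\n    return (f'Age is {age}, {age_bin}. '\n            f'Income is {income}, {income_bin}. '\n            f'Gender is gender {NUMBERS[gender - 1]}, {ELEMENTS[gender - 1]}. '\n            f'Ethnicity is ethnicity {NUMBERS[ethnicity - 1]}, {PLANETS[ethnicity - 1]}.')\n\n# demographics_to_text(25, 1, 2, 150000) reproduces the examples above:\n# 'Age is 25, young adult. Income is 150000, high income, rich.\n#  Gender is gender one, hydrogen. Ethnicity is ethnicity two, Venus.'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demographic Attributes as Features",
"sec_num": "3.3"
},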
{
"text": "It is important to point out that predicting empathy concern, personal distress, and emotion using demographic attributes at best introduces bias into machine learning systems, and at worst raises ethical concerns (Conway and O'Connor, 2016) . The demographic attributes used here are gender, education level, ethnicity, age, and income. This data set is small, so the correlation between these attributes and the prediction is not strong, but likely the model would be able to use them to make \"more accurate\" predictions if there were more data points available. The situation would be considerably more sensitive if actual categories had been given for the demographic information, thus allowing a transformer-based model to access the bias inherent in our society and thus in the training data for RoBERTa.",
"cite_spans": [
{
"start": 214,
"end": 241,
"text": "(Conway and O'Connor, 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Concerns",
"sec_num": "3.4"
},
{
"text": "In this section, we discuss our results for the two tasks, empathy detection and emotion classification. Table 2 shows the evaluation results for the empathy prediction task 2 . The task consists of predicting an empathy score and a distress score, both on a continuous 7 point scale.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
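{
"text": "For reference, a minimal sketch of how the two evaluation metrics can be computed, assuming the scipy and scikit-learn APIs and using toy values rather than actual system predictions:\n\nfrom scipy.stats import pearsonr\nfrom sklearn.metrics import f1_score\n\n# Task 1 metric: the Pearson correlation of each subtask, then their average.\ngold_emp, pred_emp = [4.5, 2.0, 6.1], [4.2, 2.8, 5.5]\ngold_dis, pred_dis = [3.0, 5.5, 1.2], [2.5, 5.0, 2.0]\navg_r = (pearsonr(gold_emp, pred_emp)[0] + pearsonr(gold_dis, pred_dis)[0]) / 2\n\n# Task 2 metric: macro-averaged F1 over the seven emotion labels.\ngold = ['anger', 'joy', 'sadness', 'neutral']\npred = ['anger', 'joy', 'fear', 'neutral']\nmacro_f1 = f1_score(gold, pred, average='macro')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},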
{
"text": "Our system, IUCL, ranks first in this task with an averaged Pearson correlation coefficient of 0.54. We achieved Pearson correlation coefficients of 0.537 and 0.543 respectively for empathy concern and personal distress prediction. The second best system ranks first in the empathy subtask but only fourth in the distress subtask. Another system of ours, IUCL-2, is the third best system. IUCL-2 is a variant of IUCL with changes in hyper-parameter choices: we increased the sequence length to 256 and decreased the batch size to 8. While this system performs best at detecting distress, it ranks third for detecting empathy. This shows how sensitive such a model is to hyper-parameter tuning. Although our IUCL system ranked second in both subtasks, it is the most balanced system, and according to the main evaluation metric the best performing overall system for task 1. In order to create simpler models, we also made a conscious effort to unify these two sub-tasks. This indicates that while our joint model is not optimal when only one of the subtasks is of interest, but the optimization across both subtasks results in a balanced system with reliable performance across both subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1: Empathy Prediction",
"sec_num": "4.1"
},
{
"text": "We then compared the system using only textual information with the system additionally using demographic information (IUCL/Dem). The scores for the latter system are considerably lower, even resulting in a negative correlation for distress. This shows that this information is detrimental to the given task. Table 3 shows the evaluation results for the emotion classification task 3 . The task consists of predicting a categorical emotion label from one of the following: anger, disgust, fear, joy, neutral, sadness, and surprise.",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Task 1: Empathy Prediction",
"sec_num": "4.1"
},
{
"text": "Our system, IUCL, ranks second in this task with a macro-averaged F1 of 0.572. Our macroaveraged precision of 0.599 is the highest reported score, but our macro recall of 0.555 is the 2nd highest. In this task, systems are performing relatively balanced across different evaluation metrics. A further analysis of the results will have to wait until a more detailed evaluation is released.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Emotion Classification",
"sec_num": "4.2"
},
{
"text": "We compared the results of a system trained only on the textual data with a system that was additionally given demographic information (IUCL/Dem). Again, we see a drop in performance, with all scores about 2-3 percent points lower than for the text-only system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Emotion Classification",
"sec_num": "4.2"
},
{
"text": "We noticed that during the training phase of the emotion detection task, our model performed best when we only fine-tuned for two epochs. This is also true for the empathy task when demographic information is used, though the results for this task are not satisfactory. Overall, we experimented with the number of epochs ranging between 2 and 50. The general trend is that the optimal number of epochs is low for this task. We hypothesize that this is due to the small training set (1 861 instances). This is a small sample given that the system needs to decide between seven emotions, and each emotion can be expressed very differently in language. It is likely that with more epochs, RoBERTa is fine-tuned to overfit to our training set and loses its ability to generalize.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Analysis",
"sec_num": "4.3"
},
{
"text": "The optimal number of epochs is higher for the empathy task, 25. This is likely due to the higher complexity of a regression task. As much as we believe that using demographic data raises ethical concerns, we still decided to explore using them as features to see how damaging the results may be. In both tasks, the demographic data does not increase system performance; on the contrary, results are considerably lower. For the emotion detection task, including demographic data decreased our macro F1 score from 0.585 to 0.544. For the empathy and distress task, including them was even more harmful: The Pearson correlation coefficients dropped from 0.537 to 0.295 and 0.543 to -0.047 respectively. This may again be due to the small size of the training data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Analysis",
"sec_num": "4.3"
},
{
"text": "Our system, IUCL, participated in the empathy detection and the emotion classification tasks of the WASSA 2022 shared task. Our text-only systems rank first in the empathy task and second in the emotion task. We come to the following conclusions: 1. There is a complex interaction between the size of the training data and the complexity of the task, classification for emotion detection and regression for empathy. Given a small training data set and a small set of labels, only minimal finetuning is required. 2. Using demographic attributes as features decreases performance given the small training set, and it may raise ethical concerns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We plan to further investigate the biases in this data set and their implications to both the machine learning systems and society in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We only consider submissions made before the shared task deadline",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These results are copied from the shared task leader board on 03/20/2022, considering only submissions made before the deadline, as no official report was released.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These results are copied from the shared task leader board on 03/20/2022, considering only submissions made before the deadline, as no official report was released.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is based on research partly supported by US National Science Foundation (NSF) Grant #2123618.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Text-based emotion 231 detection: Advances, challenges, and opportunities",
"authors": [
{
"first": "Francisca",
"middle": [
"Adoma"
],
"last": "Acheampong",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wenyu",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Nunoo-Mensah",
"suffix": ""
}
],
"year": 2020,
"venue": "Engineering Reports",
"volume": "2",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisca Adoma Acheampong, Chen Wenyu, and Henry Nunoo-Mensah. 2020. Text-based emotion 231 detection: Advances, challenges, and opportunities. Engineering Reports, 2(7):e12189.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Stance detection on social media: State of the art and trends",
"authors": [
{
"first": "Abeer",
"middle": [],
"last": "Aldayel",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Magdy",
"suffix": ""
}
],
"year": 2021,
"venue": "formation Processing & Management",
"volume": "58",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abeer AlDayel and Walid Magdy. 2021. Stance detec- tion on social media: State of the art and trends. In- formation Processing & Management, 58(4):102597.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unmasking contextual stereotypes: Measuring and mitigating bert's gender bias",
"authors": [
{
"first": "Marion",
"middle": [],
"last": "Bartl",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking contextual stereotypes: Measuring and mitigating bert's gender bias. In Proceedings of the Second Workshop on Gender Bias in Natural Lan- guage Processing, pages 1-16.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Navonil Majumder, and Soujanya Poria. 2021. Investigating gender bias in BERT. Cognitive Computation",
"authors": [
{
"first": "Rishabh",
"middle": [],
"last": "Bhardwaj",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "13",
"issue": "",
"pages": "1008--1018",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rishabh Bhardwaj, Navonil Majumder, and Soujanya Poria. 2021. Investigating gender bias in BERT. Cog- nitive Computation, 13(4):1008-1018.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Emotion detection from text: A survey",
"authors": [
{
"first": "Lea",
"middle": [],
"last": "Canales",
"suffix": ""
},
{
"first": "Patricio",
"middle": [],
"last": "Mart\u00ednez-Barco",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the workshop on natural language processing in the 5th information systems research working days (JISIC)",
"volume": "",
"issue": "",
"pages": "37--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lea Canales and Patricio Mart\u00ednez-Barco. 2014. Emo- tion detection from text: A survey. In Proceedings of the workshop on natural language processing in the 5th information systems research working days (JISIC), pages 37-43.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Social media, big data, and mental health: Current advances and ethical implications",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Conway",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "O'Connor",
"suffix": ""
}
],
"year": 2016,
"venue": "Current Opinion in Psychology",
"volume": "9",
"issue": "",
"pages": "77--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Conway and Daniel O'Connor. 2016. Social me- dia, big data, and mental health: Current advances and ethical implications. Current Opinion in Psy- chology, 9:77-82.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 4171-4186, Minneapolis, Minnesota.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CogLTX: Applying BERT to long texts. Advances in Neural Information Processing Systems",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "33",
"issue": "",
"pages": "12792--12804",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Ding, Chang Zhou, Hongxia Yang, and Jie Tang. 2020. CogLTX: Applying BERT to long texts. Ad- vances in Neural Information Processing Systems, 33:12792-12804.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A package for learning on tabular and text data with transformers",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Akshay",
"middle": [],
"last": "Budhkar",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Third Workshop on Multimodal Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "69--73",
"other_ids": {
"DOI": [
"10.18653/v1/2021.maiworkshop-1.10"
]
},
"num": null,
"urls": [],
"raw_text": "Ken Gu and Akshay Budhkar. 2021. A package for learning on tabular and text data with transformers. In Proceedings of the Third Workshop on Multimodal Artificial Intelligence, pages 69-73, Mexico City, Mexico.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Aspect-based sentiment analysis using BERT",
"authors": [
{
"first": "Mickel",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Oskar",
"middle": [
"Alija"
],
"last": "Bihorac",
"suffix": ""
},
{
"first": "Jacobo",
"middle": [],
"last": "Rouces",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "187--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mickel Hoang, Oskar Alija Bihorac, and Jacobo Rouces. 2019. Aspect-based sentiment analysis using BERT. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 187-196.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentiment analysis of short informal texts",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Saif M",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "50",
"issue": "",
"pages": "723--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko, Xiaodan Zhu, and Saif M Mo- hammad. 2014. Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research, 50:723-762.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Stance detection: A survey",
"authors": [
{
"first": "Dilek",
"middle": [],
"last": "K\u00fc\u00e7\u00fck",
"suffix": ""
},
{
"first": "Fazli",
"middle": [],
"last": "Can",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "53",
"issue": "1",
"pages": "1--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilek K\u00fc\u00e7\u00fck and Fazli Can. 2020. Stance detection: A survey. ACM Computing Surveys (CSUR), 53(1):1- 37.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An improved aspect-category sentiment analysis model for text sentiment analysis based on roBERTa",
"authors": [
{
"first": "Wenxiong",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Bi",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Xiuwen",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2021,
"venue": "Applied Intelligence",
"volume": "51",
"issue": "6",
"pages": "3522--3533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenxiong Liao, Bi Zeng, Xiuwen Yin, and Pengfei Wei. 2021. An improved aspect-category sentiment analysis model for text sentiment analysis based on roBERTa. Applied Intelligence, 51(6):3522-3533.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "IUCL: An ensemble model for stance detection in twitter",
"authors": [
{
"first": "Can",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bradford",
"middle": [],
"last": "Demarest",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Couture",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Dakota",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Haduong",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Kaufman",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Lamont",
"suffix": ""
},
{
"first": "Manan",
"middle": [],
"last": "Pancholi",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Steimel",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Can Liu, Wen Li, Bradford Demarest, Yue Chen, Sara Couture, Daniel Dakota, Nikita Haduong, Noah Kauf- man, Andrew Lamont, Manan Pancholi, Kenneth Steimel, and Sandra K\u00fcbler. 2016. IUCL: An en- semble model for stance detection in twitter. In Pro- ceedings of the International Workshop on Semantic Evaluation, San Diego, CA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sentiment analysis using Support Vector Machines with diverse information sources",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Mullen",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "412--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Mullen and Nigel Collier. 2004. Sentiment analy- sis using Support Vector Machines with diverse infor- mation sources. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 412-418.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sentiment analysis of suicide notes: A shared task",
"authors": [
{
"first": "John",
"middle": [
"P"
],
"last": "Pestian",
"suffix": ""
},
{
"first": "Pawel",
"middle": [],
"last": "Matykiewicz",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Linn-Gust",
"suffix": ""
},
{
"first": "Brett",
"middle": [],
"last": "South",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "K",
"middle": [
"Bretonnel"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hurdle",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2012,
"venue": "Biomedical Informatics Insights",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John P Pestian, Pawel Matykiewicz, Michelle Linn- Gust, Brett South, Ozlem Uzuner, Jan Wiebe, K Bre- tonnel Cohen, John Hurdle, and Christopher Brew. 2012. Sentiment analysis of suicide notes: A shared task. Biomedical Informatics Insights, 5:BII-S9042.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1715-1725.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "How to fine-tune BERT for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "China National Conference on Chinese Computational Linguistics",
"volume": "",
"issue": "",
"pages": "194--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? In China National Conference on Chinese Computa- tional Linguistics, pages 194-206. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multimodal transformer for unaligned multimodal language sequences",
"authors": [
{
"first": "Yao-Hung Hubert",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Shaojie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "J",
"middle": [
"Zico"
],
"last": "Kolter",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6558--6569",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1656"
]
},
"num": null,
"urls": [],
"raw_text": "Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for un- aligned multimodal language sequences. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6558-6569, Flo- rence, Italy.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"text": "",
"content": "<table><tr><td>: Optimized settings for task 1 and 2</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"text": "Official results ( Pearson correlations) for task 1: empathy detection.",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF4": {
"html": null,
"text": "TeamF1 macro R F1 micro R Acc. R Pr macro R Re macro R Pr micro R Re micro",
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td>R</td></tr><tr><td>BEST</td><td>0.585 1</td><td colspan=\"2\">0.661 1 0.661 1</td><td>0.594 2</td><td>0.584 1</td><td>0.661 1</td><td>0.661 1</td></tr><tr><td>IUCL</td><td>0.572 2</td><td colspan=\"2\">0.646 2 0.646 2</td><td>0.599 1</td><td>0.555 2</td><td>0.646 2</td><td>0.646 2</td></tr><tr><td>SINAI</td><td>0.553 3</td><td colspan=\"2\">0.636 3 0.636 3</td><td>0.589 4</td><td>0.535 4</td><td>0.636 3</td><td>0.636 3</td></tr><tr><td>IUCL/Dem</td><td>0.544</td><td>0.611</td><td>0.611</td><td>0.564</td><td>0.539</td><td>0.611</td><td>0.611</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"text": "Official results for task 2: emotion classification.",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}