Datasets:

Schema (field: type, observed range):

- bibtex_url: string (41–50 chars)
- bibtext: string (693–2.88k chars)
- abstract: string (0–2k chars)
- authors: sequence (1–45 items)
- title: string (21–199 chars)
- id: string (7–16 chars)
- type: string (2 classes)
- arxiv_id: string (0–10 chars)
- GitHub: sequence (1–1 items)
- paper_page: string (0–40 chars)
- n_linked_authors: int64 (-1 to 28)
- upvotes: int64 (-1 to 255)
- num_comments: int64 (-1 to 23)
- n_authors: int64 (-1 to 35)
- proceedings: string (38–47 chars)
- Models: sequence (0–57 items)
- Datasets: sequence (0–19 items)
- Spaces: sequence (0–100 items)
- paper_page_exists_pre_conf: int64 (0 to 1)
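The column listing above can be read as a row schema. A minimal sketch of validating one row against it, assuming the field names and types from the listing; the example row values are synthetic, not taken from the dataset:

```python
# Expected fields for one dataset row, derived from the schema listing above.
EXPECTED_FIELDS = {
    "bibtex_url": str, "bibtext": str, "abstract": str, "authors": list,
    "title": str, "id": str, "type": str, "arxiv_id": str, "GitHub": list,
    "paper_page": str, "n_linked_authors": int, "upvotes": int,
    "num_comments": int, "n_authors": int, "proceedings": str,
    "Models": list, "Datasets": list, "Spaces": list,
    "paper_page_exists_pre_conf": int,
}

def validate_row(row: dict) -> list:
    """Return a list of schema violations for a single dataset row."""
    errors = []
    for name, typ in EXPECTED_FIELDS.items():
        if name not in row:
            errors.append(f"missing field: {name}")
        elif not isinstance(row[name], typ):
            errors.append(f"{name}: expected {typ.__name__}")
    return errors
```

Note that several integer fields use -1 as a sentinel for "unknown", so a type check (not a range check) is the safe baseline here.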
https://aclanthology.org/2024.wassa-1.30.bib
@inproceedings{giorgi-etal-2024-findings, title = "Findings of {WASSA} 2024 Shared Task on Empathy and Personality Detection in Interactions", author = "Giorgi, Salvatore and Sedoc, Jo{\~a}o and Barriere, Valentin and Tafreshi, Shabnam", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.30", pages = "369--379", abstract = "This paper presents the results of the WASSA 2024 shared task on predicting empathy, emotion, and personality in conversations and reactions to news articles. Participating teams were given access to a new, unpublished extension of the WASSA 2023 shared task dataset. This task is both multi-level and multi-modal: data is available at the person, essay, dialog, and dialog-turn levels and includes formal (news articles) and informal text (essays and dialogs), self-report data (personality and distress), and third-party annotations (empathy and emotion). The shared task included a new focus on conversations between humans and LLM-based virtual agents which occur immediately after reading and reacting to the news articles. Participants were encouraged to explore the multi-level and multi-modal nature of this data. Participation was encouraged in four tracks: (i) predicting the perceived empathy at the dialog level, (ii) predicting turn-level empathy, emotion polarity, and emotion intensity in conversations, (iii) predicting state empathy and distress scores, and (iv) predicting personality. In total, 14 teams participated in the shared task. We summarize the methods and resources used by the participating teams.", }
This paper presents the results of the WASSA 2024 shared task on predicting empathy, emotion, and personality in conversations and reactions to news articles. Participating teams were given access to a new, unpublished extension of the WASSA 2023 shared task dataset. This task is both multi-level and multi-modal: data is available at the person, essay, dialog, and dialog-turn levels and includes formal (news articles) and informal text (essays and dialogs), self-report data (personality and distress), and third-party annotations (empathy and emotion). The shared task included a new focus on conversations between humans and LLM-based virtual agents which occur immediately after reading and reacting to the news articles. Participants were encouraged to explore the multi-level and multi-modal nature of this data. Participation was encouraged in four tracks: (i) predicting the perceived empathy at the dialog level, (ii) predicting turn-level empathy, emotion polarity, and emotion intensity in conversations, (iii) predicting state empathy and distress scores, and (iv) predicting personality. In total, 14 teams participated in the shared task. We summarize the methods and resources used by the participating teams.
[ "Giorgi, Salvatore", "Sedoc, Jo{\\~a}o", "Barriere, Valentin", "Tafreshi, Shabnam" ]
Findings of WASSA 2024 Shared Task on Empathy and Personality Detection in Interactions
wassa-1.30
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.30/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.31.bib
@inproceedings{kong-moon-2024-ru, title = "{RU} at {WASSA} 2024 Shared Task: Task-Aligned Prompt for Predicting Empathy and Distress", author = "Kong, Haein and Moon, Seonghyeon", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.31", pages = "380--384", abstract = "This paper describes our approach for the WASSA 2024 Shared Task on Empathy Detection and Emotion Classification and Personality Detection in Interactions at ACL 2024. We focused on Track 3: Empathy Prediction (EMP) which aims to predict the empathy and distress of writers based on their essays. Recently, LLMs have been used to detect the psychological status of the writers based on the texts. Previous studies showed that the performance of LLMs can be improved by designing prompts properly. While diverse approaches have been made, we focus on the fact that LLMs can have different nuances for psychological constructs such as empathy or distress to the specific task. In addition, people can express their empathy or distress differently according to the context. Thus, we tried to enhance the prediction performance of LLMs by proposing a new prompting strategy: Task-Aligned Prompt (TAP). This prompt consists of aligned definitions for empathy and distress to the original paper and the contextual information about the dataset. Our proposed prompt was tested using ChatGPT and GPT4o with zero-shot and few-shot settings and the performance was compared to the plain prompts. The results showed that the TAP-ChatGPT-zero-shot achieved the highest average Pearson correlation of empathy and distress on the EMP track.", }
This paper describes our approach for the WASSA 2024 Shared Task on Empathy Detection and Emotion Classification and Personality Detection in Interactions at ACL 2024. We focused on Track 3: Empathy Prediction (EMP), which aims to predict the empathy and distress of writers based on their essays. Recently, LLMs have been used to detect the psychological status of writers based on their texts. Previous studies showed that the performance of LLMs can be improved by designing prompts properly. While diverse approaches have been explored, we focus on the fact that LLMs can interpret psychological constructs such as empathy or distress with nuances that differ from the specific task. In addition, people can express their empathy or distress differently according to the context. Thus, we tried to enhance the prediction performance of LLMs by proposing a new prompting strategy: Task-Aligned Prompt (TAP). This prompt consists of definitions of empathy and distress aligned to the original paper, together with contextual information about the dataset. Our proposed prompt was tested using ChatGPT and GPT-4o in zero-shot and few-shot settings, and its performance was compared to plain prompts. The results showed that TAP-ChatGPT-zero-shot achieved the highest average Pearson correlation of empathy and distress on the EMP track.
[ "Kong, Haein", "Moon, Seonghyeon" ]
RU at WASSA 2024 Shared Task: Task-Aligned Prompt for Predicting Empathy and Distress
wassa-1.31
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.31/
[]
[]
[]
0
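The abstract above describes a Task-Aligned Prompt (TAP) that pairs construct definitions with dataset context. A hypothetical sketch of how such a prompt might be assembled; the definition and context strings below are placeholders, not the paper's actual wording:

```python
def build_task_aligned_prompt(essay: str) -> str:
    # Placeholder definitions, standing in for the construct definitions
    # the TAP aligns to the original dataset paper.
    definitions = (
        "Empathy: feeling with the person described in the article.\n"
        "Distress: self-focused discomfort caused by the article."
    )
    # Placeholder dataset context: essays are reactions to news articles.
    context = (
        "The writer read a news article about people in need and then "
        "wrote a short essay reacting to it."
    )
    return (
        f"{definitions}\n\n{context}\n\n"
        f"Essay:\n{essay}\n\n"
        "Rate the writer's empathy and distress on a 1-7 scale."
    )
```

The assembled string would then be sent to the LLM in a zero-shot or few-shot setup.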
https://aclanthology.org/2024.wassa-1.32.bib
@inproceedings{li-etal-2024-chinchunmei, title = "Chinchunmei at {WASSA} 2024 Empathy and Personality Shared Task: Boosting {LLM}{'}s Prediction with Role-play Augmentation and Contrastive Reasoning Calibration", author = "Li, Tian and Rusnachenko, Nicolay and Liang, Huizhi", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.32", pages = "385--392", abstract = "This paper presents the Chinchunmei team{'}s contributions to the WASSA2024 Shared-Task 1: Empathy Detection and Emotion Classification. We participated in Tracks 1, 2, and 3 to predict empathetic scores based on dialogue, article, and essay content. We choose Llama3-8b-instruct as our base model. We developed three supervised fine-tuning schemes: standard prediction, role-play, and contrastive prediction, along with an innovative scoring calibration method called Contrastive Reasoning Calibration during inference. Pearson Correlation was used as the evaluation metric across all tracks. For Track 1, we achieved 0.43 on the devset and 0.17 on the testset. For Track 2 emotion, empathy, and polarity labels, we obtained 0.64, 0.66, and 0.79 on the devset and 0.61, 0.68, and 0.58 on the testset. For Track 3 empathy and distress labels, we got 0.64 and 0.56 on the devset and 0.33 and 0.35 on the testset.", }
This paper presents the Chinchunmei team's contributions to the WASSA 2024 Shared Task 1: Empathy Detection and Emotion Classification. We participated in Tracks 1, 2, and 3 to predict empathetic scores based on dialogue, article, and essay content. We chose Llama3-8b-instruct as our base model. We developed three supervised fine-tuning schemes: standard prediction, role-play, and contrastive prediction, along with an innovative scoring calibration method called Contrastive Reasoning Calibration, applied during inference. Pearson correlation was used as the evaluation metric across all tracks. For Track 1, we achieved 0.43 on the dev set and 0.17 on the test set. For the Track 2 emotion, empathy, and polarity labels, we obtained 0.64, 0.66, and 0.79 on the dev set and 0.61, 0.68, and 0.58 on the test set. For the Track 3 empathy and distress labels, we got 0.64 and 0.56 on the dev set and 0.33 and 0.35 on the test set.
[ "Li, Tian", "Rusnachenko, Nicolay", "Liang, Huizhi" ]
Chinchunmei at WASSA 2024 Empathy and Personality Shared Task: Boosting LLM's Prediction with Role-play Augmentation and Contrastive Reasoning Calibration
wassa-1.32
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.32/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.33.bib
@inproceedings{numanoglu-etal-2024-empathify, title = "Empathify at {WASSA} 2024 Empathy and Personality Shared Task: Contextualizing Empathy with a {BERT}-Based Context-Aware Approach for Empathy Detection", author = {Numano{\u{g}}lu, Arda and Ate{\c{s}}, S{\"u}leyman and Cicekli, Nihan and K{\"u}{\c{c}}{\"u}k, Dilek}, editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.33", pages = "393--398", abstract = "Empathy detection from textual data is a complex task that requires an understanding of both the content and context of the text. This study presents a BERT-based context-aware approach to enhance empathy detection in conversations and essays. We participated in the WASSA 2024 Shared Task, focusing on two tracks: empathy and emotion prediction in conversations (CONV-turn) and empathy and distress prediction in essays (EMP). Our approach leverages contextual information by incorporating related articles and emotional characteristics as additional inputs, using BERT-based Siamese (parallel) architecture. Our experiments demonstrated that using article summaries as context significantly improves performance, with the parallel BERT approach outperforming the traditional method of concatenating inputs with the {`}[SEP]{`} token. These findings highlight the importance of context-awareness in empathy detection and pave the way for future improvements in the sensitivity and accuracy of such systems.", }
Empathy detection from textual data is a complex task that requires an understanding of both the content and context of the text. This study presents a BERT-based, context-aware approach to enhance empathy detection in conversations and essays. We participated in the WASSA 2024 Shared Task, focusing on two tracks: empathy and emotion prediction in conversations (CONV-turn) and empathy and distress prediction in essays (EMP). Our approach leverages contextual information by incorporating related articles and emotional characteristics as additional inputs, using a BERT-based Siamese (parallel) architecture. Our experiments demonstrated that using article summaries as context significantly improves performance, with the parallel BERT approach outperforming the traditional method of concatenating inputs with the `[SEP]` token. These findings highlight the importance of context-awareness in empathy detection and pave the way for future improvements in the sensitivity and accuracy of such systems.
[ "Numano{\\u{g}}lu, Arda", "Ate{\\c{s}}, S{\\\"u}leyman", "Cicekli, Nihan", "K{\\\"u}{\\c{c}}{\\\"u}k, Dilek" ]
Empathify at WASSA 2024 Empathy and Personality Shared Task: Contextualizing Empathy with a BERT-Based Context-Aware Approach for Empathy Detection
wassa-1.33
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.33/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.34.bib
@inproceedings{huang-liang-2024-zhenmei, title = "Zhenmei at {WASSA}-2024 Empathy and Personality Shared Track 2 Incorporating {P}earson Correlation Coefficient as a Regularization Term for Enhanced Empathy and Emotion Prediction in Conversational Turns", author = "Huang, Liting and Liang, Huizhi", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.34", pages = "399--403", abstract = "In the realm of conversational empathy and emotion prediction, emotions are frequently categorized into multiple levels. This study seeks to enhance the performance of emotion prediction models by incorporating the Pearson correlation coefficient as a regularization term within the loss function. This regularization approach ensures closer alignment between predicted and actual emotion levels, mitigating extreme predictions and resulting in smoother and more consistent outputs. Such outputs are essential for capturing the subtle transitions between continuous emotion levels. Through experimental comparisons between models with and without Pearson regularization, our findings demonstrate that integrating the Pearson correlation coefficient significantly boosts model performance, yielding higher correlation scores and more accurate predictions. Our system officially ranked 9th at the Track 2: CONV-turn. The code for our model can be found at Link .", }
In the realm of conversational empathy and emotion prediction, emotions are frequently categorized into multiple levels. This study seeks to enhance the performance of emotion prediction models by incorporating the Pearson correlation coefficient as a regularization term within the loss function. This regularization approach ensures closer alignment between predicted and actual emotion levels, mitigating extreme predictions and resulting in smoother and more consistent outputs. Such outputs are essential for capturing the subtle transitions between continuous emotion levels. Through experimental comparisons between models with and without Pearson regularization, our findings demonstrate that integrating the Pearson correlation coefficient significantly boosts model performance, yielding higher correlation scores and more accurate predictions. Our system officially ranked 9th in Track 2 (CONV-turn). The code for our model can be found at Link.
[ "Huang, Liting", "Liang, Huizhi" ]
Zhenmei at WASSA-2024 Empathy and Personality Shared Track 2 Incorporating Pearson Correlation Coefficient as a Regularization Term for Enhanced Empathy and Emotion Prediction in Conversational Turns
wassa-1.34
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.34/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.35.bib
@inproceedings{furniturewala-jaidka-2024-empaths, title = "Empaths at {WASSA} 2024 Empathy and Personality Shared Task: Turn-Level Empathy Prediction Using Psychological Indicators", author = "Furniturewala, Shaz and Jaidka, Kokil", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.35", pages = "404--411", abstract = "For the WASSA 2024 Empathy and Personality Prediction Shared Task, we propose a novel turn-level empathy detection method that decomposes empathy into six psychological indicators: Emotional Language, Perspective-Taking, Sympathy and Compassion, Extroversion, Openness, and Agreeableness. A pipeline of text enrichment using a Large Language Model (LLM) followed by DeBERTA fine-tuning demonstrates a significant improvement in the Pearson Correlation Coefficient and F1 scores for empathy detection, highlighting the effectiveness of our approach. Our system officially ranked 7th at the CONV-turn track.", }
For the WASSA 2024 Empathy and Personality Prediction Shared Task, we propose a novel turn-level empathy detection method that decomposes empathy into six psychological indicators: Emotional Language, Perspective-Taking, Sympathy and Compassion, Extroversion, Openness, and Agreeableness. A pipeline of text enrichment using a Large Language Model (LLM) followed by DeBERTa fine-tuning demonstrates a significant improvement in the Pearson correlation coefficient and F1 scores for empathy detection, highlighting the effectiveness of our approach. Our system officially ranked 7th in the CONV-turn track.
[ "Furniturewala, Shaz", "Jaidka, Kokil" ]
Empaths at WASSA 2024 Empathy and Personality Shared Task: Turn-Level Empathy Prediction Using Psychological Indicators
wassa-1.35
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.35/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.36.bib
@inproceedings{osei-brefo-liang-2024-nu, title = "{NU} at {WASSA} 2024 Empathy and Personality Shared Task: Enhancing Personality Predictions with Knowledge Graphs; A Graphical Neural Network and {L}ight{GBM} Ensemble Approach", author = "Osei-Brefo, Emmanuel and Liang, Huizhi", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.36", pages = "412--419", abstract = "This paper proposes a novel ensemble approach that combines Graph Neural Networks (GNNs) and LightGBM to enhance personality prediction based on the personality Big 5 model. By integrating BERT embeddings from user essays with knowledge graph-derived embeddings, our method accurately captures rich semantic and relational information. Additionally, a special loss function that combines Mean Squared Error (MSE), Pearson correlation loss, and contrastive loss to improve model performance is introduced. The proposed ensemble model, made of Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and LightGBM, demonstrates superior performance over other models, with significant improvements in prediction accuracy for the Big Five personality traits achieved. Our system officially ranked $2^{nd}$ at the Track 4: PER track.", }
This paper proposes a novel ensemble approach that combines Graph Neural Networks (GNNs) and LightGBM to enhance personality prediction based on the Big Five personality model. By integrating BERT embeddings from user essays with knowledge graph-derived embeddings, our method accurately captures rich semantic and relational information. Additionally, a special loss function that combines Mean Squared Error (MSE), Pearson correlation loss, and contrastive loss is introduced to improve model performance. The proposed ensemble model, composed of Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and LightGBM, demonstrates superior performance over other models, achieving significant improvements in prediction accuracy for the Big Five personality traits. Our system officially ranked 2nd in Track 4 (PER).
[ "Osei-Brefo, Emmanuel", "Liang, Huizhi" ]
NU at WASSA 2024 Empathy and Personality Shared Task: Enhancing Personality Predictions with Knowledge Graphs; A Graphical Neural Network and LightGBM Ensemble Approach
wassa-1.36
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.36/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.37.bib
@inproceedings{chevi-aji-2024-daisy, title = "Daisy at {WASSA} 2024 Empathy and Personality Shared Task: A Quick Exploration on Emotional Pattern of Empathy and Distress", author = "Chevi, Rendi and Aji, Alham", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.37", pages = "420--424", abstract = "When we encountered upsetting or tragic situations involving other people, we might feel certain emotions that are congruent, though not necessarily identical, to what that person might went through. These kind of vicarious emotions are what defined empathy and distress, they can be seen as a form of emotional response to other people in need. In this paper, we describe our participation in WASSA 2024 Shared Task 3 in predicting writer{'}s level of empathy and distress from their personal essays. We approach this task by assuming one{'}s level of empathy and distress can be revealed from the emotional patterns within their essay. By extracting the emotional patterns from essays via an emotion classifier, we regress the empathy and distress levels from these patterns. Through correlation and model explainability analysis, we found that there are similar set of emotions, such as sadness or disappointment, and distinct set of emotions, such as anger or approval, that might describe the writer{'}s level of empathy and distress. We hope that our approach and findings could serve as a basis for future work that try to model and explain empathy and distress from emotional patterns.", }
When we encounter upsetting or tragic situations involving other people, we may feel emotions that are congruent, though not necessarily identical, to what that person went through. These kinds of vicarious emotions are what define empathy and distress; they can be seen as a form of emotional response to other people in need. In this paper, we describe our participation in WASSA 2024 Shared Task 3, predicting a writer's level of empathy and distress from their personal essays. We approach this task by assuming that one's level of empathy and distress can be revealed through the emotional patterns within their essay. By extracting the emotional patterns from essays via an emotion classifier, we regress the empathy and distress levels from these patterns. Through correlation and model explainability analysis, we found that there is a shared set of emotions, such as sadness or disappointment, and distinct sets of emotions, such as anger or approval, that may describe the writer's level of empathy and distress. We hope that our approach and findings can serve as a basis for future work that tries to model and explain empathy and distress from emotional patterns.
[ "Chevi, Rendi", "Aji, Alham" ]
Daisy at WASSA 2024 Empathy and Personality Shared Task: A Quick Exploration on Emotional Pattern of Empathy and Distress
wassa-1.37
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.37/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.38.bib
@inproceedings{churina-etal-2024-wassa, title = "{WASSA} 2024 Shared Task: Enhancing Emotional Intelligence with Prompts", author = "Churina, Svetlana and Verma, Preetika and [email protected], [email protected]", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.38", pages = "425--429", abstract = "This paper describes the system for the last-min-submittion team in WASSA-2024 Shared Task 1:Empathy Detection and Emotion Classification. This task aims at developing models which can predict the empathy, emotion, and emotional polarity. This system achieved relatively goodresults on the competition{'}s official leaderboard.The code of this system is available here.", }
This paper describes the system of the last-min-submittion team for WASSA 2024 Shared Task 1: Empathy Detection and Emotion Classification. This task aims at developing models which can predict empathy, emotion, and emotional polarity. This system achieved relatively good results on the competition's official leaderboard. The code for this system is available here.
[ "Churina, Svetlana", "Verma, Preetika", "[email protected], [email protected]" ]
WASSA 2024 Shared Task: Enhancing Emotional Intelligence with Prompts
wassa-1.38
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.38/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.39.bib
@inproceedings{yang-etal-2024-hyy33, title = "hyy33 at {WASSA} 2024 Empathy and Personality Shared Task: Using the {C}ombined{L}oss and {FGM} for Enhancing {BERT}-based Models in Emotion and Empathy Prediction from Conversation Turns", author = "Yang, Huiyu and Huang, Liting and Li, Tian and Rusnachenko, Nicolay and Liang, Huizhi", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.39", pages = "430--434", abstract = "This paper presents our participation to the WASSA 2024 Shared Task on Empathy Detection and Emotion Classification and Personality Detection in Interactions. We focus on Track 2: Empathy and Emotion Prediction in Conversations Turns (CONV-turn), which consists of predicting the perceived empathy, emotion polarity and emotion intensity at turn level in a conversation. In the method, we conduct BERT and DeBERTa based finetuning, implement the CombinedLoss which consists of a structured contrastive loss and Pearson loss, adopt adversarial training using Fast Gradient Method (FGM). This method achieved Pearson correlation of 0.581 for Emotion,0.644 for Emotional Polarity and 0.544 for Empathy on the test set, with the average value of 0.590 which ranked 4th among all teams. After submission to WASSA 2024 competition, we further introduced the segmented mix-up for data augmentation, boosting for ensemble and regression experiments, which yield even better results: 0.6521 for Emotion, 0.7376 for EmotionalPolarity, 0.6326 for Empathy in Pearson correlation on the development set. The implementation and fine-tuned models are publicly-available at https://github.com/hyy-33/hyy33-WASSA-2024-Track-2.", }
This paper presents our participation in the WASSA 2024 Shared Task on Empathy Detection and Emotion Classification and Personality Detection in Interactions. We focus on Track 2: Empathy and Emotion Prediction in Conversation Turns (CONV-turn), which consists of predicting the perceived empathy, emotion polarity, and emotion intensity at the turn level in a conversation. In our method, we conduct BERT- and DeBERTa-based fine-tuning, implement the CombinedLoss, which consists of a structured contrastive loss and a Pearson loss, and adopt adversarial training using the Fast Gradient Method (FGM). This method achieved Pearson correlations of 0.581 for Emotion, 0.644 for Emotional Polarity, and 0.544 for Empathy on the test set, with an average value of 0.590, which ranked 4th among all teams. After submission to the WASSA 2024 competition, we further introduced segmented mix-up for data augmentation, boosting for ensembling, and regression experiments, which yielded even better results: 0.6521 for Emotion, 0.7376 for Emotional Polarity, and 0.6326 for Empathy in Pearson correlation on the development set. The implementation and fine-tuned models are publicly available at https://github.com/hyy-33/hyy33-WASSA-2024-Track-2.
[ "Yang, Huiyu", "Huang, Liting", "Li, Tian", "Rusnachenko, Nicolay", "Liang, Huizhi" ]
hyy33 at WASSA 2024 Empathy and Personality Shared Task: Using the CombinedLoss and FGM for Enhancing BERT-based Models in Emotion and Empathy Prediction from Conversation Turns
wassa-1.39
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.39/
[]
[]
[]
0
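The hyy33 abstract above uses the Fast Gradient Method (FGM) for adversarial training: perturb the input embeddings a small step along the normalized gradient of the loss and train on the perturbed inputs as well. A minimal, framework-free sketch of the perturbation step itself (in practice the gradient comes from backprop through the encoder's embedding layer):

```python
import math

def fgm_perturb(embedding, grad, eps=1.0):
    # FGM step: x_adv = x + eps * g / ||g||, i.e. move the embedding a
    # step of fixed L2 size eps in the direction that increases the loss.
    norm = math.sqrt(sum(g * g for g in grad))
    if norm == 0:
        # Zero gradient: no adversarial direction, return the input unchanged.
        return list(embedding)
    return [e + eps * g / norm for e, g in zip(embedding, grad)]
```

During training, the loss on `fgm_perturb(...)` outputs is added to the clean loss before the optimizer step, which is what makes the model robust to small embedding-space perturbations.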
https://aclanthology.org/2024.wassa-1.40.bib
@inproceedings{frick-steinebach-2024-fraunhofer, title = "Fraunhofer {SIT} at {WASSA} 2024 Empathy and Personality Shared Task: Use of Sentiment Transformers and Data Augmentation With Fuzzy Labels to Predict Emotional Reactions in Conversations and Essays", author = "Frick, Raphael and Steinebach, Martin", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.40", pages = "435--440", abstract = "Predicting emotions and emotional reactions during conversations and within texts poses challenges, even for advanced AI systems. The second iteration of the WASSA Empathy and Personality Shared Task focuses on creating innovative models that can anticipate emotional responses to news articles containing harmful content across four tasks. In this paper, we introduce our Fraunhofer SIT team{'}s solutions for three of the tasks: Task 1 (CONVD), Task 2 (CONVT), and Task 3 (EMP). Our solutions involve combining LLM-driven data augmentation with fuzzy labels and fine-tuning RoBERTa models pre-trained on sentiment classification tasks to solve the regression problems. In the competition, our solutions achieved first place in Task 1, X in Task 2, and third place in Task 3.", }
Predicting emotions and emotional reactions during conversations and within texts poses challenges, even for advanced AI systems. The second iteration of the WASSA Empathy and Personality Shared Task focuses on creating innovative models that can anticipate emotional responses to news articles containing harmful content across four tasks. In this paper, we introduce our Fraunhofer SIT team{'}s solutions for three of the tasks: Task 1 (CONVD), Task 2 (CONVT), and Task 3 (EMP). Our solutions involve combining LLM-driven data augmentation with fuzzy labels and fine-tuning RoBERTa models pre-trained on sentiment classification tasks to solve the regression problems. In the competition, our solutions achieved first place in Task 1, X in Task 2, and third place in Task 3.
[ "Frick, Raphael", "Steinebach, Martin" ]
Fraunhofer SIT at WASSA 2024 Empathy and Personality Shared Task: Use of Sentiment Transformers and Data Augmentation With Fuzzy Labels to Predict Emotional Reactions in Conversations and Essays
wassa-1.40
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.40/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.41.bib
@inproceedings{lee-etal-2024-empatheticfig, title = "{E}mpathetic{FIG} at {WASSA} 2024 Empathy and Personality Shared Task: Predicting Empathy and Emotion in Conversations with Figurative Language", author = "Lee, Gyeongeun and Wang, Zhu and Ravi, Sathya N. and Parde, Natalie", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.41", pages = "441--447", abstract = "Recent research highlights the importance of figurative language as a tool for amplifying emotional impact. In this paper, we dive deeper into this phenomenon and outline our methods for \textit{Track 1, Empathy Prediction in Conversations} (CONV-dialog) and \textit{Track 2, Empathy and Emotion Prediction in Conversation Turns} (CONV-turn) of the WASSA 2024 shared task. We leveraged transformer-based large language models augmented with figurative language prompts, specifically idioms, metaphors and hyperbole, that were selected and trained for each track to optimize system performance. For Track 1, we observed that a fine-tuned BERT with metaphor and hyperbole features outperformed other models on the development set. For Track 2, DeBERTa, with different combinations of figurative language prompts, performed well for different prediction tasks. Our method provides a novel framework for understanding how figurative language influences emotional perception in conversational contexts. Our system officially ranked 4th in the 1st track and 3rd in the 2nd track.", }
Recent research highlights the importance of figurative language as a tool for amplifying emotional impact. In this paper, we dive deeper into this phenomenon and outline our methods for \textit{Track 1, Empathy Prediction in Conversations} (CONV-dialog) and \textit{Track 2, Empathy and Emotion Prediction in Conversation Turns} (CONV-turn) of the WASSA 2024 shared task. We leveraged transformer-based large language models augmented with figurative language prompts, specifically idioms, metaphors and hyperbole, that were selected and trained for each track to optimize system performance. For Track 1, we observed that a fine-tuned BERT with metaphor and hyperbole features outperformed other models on the development set. For Track 2, DeBERTa, with different combinations of figurative language prompts, performed well for different prediction tasks. Our method provides a novel framework for understanding how figurative language influences emotional perception in conversational contexts. Our system officially ranked 4th in the 1st track and 3rd in the 2nd track.
[ "Lee, Gyeongeun", "Wang, Zhu", "Ravi, Sathya N.", "Parde, Natalie" ]
EmpatheticFIG at WASSA 2024 Empathy and Personality Shared Task: Predicting Empathy and Emotion in Conversations with Figurative Language
wassa-1.41
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.41/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.42.bib
@inproceedings{pereira-etal-2024-context, title = "{C}on{T}ext at {WASSA} 2024 Empathy and Personality Shared Task: History-Dependent Embedding Utterance Representations for Empathy and Emotion Prediction in Conversations", author = "Pereira, Patr{\'\i}cia and Moniz, Helena and Carvalho, Joao Paulo", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.42", pages = "448--453", abstract = "Empathy and emotion prediction are key components in the development of effective and empathetic agents, amongst several other applications. The WASSA shared task on empathy and emotion prediction in interactions presents an opportunity to benchmark approaches to these tasks. Appropriately selecting and representing the historical context is crucial in the modelling of empathy and emotion in conversations. In our submissions, we model empathy, emotion polarity and emotion intensity of each utterance in a conversation by feeding the utterance to be classified together with its conversational context, i.e., a certain number of previous conversational turns, as input to an encoder Pre-trained Language Model (PLM), to which we append a regression head for prediction. We also model perceived counterparty empathy of each interlocutor by feeding all utterances from the conversation and a token identifying the interlocutor for whom we are predicting empathy. Our system officially ranked 1st in the CONV-turn track and 2nd in the CONV-dialog track.", }
Empathy and emotion prediction are key components in the development of effective and empathetic agents, amongst several other applications. The WASSA shared task on empathy and emotion prediction in interactions presents an opportunity to benchmark approaches to these tasks. Appropriately selecting and representing the historical context is crucial in the modelling of empathy and emotion in conversations. In our submissions, we model empathy, emotion polarity and emotion intensity of each utterance in a conversation by feeding the utterance to be classified together with its conversational context, i.e., a certain number of previous conversational turns, as input to an encoder Pre-trained Language Model (PLM), to which we append a regression head for prediction. We also model perceived counterparty empathy of each interlocutor by feeding all utterances from the conversation and a token identifying the interlocutor for whom we are predicting empathy. Our system officially ranked 1st in the CONV-turn track and 2nd in the CONV-dialog track.
[ "Pereira, Patr{\\'\\i}cia", "Moniz, Helena", "Carvalho, Joao Paulo" ]
ConText at WASSA 2024 Empathy and Personality Shared Task: History-Dependent Embedding Utterance Representations for Empathy and Emotion Prediction in Conversations
wassa-1.42
Poster
2407.03818
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.42/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.43.bib
@inproceedings{maladry-etal-2024-findings, title = "Findings of the {WASSA} 2024 {EXALT} shared task on Explainability for Cross-Lingual Emotion in Tweets", author = "Maladry, Aaron and Singh, Pranaydeep and Lefever, Els", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.43", pages = "454--463", abstract = "This paper presents a detailed description and results of the first shared task on explainability for cross-lingual emotion in tweets. Given a tweet in one of the five target languages (Dutch, Russian, Spanish, English, and French), systems should predict the correct emotion label (Task 1), as well as the words triggering the predicted emotion label (Task 2). The tweets were collected based on a list of stop words to prevent topical or emotional bias and were subsequently manually annotated. For both tasks, only a training corpus for English was provided, obliging participating systems to design cross-lingual approaches. Our shared task received submissions from 14 teams for the emotion detection task and from 6 teams for the trigger word detection task. The highest macro F1-scores obtained for both tasks are respectively 0.629 and 0.616, demonstrating that cross-lingual emotion detection is still a challenging task.", }
This paper presents a detailed description and results of the first shared task on explainability for cross-lingual emotion in tweets. Given a tweet in one of the five target languages (Dutch, Russian, Spanish, English, and French), systems should predict the correct emotion label (Task 1), as well as the words triggering the predicted emotion label (Task 2). The tweets were collected based on a list of stop words to prevent topical or emotional bias and were subsequently manually annotated. For both tasks, only a training corpus for English was provided, obliging participating systems to design cross-lingual approaches. Our shared task received submissions from 14 teams for the emotion detection task and from 6 teams for the trigger word detection task. The highest macro F1-scores obtained for both tasks are respectively 0.629 and 0.616, demonstrating that cross-lingual emotion detection is still a challenging task.
[ "Maladry, Aaron", "Singh, Pranaydeep", "Lefever, Els" ]
Findings of the WASSA 2024 EXALT shared task on Explainability for Cross-Lingual Emotion in Tweets
wassa-1.43
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.43/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.44.bib
@inproceedings{kadiyala-2024-cross, title = "Cross-lingual Emotion Detection through Large Language Models", author = "Kadiyala, Ram Mohan Rao", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.44", pages = "464--469", abstract = "This paper presents a detailed system description of our entry, which finished 1st with a large lead at WASSA 2024 Task 2, focused on cross-lingual emotion detection. We utilized a combination of large language models (LLMs) and their ensembles to effectively understand and categorize emotions across different languages. Our approach not only outperformed other submissions by a large margin, but also demonstrated the strength of integrating multiple models to enhance performance. Additionally, we conducted a thorough comparison of the benefits and limitations of each model used. An error analysis is included along with suggested areas for future improvement. This paper aims to offer a clear and comprehensive understanding of advanced techniques in emotion detection, making it accessible even to those new to the field.", }
This paper presents a detailed system description of our entry, which finished 1st with a large lead at WASSA 2024 Task 2, focused on cross-lingual emotion detection. We utilized a combination of large language models (LLMs) and their ensembles to effectively understand and categorize emotions across different languages. Our approach not only outperformed other submissions by a large margin, but also demonstrated the strength of integrating multiple models to enhance performance. Additionally, we conducted a thorough comparison of the benefits and limitations of each model used. An error analysis is included along with suggested areas for future improvement. This paper aims to offer a clear and comprehensive understanding of advanced techniques in emotion detection, making it accessible even to those new to the field.
[ "Kadiyala, Ram Mohan Rao" ]
Cross-lingual Emotion Detection through Large Language Models
wassa-1.44
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.44/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.45.bib
@inproceedings{wang-etal-2024-knowledge, title = "Knowledge Distillation from Monolingual to Multilingual Models for Intelligent and Interpretable Multilingual Emotion Detection", author = "Wang, Yuqi and Wang, Zimu and Han, Nijia and Wang, Wei and Chen, Qi and Zhang, Haiyang and Pan, Yushan and Nguyen, Anh", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.45", pages = "470--475", abstract = "Emotion detection from text is a crucial task in understanding natural language with wide-ranging applications. Existing approaches for multilingual emotion detection from text face challenges with data scarcity across many languages and a lack of interpretability. We propose a novel method that leverages both monolingual and multilingual pre-trained language models to improve performance and interpretability. Our approach involves 1) training a high-performing English monolingual model in parallel with a multilingual model and 2) using knowledge distillation to transfer the emotion detection capabilities from the monolingual teacher to the multilingual student model. Experiments on a multilingual dataset demonstrate significant performance gains for refined multilingual models like XLM-RoBERTa and E5 after distillation. Furthermore, our approach enhances interpretability by enabling better identification of emotion-trigger words. Our work presents a promising direction for building accurate, robust and explainable multilingual emotion detection systems.", }
Emotion detection from text is a crucial task in understanding natural language with wide-ranging applications. Existing approaches for multilingual emotion detection from text face challenges with data scarcity across many languages and a lack of interpretability. We propose a novel method that leverages both monolingual and multilingual pre-trained language models to improve performance and interpretability. Our approach involves 1) training a high-performing English monolingual model in parallel with a multilingual model and 2) using knowledge distillation to transfer the emotion detection capabilities from the monolingual teacher to the multilingual student model. Experiments on a multilingual dataset demonstrate significant performance gains for refined multilingual models like XLM-RoBERTa and E5 after distillation. Furthermore, our approach enhances interpretability by enabling better identification of emotion-trigger words. Our work presents a promising direction for building accurate, robust and explainable multilingual emotion detection systems.
[ "Wang, Yuqi", "Wang, Zimu", "Han, Nijia", "Wang, Wei", "Chen, Qi", "Zhang, Haiyang", "Pan, Yushan", "Nguyen, Anh" ]
Knowledge Distillation from Monolingual to Multilingual Models for Intelligent and Interpretable Multilingual Emotion Detection
wassa-1.45
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.45/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.46.bib
@inproceedings{xiong-etal-2024-hitsz, title = "{HITSZ}-{HLT} at {WASSA}-2024 Shared Task 2: Language-agnostic Multi-task Learning for Explainability of Cross-lingual Emotion Detection", author = "Xiong, Feng and Wang, Jun and Tu, Geng and Xu, Ruifeng", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.46", pages = "476--482", abstract = "This paper describes the system developed by the HITSZ-HLT team for WASSA-2024 Shared Task 2, which addresses two closely linked sub-tasks: Cross-lingual Emotion Detection and Binary Trigger Word Detection in tweets. The main goal of Shared Task 2 is to simultaneously identify the emotions expressed and detect the trigger words across multiple languages. To achieve this, we introduce a Language-agnostic Multi-task Learning (LaMTL) framework that integrates the emotion prediction and emotion trigger word detection tasks. By fostering synergistic interactions between task-specific and task-agnostic representations, LaMTL aims to mutually enhance emotional cues, ultimately improving the performance of both tasks. Additionally, we leverage large-scale language models to translate the training dataset into multiple languages, thereby fostering the formation of language-agnostic representations within the model and significantly enhancing the model{'}s ability to transfer and perform well across multilingual data. Experimental results demonstrate the effectiveness of our framework across both tasks, with a particular highlight on its success in achieving second place in sub-task 2.", }
This paper describes the system developed by the HITSZ-HLT team for WASSA-2024 Shared Task 2, which addresses two closely linked sub-tasks: Cross-lingual Emotion Detection and Binary Trigger Word Detection in tweets. The main goal of Shared Task 2 is to simultaneously identify the emotions expressed and detect the trigger words across multiple languages. To achieve this, we introduce a Language-agnostic Multi-task Learning (LaMTL) framework that integrates the emotion prediction and emotion trigger word detection tasks. By fostering synergistic interactions between task-specific and task-agnostic representations, LaMTL aims to mutually enhance emotional cues, ultimately improving the performance of both tasks. Additionally, we leverage large-scale language models to translate the training dataset into multiple languages, thereby fostering the formation of language-agnostic representations within the model and significantly enhancing the model{'}s ability to transfer and perform well across multilingual data. Experimental results demonstrate the effectiveness of our framework across both tasks, with a particular highlight on its success in achieving second place in sub-task 2.
[ "Xiong, Feng", "Wang, Jun", "Tu, Geng", "Xu, Ruifeng" ]
HITSZ-HLT at WASSA-2024 Shared Task 2: Language-agnostic Multi-task Learning for Explainability of Cross-lingual Emotion Detection
wassa-1.46
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.46/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.47.bib
@inproceedings{smid-etal-2024-uwb, title = "{UWB} at {WASSA}-2024 Shared Task 2: Cross-lingual Emotion Detection", author = "{\v{S}}m{\'\i}d, Jakub and P{\v{r}}ib{\'a}{\v{n}}, Pavel and Kr{\'a}l, Pavel", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.47", pages = "483--489", abstract = "This paper presents our system built for the WASSA-2024 Cross-lingual Emotion Detection Shared Task. The task consists of two subtasks: first, to assess an emotion label from six possible classes for a given tweet in one of five languages, and second, to predict words triggering the detected emotions in binary and numerical formats. Our proposed approach revolves around fine-tuning quantized large language models, specifically Orca 2, with low-rank adapters (LoRA) and multilingual Transformer-based models, such as XLM-R and mT5. We enhance performance through machine translation for both subtasks and trigger word switching for the second subtask. The system achieves excellent performance, ranking 1st in numerical trigger words detection, 3rd in binary trigger words detection, and 7th in emotion detection.", }
This paper presents our system built for the WASSA-2024 Cross-lingual Emotion Detection Shared Task. The task consists of two subtasks: first, to assess an emotion label from six possible classes for a given tweet in one of five languages, and second, to predict words triggering the detected emotions in binary and numerical formats. Our proposed approach revolves around fine-tuning quantized large language models, specifically Orca 2, with low-rank adapters (LoRA) and multilingual Transformer-based models, such as XLM-R and mT5. We enhance performance through machine translation for both subtasks and trigger word switching for the second subtask. The system achieves excellent performance, ranking 1st in numerical trigger words detection, 3rd in binary trigger words detection, and 7th in emotion detection.
[ "{\\v{S}}m{\\'\\i}d, Jakub", "P{\\v{r}}ib{\\'a}{\\v{n}}, Pavel", "Kr{\\'a}l, Pavel" ]
UWB at WASSA-2024 Shared Task 2: Cross-lingual Emotion Detection
wassa-1.47
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.47/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.48.bib
@inproceedings{vazquez-osorio-etal-2024-pcicunam, title = "{PCICUNAM} at {WASSA} 2024: Cross-lingual Emotion Detection Task with Hierarchical Classification and Weighted Loss Functions", author = "V{\'a}zquez-Osorio, Jes{\'u}s and Sierra, Gerardo and G{\'o}mez-Adorno, Helena and Bel-Enguix, Gemma", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.48", pages = "490--494", abstract = "This paper addresses the shared task of multi-lingual emotion detection in tweets, presented at the Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media Analysis (WASSA) co-located with the ACL 2024 conference. The task involves predicting emotions from six classes in tweets from five different languages using only English for model training. Our approach focuses on addressing class imbalance through data augmentation, hierarchical classification, and the application of focal loss and weighted cross-entropy loss functions. These methods enhance our transformer-based model{'}s ability to transfer emotion detection capabilities across languages, resulting in improved performance despite the constraints of limited computational resources.", }
This paper addresses the shared task of multi-lingual emotion detection in tweets, presented at the Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media Analysis (WASSA) co-located with the ACL 2024 conference. The task involves predicting emotions from six classes in tweets from five different languages using only English for model training. Our approach focuses on addressing class imbalance through data augmentation, hierarchical classification, and the application of focal loss and weighted cross-entropy loss functions. These methods enhance our transformer-based model{'}s ability to transfer emotion detection capabilities across languages, resulting in improved performance despite the constraints of limited computational resources.
[ "V{\\'a}zquez-Osorio, Jes{\\'u}s", "Sierra, Gerardo", "G{\\'o}mez-Adorno, Helena", "Bel-Enguix, Gemma" ]
PCICUNAM at WASSA 2024: Cross-lingual Emotion Detection Task with Hierarchical Classification and Weighted Loss Functions
wassa-1.48
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.48/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.49.bib
@inproceedings{cheng-etal-2024-teii, title = "{TEII}: Think, Explain, Interact and Iterate with Large Language Models to Solve Cross-lingual Emotion Detection", author = "Cheng, Long and Shao, Qihao and Zhao, Christine and Bi, Sheng and Levow, Gina-Anne", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.49", pages = "495--504", abstract = "Cross-lingual emotion detection allows us to analyze global trends, public opinion, and social phenomena at scale. We participated in the Explainability of Cross-lingual Emotion Detection (EXALT) shared task, achieving an F1-score of 0.6046 on the evaluation set for the emotion detection sub-task. Our system outperformed the baseline by more than 0.16 F1-score absolute, and ranked second amongst competing systems. We conducted experiments using fine-tuning, zero-shot learning, and few-shot learning for Large Language Model (LLM)-based models as well as embedding-based BiLSTM and KNN for non-LLM-based techniques. Additionally, we introduced two novel methods: the Multi-Iteration Agentic Workflow and the Multi-Binary-Classifier Agentic Workflow. We found that LLM-based approaches provided good performance on multilingual emotion detection. Furthermore, ensembles combining all of the models we experimented with yielded higher F1-scores than any single approach alone.", }
Cross-lingual emotion detection allows us to analyze global trends, public opinion, and social phenomena at scale. We participated in the Explainability of Cross-lingual Emotion Detection (EXALT) shared task, achieving an F1-score of 0.6046 on the evaluation set for the emotion detection sub-task. Our system outperformed the baseline by more than 0.16 F1-score absolute, and ranked second amongst competing systems. We conducted experiments using fine-tuning, zero-shot learning, and few-shot learning for Large Language Model (LLM)-based models as well as embedding-based BiLSTM and KNN for non-LLM-based techniques. Additionally, we introduced two novel methods: the Multi-Iteration Agentic Workflow and the Multi-Binary-Classifier Agentic Workflow. We found that LLM-based approaches provided good performance on multilingual emotion detection. Furthermore, ensembles combining all of the models we experimented with yielded higher F1-scores than any single approach alone.
[ "Cheng, Long", "Shao, Qihao", "Zhao, Christine", "Bi, Sheng", "Levow, Gina-Anne" ]
TEII: Think, Explain, Interact and Iterate with Large Language Models to Solve Cross-lingual Emotion Detection
wassa-1.49
Poster
2405.17129
[ "https://github.com/cl-victor1/exalt_2024_bcsz" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.49/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.50.bib
@inproceedings{lin-etal-2024-nycu, title = "{NYCU}-{NLP} at {EXALT} 2024: Assembling Large Language Models for Cross-Lingual Emotion and Trigger Detection", author = "Lin, Tzu-Mi and Xu, Zhe-Yu and Zhou, Jian-Yu and Lee, Lung-Hao", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.50", pages = "505--510", abstract = "This study describes the model design of the NYCU-NLP system for the EXALT shared task at the WASSA 2024 workshop. We instruction-tune several large language models and then assemble various model combinations as our main system architecture for cross-lingual emotion and trigger detection in tweets. Experimental results showed that our best performing submission is an assembly of the Starling (7B) and Llama 3 (8B) models. Our submission was ranked sixth of 17 participating systems for the emotion detection subtask, and fifth of 7 systems for the binary trigger detection subtask.", }
This study describes the model design of the NYCU-NLP system for the EXALT shared task at the WASSA 2024 workshop. We instruction-tune several large language models and then assemble various model combinations as our main system architecture for cross-lingual emotion and trigger detection in tweets. Experimental results showed that our best performing submission is an assembly of the Starling (7B) and Llama 3 (8B) models. Our submission was ranked sixth of 17 participating systems for the emotion detection subtask, and fifth of 7 systems for the binary trigger detection subtask.
[ "Lin, Tzu-Mi", "Xu, Zhe-Yu", "Zhou, Jian-Yu", "Lee, Lung-Hao" ]
NYCU-NLP at EXALT 2024: Assembling Large Language Models for Cross-Lingual Emotion and Trigger Detection
wassa-1.50
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.50/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.51.bib
@inproceedings{cheng-etal-2024-effectiveness, title = "Effectiveness of Scalable Monolingual Data and Trigger Words Prompting on Cross-Lingual Emotion Detection Task", author = "Cheng, Yao-Fei and Hong, Jeongyeob and Wang, Andrew and Silva, Anita and Levow, Gina-Anne", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.51", pages = "511--522", abstract = "This paper introduces our submitted systems for WASSA 2024 Shared Task 2: Cross-Lingual Emotion Detection. We implemented a BERT-based classifier and an in-context learning-based system. Our best-performing model, using English Chain of Thought prompts with trigger words, reached 3rd overall with an F1 score of 0.6015. Following the motivation of the shared task, we further analyzed the scalability and transferability of the monolingual English dataset on cross-lingual tasks. Our analysis demonstrates the importance of data quality over quantity. We also found that augmented multilingual data does not necessarily perform better than English monolingual data in cross-lingual tasks. We open-sourced the augmented data and source code of our system for future research.", }
This paper introduces our submitted systems for WASSA 2024 Shared Task 2: Cross-Lingual Emotion Detection. We implemented a BERT-based classifier and an in-context learning-based system. Our best-performing model, using English Chain of Thought prompts with trigger words, reached 3rd overall with an F1 score of 0.6015. Following the motivation of the shared task, we further analyzed the scalability and transferability of the monolingual English dataset on cross-lingual tasks. Our analysis demonstrates the importance of data quality over quantity. We also found that augmented multilingual data does not necessarily perform better than English monolingual data in cross-lingual tasks. We open-sourced the augmented data and source code of our system for future research.
[ "Cheng, Yao-Fei", "Hong, Jeongyeob", "Wang, Andrew", "Silva, Anita", "Levow, Gina-Anne" ]
Effectiveness of Scalable Monolingual Data and Trigger Words Prompting on Cross-Lingual Emotion Detection Task
wassa-1.51
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.51/
[]
[]
[]
0
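The entry above describes prompting with English chain-of-thought prompts that include trigger words. A minimal sketch of how such a prompt might be assembled, assuming a few-shot format in which each example lists trigger words, a reasoning step, and the gold label — the example tweet, trigger words, and labels below are hypothetical placeholders, not the shared-task data:

```python
# Illustrative few-shot pool; real systems would draw from labeled data.
FEW_SHOT = [
    {"text": "I can't believe they cancelled the show!",
     "triggers": ["can't believe", "cancelled"],
     "reasoning": "The writer reacts with frustration to an unexpected loss.",
     "label": "Anger"},
]

def build_prompt(tweet: str) -> str:
    """Assemble an English chain-of-thought prompt: each shot lists its
    trigger words, then a short reasoning step, then the emotion label,
    and the target tweet is appended with the same scaffold left open."""
    lines = ["Classify the emotion of the final tweet. First list the "
             "trigger words, then reason briefly, then give the emotion."]
    for shot in FEW_SHOT:
        lines.append(f"Tweet: {shot['text']}")
        lines.append(f"Trigger words: {', '.join(shot['triggers'])}")
        lines.append(f"Reasoning: {shot['reasoning']}")
        lines.append(f"Emotion: {shot['label']}")
    lines.append(f"Tweet: {tweet}")
    lines.append("Trigger words:")
    return "\n".join(lines)
```

The returned string would then be sent to the LLM; the open "Trigger words:" line nudges the model to emit its reasoning before the label.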
https://aclanthology.org/2024.wassa-1.52.bib
@inproceedings{davenport-etal-2024-wu, title = "{WU}{\_}{TLAXE} at {WASSA} 2024 Explainability for Cross-Lingual Emotion in Tweets Shared Task 1: Emotion through Translation using {T}w{HIN}-{BERT} and {GPT}", author = "Davenport, Jon and Ruditsky, Keren and Batra, Anna and Lhawa, Yulha and Levow, Gina-Anne", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.52", pages = "523--527", abstract = "This paper describes our task 1 submission for the WASSA 2024 shared task on Explainability for Cross-lingual Emotion in Tweets. Our task is to predict the correct emotion label (Anger, Sadness, Fear, Joy, Love, and Neutral) for a dataset of English, Dutch, French, Spanish, and Russian tweets, while training exclusively on English emotion labeled data, to reveal what kind of emotion detection information is transferable cross-language (Maladry et al., 2024). To that end, we used an ensemble of models with a GPT-4 decider. Our ensemble consisted of a few-shot GPT-4 prompt system and a TwHIN-BERT system fine-tuned on the EXALT and additional English data. We ranked 8th place under the name WU{\_}TLAXE with an F1 Macro score of 0.573 on the test set. We also experimented with an English-only TwHIN-BERT model by translating the other languages into English for inference, which proved to be worse than the other models.", }
This paper describes our task 1 submission for the WASSA 2024 shared task on Explainability for Cross-lingual Emotion in Tweets. Our task is to predict the correct emotion label (Anger, Sadness, Fear, Joy, Love, and Neutral) for a dataset of English, Dutch, French, Spanish, and Russian tweets, while training exclusively on English emotion labeled data, to reveal what kind of emotion detection information is transferable cross-language (Maladry et al., 2024). To that end, we used an ensemble of models with a GPT-4 decider. Our ensemble consisted of a few-shot GPT-4 prompt system and a TwHIN-BERT system fine-tuned on the EXALT and additional English data. We ranked 8th place under the name WU{\_}TLAXE with an F1 Macro score of 0.573 on the test set. We also experimented with an English-only TwHIN-BERT model by translating the other languages into English for inference, which proved to be worse than the other models.
[ "Davenport, Jon", "Ruditsky, Keren", "Batra, Anna", "Lhawa, Yulha", "Levow, Gina-Anne" ]
WU_TLAXE at WASSA 2024 Explainability for Cross-Lingual Emotion in Tweets Shared Task 1: Emotion through Translation using TwHIN-BERT and GPT
wassa-1.52
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.52/
[]
[]
[]
0
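The entry above describes an ensemble of a few-shot GPT-4 system and a fine-tuned TwHIN-BERT system with a GPT-4 "decider". One simple way such a decider can be wired is majority vote with fallback to a designated model when no majority exists — a sketch under that assumption (the model names are hypothetical; in the described system the fallback is a live GPT-4 call, not a lookup):

```python
from collections import Counter

def ensemble_decide(predictions: dict, decider: str) -> str:
    """Combine per-model emotion predictions. Return the strict-majority
    label if one exists; otherwise defer to the decider model's answer."""
    counts = Counter(predictions.values())
    label, n = counts.most_common(1)[0]
    if n > len(predictions) / 2:
        return label
    return predictions[decider]
```

With two base systems plus the decider itself voting, agreement between any two settles the label and disagreement is broken by the decider.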
https://aclanthology.org/2024.wassa-1.53.bib
@inproceedings{zhang-etal-2024-enhancing, title = "Enhancing Cross-Lingual Emotion Detection with Data Augmentation and Token-Label Mapping", author = "Zhang, Jinghui and Zhao, Yuan and Zhang, Siqin and Zhao, Ruijing and Bao, Siyu", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.53", pages = "528--533", abstract = "Cross-lingual emotion detection faces challenges such as imbalanced label distribution, data scarcity, cultural and linguistic differences, figurative language, and the opaqueness of pre-trained language models. This paper presents our approach to the EXALT shared task at WASSA 2024, focusing on emotion transferability across languages and trigger word identification. We employ data augmentation techniques, including back-translation and synonym replacement, to address data scarcity and imbalance issues in the emotion detection sub-task. For the emotion trigger identification sub-task, we utilize token and label mapping to capture emotional information at the subword level. Our system achieves competitive performance, ranking 13th, 1st, and 2nd in the Emotion Detection, Binary Trigger Word Detection, and Numerical Trigger Word Detection tasks.", }
Cross-lingual emotion detection faces challenges such as imbalanced label distribution, data scarcity, cultural and linguistic differences, figurative language, and the opaqueness of pre-trained language models. This paper presents our approach to the EXALT shared task at WASSA 2024, focusing on emotion transferability across languages and trigger word identification. We employ data augmentation techniques, including back-translation and synonym replacement, to address data scarcity and imbalance issues in the emotion detection sub-task. For the emotion trigger identification sub-task, we utilize token and label mapping to capture emotional information at the subword level. Our system achieves competitive performance, ranking 13th, 1st, and 2nd in the Emotion Detection, Binary Trigger Word Detection, and Numerical Trigger Word Detection tasks.
[ "Zhang, Jinghui", "Zhao, Yuan", "Zhang, Siqin", "Zhao, Ruijing", "Bao, Siyu" ]
Enhancing Cross-Lingual Emotion Detection with Data Augmentation and Token-Label Mapping
wassa-1.53
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.53/
[]
[]
[]
0
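The entry above mentions token and label mapping to carry trigger-word labels down to the subword level. A common alignment convention (assumed here, not necessarily the team's exact scheme) is to let every subword inherit the binary label of the word it came from — a minimal sketch, with the subword splits supplied externally as a tokenizer would produce them:

```python
def align_labels(words, word_labels, subword_groups):
    """Map word-level binary trigger labels onto subword tokens.
    subword_groups[i] holds the subwords that words[i] was split into;
    each subword inherits its source word's label."""
    sub_tokens, sub_labels = [], []
    for word, label, pieces in zip(words, word_labels, subword_groups):
        sub_tokens.extend(pieces)
        sub_labels.extend([label] * len(pieces))
    return sub_tokens, sub_labels
```

At inference time the inverse step aggregates subword predictions back to words (e.g. by taking the first subword's label), so trigger words are reported at the word level.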