Datasets:

Schema (column · dtype · min · max):

  bibtex_url                  stringlengths      41      50
  bibtext                     stringlengths      693     2.88k
  abstract                    stringlengths      0       2k
  authors                     sequencelengths    1       45
  title                       stringlengths      21      199
  id                          stringlengths      7       16
  type                        stringclasses      2 values
  arxiv_id                    stringlengths      0       10
  GitHub                      sequencelengths    1       1
  paper_page                  stringlengths      0       40
  n_linked_authors            int64              -1      28
  upvotes                     int64              -1      255
  num_comments                int64              -1      23
  n_authors                   int64              -1      35
  proceedings                 stringlengths      38      47
  Models                      sequencelengths    0       57
  Datasets                    sequencelengths    0       19
  Spaces                      sequencelengths    0       100
  paper_page_exists_pre_conf  int64              0       1
https://aclanthology.org/2024.smm4h-1.6.bib
@inproceedings{mukans-barzdins-2024-riga, title = "{RIGA} at {SMM}4{H}-2024 Task 1: Enhancing {ADE} discovery with {GPT}-4", author = "Mukans, Eduards and Barzdins, Guntis", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.6", pages = "23--27", abstract = "The following is a description of the RIGA team{'}s submissions for the SMM4H-2024 Task 1: Extraction and normalization of adverse drug events (ADEs) in English tweets. Our approach focuses on utilizing Large Language Models (LLMs) to generate data that enhances the fine-tuning of classification and Named Entity Recognition (NER) models. Our solution significantly outperforms mean and median submissions of other teams. The efficacy of our ADE extraction from tweets is comparable to the current state-of-the-art solution, established as the task baseline. The code for our method is available on GitHub (https://github.com/emukans/smm4h2024-riga)", }
The following is a description of the RIGA team's submissions for the SMM4H-2024 Task 1: extraction and normalization of adverse drug events (ADEs) in English tweets. Our approach focuses on using Large Language Models (LLMs) to generate data that enhances the fine-tuning of classification and Named Entity Recognition (NER) models. Our solution significantly outperforms the mean and median submissions of other teams. The efficacy of our ADE extraction from tweets is comparable to the current state-of-the-art solution, established as the task baseline. The code for our method is available on GitHub (https://github.com/emukans/smm4h2024-riga).
[ "Mukans, Eduards", "Barzdins, Guntis" ]
RIGA at SMM4H-2024 Task 1: Enhancing ADE discovery with GPT-4
smm4h-1.6
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.6/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.7.bib
@inproceedings{mia-etal-2024-golden, title = "{G}olden{\_}{D}uck at {\#}{SMM}4{H} 2024: A Transformer-based Approach to Social Media Text Classification", author = "Mia, Md Ayon and Yahan, Mahshar and Murad, Hasan and Khan, Muhammad", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.7", pages = "28--31", abstract = "In this paper, we have addressed Task 3 on social anxiety disorder identification and Task 5 on mental illness recognition organized by the SMM4H 2024 workshop. In Task 3, a multi-classification problem has been presented to classify Reddit posts about outdoor spaces into four categories: Positive, Neutral, Negative, or Unrelated. Using the pre-trained RoBERTa-base model along with techniques like Mean pooling, CLS, and Attention Head, we have scored an F1-Score of 0.596 on the test dataset for Task 3. Task 5 aims to classify tweets into two categories: those describing a child with conditions like ADHD, ASD, delayed speech, or asthma (class 1), and those merely mentioning a disorder (class 0). Using the pre-trained RoBERTa-large model, incorporating a weighted ensemble of the last 4 hidden layers through concatenation and mean pooling, we achieved an F1 Score of 0.928 on the test data for Task 5.", }
In this paper, we address Task 3 on social anxiety disorder identification and Task 5 on mental illness recognition, organized by the SMM4H 2024 workshop. Task 3 presents a multi-class classification problem: Reddit posts about outdoor spaces must be classified into four categories: Positive, Neutral, Negative, or Unrelated. Using the pre-trained RoBERTa-base model along with techniques such as mean pooling, CLS pooling, and an attention head, we scored an F1-score of 0.596 on the Task 3 test dataset. Task 5 aims to classify tweets into two categories: those describing a child with conditions such as ADHD, ASD, delayed speech, or asthma (class 1), and those merely mentioning a disorder (class 0). Using the pre-trained RoBERTa-large model, incorporating a weighted ensemble of the last 4 hidden layers through concatenation and mean pooling, we achieved an F1-score of 0.928 on the Task 5 test data.
[ "Mia, Md Ayon", "Yahan, Mahshar", "Murad, Hasan", "Khan, Muhammad" ]
Golden_Duck at #SMM4H 2024: A Transformer-based Approach to Social Media Text Classification
smm4h-1.7
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.7/
[]
[]
[]
0
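The layer-ensembling step described in the abstract above (mean pooling over tokens, then concatenating the last 4 hidden layers) can be sketched as follows. This is a minimal NumPy illustration under assumed shapes; the function and variable names are hypothetical, and a real pipeline would take the per-layer hidden states from a Hugging Face RoBERTa model rather than raw arrays:

```python
import numpy as np

def pooled_features(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Concatenate mean-pooled representations of the last 4 layers.

    hidden_states: (num_layers, seq_len, hidden_dim) per-layer token embeddings
    attention_mask: (seq_len,) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask[:, None].astype(float)   # (seq_len, 1)
    pooled = []
    for layer in hidden_states[-4:]:               # last 4 layers only
        # masked mean pooling over the token axis
        pooled.append((layer * mask).sum(axis=0) / mask.sum())
    return np.concatenate(pooled)                  # (4 * hidden_dim,)
```

The concatenated vector would then feed a small classification head; the weighting of the ensemble is a trainable detail not reproduced here.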
https://aclanthology.org/2024.smm4h-1.8.bib
@inproceedings{li-etal-2024-srcb, title = "{SRCB} at {\#}{SMM}4{H} 2024: Making Full Use of {LLM}-based Data Augmentation in Adverse Drug Event Extraction and Normalization", author = "Li, Hongyu and Zhang, Yuming and Zhang, Yongwei and Jiang, Shanshan and Dong, Bin", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.8", pages = "32--37", abstract = "This paper reports on the performance of SRCB{'}s system in the Social Media Mining for Health ({\#}SMM4H) 2024 Shared Task 1: extraction and normalization of adverse drug events (ADEs) in English tweets. We develop a system composed of an ADE extraction module and an ADE normalization module which furtherly includes a retrieval module and a filtering module. To alleviate the data imbalance and other issues introduced by the dataset, we employ 4 data augmentation techniques based on Large Language Models (LLMs) across both modules. Our best submission achieves an F1 score of 53.6 (49.4 on the unseen subset) on the ADE normalization task and an F1 score of 52.1 on ADE extraction task.", }
This paper reports on the performance of SRCB's system in the Social Media Mining for Health (#SMM4H) 2024 Shared Task 1: extraction and normalization of adverse drug events (ADEs) in English tweets. We develop a system composed of an ADE extraction module and an ADE normalization module, the latter further comprising a retrieval module and a filtering module. To alleviate the data imbalance and other issues introduced by the dataset, we employ four data augmentation techniques based on Large Language Models (LLMs) across both modules. Our best submission achieves an F1-score of 53.6 (49.4 on the unseen subset) on the ADE normalization task and an F1-score of 52.1 on the ADE extraction task.
[ "Li, Hongyu", "Zhang, Yuming", "Zhang, Yongwei", "Jiang, Shanshan", "Dong, Bin" ]
SRCB at #SMM4H 2024: Making Full Use of LLM-based Data Augmentation in Adverse Drug Event Extraction and Normalization
smm4h-1.8
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.8/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.9.bib
@inproceedings{athukoralage-etal-2024-lt4sg, title = "{LT}4{SG}@{SMM}4{H}{'}24: Tweets Classification for Digital Epidemiology of Childhood Health Outcomes Using Pre-Trained Language Models", author = "Athukoralage, Dasun and Atapattu, Thushari and Thilakaratne, Menasha and Falkner, Katrina", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.9", pages = "38--41", abstract = "This paper presents our approaches for the SMM4H{'}24 Shared Task 5 on the binary classification of English tweets reporting children{'}s medical disorders. Our first approach involves fine-tuning a single RoBERTa-large model, while the second approach entails ensembling the results of three fine-tuned BERTweet-large models. We demonstrate that although both approaches exhibit identical performance on validation data, the BERTweet-large ensemble excels on test data. Our best-performing system achieves an F1-score of 0.938 on test data, outperforming the benchmark classifier by 1.18{\%}.", }
This paper presents our approaches for the SMM4H'24 Shared Task 5 on the binary classification of English tweets reporting children's medical disorders. Our first approach involves fine-tuning a single RoBERTa-large model, while the second approach entails ensembling the results of three fine-tuned BERTweet-large models. We demonstrate that although both approaches exhibit identical performance on validation data, the BERTweet-large ensemble excels on test data. Our best-performing system achieves an F1-score of 0.938 on test data, outperforming the benchmark classifier by 1.18%.
[ "Athukoralage, Dasun", "Atapattu, Thushari", "Thilakaratne, Menasha", "Falkner, Katrina" ]
LT4SG@SMM4H'24: Tweets Classification for Digital Epidemiology of Childhood Health Outcomes Using Pre-Trained Language Models
smm4h-1.9
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.9/
[]
[]
[]
0
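The abstract above ensembles the results of three fine-tuned models. The paper's exact combination rule is not stated here, so the sketch below assumes the simplest option, hard majority voting over the three models' per-example labels:

```python
from collections import Counter

def majority_vote(predictions: list[list[int]]) -> list[int]:
    """Hard-voting ensemble: per example, keep the label most models agree on.

    predictions: one list of predicted labels per model, all the same length.
    """
    ensembled = []
    for labels in zip(*predictions):  # one tuple of model votes per example
        ensembled.append(Counter(labels).most_common(1)[0][0])
    return ensembled
```

With three models a tie is impossible for binary labels, which is one practical reason odd-sized ensembles are common.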
https://aclanthology.org/2024.smm4h-1.10.bib
@inproceedings{yamagishi-nakamura-2024-utrad, title = "{UTR}ad-{NLP} at {\#}{SMM}4{H} 2024: Why {LLM}-Generated Texts Fail to Improve Text Classification Models", author = "Yamagishi, Yosuke and Nakamura, Yuta", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.10", pages = "42--47", abstract = "In this paper, we present our approach to addressing the binary classification tasks, Tasks 5 and 6, as part of the Social Media Mining for Health (SMM4H) text classification challenge. Both tasks involved working with imbalanced datasets that featured a scarcity of positive examples. To mitigate this imbalance, we employed a Large Language Model to generate synthetic texts with positive labels, aiming to augment the training data for our text classification models. Unfortunately, this method did not significantly improve model performance. Through clustering analysis using text embeddings, we discovered that the generated texts significantly lacked diversity compared to the raw data. This finding highlights the challenges of using synthetic text generation for enhancing model efficacy in real-world applications, specifically in the context of health-related social media data.", }
In this paper, we present our approach to addressing the binary classification tasks, Tasks 5 and 6, as part of the Social Media Mining for Health (SMM4H) text classification challenge. Both tasks involved working with imbalanced datasets that featured a scarcity of positive examples. To mitigate this imbalance, we employed a Large Language Model to generate synthetic texts with positive labels, aiming to augment the training data for our text classification models. Unfortunately, this method did not significantly improve model performance. Through clustering analysis using text embeddings, we discovered that the generated texts significantly lacked diversity compared to the raw data. This finding highlights the challenges of using synthetic text generation for enhancing model efficacy in real-world applications, specifically in the context of health-related social media data.
[ "Yamagishi, Yosuke", "Nakamura, Yuta" ]
UTRad-NLP at #SMM4H 2024: Why LLM-Generated Texts Fail to Improve Text Classification Models
smm4h-1.10
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.10/
[]
[]
[]
0
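The diversity finding above came from a clustering analysis of text embeddings. As an illustrative proxy (not the paper's actual method), one can score a set of embeddings by its mean pairwise cosine distance; synthetic texts that collapse onto a few templates score near zero:

```python
import numpy as np

def embedding_diversity(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine distance of row vectors; lower = less diverse."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T           # cosine similarity matrix
    iu = np.triu_indices(len(embeddings), k=1)  # unique pairs only
    return float(np.mean(1.0 - sims[iu]))
```

Comparing this score between raw and LLM-generated training texts would surface the diversity gap the authors describe.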
https://aclanthology.org/2024.smm4h-1.11.bib
@inproceedings{ke-etal-2024-hbut, title = "{HBUT} at {\#}{SMM}4{H} 2024 Task1: Extraction and Normalization of Adverse Drug Events with a Large Language Model", author = "Ke, Yuanzhi and Jin, Hanbo and Wu, Xinyun and Xiong, Caiquan", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.11", pages = "48--54", abstract = "In this paper, we describe our proposed systems for the Social Media Mining for Health 2024 shared task 1. We built our system on the basis of GLM, a pre-trained large language model with few-shot Learning capabilities, using a two-step prompting strategy to extract adverse drug event (ADE) and an ensemble method for normalization. In first step of extraction phase, we extract all the potential ADEs with in-context few-shot learning. In the second step for extraction, we let GLM to filer out false positive outputs in the first step by a tailored prompt. Then we normalize each ADE to its MedDRA preferred term ID (ptID) by an ensemble method using Reciprocal Rank Fusion (RRF). Our method achieved excellent recall rate. It obtained 41.1{\%}, 42.8{\%}, and 40.6{\%} recall rate for ADE normalization, ADE recognition, and normalization for unseen ADEs, respectively. Compared to the performance of the average and median among all the participants in terms of recall rate, our recall rate scores are generally 10{\%}-20{\%} higher than the other participants{'} systems.", }
In this paper, we describe our proposed systems for Social Media Mining for Health 2024 Shared Task 1. We built our system on GLM, a pre-trained large language model with few-shot learning capabilities, using a two-step prompting strategy to extract adverse drug events (ADEs) and an ensemble method for normalization. In the first step of the extraction phase, we extract all potential ADEs with in-context few-shot learning. In the second step, we let GLM filter out false positives from the first step via a tailored prompt. We then normalize each ADE to its MedDRA preferred term ID (ptID) with an ensemble method using Reciprocal Rank Fusion (RRF). Our method achieved excellent recall: 41.1%, 42.8%, and 40.6% for ADE normalization, ADE recognition, and normalization of unseen ADEs, respectively. Compared to the average and median among all participants, our recall scores are generally 10%-20% higher than the other participants' systems.
[ "Ke, Yuanzhi", "Jin, Hanbo", "Wu, Xinyun", "Xiong, Caiquan" ]
HBUT at #SMM4H 2024 Task1: Extraction and Normalization of Adverse Drug Events with a Large Language Model
smm4h-1.11
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.11/
[]
[]
[]
0
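Reciprocal Rank Fusion, used above to ensemble normalization candidates, has a standard form: each candidate's fused score sums 1/(k + rank) over the input rankings. A minimal sketch (the candidate IDs and the conventional default k=60 are illustrative, not values from the paper):

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked candidate lists with RRF: score(d) = sum over lists of 1/(k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

RRF needs no score calibration across the fused systems, which makes it a convenient way to combine heterogeneous retrievers for MedDRA term normalization.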
https://aclanthology.org/2024.smm4h-1.12.bib
@inproceedings{dey-etal-2024-smm4h, title = "{SMM}4{H} 2024: 5 Fold Cross Validation for Classification of tweets reporting children{'}s disorders", author = "Dey, Lipika and Naik, B and Poojita, Oppangi and Pothireddi, Kovidh", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.12", pages = "55--57", abstract = "This paper presents our system developed for the Social Media Mining for Health (SMM4H) 2024 Task 05. The task objective was binary classification of tweets provided in the dataset, distinguishing between those reporting medical disorders and those merely mentioning diseases. We address this challenge through the utilization of a 5-fold cross-validation approach, employing the RoBERTa-Large model. Evaluation results demonstrate an F1-score of 0.886 on the validation dataset and 0.823 on the test dataset.", }
This paper presents our system developed for the Social Media Mining for Health (SMM4H) 2024 Task 05. The task objective was binary classification of the tweets provided in the dataset, distinguishing between those reporting medical disorders and those merely mentioning diseases. We address this challenge using a 5-fold cross-validation approach with the RoBERTa-Large model. Evaluation results demonstrate an F1-score of 0.886 on the validation dataset and 0.823 on the test dataset.
[ "Dey, Lipika", "Naik, B", "Poojita, Oppangi", "Pothireddi, Kovidh" ]
SMM4H 2024: 5 Fold Cross Validation for Classification of tweets reporting children's disorders
smm4h-1.12
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.12/
[]
[]
[]
0
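The 5-fold cross-validation scheme above can be sketched with a plain index splitter; in practice one would likely reach for scikit-learn's `KFold`/`StratifiedKFold`, so this stdlib-only version is only for illustration:

```python
def kfold_indices(n_examples: int, n_folds: int = 5):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    indices = list(range(n_examples))
    fold_size, remainder = divmod(n_examples, n_folds)
    start = 0
    for fold in range(n_folds):
        # spread the remainder over the first few folds
        size = fold_size + (1 if fold < remainder else 0)
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size
```

Each fold fine-tunes a fresh model on `train` and scores it on `val`; the reported metric is then averaged (or the fold models are ensembled).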
https://aclanthology.org/2024.smm4h-1.13.bib
@inproceedings{ke-etal-2024-hbut-smm4h, title = "{HBUT} at {\#}{SMM}4{H} 2024 Task2: Cross-lingual Few-shot Medical Entity Extraction using a Large Language Model", author = "Ke, Yuanzhi and Yin, Zhangju and Wu, Xinyun and Xiong, Caiquan", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.13", pages = "58--62", abstract = "Named entity recognition (NER) of drug and disorder/body function mentions in web text is challenging in the face of multilingualism, limited data, and poor data quality. Traditional small-scale models struggle to cope with the task. Large language models with conventional prompts also yield poor results. In this paper, we introduce our system, which employs a large language model (LLM) with a novel two-step prompting strategy. Instead of directly extracting the target medical entities, our system firstly extract all entities and then prompt the LLM to extract drug and disorder entities given the all-entity list and original input text as the context. The experimental and test results indicate that this strategy successfully enhanced our system performance, especially for German language.", }
Named entity recognition (NER) of drug and disorder/body function mentions in web text is challenging in the face of multilingualism, limited data, and poor data quality. Traditional small-scale models struggle to cope with the task, and large language models with conventional prompts also yield poor results. In this paper, we introduce our system, which employs a large language model (LLM) with a novel two-step prompting strategy. Instead of directly extracting the target medical entities, our system first extracts all entities and then prompts the LLM to extract drug and disorder entities given the full entity list and the original input text as context. The experimental and test results indicate that this strategy successfully enhanced our system's performance, especially for German.
[ "Ke, Yuanzhi", "Yin, Zhangju", "Wu, Xinyun", "Xiong, Caiquan" ]
HBUT at #SMM4H 2024 Task2: Cross-lingual Few-shot Medical Entity Extraction using a Large Language Model
smm4h-1.13
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.13/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.14.bib
@inproceedings{hecht-etal-2024-pcic, title = "{PCIC} at {SMM}4{H} 2024: Enhancing {R}eddit Post Classification on Social Anxiety Using Transformer Models and Advanced Loss Functions", author = "Hecht, Leon and Pozos, Victor and Gomez Adorno, Helena and Fuentes-Pineda, Gibran and Sierra, Gerardo and Bel-Enguix, Gemma", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.14", pages = "63--66", abstract = "We present our approach to solving the task of identifying the effect of outdoor activities on social anxiety based on reddit posts. We employed state-of-the-art transformer models enhanced with a combination of advanced loss functions. Data augmentation techniques were also used to address class imbalance within the training set. Our method achieved a macro-averaged F1-score of 0.655 on the test data, surpassing the workshop{'}s mean F1-Score of 0.519. These findings suggest that integrating weighted loss functions improves the performance of transformer models in classifying unbalanced text data, while data augmentation can improve the model{'}s ability to generalize.", }
We present our approach to identifying the effect of outdoor activities on social anxiety based on Reddit posts. We employed state-of-the-art transformer models enhanced with a combination of advanced loss functions. Data augmentation techniques were also used to address class imbalance within the training set. Our method achieved a macro-averaged F1-score of 0.655 on the test data, surpassing the workshop's mean F1-score of 0.519. These findings suggest that integrating weighted loss functions improves the performance of transformer models in classifying unbalanced text data, while data augmentation can improve the model's ability to generalize.
[ "Hecht, Leon", "Pozos, Victor", "Gomez Adorno, Helena", "Fuentes-Pineda, Gibran", "Sierra, Gerardo", "Bel-Enguix, Gemma" ]
PCIC at SMM4H 2024: Enhancing Reddit Post Classification on Social Anxiety Using Transformer Models and Advanced Loss Functions
smm4h-1.14
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.14/
[]
[]
[]
0
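A weighted loss of the kind described above (up-weighting rare classes) can be sketched as class-weighted cross-entropy. The exact loss combination the authors used is not specified here, so this NumPy version shows only the standard form (in a training loop it would be the per-batch loss on the classifier logits):

```python
import numpy as np

def weighted_cross_entropy(logits: np.ndarray, labels: np.ndarray,
                           class_weights: np.ndarray) -> float:
    """Class-weighted cross-entropy; rare classes carry larger weights.

    logits: (batch, n_classes), labels: (batch,), class_weights: (n_classes,)
    """
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    weights = class_weights[labels]                        # per-example weight
    nll = -np.log(probs[np.arange(len(labels)), labels])
    return float((weights * nll).sum() / weights.sum())
```

This mirrors PyTorch's `CrossEntropyLoss(weight=...)` behavior of normalizing by the summed per-example weights.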
https://aclanthology.org/2024.smm4h-1.15.bib
@inproceedings{singhal-bedi-2024-transformers-smm4h, title = "Transformers at {\#}{SMM}4{H} 2024: Identification of Tweets Reporting Children{'}s Medical Disorders And Effects of Outdoor Spaces on Social Anxiety Symptoms on {R}eddit Using {R}o{BERT}a", author = "Singhal, Kriti and Bedi, Jatin", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.15", pages = "67--70", abstract = "With the widespread increase in the use of social media platforms such as Twitter, Instagram, and Reddit, people are sharing their views on various topics. They have become more vocal on these platforms about their views and opinions on the medical challenges they are facing. This data is a valuable asset of medical insights in the study and research of healthcare. This paper describes our adoption of transformer-based approaches for tasks 3 and 5. For both tasks, we fine-tuned large RoBERTa, a BERT-based architecture, and achieved a highest F1 score of 0.413 and 0.900 in tasks 3 and 5, respectively.", }
With the widespread increase in the use of social media platforms such as Twitter, Instagram, and Reddit, people have become more vocal about their views and opinions on the medical challenges they are facing. This data is a valuable source of medical insights for healthcare study and research. This paper describes our adoption of transformer-based approaches for Tasks 3 and 5. For both tasks, we fine-tuned RoBERTa-large, a BERT-based architecture, and achieved highest F1-scores of 0.413 and 0.900 in Tasks 3 and 5, respectively.
[ "Singhal, Kriti", "Bedi, Jatin" ]
Transformers at #SMM4H 2024: Identification of Tweets Reporting Children's Medical Disorders And Effects of Outdoor Spaces on Social Anxiety Symptoms on Reddit Using RoBERTa
smm4h-1.15
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.15/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.16.bib
@inproceedings{khademi-etal-2024-enhancing, title = "Enhancing Social Media Health Prediction Certainty by Integrating Large Language Models with Transformer Classifiers", author = "Khademi, Sedigh and Palmer, Christopher and Javed, Muhammad and Buttery, Jim and Dimaguila, Gerardo", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.16", pages = "71--73", abstract = "This paper presents our approach for SMM4H 2024 Task 5, focusing on identifying tweets where users discuss their child{'}s health conditions of ADHD, ASD, delayed speech, or asthma. Our approach uses a pipeline that combines transformer-based classifiers and GPT-4 large language models (LLMs). We first address data imbalance in the training set using topic modelling and under-sampling. Next, we train RoBERTa-based classifiers on the adjusted data. Finally, GPT-4 refines the classifier{'}s predictions for uncertain cases (confidence below 0.9). This strategy achieved significant improvement over the baseline RoBERTa models. Our work demonstrates the effectiveness of combining transformer classifiers and LLMs for extracting health insights from social media conversations.", }
This paper presents our approach for SMM4H 2024 Task 5, focusing on identifying tweets where users discuss their child's health conditions of ADHD, ASD, delayed speech, or asthma. Our approach uses a pipeline that combines transformer-based classifiers and GPT-4 large language models (LLMs). We first address data imbalance in the training set using topic modelling and under-sampling. Next, we train RoBERTa-based classifiers on the adjusted data. Finally, GPT-4 refines the classifier's predictions for uncertain cases (confidence below 0.9). This strategy achieved significant improvement over the baseline RoBERTa models. Our work demonstrates the effectiveness of combining transformer classifiers and LLMs for extracting health insights from social media conversations.
[ "Khademi, Sedigh", "Palmer, Christopher", "Javed, Muhammad", "Buttery, Jim", "Dimaguila, Gerardo" ]
Enhancing Social Media Health Prediction Certainty by Integrating Large Language Models with Transformer Classifiers
smm4h-1.16
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.16/
[]
[]
[]
0
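The confidence-gated pipeline above (the classifier decides; GPT-4 refines cases with confidence below 0.9) reduces to simple routing logic. In this sketch `llm_fallback` is a hypothetical stand-in for the GPT-4 call:

```python
from typing import Callable, Optional

def route_predictions(probs: list[float], threshold: float = 0.9,
                      llm_fallback: Optional[Callable[[int], int]] = None) -> list[int]:
    """Keep classifier labels for confident cases; defer the rest to an LLM.

    probs: positive-class probabilities from the classifier.
    llm_fallback: callable(example_index) -> label, standing in for GPT-4.
    """
    labels = []
    for i, p in enumerate(probs):
        confidence = max(p, 1.0 - p)  # confidence of the argmax label
        if confidence >= threshold or llm_fallback is None:
            labels.append(int(p >= 0.5))
        else:
            labels.append(llm_fallback(i))
    return labels
```

Routing only the uncertain slice keeps LLM cost proportional to the hard cases rather than the whole dataset.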
https://aclanthology.org/2024.smm4h-1.17.bib
@inproceedings{yu-etal-2024-polyucbs, title = "{P}olyu{CBS} at {SMM}4{H} 2024: {LLM}-based Medical Disorder and Adverse Drug Event Detection with Low-rank Adaptation", author = "Yu, Zhai and Bao, Xiaoyi and Chersoni, Emmanuele and Portelli, Beatrice and Lee, Sophia and Gu, Jinghang and Huang, Chu-Ren", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.17", pages = "74--78", abstract = "This is the demonstration of systems and results of our team{'}s participation in the Social Medical Mining for Health (SMM4H) 2024 Shared Task. Our team participated in two tasks: Task 1 and Task 5. Task 5 requires the detection of tweet sentences that claim children{'}s medical disorders from certain users. Task 1 needs teams to extract and normalize Adverse Drug Event terms in the tweet sentence. The team selected several Pre-trained Language Models and generative Large Language Models to meet the requirements. Strategies to improve the performance include cloze test, prompt engineering, Low Rank Adaptation etc. The test result of our system has an F1 score of 0.935, Precision of 0.954 and Recall of 0.917 in Task 5 and an overall F1 score of 0.08 in Task 1.", }
This paper demonstrates the systems and results of our team's participation in the Social Media Mining for Health (SMM4H) 2024 Shared Task. Our team participated in two tasks: Task 1 and Task 5. Task 5 requires detecting tweets in which certain users claim children's medical disorders. Task 1 requires teams to extract and normalize Adverse Drug Event (ADE) terms in tweets. The team selected several pre-trained language models and generative large language models to meet the requirements. Strategies to improve performance include cloze tests, prompt engineering, and Low-Rank Adaptation (LoRA). On the test set, our system achieved an F1-score of 0.935, precision of 0.954, and recall of 0.917 in Task 5, and an overall F1-score of 0.08 in Task 1.
[ "Yu, Zhai", "Bao, Xiaoyi", "Chersoni, Emmanuele", "Portelli, Beatrice", "Lee, Sophia", "Gu, Jinghang", "Huang, Chu-Ren" ]
PolyuCBS at SMM4H 2024: LLM-based Medical Disorder and Adverse Drug Event Detection with Low-rank Adaptation
smm4h-1.17
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.17/
[]
[]
[]
0
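Low-Rank Adaptation, mentioned above, replaces a full fine-tuning update with two small trainable factors: the effective weight is W + (alpha/r)·B·A, with A of shape (r, in) and B of shape (out, r), while W stays frozen. A NumPy sketch of the weight merge (shapes and alpha here are illustrative, not the paper's configuration):

```python
import numpy as np

def lora_weight(W: np.ndarray, A: np.ndarray, B: np.ndarray,
                alpha: float = 16.0) -> np.ndarray:
    """Effective weight under LoRA: W + (alpha / r) * B @ A.

    W: (out, in) frozen base weight; A: (r, in) and B: (out, r) trained factors.
    """
    r = A.shape[0]                      # adapter rank
    return W + (alpha / r) * (B @ A)    # low-rank update merged into W
```

Because only A and B are trained, the number of trainable parameters drops from out·in to r·(out + in), which is what makes LoRA practical for adapting LLMs on shared-task budgets.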
https://aclanthology.org/2024.smm4h-1.18.bib
@inproceedings{abburi-etal-2024-deloitte, title = "Deloitte at {\#}{SMM}4{H} 2024: Can {GPT}-4 Detect {COVID}-19 Tweets Annotated by Itself?", author = "Abburi, Harika and Pudota, Nirmala and Veeramani, Balaji and Bowen, Edward and Bhattacharya, Sanmitra", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.18", pages = "79--82", abstract = "The advent of Large Language Models (LLMs) such as Generative Pre-trained Transformers (GPT-4) mark a transformative era in Natural Language Generation (NLG). These models demonstrate the ability to generate coherent text that closely resembles human-authored content. They are easily accessible and have become invaluable tools in handling various text-based tasks, such as data annotation, report generation, and question answering. In this paper, we investigate GPT-4{'}s ability to discern between data it has annotated and data annotated by humans, specifically within the context of tweets in the medical domain. Through experimental analysis, we observe GPT-4 outperform other state-of-the-art models. The dataset used in this study was provided by the SMM4H (Social Media Mining for Health Research and Applications) shared task. Our model achieved an accuracy of 0.51, securing a second rank in the shared task.", }
The advent of Large Language Models (LLMs) such as Generative Pre-trained Transformers (GPT-4) marks a transformative era in Natural Language Generation (NLG). These models can generate coherent text that closely resembles human-authored content. They are easily accessible and have become invaluable tools for various text-based tasks, such as data annotation, report generation, and question answering. In this paper, we investigate GPT-4's ability to discern between data it has annotated and data annotated by humans, specifically in the context of tweets in the medical domain. Through experimental analysis, we observe GPT-4 outperforming other state-of-the-art models. The dataset used in this study was provided by the SMM4H (Social Media Mining for Health Research and Applications) shared task. Our model achieved an accuracy of 0.51, securing second rank in the shared task.
[ "Abburi, Harika", "Pudota, Nirmala", "Veeramani, Balaji", "Bowen, Edward", "Bhattacharya, Sanmitra" ]
Deloitte at #SMM4H 2024: Can GPT-4 Detect COVID-19 Tweets Annotated by Itself?
smm4h-1.18
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.18/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.19.bib
@inproceedings{wuehrl-etal-2024-ims, title = "{IMS}{\_}medic{ALY} at {\#}{SMM}4{H} 2024: Detecting Impacts of Outdoor Spaces on Social Anxiety with Data Augmented Ensembling", author = "Wuehrl, Amelie and Greschner, Lynn and Menchaca Resendiz, Yarik and Klinger, Roman", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.19", pages = "83--87", abstract = "Many individuals affected by Social Anxiety Disorder turn to social media platforms to share their experiences and seek advice. This includes discussing the potential benefits of engaging with outdoor environments. As part of {\#}SMM4H 2024, Shared Task 3 focuses on classifying the effects of outdoor spaces on social anxiety symptoms in Reddit posts. In our contribution to the task, we explore the effectiveness of domain-specific models (trained on social media data {--} SocBERT) against general domain models (trained on diverse datasets {--} BERT, RoBERTa, GPT-3.5) in predicting the sentiment related to outdoor spaces. Further, we assess the benefits of augmenting sparse human-labeled data with synthetic training instances and evaluate the complementary strengths of domain-specific and general classifiers using an ensemble model. Our results show that (1) fine-tuning small, domain-specific models generally outperforms large general language models in most cases. Only one large language model (GPT-4) exhibits performance comparable to the fine-tuned models (52{\%} F1). Further, we find that (2) synthetic data does improve the performance of fine-tuned models in some cases, and (3) models do not appear to complement each other in our ensemble setup.", }
Many individuals affected by Social Anxiety Disorder turn to social media platforms to share their experiences and seek advice. This includes discussing the potential benefits of engaging with outdoor environments. As part of {\#}SMM4H 2024, Shared Task 3 focuses on classifying the effects of outdoor spaces on social anxiety symptoms in Reddit posts. In our contribution to the task, we explore the effectiveness of domain-specific models (trained on social media data {--} SocBERT) against general domain models (trained on diverse datasets {--} BERT, RoBERTa, GPT-3.5) in predicting the sentiment related to outdoor spaces. Further, we assess the benefits of augmenting sparse human-labeled data with synthetic training instances and evaluate the complementary strengths of domain-specific and general classifiers using an ensemble model. Our results show that (1) fine-tuning small, domain-specific models generally outperforms large general language models in most cases. Only one large language model (GPT-4) exhibits performance comparable to the fine-tuned models (52{\%} F1). Further, we find that (2) synthetic data does improve the performance of fine-tuned models in some cases, and (3) models do not appear to complement each other in our ensemble setup.
[ "Wuehrl, Amelie", "Greschner, Lynn", "Menchaca Resendiz, Yarik", "Klinger, Roman" ]
IMS_medicALY at #SMM4H 2024: Detecting Impacts of Outdoor Spaces on Social Anxiety with Data Augmented Ensembling
smm4h-1.19
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.19/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.20.bib
@inproceedings{kadiyala-rao-2024-1024m, title = "1024m at {SMM}4{H} 2024: Tasks 3, 5 {\&} 6 - Self Reported Health Text Classification through Ensembles", author = "Kadiyala, Ram and Rao, M.v.p.", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.20", pages = "88--94", abstract = "Social media is a great source of data from users reporting information regarding their health and how various things have affected them. This paper presents various approaches using Transformers and Large Language Models and their ensembles, their performance along with advantages and drawbacks for various tasks of SMM4H{'}24 - Classifying texts on impact of nature and outdoor spaces on the author{'}s mental health (Task 3), Binary classification of tweets reporting their children{'}s health disorders like Asthma, Autism, ADHD and Speech disorder (task 5), Binary classification of users self-reporting their age (task 6).", }
Social media is a great source of data from users reporting information regarding their health and how various things have affected them. This paper presents various approaches using Transformers and Large Language Models and their ensembles, their performance along with advantages and drawbacks for various tasks of SMM4H{'}24 - Classifying texts on impact of nature and outdoor spaces on the author{'}s mental health (Task 3), Binary classification of tweets reporting their children{'}s health disorders like Asthma, Autism, ADHD and Speech disorder (task 5), Binary classification of users self-reporting their age (task 6).
[ "Kadiyala, Ram", "Rao, M.v.p." ]
1024m at SMM4H 2024: Tasks 3, 5 & 6 - Self Reported Health Text Classification through Ensembles
smm4h-1.20
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.20/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.21.bib
@inproceedings{alhamed-etal-2024-experimenting, title = "Experimenting with Transformer-based and Large Language Models for Classifying Effects of Outdoor Spaces on Social Anxiety in Social Media Data", author = "Alhamed, Falwah and Ive, Julia and Specia, Lucia", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.21", pages = "95--97", abstract = "Social Anxiety Disorder (SAD) is a common condition, affecting a significant portion of the population. While research suggests spending time in nature can alleviate anxiety, the specific impact on SAD remains unclear. This study explores the relationship between discussions of outdoor spaces and social anxiety on social media. We leverage transformer-based and large language models (LLMs) to analyze a social media dataset focused on SAD. We developed three methods for the task of predicting the effects of outdoor spaces on SAD in social media. A two-stage pipeline classifier achieved the best performance of our submissions with results exceeding baseline performance.", }
Social Anxiety Disorder (SAD) is a common condition, affecting a significant portion of the population. While research suggests spending time in nature can alleviate anxiety, the specific impact on SAD remains unclear. This study explores the relationship between discussions of outdoor spaces and social anxiety on social media. We leverage transformer-based and large language models (LLMs) to analyze a social media dataset focused on SAD. We developed three methods for the task of predicting the effects of outdoor spaces on SAD in social media. A two-stage pipeline classifier achieved the best performance of our submissions with results exceeding baseline performance.
[ "Alhamed, Falwah", "Ive, Julia", "Specia, Lucia" ]
Experimenting with Transformer-based and Large Language Models for Classifying Effects of Outdoor Spaces on Social Anxiety in Social Media Data
smm4h-1.21
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.21/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.22.bib
@inproceedings{elliott-elliott-2024-interrupt, title = "interrupt-driven@{SMM}4{H}{'}24: Relevance-weighted Sentiment Analysis of {R}eddit Posts", author = "Elliott, Jessica and Elliott, Roland", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.22", pages = "98--100", abstract = "This paper describes our approach to Task 3 of the Social Media Mining for Health 2024 (SMM4H{'}24) shared tasks. The objective of the task was to classify the sentiment of social media posts, taken from the social anxiety subreddit, with reference to the outdoors, as positive, negative, neutral, or unrelated. We classified posts using a relevance-weighted sentiment analysis, which scored poorly, at 0.45 accuracy on the test set and 0.396 accuracy on the evaluation set. We consider what factors contributed to these low scores, and what alternatives could yield improvements, namely: improved data cleaning, a sentiment analyzer trained on a more suitable data set, improved sentiment heuristics, and a more involved relevance-weighting.", }
This paper describes our approach to Task 3 of the Social Media Mining for Health 2024 (SMM4H{'}24) shared tasks. The objective of the task was to classify the sentiment of social media posts, taken from the social anxiety subreddit, with reference to the outdoors, as positive, negative, neutral, or unrelated. We classified posts using a relevance-weighted sentiment analysis, which scored poorly, at 0.45 accuracy on the test set and 0.396 accuracy on the evaluation set. We consider what factors contributed to these low scores, and what alternatives could yield improvements, namely: improved data cleaning, a sentiment analyzer trained on a more suitable data set, improved sentiment heuristics, and a more involved relevance-weighting.
[ "Elliott, Jessica", "Elliott, Roland" ]
interrupt-driven@SMM4H'24: Relevance-weighted Sentiment Analysis of Reddit Posts
smm4h-1.22
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.22/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.23.bib
@inproceedings{sankar-etal-2024-iitroorkee, title = "{IITR}oorkee@{SMM}4{H} 2024 Cross-Platform Age Detection in {T}witter and {R}eddit Using Transformer-Based Model", author = "Sankar, Thadavarthi and Suraj, Dudekula and Reddy, Mallamgari and Toshniwal, Durga and Agarwal, Amit", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.23", pages = "101--105", abstract = "This paper outlines the methodology for the automatic extraction of self-reported ages from social media posts as part of the Social Media Mining for Health (SMM4H) 2024 Workshop Shared Tasks. The focus was on Task 6: {``}Self-reported exact age classification with cross-platform evaluation in English.{''} The goal was to accurately identify age-related information from user-generated content, which is crucial for applications in public health monitoring, targeted advertising, and demographic research. A number of transformer-based models were employed, including RoBERTa-Base, BERT-Base, BiLSTM, and Flan T5 Base, leveraging their advanced capabilities in natural language understanding. The training strategies included fine-tuning foundational pre-trained language models and evaluating model performance using standard metrics: F1-score, Precision, and Recall. The experimental results demonstrated that the RoBERTa-Base model significantly outperformed the other models in this classification task. The best results achieved with the RoBERTa-Base model were an F1-score of 0.878, a Precision of 0.899, and a Recall of 0.858.", }
This paper outlines the methodology for the automatic extraction of self-reported ages from social media posts as part of the Social Media Mining for Health (SMM4H) 2024 Workshop Shared Tasks. The focus was on Task 6: {``}Self-reported exact age classification with cross-platform evaluation in English.{''} The goal was to accurately identify age-related information from user-generated content, which is crucial for applications in public health monitoring, targeted advertising, and demographic research. A number of transformer-based models were employed, including RoBERTa-Base, BERT-Base, BiLSTM, and Flan T5 Base, leveraging their advanced capabilities in natural language understanding. The training strategies included fine-tuning foundational pre-trained language models and evaluating model performance using standard metrics: F1-score, Precision, and Recall. The experimental results demonstrated that the RoBERTa-Base model significantly outperformed the other models in this classification task. The best results achieved with the RoBERTa-Base model were an F1-score of 0.878, a Precision of 0.899, and a Recall of 0.858.
[ "Sankar, Thadavarthi", "Suraj, Dudekula", "Reddy, Mallamgari", "Toshniwal, Durga", "Agarwal, Amit" ]
IITRoorkee@SMM4H 2024 Cross-Platform Age Detection in Twitter and Reddit Using Transformer-Based Model
smm4h-1.23
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.23/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.24.bib
@inproceedings{singh-etal-2024-smm4h24, title = "{SMM}4{H}{'}24 Task6 : Extracting Self-Reported Age with {LLM} and {BERT}weet: Fine-Grained Approaches for Social Media Text", author = "Singh, Jaskaran and Bedi, Jatin and Kaur, Maninder", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.24", pages = "106--109", abstract = "The paper presents two distinct approaches to Task 6 of the SMM4H{'}24 workshop: extracting self-reported exact age information from social media posts across platforms. This research task focuses on developing methods for automatically extracting self-reported ages from posts on two prominent social media platforms: Twitter (now X) and Reddit. The work leverages two models, the Mistral-7B-Instruct-v0.2 Large Language Model (LLM) and the pre-trained language model BERTweet, to achieve robust and generalizable age classification, surpassing limitations of existing methods that rely on predefined age groups. The proposed models aim to advance the automatic extraction of self-reported exact ages from social media posts, enabling more nuanced analyses and insights into user demographics across different platforms.", }
The paper presents two distinct approaches to Task 6 of the SMM4H{'}24 workshop: extracting self-reported exact age information from social media posts across platforms. This research task focuses on developing methods for automatically extracting self-reported ages from posts on two prominent social media platforms: Twitter (now X) and Reddit. The work leverages two models, the Mistral-7B-Instruct-v0.2 Large Language Model (LLM) and the pre-trained language model BERTweet, to achieve robust and generalizable age classification, surpassing limitations of existing methods that rely on predefined age groups. The proposed models aim to advance the automatic extraction of self-reported exact ages from social media posts, enabling more nuanced analyses and insights into user demographics across different platforms.
[ "Singh, Jaskaran", "Bedi, Jatin", "Kaur, Maninder" ]
SMM4H'24 Task6 : Extracting Self-Reported Age with LLM and BERTweet: Fine-Grained Approaches for Social Media Text
smm4h-1.24
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.24/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.25.bib
@inproceedings{el-sayed-etal-2024-aast, title = "{AAST}-{NLP}@{\#}{SMM}4{H}{'}24: Finetuning Language Models for Exact Age Classification and Effect of Outdoor Spaces on Social Anxiety", author = "El-Sayed, Ahmed and Nasr, Omar and Tawfik, Noha", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.25", pages = "110--113", abstract = "This paper evaluates the performance of {``}AAST-NLP{''} in the Social Media Mining for Health (SMM4H) Shared Tasks 3 and 6, where more than 20 teams participated in each. We leveraged state-of-the-art transformer-based models, including Mistral, to achieve our results. Our models consistently outperformed both the mean and median scores across the tasks. Specifically, an F1-score of 0.636 was achieved in classifying the impact of outdoor spaces on social anxiety symptoms, while an F1-score of 0.946 was recorded for the classification of self-reported exact ages.", }
This paper evaluates the performance of {``}AAST-NLP{''} in the Social Media Mining for Health (SMM4H) Shared Tasks 3 and 6, where more than 20 teams participated in each. We leveraged state-of-the-art transformer-based models, including Mistral, to achieve our results. Our models consistently outperformed both the mean and median scores across the tasks. Specifically, an F1-score of 0.636 was achieved in classifying the impact of outdoor spaces on social anxiety symptoms, while an F1-score of 0.946 was recorded for the classification of self-reported exact ages.
[ "El-Sayed, Ahmed", "Nasr, Omar", "Tawfik, Noha" ]
AAST-NLP@#SMM4H'24: Finetuning Language Models for Exact Age Classification and Effect of Outdoor Spaces on Social Anxiety
smm4h-1.25
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.25/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.26.bib
@inproceedings{dahiya-bagga-2024-cogai, title = "{C}og{AI}@{SMM}4{H} 2024: Leveraging {BERT}-based Ensemble Models for Classifying Tweets on Developmental Disorders", author = "Dahiya, Liza and Bagga, Rachit", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.26", pages = "114--116", abstract = "This paper presents our work for the Task 5 of the Social Media Mining for Health Applications 2024 Shared Task - Binary classification of English tweets reporting children{'}s medical disorders. In this paper, we present and compare multiple approaches for automatically classifying tweets from parents based on whether they mention having a child with attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASD), delayed speech, or asthma. We use an ensemble of various BERT-based models trained on the provided dataset, which yields an F1 score of \textbf{0.901} on the test data.", }
This paper presents our work for the Task 5 of the Social Media Mining for Health Applications 2024 Shared Task - Binary classification of English tweets reporting children{'}s medical disorders. In this paper, we present and compare multiple approaches for automatically classifying tweets from parents based on whether they mention having a child with attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASD), delayed speech, or asthma. We use an ensemble of various BERT-based models trained on the provided dataset, which yields an F1 score of \textbf{0.901} on the test data.
[ "Dahiya, Liza", "Bagga, Rachit" ]
CogAI@SMM4H 2024: Leveraging BERT-based Ensemble Models for Classifying Tweets on Developmental Disorders
smm4h-1.26
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.26/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.27.bib
@inproceedings{davis-etal-2024-ade, title = "{ADE} Oracle at {\#}{SMM}4{H} 2024: A Two-Stage {NLP} System for Extracting and Normalizing Adverse Drug Events from Tweets", author = {Davis, Andrew and Dickson, Billy and K{\"u}bler, Sandra}, editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.27", pages = "117--120", abstract = "This study describes the approach of Team ADE Oracle for Task 1 of the Social Media Mining for Health Applications ({\#}SMM4H) 2024 shared task. Task 1 challenges participants to detect adverse drug events (ADEs) within English tweets and normalize these mentions against the Medical Dictionary for Regulatory Activities standards. Our approach utilized a two-stage NLP pipeline consisting of a named entity recognition model, retrained to recognize ADEs, followed by vector similarity assessment with a RoBERTa-based model. Despite achieving a relatively high recall of 37.4{\%} in the extraction of ADEs, indicative of effective identification of potential ADEs, our model encountered challenges with precision. We found marked discrepancies between recall and precision between the test set and our validation set, which underscores the need for further efforts to prevent overfitting and enhance the model{'}s generalization capabilities for practical applications.", }
This study describes the approach of Team ADE Oracle for Task 1 of the Social Media Mining for Health Applications ({\#}SMM4H) 2024 shared task. Task 1 challenges participants to detect adverse drug events (ADEs) within English tweets and normalize these mentions against the Medical Dictionary for Regulatory Activities standards. Our approach utilized a two-stage NLP pipeline consisting of a named entity recognition model, retrained to recognize ADEs, followed by vector similarity assessment with a RoBERTa-based model. Despite achieving a relatively high recall of 37.4{\%} in the extraction of ADEs, indicative of effective identification of potential ADEs, our model encountered challenges with precision. We found marked discrepancies between recall and precision between the test set and our validation set, which underscores the need for further efforts to prevent overfitting and enhance the model{'}s generalization capabilities for practical applications.
[ "Davis, Andrew", "Dickson, Billy", "K{\\\"u}bler, Sandra" ]
ADE Oracle at #SMM4H 2024: A Two-Stage NLP System for Extracting and Normalizing Adverse Drug Events from Tweets
smm4h-1.27
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.27/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.28.bib
@inproceedings{chaudhary-etal-2024-brainstorm, title = "{B}rain{S}torm @ i{REL} at {\#}{SMM}4{H} 2024: Leveraging Translation and Topical Embeddings for Annotation Detection in Tweets", author = "Chaudhary, Manav and Gupta, Harshit and Varma, Vasudeva", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.28", pages = "121--123", abstract = "The proliferation of LLMs in various NLP tasks has sparked debates regarding their reliability, particularly in annotation tasks where biases and hallucinations may arise. In this shared task, we address the challenge of distinguishing annotations made by LLMs from those made by human domain experts in the context of COVID-19 symptom detection from tweets in Latin American Spanish. This paper presents BrainStorm @ iREL{'}s approach to the {\#}SMM4H 2024 Shared Task. Leveraging the inherent topical information in tweets, we propose a novel approach to identify and classify annotations, aiming to enhance the trustworthiness of annotated data.", }
The proliferation of LLMs in various NLP tasks has sparked debates regarding their reliability, particularly in annotation tasks where biases and hallucinations may arise. In this shared task, we address the challenge of distinguishing annotations made by LLMs from those made by human domain experts in the context of COVID-19 symptom detection from tweets in Latin American Spanish. This paper presents BrainStorm @ iREL{'}s approach to the {\#}SMM4H 2024 Shared Task. Leveraging the inherent topical information in tweets, we propose a novel approach to identify and classify annotations, aiming to enhance the trustworthiness of annotated data.
[ "Chaudhary, Manav", "Gupta, Harshit", "Varma, Vasudeva" ]
BrainStorm @ iREL at #SMM4H 2024: Leveraging Translation and Topical Embeddings for Annotation Detection in Tweets
smm4h-1.28
Poster
2405.11192
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.28/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.29.bib
@inproceedings{obeidat-etal-2024-ukynlp, title = "{UKYNLP}@{SMM}4{H}2024: Language Model Methods for Health Entity Tagging and Classification on Social Media (Tasks 4 {\&} 5)", author = "Obeidat, Motasem and Ekanayake, Vinu and Nahian, Md Sultan Al and Kavuluru, Ramakanth", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.29", pages = "124--129", abstract = "We describe the methods and results of our submission to the 9th Social Media Mining for Health Research and Applications (SMM4H) 2024 shared tasks 4 and 5. Task 4 involved extracting the clinical and social impacts of non-medical substance use and task 5 focused on the binary classification of tweets reporting children{'}s medical disorders. We employed encoder language models and their ensembles, achieving the top score on task 4 and a high score for task 5.", }
We describe the methods and results of our submission to the 9th Social Media Mining for Health Research and Applications (SMM4H) 2024 shared tasks 4 and 5. Task 4 involved extracting the clinical and social impacts of non-medical substance use and task 5 focused on the binary classification of tweets reporting children{'}s medical disorders. We employed encoder language models and their ensembles, achieving the top score on task 4 and a high score for task 5.
[ "Obeidat, Motasem", "Ekanayake, Vinu", "Nahian, Md Sultan Al", "Kavuluru, Ramakanth" ]
UKYNLP@SMM4H2024: Language Model Methods for Health Entity Tagging and Classification on Social Media (Tasks 4 & 5)
smm4h-1.29
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.29/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.30.bib
@inproceedings{zheng-etal-2024-lhs712, title = "{LHS}712{\_}{ADEN}ot{G}ood at {\#}{SMM}4{H} 2024 Task 1: Deep-{LLMADE}miner: A deep learning and {LLM} pharmacovigilance pipeline for extraction and normalization of adverse drug event mentions on {T}witter", author = "Zheng, Yifan and Gong, Jun and Ren, Shushun and Simancek, Dalton and Vydiswaran, V.G.Vinod", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.30", pages = "130--132", abstract = "Adverse drug events (ADEs) pose major public health risks, with traditional reporting systems often failing to capture them. Our proposed pipeline, called Deep-LLMADEminer, used natural language processing approaches to tackle this issue for {\#}SMM4H 2024 shared task 1. Using annotated tweets, we built a three part pipeline: RoBERTa for classification, GPT-4-turbo for span extraction, and BioBERT for normalization. Our models achieved F1-scores of 0.838, 0.306, and 0.354, respectively, offering a novel system for Task 1 and similar pharmacovigilance tasks.", }
Adverse drug events (ADEs) pose major public health risks, with traditional reporting systems often failing to capture them. Our proposed pipeline, called Deep-LLMADEminer, used natural language processing approaches to tackle this issue for {\#}SMM4H 2024 shared task 1. Using annotated tweets, we built a three part pipeline: RoBERTa for classification, GPT-4-turbo for span extraction, and BioBERT for normalization. Our models achieved F1-scores of 0.838, 0.306, and 0.354, respectively, offering a novel system for Task 1 and similar pharmacovigilance tasks.
[ "Zheng, Yifan", "Gong, Jun", "Ren, Shushun", "Simancek, Dalton", "Vydiswaran, V.G.Vinod" ]
LHS712_ADENotGood at #SMM4H 2024 Task 1: Deep-LLMADEminer: A deep learning and LLM pharmacovigilance pipeline for extraction and normalization of adverse drug event mentions on Twitter
smm4h-1.30
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.30/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.31.bib
@inproceedings{mahajan-s-2024-halelab, title = "{H}ale{L}ab{\_}{NITK}@{SMM}4{H}{'}24: Binary classification of {E}nglish tweets reporting children{'}s medical disorders", author = "Mahajan, Ritik and S., Sowmya", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.31", pages = "133--135", abstract = "This paper describes the work undertaken as part of the SMM4H-2024 shared task, specifically Task 5, which involves the binary classification of English tweets reporting children{'}s medical disorders. The primary objective is to develop a system capable of automatically identifying tweets from users who report their pregnancy and mention children with specific medical conditions, such as attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASD), delayed speech, or asthma, while distinguishing them from tweets that merely reference a disorder without much context. Our approach leverages advanced natural language processing techniques and machine learning algorithms to accurately classify the tweets. The system achieved an overall F1-score of 0.87, highlighting its robustness and effectiveness in addressing the classification challenge posed by this task.", }
This paper describes the work undertaken as part of the SMM4H-2024 shared task, specifically Task 5, which involves the binary classification of English tweets reporting children{'}s medical disorders. The primary objective is to develop a system capable of automatically identifying tweets from users who report their pregnancy and mention children with specific medical conditions, such as attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASD), delayed speech, or asthma, while distinguishing them from tweets that merely reference a disorder without much context. Our approach leverages advanced natural language processing techniques and machine learning algorithms to accurately classify the tweets. The system achieved an overall F1-score of 0.87, highlighting its robustness and effectiveness in addressing the classification challenge posed by this task.
[ "Mahajan, Ritik", "S., Sowmya" ]
HaleLab_NITK@SMM4H'24: Binary classification of English tweets reporting children's medical disorders
smm4h-1.31
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.31/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.32.bib
@inproceedings{gupta-2024-team, title = "Team Yseop at {\#}{SMM}4{H} 2024: Multilingual Pharmacovigilance Named Entity Recognition and Relation Extraction", author = "Gupta, Anubhav", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.32", pages = "136--141", abstract = "This paper describes three RoBERTa-based systems. The first one recognizes adverse drug events (ADEs) in English tweets and links them with MedDRA concepts. It scored an F1-norm of 40 for Task 1. The next one extracts pharmacovigilance-related named entities in French and scored an F1 of 0.4132 for Task 2a. The third system extracts pharmacovigilance-related named entities and their relations in Japanese. It obtained an F1 of 0.5827 for Task 2a and 0.0301 for Task 2b. The French and Japanese systems are the best-performing systems for Task 2.", }
This paper describes three RoBERTa-based systems. The first one recognizes adverse drug events (ADEs) in English tweets and links them with MedDRA concepts. It scored an F1-norm of 40 for Task 1. The next one extracts pharmacovigilance-related named entities in French and scored an F1 of 0.4132 for Task 2a. The third system extracts pharmacovigilance-related named entities and their relations in Japanese. It obtained an F1 of 0.5827 for Task 2a and 0.0301 for Task 2b. The French and Japanese systems are the best-performing systems for Task 2.
[ "Gupta, Anubhav" ]
Team Yseop at #SMM4H 2024: Multilingual Pharmacovigilance Named Entity Recognition and Relation Extraction
smm4h-1.32
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.32/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.33.bib
@inproceedings{francis-moens-2024-kul, title = "{KUL}@{SMM}4{H}2024: Optimizing Text Classification with Quality-Assured Augmentation Strategies", author = "Francis, Sumam and Moens, Marie-Francine", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.33", pages = "142--145", abstract = "This paper presents our models for the Social Media Mining for Health 2024 shared task, specifically Task 5, which involves classifying tweets reporting a child with childhood disorders (annotated as {``}1{''}) versus those merely mentioning a disorder (annotated as {``}0{''}). We utilized a classification model enhanced with diverse textual and language model-based augmentations. To ensure quality, we used semantic similarity, perplexity, and lexical diversity as evaluation metrics. Combining supervised contrastive learning and cross-entropy-based learning, our best model, incorporating R-drop and various LM generation-based augmentations, achieved an impressive F1 score of 0.9230 on the test set, surpassing the task mean and median scores.", }
This paper presents our models for the Social Media Mining for Health 2024 shared task, specifically Task 5, which involves classifying tweets reporting a child with childhood disorders (annotated as {``}1{''}) versus those merely mentioning a disorder (annotated as {``}0{''}). We utilized a classification model enhanced with diverse textual and language model-based augmentations. To ensure quality, we used semantic similarity, perplexity, and lexical diversity as evaluation metrics. Combining supervised contrastive learning and cross-entropy-based learning, our best model, incorporating R-drop and various LM generation-based augmentations, achieved an impressive F1 score of 0.9230 on the test set, surpassing the task mean and median scores.
[ "Francis, Sumam", "Moens, Marie-Francine" ]
KUL@SMM4H2024: Optimizing Text Classification with Quality-Assured Augmentation Strategies
smm4h-1.33
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.33/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.34.bib
@inproceedings{fraga-etal-2024-lhs712nv, title = "{LHS}712{NV} at {\#}{SMM}4{H} 2024 Task 4: Using {BERT} to classify {R}eddit posts on non-medical substance use", author = "Fraga, Valeria and Nair, Neha and Simancek, Dalton and Vydiswaran, V.G.Vinod", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.34", pages = "146--148", abstract = "This paper summarizes our participation in the Shared Task 4 of {\#}SMM4H 2024. Task 4 was a named entity recognition (NER) task identifying clinical and social impacts of non-medical substance use in English Reddit posts. We employed the Bidirectional Encoder Representations from Transformers (BERT) model to complete this task. Our team achieved an F1-score of 0.892 on a validation set and a relaxed F1-score of 0.191 on the test set.", }
This paper summarizes our participation in the Shared Task 4 of {\#}SMM4H 2024. Task 4 was a named entity recognition (NER) task identifying clinical and social impacts of non-medical substance use in English Reddit posts. We employed the Bidirectional Encoder Representations from Transformers (BERT) model to complete this task. Our team achieved an F1-score of 0.892 on a validation set and a relaxed F1-score of 0.191 on the test set.
[ "Fraga, Valeria", "Nair, Neha", "Simancek, Dalton", "Vydiswaran, V.G.Vinod" ]
LHS712NV at #SMM4H 2024 Task 4: Using BERT to classify Reddit posts on non-medical substance use
smm4h-1.34
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.34/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.35.bib
@inproceedings{yusuf-etal-2024-712fortask7, title = "712for{T}ask7 at {\#}{SMM}4{H} 2024 Task 7: Classifying {S}panish Tweets Annotated by Humans versus Machines with {BETO} Models", author = "Yusuf, Hafizh and Belmonte, David and Simancek, Dalton and Vydiswaran, V.G.Vinod", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.35", pages = "149--152", abstract = "The goal of Social Media Mining for Health ({\#}SMM4H) 2024 Task 7 was to train a machine learning model that is able to distinguish between annotations made by humans and those made by a Large Language Model (LLM). The dataset consisted of tweets originating from {\#}SMM4H 2023 Task 3, wherein the objective was to extract COVID-19 symptoms in Latin-American Spanish tweets. Due to the lack of additional annotated tweets for classification, we reframed the task using the available tweets and their corresponding human or machine annotator labels to explore differences between the two subsets of tweets. We conducted an exploratory data analysis and trained a BERT-based classifier to identify sampling biases between the two subsets. The exploratory data analysis found no significant differences between the samples and our best classifier achieved a precision of 0.52 and a recall of 0.51, indicating near-random performance. This confirms the lack of sampling biases between the two sets of tweets and is thus a valid dataset for a task designed to assess the authorship of annotations by humans versus machines.", }
The goal of Social Media Mining for Health ({\#}SMM4H) 2024 Task 7 was to train a machine learning model that is able to distinguish between annotations made by humans and those made by a Large Language Model (LLM). The dataset consisted of tweets originating from {\#}SMM4H 2023 Task 3, wherein the objective was to extract COVID-19 symptoms in Latin-American Spanish tweets. Due to the lack of additional annotated tweets for classification, we reframed the task using the available tweets and their corresponding human or machine annotator labels to explore differences between the two subsets of tweets. We conducted an exploratory data analysis and trained a BERT-based classifier to identify sampling biases between the two subsets. The exploratory data analysis found no significant differences between the samples and our best classifier achieved a precision of 0.52 and a recall of 0.51, indicating near-random performance. This confirms the lack of sampling biases between the two sets of tweets and is thus a valid dataset for a task designed to assess the authorship of annotations by humans versus machines.
[ "Yusuf, Hafizh", "Belmonte, David", "Simancek, Dalton", "Vydiswaran, V.G.Vinod" ]
712forTask7 at #SMM4H 2024 Task 7: Classifying Spanish Tweets Annotated by Humans versus Machines with BETO Models
smm4h-1.35
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.35/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.36.bib
@inproceedings{berkowitz-etal-2024-tlab, title = "{TL}ab at {\#}{SMM}4{H} 2024: Retrieval-Augmented Generation for {ADE} Extraction and Normalization", author = "Berkowitz, Jacob and Srinivasan, Apoorva and Cortina, Jose and Tatonetti, Nicholas", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.36", pages = "153--157", abstract = "SMM4H 2024 Task 1 is focused on the identification of standardized Adverse Drug Events (ADEs) in tweets. We introduce a novel Retrieval-Augmented Generation (RAG) method, leveraging the capabilities of Llama 3, GPT-4, and the SFR-embedding-mistral model, along with few-shot prompting techniques, to map colloquial tweet language to MedDRA Preferred Terms (PTs) without relying on extensive training datasets. Our method achieved competitive performance, with an F1 score of 0.359 in the normalization task and 0.392 in the named entity recognition (NER) task. Notably, our model demonstrated robustness in identifying previously unseen MedDRA PTs (F1=0.363), greatly surpassing the median task score of 0.141 for such terms.", }
SMM4H 2024 Task 1 is focused on the identification of standardized Adverse Drug Events (ADEs) in tweets. We introduce a novel Retrieval-Augmented Generation (RAG) method, leveraging the capabilities of Llama 3, GPT-4, and the SFR-embedding-mistral model, along with few-shot prompting techniques, to map colloquial tweet language to MedDRA Preferred Terms (PTs) without relying on extensive training datasets. Our method achieved competitive performance, with an F1 score of 0.359 in the normalization task and 0.392 in the named entity recognition (NER) task. Notably, our model demonstrated robustness in identifying previously unseen MedDRA PTs (F1=0.363), greatly surpassing the median task score of 0.141 for such terms.
[ "Berkowitz, Jacob", "Srinivasan, Apoorva", "Cortina, Jose", "Tatonetti1, Nicholas" ]
TLab at #SMM4H 2024: Retrieval-Augmented Generation for ADE Extraction and Normalization
smm4h-1.36
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.36/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.37.bib
@inproceedings{afonso-etal-2024-bit, title = "{BIT}@{UA} at {\#}{SMM}4{H} 2024 Tasks 1 and 5: finding adverse drug events and children{'}s medical disorders in {E}nglish tweets", author = "Afonso, Luis and Almeida, Jo{\~a}o and Antunes, Rui and Oliveira, Jos{\'e}", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.37", pages = "158--162", abstract = "In this paper we present our proposed systems for Tasks 1 and 5 of the {\#}SMM4H-2024 shared task (Social Media Mining for Health), responsible for identifying health-related aspects in English social media text. Task 1 consisted of identifying text spans mentioning adverse drug events and linking them to unique identifiers from the medical terminology MedDRA, whereas in Task 5 the aim was to distinguish tweets that report a user having a child with a medical disorder from tweets that merely mention a disorder. For Task 1, our system, composed of a pre-trained RoBERTa model and a random forest classifier, achieved entity recognition and normalization F1-scores of 0.397 and 0.295, respectively. In Task 5, we obtained a 0.840 F1-score using a pre-trained BERT model.", }
In this paper we present our proposed systems for Tasks 1 and 5 of the {\#}SMM4H-2024 shared task (Social Media Mining for Health), responsible for identifying health-related aspects in English social media text. Task 1 consisted of identifying text spans mentioning adverse drug events and linking them to unique identifiers from the medical terminology MedDRA, whereas in Task 5 the aim was to distinguish tweets that report a user having a child with a medical disorder from tweets that merely mention a disorder. For Task 1, our system, composed of a pre-trained RoBERTa model and a random forest classifier, achieved entity recognition and normalization F1-scores of 0.397 and 0.295, respectively. In Task 5, we obtained a 0.840 F1-score using a pre-trained BERT model.
[ "Afonso, Luis", "Almeida, Jo{\\~a}o", "Antunes, Rui", "Oliveira, Jos{\\'e}" ]
BIT@UA at #SMM4H 2024 Tasks 1 and 5: finding adverse drug events and children's medical disorders in English tweets
smm4h-1.37
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.37/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.38.bib
@inproceedings{jana-etal-2024-force, title = "{FORCE}: A Benchmark Dataset for Foodborne Disease Outbreak and Recall Event Extraction from News", author = "Jana, Sudeshna and Sinha, Manjira and Dasgupta, Tirthankar", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.38", pages = "163--169", abstract = "The escalating prevalence of food safety incidents within the food supply chain necessitates immediate action to protect consumers. These incidents encompass a spectrum of issues, including food product contamination and deliberate food and feed adulteration for economic gain, leading to outbreaks and recalls. Understanding the origins and pathways of contamination is imperative for prevention and mitigation. In this paper, we introduce FORCE (Foodborne disease Outbreak and ReCall Event extraction from openweb). Our proposed model leverages a multi-tasking sequence labeling architecture in conjunction with transformer-based document embeddings. We have compiled a substantial annotated corpus comprising relevant articles published between 2011 and 2023 to train and evaluate the model. The dataset will be publicly released with the paper. The event detection model demonstrates fair accuracy in identifying food-related incidents and outbreaks associated with organizations, as assessed through cross-validation techniques.", }
The escalating prevalence of food safety incidents within the food supply chain necessitates immediate action to protect consumers. These incidents encompass a spectrum of issues, including food product contamination and deliberate food and feed adulteration for economic gain, leading to outbreaks and recalls. Understanding the origins and pathways of contamination is imperative for prevention and mitigation. In this paper, we introduce FORCE (Foodborne disease Outbreak and ReCall Event extraction from openweb). Our proposed model leverages a multi-tasking sequence labeling architecture in conjunction with transformer-based document embeddings. We have compiled a substantial annotated corpus comprising relevant articles published between 2011 and 2023 to train and evaluate the model. The dataset will be publicly released with the paper. The event detection model demonstrates fair accuracy in identifying food-related incidents and outbreaks associated with organizations, as assessed through cross-validation techniques.
[ "Jana, Sudeshna", "Sinha, Manjira", "Dasgupta, Tirthankar" ]
FORCE: A Benchmark Dataset for Foodborne Disease Outbreak and Recall Event Extraction from News
smm4h-1.38
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.38/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.39.bib
@inproceedings{raithel-etal-2024-overview, title = "Overview of {\#}{SMM}4{H} 2024 {--} Task 2: Cross-Lingual Few-Shot Relation Extraction for Pharmacovigilance in {F}rench, {G}erman, and {J}apanese", author = {Raithel, Lisa and Thomas, Philippe and Verma, Bhuvanesh and Roller, Roland and Yeh, Hui-Syuan and Yada, Shuntaro and Grouin, Cyril and Wakamiya, Shoko and Aramaki, Eiji and M{\"o}ller, Sebastian and Zweigenbaum, Pierre}, editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.39", pages = "170--182", abstract = "This paper provides an overview of Task 2 from the Social Media Mining for Health 2024 shared task ({\#}SMM4H 2024), which focused on Named Entity Recognition (NER, Subtask 2a) and the joint task of NER and Relation Extraction (RE, Subtask 2b) for detecting adverse drug reactions (ADRs) in German, Japanese, and French texts written by patients. Participants were challenged with a few-shot learning scenario, necessitating models that can effectively generalize from limited annotated examples. Despite the diverse strategies employed by the participants, the overall performance across submissions from three teams highlighted significant challenges. The results underscored the complexity of extracting entities and relations in multi-lingual contexts, especially from the noisy and informal nature of user-generated content. Further research is required to develop robust systems capable of accurately identifying and associating ADR-related information in low-resource and multilingual settings.", }
This paper provides an overview of Task 2 from the Social Media Mining for Health 2024 shared task ({\#}SMM4H 2024), which focused on Named Entity Recognition (NER, Subtask 2a) and the joint task of NER and Relation Extraction (RE, Subtask 2b) for detecting adverse drug reactions (ADRs) in German, Japanese, and French texts written by patients. Participants were challenged with a few-shot learning scenario, necessitating models that can effectively generalize from limited annotated examples. Despite the diverse strategies employed by the participants, the overall performance across submissions from three teams highlighted significant challenges. The results underscored the complexity of extracting entities and relations in multi-lingual contexts, especially from the noisy and informal nature of user-generated content. Further research is required to develop robust systems capable of accurately identifying and associating ADR-related information in low-resource and multilingual settings.
[ "Raithel, Lisa", "Thomas, Philippe", "Verma, Bhuvanesh", "Roller, Rol", "", "Yeh, Hui-Syuan", "Yada, Shuntaro", "Grouin, Cyril", "Wakamiya, Shoko", "Aramaki, Eiji", "M{\\\"o}ller, Sebastian", "Zweigenbaum, Pierre" ]
Overview of #SMM4H 2024 – Task 2: Cross-Lingual Few-Shot Relation Extraction for Pharmacovigilance in French, German, and Japanese
smm4h-1.39
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.39/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.40.bib
@inproceedings{xu-etal-2024-overview-9th, title = "Overview of the 9th Social Media Mining for Health Applications ({\#}{SMM}4{H}) Shared Tasks at {ACL} 2024 {--} Large Language Models and Generalizability for Social Media {NLP}", author = "Xu, Dongfang and Garcia, Guillermo and Raithel, Lisa and Thomas, Philippe and Roller, Roland and Aramaki, Eiji and Wakamiya, Shoko and Yada, Shuntaro and Zweigenbaum, Pierre and O{'}Connor, Karen and Samineni, Sai and Hernandez, Sophia and Ge, Yao and Rajwal, Swati and Das, Sudeshna and Sarker, Abeed and Klein, Ari and Schmidt, Ana and Sharma, Vishakha and Rodriguez-Esteban, Raul and Banda, Juan and Amaro, Ivan and Weissenbacher, Davy and Gonzalez-Hernandez, Graciela", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.40", pages = "183--195", abstract = "For the past nine years, the Social Media Mining for Health Applications ({\#}SMM4H) shared tasks have promoted community-driven development and evaluation of advanced natural language processing systems to detect, extract, and normalize health-related information in publicly available user-generated content. This year, {\#}SMM4H included seven shared tasks in English, Japanese, German, French, and Spanish from Twitter, Reddit, and health forums. A total of 84 teams from 22 countries registered for {\#}SMM4H, and 45 teams participated in at least one task. This represents a growth of 180{\%} and 160{\%} in registration and participation, respectively, compared to the last iteration. This paper provides an overview of the tasks and participating systems. The data sets remain available upon request, and new systems can be evaluated through the post-evaluation phase on CodaLab.", }
For the past nine years, the Social Media Mining for Health Applications ({\#}SMM4H) shared tasks have promoted community-driven development and evaluation of advanced natural language processing systems to detect, extract, and normalize health-related information in publicly available user-generated content. This year, {\#}SMM4H included seven shared tasks in English, Japanese, German, French, and Spanish from Twitter, Reddit, and health forums. A total of 84 teams from 22 countries registered for {\#}SMM4H, and 45 teams participated in at least one task. This represents a growth of 180{\%} and 160{\%} in registration and participation, respectively, compared to the last iteration. This paper provides an overview of the tasks and participating systems. The data sets remain available upon request, and new systems can be evaluated through the post-evaluation phase on CodaLab.
[ "Xu, Dongfang", "Garcia, Guillermo", "Raithel, Lisa", "Thomas, Philippe", "Roller, Rol", "", "Aramaki, Eiji", "Wakamiya, Shoko", "Yada, Shuntaro", "Zweigenbaum, Pierre", "O{'}Connor, Karen", "Samineni, Sai", "Hern", "ez, Sophia", "Ge, Yao", "Rajwal, Swati", "Das, Sudeshna", "Sarker, Abeed", "Klein, Ari", "Schmidt, Ana", "Sharma, Vishakha", "Rodriguez-Esteban, Raul", "B", "a, Juan", "Amaro, Ivan", "Weissenbacher, Davy", "Gonzalez-Hern", "ez, Graciela" ]
Overview of the 9th Social Media Mining for Health Applications (#SMM4H) Shared Tasks at ACL 2024 – Large Language Models and Generalizability for Social Media NLP
smm4h-1.40
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.40/
[]
[]
[]
0
https://aclanthology.org/2024.splurobonlp-1.1.bib
@inproceedings{zhang-etal-2024-language, title = "Language-guided World Models: A Model-based Approach to {AI} Control", author = "Zhang, Alex and Nguyen, Khanh and Tuyls, Jens and Lin, Albert and Narasimhan, Karthik", editor = "Kordjamshidi, Parisa and Wang, Xin Eric and Zhang, Yue and Ma, Ziqiao and Inan, Mert", booktitle = "Proceedings of the 4th Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.splurobonlp-1.1", pages = "1--16", abstract = "Developing internal world models for artificial agents opens an efficient channel for humans to communicate with and control them. In addition to updating policies, humans can modify the world models of these agents in order to influence their decisions. The challenge, however, is that currently existing world models are difficult for humans to adapt because they lack a natural communication interface. Aimed at addressing this shortcoming, we develop *Language-Guided World Models* (LWMs), which can capture environment dynamics by reading language descriptions. These models enhance agent communication efficiency, allowing humans to simultaneously alter their behavior on multiple tasks with concise language feedback. They also enable agents to self-learn from texts originally written to instruct humans. To facilitate the development of LWMs, we design a challenging benchmark based on the game of MESSENGER (Hanjie et al., 2021), requiring compositional generalization to new language descriptions and environment dynamics. Our experiments reveal that the current state-of-the-art Transformer architecture performs poorly on this benchmark, motivating us to design a more robust architecture. To showcase the practicality of our proposed LWMs, we simulate a scenario where these models augment the interpretability and safety of an agent by enabling it to generate and discuss plans with a human before execution. By effectively incorporating language feedback on the plan, the models boost the agent performance in the real environment by up to three times without collecting any interactive experiences in this environment.", }
Developing internal world models for artificial agents opens an efficient channel for humans to communicate with and control them. In addition to updating policies, humans can modify the world models of these agents in order to influence their decisions. The challenge, however, is that currently existing world models are difficult for humans to adapt because they lack a natural communication interface. Aimed at addressing this shortcoming, we develop *Language-Guided World Models* (LWMs), which can capture environment dynamics by reading language descriptions. These models enhance agent communication efficiency, allowing humans to simultaneously alter their behavior on multiple tasks with concise language feedback. They also enable agents to self-learn from texts originally written to instruct humans. To facilitate the development of LWMs, we design a challenging benchmark based on the game of MESSENGER (Hanjie et al., 2021), requiring compositional generalization to new language descriptions and environment dynamics. Our experiments reveal that the current state-of-the-art Transformer architecture performs poorly on this benchmark, motivating us to design a more robust architecture. To showcase the practicality of our proposed LWMs, we simulate a scenario where these models augment the interpretability and safety of an agent by enabling it to generate and discuss plans with a human before execution. By effectively incorporating language feedback on the plan, the models boost the agent performance in the real environment by up to three times without collecting any interactive experiences in this environment.
[ "Zhang, Alex", "Nguyen, Khanh", "Tuyls, Jens", "Lin, Albert", "Narasimhan, Karthik" ]
Language-guided World Models: A Model-based Approach to AI Control
splurobonlp-1.1
Poster
2402.01695
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.splurobonlp-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.splurobonlp-1.2.bib
@inproceedings{sadler-etal-2024-learning, title = "Learning Communication Policies for Different Follower Behaviors in a Collaborative Reference Game", author = "Sadler, Philipp and Hakimov, Sherzod and Schlangen, David", editor = "Kordjamshidi, Parisa and Wang, Xin Eric and Zhang, Yue and Ma, Ziqiao and Inan, Mert", booktitle = "Proceedings of the 4th Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.splurobonlp-1.2", pages = "17--29", abstract = "In this work, we evaluate the adaptability of neural agents towards assumed partner behaviors in a collaborative reference game. In this game, success is achieved when a knowledgeable guide can verbally lead a follower to the selection of a specific puzzle piece among several distractors. We frame this language grounding and coordination task as a reinforcement learning problem and measure to what extent a common reinforcement training algorithm (PPO) is able to produce neural agents (the guides) that perform well with various heuristic follower behaviors that vary along the dimensions of confidence and autonomy. We experiment with a learning signal that, in addition to the goal condition, also respects an assumed communicative effort. Our results indicate that this novel ingredient leads to communicative strategies that are less verbose (staying silent in some of the steps) and that, with respect to that, the guide{'}s strategies indeed adapt to the partner{'}s level of confidence and autonomy.", }
In this work, we evaluate the adaptability of neural agents towards assumed partner behaviors in a collaborative reference game. In this game, success is achieved when a knowledgeable guide can verbally lead a follower to the selection of a specific puzzle piece among several distractors. We frame this language grounding and coordination task as a reinforcement learning problem and measure to what extent a common reinforcement training algorithm (PPO) is able to produce neural agents (the guides) that perform well with various heuristic follower behaviors that vary along the dimensions of confidence and autonomy. We experiment with a learning signal that, in addition to the goal condition, also respects an assumed communicative effort. Our results indicate that this novel ingredient leads to communicative strategies that are less verbose (staying silent in some of the steps) and that, with respect to that, the guide{'}s strategies indeed adapt to the partner{'}s level of confidence and autonomy.
[ "Sadler, Philipp", "Hakimov, Sherzod", "Schlangen, David" ]
Learning Communication Policies for Different Follower Behaviors in a Collaborative Reference Game
splurobonlp-1.2
Poster
2402.04824
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.splurobonlp-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.splurobonlp-1.3.bib
@inproceedings{kawabata-etal-2024-collection, title = "Collection of {J}apanese Route Information Reference Expressions Using Maps as Stimuli", author = "Kawabata, Yoshiko and Omura, Mai and Konishi, Hikari and Asahara, Masayuki and Takeuchi, Johane", editor = "Kordjamshidi, Parisa and Wang, Xin Eric and Zhang, Yue and Ma, Ziqiao and Inan, Mert", booktitle = "Proceedings of the 4th Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.splurobonlp-1.3", pages = "30--35", abstract = "We constructed a database of Japanese expressions based on route information. Using 20 maps as stimuli, we requested descriptions of routes between two points on each map from 40 individuals per route, collecting 1600 route information reference expressions. We determined whether the expressions were based solely on relative reference expressions by using landmarks on the maps. In cases in which only relative reference expressions were used, we labeled the presence or absence of information regarding the starting point, waypoints, and destination. Additionally, we collected clarity ratings for each expression using a survey.", }
We constructed a database of Japanese expressions based on route information. Using 20 maps as stimuli, we requested descriptions of routes between two points on each map from 40 individuals per route, collecting 1600 route information reference expressions. We determined whether the expressions were based solely on relative reference expressions by using landmarks on the maps. In cases in which only relative reference expressions were used, we labeled the presence or absence of information regarding the starting point, waypoints, and destination. Additionally, we collected clarity ratings for each expression using a survey.
[ "Kawabata, Yoshiko", "Omura, Mai", "Konishi, Hikari", "Asahara, Masayuki", "Takeuchi, Johane" ]
Collection of Japanese Route Information Reference Expressions Using Maps as Stimuli
splurobonlp-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.splurobonlp-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.1.bib
@inproceedings{wilson-2024-documenting-unwritten, title = "Documenting the Unwritten Curriculum of Student Research", author = "Wilson, Shomir", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.1", pages = "1--3", abstract = "Graduate and undergraduate student researchers in natural language processing (NLP) often need mentoring to learn the norms of research. While methodological and technical knowledge are essential, there is also a {``}hidden curriculum{''} of experiential knowledge about topics like work strategies, common obstacles, collaboration, conferences, and scholarly writing. As a professor, I have written a set of guides that cover typically unwritten customs and procedures for academic research. I share them with advisees to help them understand research norms and to help us focus on their specific questions and interests. This paper describes these guides, which are freely accessible on the web (https://shomir.net/advice), and I provide recommendations to faculty who are interested in creating similar materials for their advisees.", }
Graduate and undergraduate student researchers in natural language processing (NLP) often need mentoring to learn the norms of research. While methodological and technical knowledge are essential, there is also a {``}hidden curriculum{''} of experiential knowledge about topics like work strategies, common obstacles, collaboration, conferences, and scholarly writing. As a professor, I have written a set of guides that cover typically unwritten customs and procedures for academic research. I share them with advisees to help them understand research norms and to help us focus on their specific questions and interests. This paper describes these guides, which are freely accessible on the web (https://shomir.net/advice), and I provide recommendations to faculty who are interested in creating similar materials for their advisees.
[ "Wilson, Shomir" ]
Documenting the Unwritten Curriculum of Student Research
teachingnlp-1.1
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.2.bib
@inproceedings{parde-2024-example-driven, title = "Example-Driven Course Slides on Natural Language Processing Concepts", author = "Parde, Natalie", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.2", pages = "4--6", abstract = "Natural language processing (NLP) is a fast-paced field and a popular course topic in many undergraduate and graduate programs. This paper presents a comprehensive suite of example-driven course slides covering NLP concepts, ranging from fundamental building blocks to modern state-of-the-art approaches. In contributing these slides, I hope to alleviate burden for those starting out as faculty or in need of course material updates. The slides are publicly available for external use and are updated regularly to incorporate new advancements.", }
Natural language processing (NLP) is a fast-paced field and a popular course topic in many undergraduate and graduate programs. This paper presents a comprehensive suite of example-driven course slides covering NLP concepts, ranging from fundamental building blocks to modern state-of-the-art approaches. In contributing these slides, I hope to alleviate burden for those starting out as faculty or in need of course material updates. The slides are publicly available for external use and are updated regularly to incorporate new advancements.
[ "Parde, Natalie" ]
Example-Driven Course Slides on Natural Language Processing Concepts
teachingnlp-1.2
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.3.bib
@inproceedings{nikishina-etal-2024-industry-vs, title = "Industry vs Academia: Running a Course on Transformers in Two Setups", author = "Nikishina, Irina and Tikhonova, Maria and Chekalina, Viktoriia and Zaytsev, Alexey and Vazhentsev, Artem and Panchenko, Alexander", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.3", pages = "7--22", abstract = "This paper presents a course on neural networks based on the Transformer architecture targeted at diverse groups of people from academia and industry with experience in Python, Machine Learning, and Deep Learning but little or no experience with Transformers. The course covers a comprehensive overview of the Transformers NLP applications and their use for other data types. The course features 15 sessions, each consisting of a lecture and a practical part, and two homework assignments organized as CodaLab competitions. The first six sessions of the course are devoted to the Transformer and the variations of this architecture (e.g., encoders, decoders, encoder-decoders) as well as different techniques of model tuning. Subsequent sessions are devoted to multilingualism, multimodality (e.g., texts and images), efficiency, event sequences, and tabular data. We ran the course for different audiences: academic students and people from industry. The first run was held in 2022. During the subsequent iterations until 2024, it was constantly updated and extended with recently emerged findings on GPT-4, LLMs, RLHF, etc. Overall, it has been run six times (four times in industry and twice in academia) and received positive feedback from academic and industry students.", }
This paper presents a course on neural networks based on the Transformer architecture targeted at diverse groups of people from academia and industry with experience in Python, Machine Learning, and Deep Learning but little or no experience with Transformers. The course covers a comprehensive overview of the Transformers NLP applications and their use for other data types. The course features 15 sessions, each consisting of a lecture and a practical part, and two homework assignments organized as CodaLab competitions. The first six sessions of the course are devoted to the Transformer and the variations of this architecture (e.g., encoders, decoders, encoder-decoders) as well as different techniques of model tuning. Subsequent sessions are devoted to multilingualism, multimodality (e.g., texts and images), efficiency, event sequences, and tabular data. We ran the course for different audiences: academic students and people from industry. The first run was held in 2022. During the subsequent iterations until 2024, it was constantly updated and extended with recently emerged findings on GPT-4, LLMs, RLHF, etc. Overall, it has been run six times (four times in industry and twice in academia) and received positive feedback from academic and industry students.
[ "Nikishina, Irina", "Tikhonova, Maria", "Chekalina, Viktoriia", "Zaytsev, Alexey", "Vazhentsev, Artem", "Panchenko, Alexander" ]
Industry vs Academia: Running a Course on Transformers in Two Setups
teachingnlp-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.4.bib
@inproceedings{joshi-etal-2024-striking-balance, title = "Striking a Balance between Classical and Deep Learning Approaches in Natural Language Processing Pedagogy", author = "Joshi, Aditya and Renzella, Jake and Bhattacharyya, Pushpak and Jha, Saurav and Zhang, Xiangyu", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.4", pages = "23--32", abstract = "While deep learning approaches represent the state-of-the-art of natural language processing (NLP) today, classical algorithms and approaches still find a place in NLP textbooks and courses of recent years. This paper discusses the perspectives of conveners of two introductory NLP courses taught in Australia and India, and examines how classical and deep learning approaches can be balanced within the lecture plan and assessments of the courses. We also draw parallels with the objects-first and objects-later debate in CS1 education. We observe that teaching classical approaches adds value to student learning by building an intuitive understanding of NLP problems, potential solutions, and even deep learning models themselves. Despite classical approaches not being state-of-the-art, the paper makes a case for their inclusion in NLP courses today.", }
While deep learning approaches represent the state-of-the-art of natural language processing (NLP) today, classical algorithms and approaches still find a place in NLP textbooks and courses of recent years. This paper discusses the perspectives of conveners of two introductory NLP courses taught in Australia and India, and examines how classical and deep learning approaches can be balanced within the lecture plan and assessments of the courses. We also draw parallels with the objects-first and objects-later debate in CS1 education. We observe that teaching classical approaches adds value to student learning by building an intuitive understanding of NLP problems, potential solutions, and even deep learning models themselves. Despite classical approaches not being state-of-the-art, the paper makes a case for their inclusion in NLP courses today.
[ "Joshi, Aditya", "Renzella, Jake", "Bhattacharyya, Pushpak", "Jha, Saurav", "Zhang, Xiangyu" ]
Striking a Balance between Classical and Deep Learning Approaches in Natural Language Processing Pedagogy
teachingnlp-1.4
Poster
2405.09854
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.4/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.5.bib
@inproceedings{mccrae-2024-co-creational, title = "Co-Creational Teaching of Natural Language Processing", author = "McCrae, John", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.5", pages = "33--42", abstract = "Traditional lectures have poorer outcomes compared to active learning methodologies, yet many natural language processing classes in higher education still follow this outdated methodology. In this paper, we present co-creational teaching, a methodology that encourages partnership between staff and lecturers, and show how this can be applied to teach natural language processing. As a fast-moving and dynamic area of study with high interest from students, natural language processing is an ideal subject for innovative teaching methodologies to improve student outcomes. We detail our experience with teaching natural language processing through partnership with students and provide detailed descriptions of methodologies that can be used by others in their teaching, including considerations of diverse student populations.", }
Traditional lectures have poorer outcomes compared to active learning methodologies, yet many natural language processing classes in higher education still follow this outdated methodology. In this paper, we present co-creational teaching, a methodology that encourages partnership between staff and lecturers, and show how this can be applied to teach natural language processing. As a fast-moving and dynamic area of study with high interest from students, natural language processing is an ideal subject for innovative teaching methodologies to improve student outcomes. We detail our experience with teaching natural language processing through partnership with students and provide detailed descriptions of methodologies that can be used by others in their teaching, including considerations of diverse student populations.
[ "McCrae, John" ]
Co-Creational Teaching of Natural Language Processing
teachingnlp-1.5
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.5/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.6.bib
@inproceedings{assenmacher-etal-2024-collaborative-development, title = "Collaborative Development of Modular Open Source Educational Resources for Natural Language Processing", author = {A{\ss}enmacher, Matthias and Stephan, Andreas and Weissweiler, Leonie and {\c{C}}ano, Erion and Ziegler, Ingo and H{\"a}rttrich, Marwin and Bischl, Bernd and Roth, Benjamin and Heumann, Christian and Sch{\"u}tze, Hinrich}, editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.6", pages = "43--53", abstract = "In this work, we present a collaboratively and continuously developed open-source educational resource (OSER) for teaching natural language processing at two different universities. We shed light on the principles we followed for the initial design of the course and the rationale for ongoing developments, followed by a reflection on the inter-university collaboration for designing and maintaining teaching material. When reflecting on the latter, we explicitly emphasize the considerations that need to be made when facing heterogeneous groups and when having to accommodate multiple examination regulations within one single course framework. Relying on the fundamental principles of OSER developments as defined by Bothmann et al. (2023) proved to be an important guideline during this process. The final part pertains to open-sourcing our teaching material, coping with the increasing speed of developments in the field, and integrating the course digitally, also addressing conflicting priorities and challenges we are currently facing.", }
In this work, we present a collaboratively and continuously developed open-source educational resource (OSER) for teaching natural language processing at two different universities. We shed light on the principles we followed for the initial design of the course and the rationale for ongoing developments, followed by a reflection on the inter-university collaboration for designing and maintaining teaching material. When reflecting on the latter, we explicitly emphasize the considerations that need to be made when facing heterogeneous groups and when having to accommodate multiple examination regulations within one single course framework. Relying on the fundamental principles of OSER developments as defined by Bothmann et al. (2023) proved to be an important guideline during this process. The final part pertains to open-sourcing our teaching material, coping with the increasing speed of developments in the field, and integrating the course digitally, also addressing conflicting priorities and challenges we are currently facing.
[ "A{\\ss}enmacher, Matthias", "Stephan, Andreas", "Weissweiler, Leonie", "{\\c{C}}ano, Erion", "Ziegler, Ingo", "H{\\\"a}rttrich, Marwin", "Bischl, Bernd", "Roth, Benjamin", "Heumann, Christian", "Sch{\\\"u}tze, Hinrich" ]
Collaborative Development of Modular Open Source Educational Resources for Natural Language Processing
teachingnlp-1.6
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.6/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.7.bib
@inproceedings{cignarella-etal-2024-hate-speech, title = "From Hate Speech to Societal Empowerment: A Pedagogical Journey Through Computational Thinking and {NLP} for High School Students", author = "Cignarella, Alessandra Teresa and Chierchiello, Elisa and Ferrando, Chiara and Frenda, Simona and Lo, Soda Marem and Marra, Andrea", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.7", pages = "54--65", abstract = "The teaching laboratory we have created integrates methodologies to address the topic of hate speech on social media among students while fostering computational thinking and AI education for societal impact. We provide a foundational understanding of hate speech and introduce computational concepts using matrices, bag of words, and practical exercises in platforms like Colaboratory. Additionally, we emphasize the application of AI, particularly in NLP, to address real-world challenges. Through retrospective evaluation, we assess the efficacy of our approach, aiming to empower students as proactive contributors to societal betterment. With this paper we present an overview of the laboratory{'}s structure, the primary materials used, and insights gleaned from six editions conducted to the present date.", }
The teaching laboratory we have created integrates methodologies to address the topic of hate speech on social media among students while fostering computational thinking and AI education for societal impact. We provide a foundational understanding of hate speech and introduce computational concepts using matrices, bag of words, and practical exercises in platforms like Colaboratory. Additionally, we emphasize the application of AI, particularly in NLP, to address real-world challenges. Through retrospective evaluation, we assess the efficacy of our approach, aiming to empower students as proactive contributors to societal betterment. With this paper we present an overview of the laboratory{'}s structure, the primary materials used, and insights gleaned from six editions conducted to the present date.
[ "Cignarella, Alessandra Teresa", "Chierchiello, Elisa", "Ferrando, Chiara", "Frenda, Simona", "Lo, Soda Marem", "Marra, Andrea" ]
From Hate Speech to Societal Empowerment: A Pedagogical Journey Through Computational Thinking and NLP for High School Students
teachingnlp-1.7
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.7/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.8.bib
@inproceedings{biester-wu-2024-tightly-coupled, title = "Tightly Coupled Worksheets and Homework Assignments for {NLP}", author = "Biester, Laura and Wu, Winston", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.8", pages = "66--68", abstract = "In natural language processing courses, students often struggle to debug their code. In this paper, we present three homework assignments that are tightly coupled with in-class worksheets. The worksheets allow students to confirm their understanding of the algorithms on paper before trying to write code. Then, as students complete the coding portion of the assignments, the worksheets aid students in the debugging process as test cases for the code, allowing students to seamlessly compare their results to those from the correct execution of the algorithm.", }
In natural language processing courses, students often struggle to debug their code. In this paper, we present three homework assignments that are tightly coupled with in-class worksheets. The worksheets allow students to confirm their understanding of the algorithms on paper before trying to write code. Then, as students complete the coding portion of the assignments, the worksheets aid students in the debugging process as test cases for the code, allowing students to seamlessly compare their results to those from the correct execution of the algorithm.
[ "Biester, Laura", "Wu, Winston" ]
Tightly Coupled Worksheets and Homework Assignments for NLP
teachingnlp-1.8
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.8/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.9.bib
@inproceedings{helcl-etal-2024-teaching-llms, title = "Teaching {LLM}s at {C}harles {U}niversity: Assignments and Activities", author = "Helcl, Jind{\v{r}}ich and Kasner, Zden{\v{e}}k and Du{\v{s}}ek, Ond{\v{r}}ej and Limisiewicz, Tomasz and Mach{\'a}{\v{c}}ek, Dominik and Musil, Tom{\'a}{\v{s}} and Libovick{\'y}, Jind{\v{r}}ich", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.9", pages = "69--72", abstract = "This paper presents teaching materials, particularly assignments and ideas for classroom activities, from a new course on large language models. The assignments include experiments with LLM inference for weather report generation and machine translation. The classroom activities include class quizzes, focused research on downstream tasks and datasets, and an interactive {``}best paper{''} session aimed at reading and comprehension of research papers.", }
This paper presents teaching materials, particularly assignments and ideas for classroom activities, from a new course on large language models. The assignments include experiments with LLM inference for weather report generation and machine translation. The classroom activities include class quizzes, focused research on downstream tasks and datasets, and an interactive {``}best paper{''} session aimed at reading and comprehension of research papers.
[ "Helcl, Jind{\\v{r}}ich", "Kasner, Zden{\\v{e}}k", "Du{\\v{s}}ek, Ond{\\v{r}}ej", "Limisiewicz, Tomasz", "Mach{\\'a}{\\v{c}}ek, Dominik", "Musil, Tom{\\'a}{\\v{s}}", "Libovick{\\'y}, Jind{\\v{r}}ich" ]
Teaching LLMs at Charles University: Assignments and Activities
teachingnlp-1.9
Poster
2407.19798
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.9/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.10.bib
@inproceedings{lee-etal-2024-empowering-future, title = "Empowering the Future with Multilinguality and Language Diversity", author = "Lee, En-Shiun and Uemura, Kosei and Wasti, Syed and Shipton, Mason", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.10", pages = "73--76", abstract = "The rapid advancements and the widespread transformation of Large Language Models have made it necessary to incorporate these cutting-edge techniques into the educational curricula of Natural Language Processing (NLP) with limited computing resources. This paper presents an applied NLP course designed for upper-year computer science undergraduate students on state-of-the-art techniques with an emphasis on multilinguality and language diversity. We hope to empower learners to advance their language community while preparing for industry.", }
The rapid advancements and the widespread transformation of Large Language Models have made it necessary to incorporate these cutting-edge techniques into the educational curricula of Natural Language Processing (NLP) with limited computing resources. This paper presents an applied NLP course designed for upper-year computer science undergraduate students on state-of-the-art techniques with an emphasis on multilinguality and language diversity. We hope to empower learners to advance their language community while preparing for industry.
[ "Lee, En-Shiun", "Uemura, Kosei", "Wasti, Syed", "Shipton, Mason" ]
Empowering the Future with Multilinguality and Language Diversity
teachingnlp-1.10
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.10/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.11.bib
@inproceedings{hou-etal-2024-course-shared, title = "A Course Shared Task on Evaluating {LLM} Output for Clinical Questions", author = "Hou, Yufang and Tran, Thy and Vu, Doan and Cao, Yiwen and Li, Kai and Rohde, Lukas and Gurevych, Iryna", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.11", pages = "77--80", abstract = "This paper presents a shared task that we organized at the Foundations of Language Technology (FoLT) course in 2023/2024 at the Technical University of Darmstadt, which focuses on evaluating the output of Large Language Models (LLMs) in generating harmful answers to health-related clinical questions. We describe the task design considerations and report the feedback we received from the students. We expect the task and the findings reported in this paper to be relevant for instructors teaching natural language processing (NLP).", }
This paper presents a shared task that we organized at the Foundations of Language Technology (FoLT) course in 2023/2024 at the Technical University of Darmstadt, which focuses on evaluating the output of Large Language Models (LLMs) in generating harmful answers to health-related clinical questions. We describe the task design considerations and report the feedback we received from the students. We expect the task and the findings reported in this paper to be relevant for instructors teaching natural language processing (NLP).
[ "Hou, Yufang", "Tran, Thy", "Vu, Doan", "Cao, Yiwen", "Li, Kai", "Rohde, Lukas", "Gurevych, Iryna" ]
A Course Shared Task on Evaluating LLM Output for Clinical Questions
teachingnlp-1.11
Poster
2408.00122
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.11/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.12.bib
@inproceedings{anderson-2024-prompting-assignment, title = "A Prompting Assignment for Exploring Pretrained {LLM}s", author = "Anderson, Carolyn", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.12", pages = "81--84", abstract = "As the scale of publicly-available large language models (LLMs) has increased, so has interest in few-shot prompting methods. This paper presents an assignment that asks students to explore three aspects of large language model capabilities (commonsense reasoning, factuality, and wordplay) with a prompt engineering focus. The assignment consists of three tasks designed to share a common programming framework, so that students can reuse and adapt code from earlier tasks. Two of the tasks also involve dataset construction: students are asked to construct a simple dataset for the wordplay task, and a more challenging dataset for the factuality task. In addition, the assignment includes reflection questions that ask students to think critically about what they observe.", }
As the scale of publicly-available large language models (LLMs) has increased, so has interest in few-shot prompting methods. This paper presents an assignment that asks students to explore three aspects of large language model capabilities (commonsense reasoning, factuality, and wordplay) with a prompt engineering focus. The assignment consists of three tasks designed to share a common programming framework, so that students can reuse and adapt code from earlier tasks. Two of the tasks also involve dataset construction: students are asked to construct a simple dataset for the wordplay task, and a more challenging dataset for the factuality task. In addition, the assignment includes reflection questions that ask students to think critically about what they observe.
[ "Anderson, Carolyn" ]
A Prompting Assignment for Exploring Pretrained LLMs
teachingnlp-1.12
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.12/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.13.bib
@inproceedings{braun-2024-teaching-natural, title = "Teaching Natural Language Processing in Law School", author = "Braun, Daniel", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.13", pages = "85--90", abstract = "Fuelled by technical advances, the interest in Natural Language Processing in the legal domain has rapidly increased over the last months and years. The design, usage, and testing of domain-specific systems, but also assessing these systems from a legal perspective, needs competencies at the intersection of law and Natural Language Processing. While the demand for such competencies is high among students, only a few law schools, particularly in Europe, teach such competencies. In this paper, we present the design for a Natural Language Processing course for postgraduate law students that is based on the principle of constructive alignment and has proven to be successful over the last three years.", }
Fuelled by technical advances, the interest in Natural Language Processing in the legal domain has rapidly increased over the last months and years. The design, usage, and testing of domain-specific systems, but also assessing these systems from a legal perspective, needs competencies at the intersection of law and Natural Language Processing. While the demand for such competencies is high among students, only a few law schools, particularly in Europe, teach such competencies. In this paper, we present the design for a Natural Language Processing course for postgraduate law students that is based on the principle of constructive alignment and has proven to be successful over the last three years.
[ "Braun, Daniel" ]
Teaching Natural Language Processing in Law School
teachingnlp-1.13
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.13/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.14.bib
@inproceedings{anderson-2024-exploring-language, title = "Exploring Language Representation through a Resource Inventory Project", author = "Anderson, Carolyn", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.14", pages = "91--93", abstract = "The increasing scale of large language models has led some students to wonder what contributions can be made in academia. However, students are often unaware that LLM-based approaches are not feasible for the majority of the world{'}s languages due to lack of data availability. This paper presents a research project in which students explore the issue of language representation by creating an inventory of the data, preprocessing, and model resources available for a less-resourced language. Students are put into small groups and assigned a language to research. Within the group, students take on one of three roles: dataset investigator, preprocessing investigator, or downstream task investigator. Students then work together to create a 7-page research report about their language.", }
The increasing scale of large language models has led some students to wonder what contributions can be made in academia. However, students are often unaware that LLM-based approaches are not feasible for the majority of the world{'}s languages due to lack of data availability. This paper presents a research project in which students explore the issue of language representation by creating an inventory of the data, preprocessing, and model resources available for a less-resourced language. Students are put into small groups and assigned a language to research. Within the group, students take on one of three roles: dataset investigator, preprocessing investigator, or downstream task investigator. Students then work together to create a 7-page research report about their language.
[ "Anderson, Carolyn" ]
Exploring Language Representation through a Resource Inventory Project
teachingnlp-1.14
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.14/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.15.bib
@inproceedings{ginn-etal-2024-belt-building, title = "{BELT}: Building Endangered Language Technology", author = "Ginn, Michael and Saavedra-Beltr{\'a}n, David and Robayo, Camilo and Palmer, Alexis", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.15", pages = "94--104", abstract = "The development of language technology (LT) for an endangered language is often identified as a goal in language revitalization efforts, but developing such technologies is typically subject to additional methodological challenges as well as social and ethical concerns. In particular, LT development has too often taken on colonialist qualities, extracting language data, relying on outside experts, and denying the speakers of a language sovereignty over the technologies produced.We seek to avoid such an approach through the development of the Building Endangered Language Technology (BELT) website, an educational resource designed for speakers and community members with limited technological experience to develop LTs for their own language. Specifically, BELT provides interactive lessons on basic Python programming, coupled with projects to develop specific language technologies, such as spellcheckers or word games. In this paper, we describe BELT{'}s design, the motivation underlying many key decisions, and preliminary responses from learners.", }
The development of language technology (LT) for an endangered language is often identified as a goal in language revitalization efforts, but developing such technologies is typically subject to additional methodological challenges as well as social and ethical concerns. In particular, LT development has too often taken on colonialist qualities, extracting language data, relying on outside experts, and denying the speakers of a language sovereignty over the technologies produced. We seek to avoid such an approach through the development of the Building Endangered Language Technology (BELT) website, an educational resource designed for speakers and community members with limited technological experience to develop LTs for their own language. Specifically, BELT provides interactive lessons on basic Python programming, coupled with projects to develop specific language technologies, such as spellcheckers or word games. In this paper, we describe BELT{'}s design, the motivation underlying many key decisions, and preliminary responses from learners.
[ "Ginn, Michael", "Saavedra-Beltr{\\'a}n, David", "Robayo, Camilo", "Palmer, Alexis" ]
BELT: Building Endangered Language Technology
teachingnlp-1.15
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.15/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.16.bib
@inproceedings{prasad-davis-2024-training-nlp, title = "Training an {NLP} Scholar at a Small Liberal Arts College: A Backwards Designed Course Proposal", author = "Prasad, Grusha and Davis, Forrest", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.16", pages = "105--118", abstract = "The rapid growth in natural language processing (NLP) over the last couple yearshas generated student interest and excitement in learning more about the field. In this paper, we present two types of students that NLP courses might want to train. First, an {``}NLP engineer{''} who is able to flexibly design, build and apply new technologies in NLP for a wide range of tasks. Second, an {``}NLP scholar{''} who is able to pose, refine and answer questions in NLP and how it relates to the society, while also learning to effectively communicate these answers to a broader audience. While these two types of skills are not mutually exclusive {---} NLP engineers should be able to think critically, and NLP scholars should be able to build systems {---} we think that courses can differ in the balance of these skills. As educators at Small Liberal Arts Colleges, the strengths of our students and our institution favors an approach that is better suited to train NLP scholars. In this paper we articulate what kinds of skills an NLP scholar should have, and then adopt a backwards design to propose course components that can aid the acquisition of these skills.", }
The rapid growth in natural language processing (NLP) over the last couple of years has generated student interest and excitement in learning more about the field. In this paper, we present two types of students that NLP courses might want to train. First, an {``}NLP engineer{''} who is able to flexibly design, build and apply new technologies in NLP for a wide range of tasks. Second, an {``}NLP scholar{''} who is able to pose, refine and answer questions in NLP and how it relates to society, while also learning to effectively communicate these answers to a broader audience. While these two types of skills are not mutually exclusive {---} NLP engineers should be able to think critically, and NLP scholars should be able to build systems {---} we think that courses can differ in the balance of these skills. As educators at Small Liberal Arts Colleges, the strengths of our students and our institution favor an approach that is better suited to train NLP scholars. In this paper we articulate what kinds of skills an NLP scholar should have, and then adopt a backwards design to propose course components that can aid the acquisition of these skills.
[ "Prasad, Grusha", "Davis, Forrest" ]
Training an NLP Scholar at a Small Liberal Arts College: A Backwards Designed Course Proposal
teachingnlp-1.16
Poster
2408.05664
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.16/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.17.bib
@inproceedings{brown-etal-2024-interactive-toolkit, title = "An Interactive Toolkit for Approachable {NLP}", author = "Brown, AriaRay and Steuer, Julius and Mosbach, Marius and Klakow, Dietrich", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.17", pages = "119--127", abstract = "We present a novel tool designed for teaching and interfacing the information-theoretic modeling abilities of large language models. The Surprisal Toolkit allows students from diverse linguistic and programming backgrounds to learn about measures of information theory and natural language processing (NLP) through an online interactive tool. In addition, the interface provides a valuable research mechanism for obtaining measures of surprisal. We implement the toolkit as part of a classroom tutorial in three different learning scenarios and discuss the overall receptive student feedback. We suggest this toolkit and similar applications as resourceful supplements to instruction in NLP topics, especially for the purpose of balancing conceptual understanding with technical instruction, grounding abstract topics, and engaging students with varying coding abilities.", }
We present a novel tool designed for teaching and interfacing the information-theoretic modeling abilities of large language models. The Surprisal Toolkit allows students from diverse linguistic and programming backgrounds to learn about measures of information theory and natural language processing (NLP) through an online interactive tool. In addition, the interface provides a valuable research mechanism for obtaining measures of surprisal. We implement the toolkit as part of a classroom tutorial in three different learning scenarios and discuss the overall receptive student feedback. We suggest this toolkit and similar applications as resourceful supplements to instruction in NLP topics, especially for the purpose of balancing conceptual understanding with technical instruction, grounding abstract topics, and engaging students with varying coding abilities.
[ "Brown, AriaRay", "Steuer, Julius", "Mosbach, Marius", "Klakow, Dietrich" ]
An Interactive Toolkit for Approachable NLP
teachingnlp-1.17
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.17/
[]
[]
[]
0
https://aclanthology.org/2024.teachingnlp-1.18.bib
@inproceedings{guerzhoy-2024-occams-razor, title = "Occam{'}s Razor and Bender and Koller{'}s Octopus", author = "Guerzhoy, Michael", editor = {Al-azzawi, Sana and Biester, Laura and Kov{\'a}cs, Gy{\"o}rgy and Marasovi{\'c}, Ana and Mathur, Leena and Mieskes, Margot and Weissweiler, Leonie}, booktitle = "Proceedings of the Sixth Workshop on Teaching NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.teachingnlp-1.18", pages = "128--129", abstract = "We discuss the teaching of the controversy surrounding Bender and Koller{'}s prominent 2020 paper, {``}Climbing toward NLU: On Meaning, Form, and Understanding in the Age of Data{''} (ACL 2020)We present what we understand to be the main contentions of the paper, and then recommend that the students engage with the natural counter-arguments to the claims in the paper.We attach teaching materials that we use to facilitate teaching this topic to undergraduate students.", }
We discuss the teaching of the controversy surrounding Bender and Koller{'}s prominent 2020 paper, {``}Climbing toward NLU: On Meaning, Form, and Understanding in the Age of Data{''} (ACL 2020). We present what we understand to be the main contentions of the paper, and then recommend that the students engage with the natural counter-arguments to the claims in the paper. We attach teaching materials that we use to facilitate teaching this topic to undergraduate students.
[ "Guerzhoy, Michael" ]
Occam's Razor and Bender and Koller's Octopus
teachingnlp-1.18
Poster
2407.21070
[ "https://github.com/guerzh/octopus" ]
-1
-1
-1
-1
https://aclanthology.org/2024.teachingnlp-1.18/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.1.bib
@inproceedings{ignat-etal-2024-learning, title = "Learning Human Action Representations from Temporal Context in Lifestyle Vlogs", author = "Ignat, Oana and Castro, Santiago and Li, Weiji and Mihalcea, Rada", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.1", pages = "1--18", abstract = "We address the task of human action representation and show how the approach to generating word representations based on co-occurrence can be adapted to generate human action representations by analyzing their co-occurrence in videos. To this end, we formalize the new task of human action co-occurrence identification in online videos, i.e., determine whether two human actions are likely to co-occur in the same interval of time.We create and make publicly available the Co-Act (Action Co-occurrence) dataset, consisting of a large graph of {\textasciitilde}12k co-occurring pairs of visual actions and their corresponding video clips. We describe graph link prediction models that leverage visual and textual information to automatically infer if two actions are co-occurring.We show that graphs are particularly well suited to capture relations between human actions, and the learned graph representations are effective for our task and capture novel and relevant information across different data domains.", }
We address the task of human action representation and show how the approach to generating word representations based on co-occurrence can be adapted to generate human action representations by analyzing their co-occurrence in videos. To this end, we formalize the new task of human action co-occurrence identification in online videos, i.e., determine whether two human actions are likely to co-occur in the same interval of time. We create and make publicly available the Co-Act (Action Co-occurrence) dataset, consisting of a large graph of {\textasciitilde}12k co-occurring pairs of visual actions and their corresponding video clips. We describe graph link prediction models that leverage visual and textual information to automatically infer if two actions are co-occurring. We show that graphs are particularly well suited to capture relations between human actions, and the learned graph representations are effective for our task and capture novel and relevant information across different data domains.
[ "Ignat, Oana", "Castro, Santiago", "Li, Weiji", "Mihalcea, Rada" ]
Learning Human Action Representations from Temporal Context in Lifestyle Vlogs
textgraphs-1.1
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.2.bib
@inproceedings{brannon-etal-2024-congrat, title = "{C}on{G}ra{T}: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings", author = "Brannon, William and Kang, Wonjune and Fulay, Suyash and Jiang, Hang and Roy, Brandon and Roy, Deb and Kabbara, Jad", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.2", pages = "19--39", abstract = "Learning on text-attributed graphs (TAGs), in which nodes are associated with one or more texts, has been the subject of much recent work. However, most approaches tend to make strong assumptions about the downstream task of interest, are reliant on hand-labeled data, or fail to equally balance the importance of both text and graph representations. In this work, we propose Contrastive Graph-Text pretraining (ConGraT), a general, self-supervised approach for jointly learning separate representations of texts and nodes in a TAG. Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP. We further propose an extension to the CLIP objective that leverages graph structure to incorporate information about inter-node similarity. Extensive experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling. Finally, we present an application of our method to community detection in social graphs, which enables finding more textually grounded communities, rather than purely graph-based ones.", }
Learning on text-attributed graphs (TAGs), in which nodes are associated with one or more texts, has been the subject of much recent work. However, most approaches tend to make strong assumptions about the downstream task of interest, are reliant on hand-labeled data, or fail to equally balance the importance of both text and graph representations. In this work, we propose Contrastive Graph-Text pretraining (ConGraT), a general, self-supervised approach for jointly learning separate representations of texts and nodes in a TAG. Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP. We further propose an extension to the CLIP objective that leverages graph structure to incorporate information about inter-node similarity. Extensive experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling. Finally, we present an application of our method to community detection in social graphs, which enables finding more textually grounded communities, rather than purely graph-based ones.
[ "Brannon, William", "Kang, Wonjune", "Fulay, Suyash", "Jiang, Hang", "Roy, Brandon", "Roy, Deb", "Kabbara, Jad" ]
ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings
textgraphs-1.2
Poster
2305.14321
[ "https://github.com/wwbrannon/congrat" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.3.bib
@inproceedings{chun-xue-2024-pipeline, title = "A Pipeline Approach for Parsing Documents into Uniform Meaning Representation Graphs", author = "Chun, Jayeol and Xue, Nianwen", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.3", pages = "40--52", abstract = "Uniform Meaning Representation (UMR) is the next phase of semantic formalism following Abstract Meaning Representation (AMR), with added focus on inter-sentential relations allowing the representational scope of UMR to cover a full document.This, in turn, greatly increases the complexity of its parsing task with the additional requirement of capturing document-level linguistic phenomena such as coreference, modal and temporal dependencies.In order to establish a strong baseline despite the small size of recently released UMR v1.0 corpus, we introduce a pipeline model that does not require any training.At the core of our method is a two-track strategy of obtaining UMR{'}s sentence and document graphs separately, with the document-level triples being compiled at the token level and the sentence graph being converted from AMR graphs.By leveraging alignment between AMR and its sentence, we are able to generate the first automatic English UMR parses.", }
Uniform Meaning Representation (UMR) is the next phase of semantic formalism following Abstract Meaning Representation (AMR), with added focus on inter-sentential relations allowing the representational scope of UMR to cover a full document. This, in turn, greatly increases the complexity of its parsing task with the additional requirement of capturing document-level linguistic phenomena such as coreference, modal and temporal dependencies. In order to establish a strong baseline despite the small size of the recently released UMR v1.0 corpus, we introduce a pipeline model that does not require any training. At the core of our method is a two-track strategy of obtaining UMR{'}s sentence and document graphs separately, with the document-level triples being compiled at the token level and the sentence graph being converted from AMR graphs. By leveraging alignment between AMR and its sentence, we are able to generate the first automatic English UMR parses.
[ "Chun, Jayeol", "Xue, Nianwen" ]
A Pipeline Approach for Parsing Documents into Uniform Meaning Representation Graphs
textgraphs-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.4.bib
@inproceedings{saetia-etal-2024-financial, title = "Financial Product Ontology Population with Large Language Models", author = "Saetia, Chanatip and Phruetthiset, Jiratha and Chalothorn, Tawunrat and Lertsutthiwong, Monchai and Taerungruang, Supawat and Buabthong, Pakpoom", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.4", pages = "53--60", abstract = "Ontology population, which aims to extract structured data to enrich domain-specific ontologies from unstructured text, typically faces challenges in terms of data scarcity and linguistic complexity, particularly in specialized fields such as retail banking. In this study, we investigate the application of large language models (LLMs) to populate domain-specific ontologies of retail banking products from Thai corporate documents. We compare traditional span-based approaches to LLMs-based generative methods, with different prompting techniques. Our findings reveal that while span-based methods struggle with data scarcity and the complex linguistic structure, LLMs-based generative approaches substantially outperform, achieving a 61.05{\%} F1 score, with the most improvement coming from providing examples in the prompts. This improvement highlights the potential of LLMs for ontology population tasks, offering a scalable and efficient solution for structured information extraction in especially in low-resource language settings.", }
Ontology population, which aims to extract structured data to enrich domain-specific ontologies from unstructured text, typically faces challenges in terms of data scarcity and linguistic complexity, particularly in specialized fields such as retail banking. In this study, we investigate the application of large language models (LLMs) to populate domain-specific ontologies of retail banking products from Thai corporate documents. We compare traditional span-based approaches to LLM-based generative methods, with different prompting techniques. Our findings reveal that while span-based methods struggle with data scarcity and the complex linguistic structure, LLM-based generative approaches substantially outperform them, achieving a 61.05{\%} F1 score, with the most improvement coming from providing examples in the prompts. This improvement highlights the potential of LLMs for ontology population tasks, offering a scalable and efficient solution for structured information extraction, especially in low-resource language settings.
[ "Saetia, Chanatip", "Phruetthiset, Jiratha", "Chalothorn, Tawunrat", "Lertsutthiwong, Monchai", "Taerungruang, Supawat", "Buabthong, Pakpoom" ]
Financial Product Ontology Population with Large Language Models
textgraphs-1.4
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.4/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.5.bib
@inproceedings{chepurova-etal-2024-prompt, title = "Prompt Me One More Time: A Two-Step Knowledge Extraction Pipeline with Ontology-Based Verification", author = "Chepurova, Alla and Kuratov, Yuri and Bulatov, Aydar and Burtsev, Mikhail", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.5", pages = "61--77", abstract = "This study explores a method for extending real-world knowledge graphs (specifically, Wikidata) by extracting triplets from texts with the aid of Large Language Models (LLMs). We propose a two-step pipeline that includes the initial extraction of entity candidates, followed by their refinement and linkage to the canonical entities and relations of the knowledge graph. Finally, we utilize Wikidata relation constraints to select only verified triplets. We compare our approach to a model that was fine-tuned on a machine-generated dataset and demonstrate that it performs better on natural data. Our results suggest that LLM-based triplet extraction from texts, with subsequent verification, is a viable method for real-world applications.", }
This study explores a method for extending real-world knowledge graphs (specifically, Wikidata) by extracting triplets from texts with the aid of Large Language Models (LLMs). We propose a two-step pipeline that includes the initial extraction of entity candidates, followed by their refinement and linkage to the canonical entities and relations of the knowledge graph. Finally, we utilize Wikidata relation constraints to select only verified triplets. We compare our approach to a model that was fine-tuned on a machine-generated dataset and demonstrate that it performs better on natural data. Our results suggest that LLM-based triplet extraction from texts, with subsequent verification, is a viable method for real-world applications.
[ "Chepurova, Alla", "Kuratov, Yuri", "Bulatov, Aydar", "Burtsev, Mikhail" ]
Prompt Me One More Time: A Two-Step Knowledge Extraction Pipeline with Ontology-Based Verification
textgraphs-1.5
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.5/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.6.bib
@inproceedings{goldstein-etal-2024-towards, title = "Towards Understanding Attention-based Reasoning through Graph Structures in Medical Codes Classification", author = {Goldstein, Noon and Amin, Saadullah and Neumann, G{\"u}nter}, editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.6", pages = "78--92", abstract = "A common approach to automatically assigning diagnostic and procedural clinical codes to health records is to solve the task as a multi-label classification problem. Difficulties associated with this task stem from domain knowledge requirements, long document texts, large and imbalanced label space, reflecting the breadth and dependencies between medical diagnoses and procedures. Decisions in the healthcare domain also need to demonstrate sound reasoning, both when they are correct and when they are erroneous. Existing works address some of these challenges by incorporating external knowledge, which can be encoded into a graph-structured format. Incorporating graph structures on the output label space or between the input document and output label spaces have shown promising results in medical codes classification. Limited focus has been put on utilizing graph-based representation on the input document space. To partially bridge this gap, we represent clinical texts as graph-structured data through the UMLS Metathesaurus; we explore implicit graph representation through pre-trained knowledge graph embeddings and explicit domain-knowledge guided encoding of document concepts and relational information through graph neural networks. Our findings highlight the benefits of pre-trained knowledge graph embeddings in understanding model{'}s attention-based reasoning. In contrast, transparent domain knowledge guidance in graph encoder approaches is overshadowed by performance loss. Our qualitative analysis identifies limitations that contribute to prediction errors.", }
A common approach to automatically assigning diagnostic and procedural clinical codes to health records is to solve the task as a multi-label classification problem. Difficulties associated with this task stem from domain knowledge requirements, long document texts, and a large and imbalanced label space reflecting the breadth and dependencies between medical diagnoses and procedures. Decisions in the healthcare domain also need to demonstrate sound reasoning, both when they are correct and when they are erroneous. Existing works address some of these challenges by incorporating external knowledge, which can be encoded into a graph-structured format. Incorporating graph structures on the output label space or between the input document and output label spaces has shown promising results in medical codes classification. Limited focus has been put on utilizing graph-based representation on the input document space. To partially bridge this gap, we represent clinical texts as graph-structured data through the UMLS Metathesaurus; we explore implicit graph representation through pre-trained knowledge graph embeddings and explicit domain-knowledge guided encoding of document concepts and relational information through graph neural networks. Our findings highlight the benefits of pre-trained knowledge graph embeddings in understanding the model{'}s attention-based reasoning. In contrast, transparent domain knowledge guidance in graph encoder approaches is overshadowed by performance loss. Our qualitative analysis identifies limitations that contribute to prediction errors.
[ "Goldstein, Noon", "Amin, Saadullah", "Neumann, G{\\\"u}nter" ]
Towards Understanding Attention-based Reasoning through Graph Structures in Medical Codes Classification
textgraphs-1.6
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.6/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.7.bib
@inproceedings{nonkes-etal-2024-leveraging, title = "Leveraging Graph Structures to Detect Hallucinations in Large Language Models", author = "Nonkes, Noa and Agaronian, Sergei and Kanoulas, Evangelos and Petcu, Roxana", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.7", pages = "93--104", abstract = "Large language models are extensively applied across a wide range of tasks, such as customer support, content creation, educational tutoring, and providing financial guidance. However, a well-known drawback is their predisposition to generate hallucinations. This damages the trustworthiness of the information these models provide, impacting decision-making and user confidence. We propose a method to detect hallucinations by looking at the structure of the latent space and finding associations within hallucinated and non-hallucinated generations. We create a graph structure that connects generations that lie closely in the embedding space. Moreover, we employ a Graph Attention Network which utilizes message passing to aggregate information from neighboring nodes and assigns varying degrees of importance to each neighbor based on their relevance. Our findings show that 1) there exists a structure in the latent space that differentiates between hallucinated and non-hallucinated generations, 2) Graph Attention Networks can learn this structure and generalize it to unseen generations, and 3) the robustness of our method is enhanced when incorporating contrastive learning. When evaluated against evidence-based benchmarks, our model performs similarly without access to search-based methods.", }
Large language models are extensively applied across a wide range of tasks, such as customer support, content creation, educational tutoring, and providing financial guidance. However, a well-known drawback is their predisposition to generate hallucinations. This damages the trustworthiness of the information these models provide, impacting decision-making and user confidence. We propose a method to detect hallucinations by looking at the structure of the latent space and finding associations within hallucinated and non-hallucinated generations. We create a graph structure that connects generations that lie closely in the embedding space. Moreover, we employ a Graph Attention Network which utilizes message passing to aggregate information from neighboring nodes and assigns varying degrees of importance to each neighbor based on their relevance. Our findings show that 1) there exists a structure in the latent space that differentiates between hallucinated and non-hallucinated generations, 2) Graph Attention Networks can learn this structure and generalize it to unseen generations, and 3) the robustness of our method is enhanced when incorporating contrastive learning. When evaluated against evidence-based benchmarks, our model performs similarly without access to search-based methods.
[ "Nonkes, Noa", "Agaronian, Sergei", "Kanoulas, Evangelos", "Petcu, Roxana" ]
Leveraging Graph Structures to Detect Hallucinations in Large Language Models
textgraphs-1.7
Poster
2407.04485
[ "https://github.com/noanonkes/Hallucination-Detection-in-LLMs" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.7/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.8.bib
@inproceedings{yao-etal-2024-semantic, title = "Semantic Graphs for Syntactic Simplification: A Revisit from the Age of {LLM}", author = "Yao, Peiran and Guzhva, Kostyantyn and Barbosa, Denilson", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.8", pages = "105--115", abstract = "Symbolic sentence meaning representations, such as AMR (Abstract Meaning Representation) provide expressive and structured semantic graphs that act as intermediates that simplify downstream NLP tasks. However, the instruction-following capability of large language models (LLMs) offers a shortcut to effectively solve NLP tasks, questioning the utility of semantic graphs. Meanwhile, recent work has also shown the difficulty of using meaning representations merely as a helpful auxiliary for LLMs. We revisit the position of semantic graphs in syntactic simplification, the task of simplifying sentence structures while preserving their meaning, which requires semantic understanding, and evaluate it on a new complex and natural dataset. The AMR-based method that we propose, AMRS$^3$, demonstrates that state-of-the-art meaning representations can lead to easy-to-implement simplification methods with competitive performance and unique advantages in cost, interpretability, and generalization. With AMRS$^3$ as an anchor, we discover that syntactic simplification is a task where semantic graphs are helpful in LLM prompting. We propose AMRCoC prompting that guides LLMs to emulate graph algorithms for explicit symbolic reasoning on AMR graphs, and show its potential for improving LLM on semantic-centered tasks like syntactic simplification.", }
Symbolic sentence meaning representations, such as AMR (Abstract Meaning Representation) provide expressive and structured semantic graphs that act as intermediates that simplify downstream NLP tasks. However, the instruction-following capability of large language models (LLMs) offers a shortcut to effectively solve NLP tasks, questioning the utility of semantic graphs. Meanwhile, recent work has also shown the difficulty of using meaning representations merely as a helpful auxiliary for LLMs. We revisit the position of semantic graphs in syntactic simplification, the task of simplifying sentence structures while preserving their meaning, which requires semantic understanding, and evaluate it on a new complex and natural dataset. The AMR-based method that we propose, AMRS$^3$, demonstrates that state-of-the-art meaning representations can lead to easy-to-implement simplification methods with competitive performance and unique advantages in cost, interpretability, and generalization. With AMRS$^3$ as an anchor, we discover that syntactic simplification is a task where semantic graphs are helpful in LLM prompting. We propose AMRCoC prompting that guides LLMs to emulate graph algorithms for explicit symbolic reasoning on AMR graphs, and show its potential for improving LLM on semantic-centered tasks like syntactic simplification.
[ "Yao, Peiran", "Guzhva, Kostyantyn", "Barbosa, Denilson" ]
Semantic Graphs for Syntactic Simplification: A Revisit from the Age of LLM
textgraphs-1.8
Poster
2407.04067
[ "https://github.com/U-Alberta/AMRS3" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.8/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.9.bib
@inproceedings{sakhovskiy-etal-2024-textgraphs, title = "{T}ext{G}raphs 2024 Shared Task on Text-Graph Representations for Knowledge Graph Question Answering", author = {Sakhovskiy, Andrey and Salnikov, Mikhail and Nikishina, Irina and Usmanova, Aida and Kraft, Angelie and M{\"o}ller, Cedric and Banerjee, Debayan and Huang, Junbo and Jiang, Longquan and Abdullah, Rana and Yan, Xi and Ustalov, Dmitry and Tutubalina, Elena and Usbeck, Ricardo and Panchenko, Alexander}, editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.9", pages = "116--125", abstract = "This paper describes the results of the Knowledge Graph Question Answering (KGQA) shared task that was co-located with the TextGraphs 2024 workshop. In this task, given a textual question and a list of entities with the corresponding KG subgraphs, the participating system should choose the entity that correctly answers the question. Our competition attracted thirty teams, four of which outperformed our strong ChatGPT-based zero-shot baseline. In this paper, we overview the participating systems and analyze their performance according to a large-scale automatic evaluation. To the best of our knowledge, this is the first competition aimed at the KGQA problem using the interaction between large language models (LLMs) and knowledge graphs.", }
This paper describes the results of the Knowledge Graph Question Answering (KGQA) shared task that was co-located with the TextGraphs 2024 workshop. In this task, given a textual question and a list of entities with the corresponding KG subgraphs, the participating system should choose the entity that correctly answers the question. Our competition attracted thirty teams, four of which outperformed our strong ChatGPT-based zero-shot baseline. In this paper, we overview the participating systems and analyze their performance according to a large-scale automatic evaluation. To the best of our knowledge, this is the first competition aimed at the KGQA problem using the interaction between large language models (LLMs) and knowledge graphs.
[ "Sakhovskiy, Andrey", "Salnikov, Mikhail", "Nikishina, Irina", "Usmanova, Aida", "Kraft, Angelie", "M{\\\"o}ller, Cedric", "Banerjee, Debayan", "Huang, Junbo", "Jiang, Longquan", "Abdullah, Rana", "Yan, Xi", "Ustalov, Dmitry", "Tutubalina, Elena", "Usbeck, Ricardo", "Panchenko, Alexander" ]
TextGraphs 2024 Shared Task on Text-Graph Representations for Knowledge Graph Question Answering
textgraphs-1.9
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.9/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.10.bib
@inproceedings{kurdiukov-etal-2024-nlp, title = "nlp{\_}enjoyers at {T}ext{G}raphs-17 Shared Task: Text-Graph Representations for Knowledge Graph Question Answering using all-{MPN}et", author = "Kurdiukov, Nikita and Zinkovich, Viktoriia and Karpukhin, Sergey and Tikhomirov, Pavel", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.10", pages = "126--130", abstract = "This paper presents a model for solving the Multiple Choice Question Answering (MCQA) problem, focusing on the impact of subgraph extraction from a Knowledge Graph on model performance. The proposed method combines textual and graph information by adding linearized subgraphs directly into the main question prompt with separate tokens, enhancing the performance of models working with each modality separately. The study also includes an examination of Large Language Model (LLM) backbones and the benefits of linearized subgraphs and sequence length, with efficient training achieved through fine-tuning with LoRA. The top benchmark, using subgraphs and MPNet, achieved an F1 score of 0.3887. The main limitation of the experiments is the reliance on pre-generated subgraphs/triplets from the graph, and the lack of exploration of in-context learning and prompting strategies with decoder-based architectures.", }
This paper presents a model for solving the Multiple Choice Question Answering (MCQA) problem, focusing on the impact of subgraph extraction from a Knowledge Graph on model performance. The proposed method combines textual and graph information by adding linearized subgraphs directly into the main question prompt with separate tokens, enhancing the performance of models working with each modality separately. The study also includes an examination of Large Language Model (LLM) backbones and the benefits of linearized subgraphs and sequence length, with efficient training achieved through fine-tuning with LoRA. The top benchmark, using subgraphs and MPNet, achieved an F1 score of 0.3887. The main limitation of the experiments is the reliance on pre-generated subgraphs/triplets from the graph, and the lack of exploration of in-context learning and prompting strategies with decoder-based architectures.
[ "Kurdiukov, Nikita", "Zinkovich, Viktoriia", "Karpukhin, Sergey", "Tikhomirov, Pavel" ]
nlp_enjoyers at TextGraphs-17 Shared Task: Text-Graph Representations for Knowledge Graph Question Answering using all-MPNet
textgraphs-1.10
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.10/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.11.bib
@inproceedings{tang-etal-2024-hw, title = "{HW}-{TSC} at {T}ext{G}raphs-17 Shared Task: Enhancing Inference Capabilities of {LLM}s with Knowledge Graphs", author = "Tang, Wei and Qiao, Xiaosong and Zhao, Xiaofeng and Zhang, Min and Su, Chang and Li, Yuang and Li, Yinglu and Liu, Yilun and Yao, Feiyu and Tao, Shimin and Yang, Hao and Xianghui, He", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.11", pages = "131--136", abstract = "In this paper, we present an effective method for TextGraphs-17 Shared Task. This task requires selecting an entity from the candidate entities that is relevant to the given question and answer. The selection process is aided by utilizing the shortest path graph in the knowledge graph, connecting entities in the query to the candidate entity. This task aims to explore how to enhance LLMs output with KGs, although current LLMs have certain logical reasoning capabilities, they may not be certain about their own outputs, and the answers they produce may be correct by chance through incorrect paths. In this case, we have introduced a LLM prompt design strategy based on self-ranking and emotion. Specifically, we let the large model score its own answer choices to reflect its confidence in the answer. Additionally, we add emotional incentives to the prompts to encourage the model to carefully examine the questions. Our submissions was conducted under zero-resource setting, and we achieved the second place in the task with an F1-score of 0.8321.", }
In this paper, we present an effective method for TextGraphs-17 Shared Task. This task requires selecting an entity from the candidate entities that is relevant to the given question and answer. The selection process is aided by utilizing the shortest path graph in the knowledge graph, connecting entities in the query to the candidate entity. This task aims to explore how to enhance LLMs output with KGs, although current LLMs have certain logical reasoning capabilities, they may not be certain about their own outputs, and the answers they produce may be correct by chance through incorrect paths. In this case, we have introduced a LLM prompt design strategy based on self-ranking and emotion. Specifically, we let the large model score its own answer choices to reflect its confidence in the answer. Additionally, we add emotional incentives to the prompts to encourage the model to carefully examine the questions. Our submissions was conducted under zero-resource setting, and we achieved the second place in the task with an F1-score of 0.8321.
[ "Tang, Wei", "Qiao, Xiaosong", "Zhao, Xiaofeng", "Zhang, Min", "Su, Chang", "Li, Yuang", "Li, Yinglu", "Liu, Yilun", "Yao, Feiyu", "Tao, Shimin", "Yang, Hao", "Xianghui, He" ]
HW-TSC at TextGraphs-17 Shared Task: Enhancing Inference Capabilities of LLMs with Knowledge Graphs
textgraphs-1.11
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.11/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.12.bib
@inproceedings{rakesh-etal-2024-tigformer, title = "{TIGFORMER} at {T}ext{G}raphs-17 Shared Task: A Late Interaction Method for text and Graph Representations in {KBQA} Classification Task", author = "Rakesh, Mayank and Saikia, Parikshit and Shrivastava, Saket", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.12", pages = "137--141", abstract = "This paper introduces a novel late interaction mechanism for knowledge base question answering (KBQA) systems, combining Graphormer and transformer representations. We conducted extensive experiments, comparing various pooling mechanisms and configurations. Our results demonstrate significant improvements in F1-score compared to traditional baselines. Specifically, we found that attention pooling, in conjunction with linearized graph and question features alongside sub-graph representations, yields the best performance. Our study highlights the importance of advanced interaction mechanisms and the integration of diverse modalities in KBQA systems.", }
This paper introduces a novel late interaction mechanism for knowledge base question answering (KBQA) systems, combining Graphormer and transformer representations. We conducted extensive experiments, comparing various pooling mechanisms and configurations. Our results demonstrate significant improvements in F1-score compared to traditional baselines. Specifically, we found that attention pooling, in conjunction with linearized graph and question features alongside sub-graph representations, yields the best performance. Our study highlights the importance of advanced interaction mechanisms and the integration of diverse modalities in KBQA systems.
[ "Rakesh, Mayank", "Saikia, Parikshit", "Shrivastava, Saket" ]
TIGFORMER at TextGraphs-17 Shared Task: A Late Interaction Method for text and Graph Representations in KBQA Classification Task
textgraphs-1.12
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.12/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.13.bib
@inproceedings{moses-etal-2024-nlpeople, title = "{NLP}eople at {T}ext{G}raphs-17 Shared Task: Chain of Thought Questioning to Elicit Decompositional Reasoning", author = "Moses, Movina and Kuruvanthodi, Vishnudev and Elkaref, Mohab and Tanaka, Shinnosuke and Barry, James and Mel, Geeth and Watson, Campbell", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.13", pages = "142--148", abstract = "This paper presents the approach of the NLPeople team for the Text-Graph Representations for KGQA Shared Task at TextGraphs-17. The task involved selecting an answer for a given question from a list of candidate entities. We show that prompting Large Language models (LLMs) to break down a natural language question into a series of sub-questions, allows models to understand complex questions. The LLMs arrive at the final answer by answering the intermediate questions using their internal knowledge and without needing additional context. Our approach to the task uses an ensemble of prompting strategies to guide how LLMs interpret various types of questions. Our submission achieves an F1 score of 85.90, ranking 1st among the other participants in the task.", }
This paper presents the approach of the NLPeople team for the Text-Graph Representations for KGQA Shared Task at TextGraphs-17. The task involved selecting an answer for a given question from a list of candidate entities. We show that prompting Large Language models (LLMs) to break down a natural language question into a series of sub-questions, allows models to understand complex questions. The LLMs arrive at the final answer by answering the intermediate questions using their internal knowledge and without needing additional context. Our approach to the task uses an ensemble of prompting strategies to guide how LLMs interpret various types of questions. Our submission achieves an F1 score of 85.90, ranking 1st among the other participants in the task.
[ "Moses, Movina", "Kuruvanthodi, Vishnudev", "Elkaref, Mohab", "Tanaka, Shinnosuke", "Barry, James", "Mel, Geeth", "Watson, Campbell" ]
NLPeople at TextGraphs-17 Shared Task: Chain of Thought Questioning to Elicit Decompositional Reasoning
textgraphs-1.13
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.13/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.14.bib
@inproceedings{lysyuk-braslavski-2024-skoltech, title = "Skoltech at {T}ext{G}raphs-17 Shared Task: Finding {GPT}-4 Prompting Strategies for Multiple Choice Questions", author = "Lysyuk, Maria and Braslavski, Pavel", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.14", pages = "149--153", abstract = "In this paper, we present our solution to the TextGraphs-17 Shared Task on Text-Graph Representations for KGQA. GPT-4 alone, with chain-of-thought reasoning and a given set of answers, achieves an F1 score of 0.78. By employing subgraph size as a feature, Wikidata answer description as an additional context, and question rephrasing technique, we further strengthen this result. These tricks help to answer questions that were not initially answered and to eliminate irrelevant, identical answers. We have managed to achieve an F1 score of 0.83 and took 2nd place, improving the score by 0.05 over the baseline. An open implementation of our method is available on GitHub.", }
In this paper, we present our solution to the TextGraphs-17 Shared Task on Text-Graph Representations for KGQA. GPT-4 alone, with chain-of-thought reasoning and a given set of answers, achieves an F1 score of 0.78. By employing subgraph size as a feature, Wikidata answer description as an additional context, and question rephrasing technique, we further strengthen this result. These tricks help to answer questions that were not initially answered and to eliminate irrelevant, identical answers. We have managed to achieve an F1 score of 0.83 and took 2nd place, improving the score by 0.05 over the baseline. An open implementation of our method is available on GitHub.
[ "Lysyuk, Maria", "Braslavski, Pavel" ]
Skoltech at TextGraphs-17 Shared Task: Finding GPT-4 Prompting Strategies for Multiple Choice Questions
textgraphs-1.14
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.14/
[]
[]
[]
0
https://aclanthology.org/2024.textgraphs-1.15.bib
@inproceedings{belikova-etal-2024-jellybell, title = "{J}elly{B}ell at {T}ext{G}raphs-17 Shared Task: Fusing Large Language Models with External Knowledge for Enhanced Question Answering", author = "Belikova, Julia and Beliakin, Evegeniy and Konovalov, Vasily", editor = "Ustalov, Dmitry and Gao, Yanjun and Pachenko, Alexander and Tutubalina, Elena and Nikishina, Irina and Ramesh, Arti and Sakhovskiy, Andrey and Usbeck, Ricardo and Penn, Gerald and Valentino, Marco", booktitle = "Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.textgraphs-1.15", pages = "154--160", abstract = "This work describes an approach to develop Knowledge Graph Question Answering (KGQA) system for TextGraphs-17 shared task. The task focuses on the fusion of Large Language Models (LLMs) with Knowledge Graphs (KGs). The goal is to select a KG entity (out of several candidates) which corresponds to an answer given a textual question. Our approach applies LLM to identify the correct answer among the list of possible candidates. We confirm that integrating external information is particularly beneficial when the subject entities are not well-known, and using RAG can negatively impact the performance of LLM on questions related to popular entities, as the retrieved context might be misleading. With our result, we achieved 2nd place in the post-evaluation phase.", }
This work describes an approach to develop Knowledge Graph Question Answering (KGQA) system for TextGraphs-17 shared task. The task focuses on the fusion of Large Language Models (LLMs) with Knowledge Graphs (KGs). The goal is to select a KG entity (out of several candidates) which corresponds to an answer given a textual question. Our approach applies LLM to identify the correct answer among the list of possible candidates. We confirm that integrating external information is particularly beneficial when the subject entities are not well-known, and using RAG can negatively impact the performance of LLM on questions related to popular entities, as the retrieved context might be misleading. With our result, we achieved 2nd place in the post-evaluation phase.
[ "Belikova, Julia", "Beliakin, Evegeniy", "Konovalov, Vasily" ]
JellyBell at TextGraphs-17 Shared Task: Fusing Large Language Models with External Knowledge for Enhanced Question Answering
textgraphs-1.15
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.textgraphs-1.15/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.1.bib
@inproceedings{kirtac-germano-2024-enhanced, title = "Enhanced Financial Sentiment Analysis and Trading Strategy Development Using Large Language Models", author = "Kirtac, Kemal and Germano, Guido", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.1", pages = "1--10", abstract = "This study examines a novel methodology for enhanced financial sentiment analysis and trading strategy development using large language models (LLMs) such as OPT, BERT, FinBERT, LLAMA 3, and RoBERTa. Utilizing a dataset of 965,375 U.S. financial news articles from 2010 to 2023, our research demonstrates that the GPT-3-based OPT significantly outperforms other models, achieving a prediction accuracy of 74.4{\%} for stock market returns. Our findings reveal that the advanced capabilities of LLMs, particularly OPT, surpass traditional sentiment analysis methods such as the Loughran-McDonald dictionary model in predicting and explaining stock returns. For instance, a self-financing strategy based on OPT scores achieves a Sharpe ratio of 3.05 over our sample period, compared to a Sharpe ratio of 1.23 for the strategy based on the dictionary model. This study highlights the superior performance of LLMs in financial sentiment analysis, encouraging further research into integrating artificial intelligence and LLMs in financial markets.", }
This study examines a novel methodology for enhanced financial sentiment analysis and trading strategy development using large language models (LLMs) such as OPT, BERT, FinBERT, LLAMA 3, and RoBERTa. Utilizing a dataset of 965,375 U.S. financial news articles from 2010 to 2023, our research demonstrates that the GPT-3-based OPT significantly outperforms other models, achieving a prediction accuracy of 74.4{\%} for stock market returns. Our findings reveal that the advanced capabilities of LLMs, particularly OPT, surpass traditional sentiment analysis methods such as the Loughran-McDonald dictionary model in predicting and explaining stock returns. For instance, a self-financing strategy based on OPT scores achieves a Sharpe ratio of 3.05 over our sample period, compared to a Sharpe ratio of 1.23 for the strategy based on the dictionary model. This study highlights the superior performance of LLMs in financial sentiment analysis, encouraging further research into integrating artificial intelligence and LLMs in financial markets.
[ "Kirtac, Kemal", "Germano, Guido" ]
Enhanced Financial Sentiment Analysis and Trading Strategy Development Using Large Language Models
wassa-1.1
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.2.bib
@inproceedings{gendron-gaelguibon-2024-sec, title = "{SEC}: Context-Aware Metric Learning for Efficient Emotion Recognition in Conversation", author = "Gendron, Barbara and GaelGuibon, GaelGuibon", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.2", pages = "11--22", abstract = "The advent of deep learning models has made a considerable contribution to the achievement of Emotion Recognition in Conversation (ERC). However, this task remains an important challenge due to the plurality and subjectivity of human emotions. Previous work on ERC provides predictive models using mostly graph-based conversation representations. In this work, we propose a way to model the conversational context that we incorporate into a metric learning training strategy, with a two-step process. This allows us to perform ERC in a flexible classification scenario and end up with a lightweight yet efficient model. Using metric learning through a Siamese Network architecture, we achieve a macro F1 score of 57.71 for emotion classification in conversation on the DailyDialog dataset, which outperforms the related work. This state-of-the-art result is promising in terms of the use of metric learning for emotion recognition, yet perfectible compared to the micro F1 score obtained.", }
The advent of deep learning models has made a considerable contribution to the achievement of Emotion Recognition in Conversation (ERC). However, this task remains an important challenge due to the plurality and subjectivity of human emotions. Previous work on ERC provides predictive models using mostly graph-based conversation representations. In this work, we propose a way to model the conversational context that we incorporate into a metric learning training strategy, with a two-step process. This allows us to perform ERC in a flexible classification scenario and end up with a lightweight yet efficient model. Using metric learning through a Siamese Network architecture, we achieve a macro F1 score of 57.71 for emotion classification in conversation on the DailyDialog dataset, which outperforms the related work. This state-of-the-art result is promising in terms of the use of metric learning for emotion recognition, yet perfectible compared to the micro F1 score obtained.
[ "Gendron, Barbara", "GaelGuibon, GaelGuibon" ]
SEC: Context-Aware Metric Learning for Efficient Emotion Recognition in Conversation
wassa-1.2
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.3.bib
@inproceedings{yan-etal-2024-modeling, title = "Modeling Complex Interactions in Long Documents for Aspect-Based Sentiment Analysis", author = "Yan, Zehong and Hsu, Wynne and Lee, Mong-Li and Bartram-Shaw, David", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.3", pages = "23--34", abstract = "The growing number of online articles and reviews necessitates innovative techniques for document-level aspect-based sentiment analysis. Capturing the context in which an aspect is mentioned is crucial. Existing models have focused on relatively short reviews and may fail to consider distant contextual information. This is especially so in longer documents where an aspect may be referred to in multiple ways across dispersed sentences. This work introduces a hierarchical Transformer-based architecture that encodes information at different levels of granularity with attention aggregation mechanisms to learn the local and global aspect-specific document representations. For empirical validation, we curate two datasets of long documents: one on social issues, and another covering various topics involving trust-related issues. Experimental results show that the proposed architecture outperforms state-of-the-art methods for document-level aspect-based sentiment classification. We also demonstrate the potential applicability of our approach for long document trust prediction.", }
The growing number of online articles and reviews necessitates innovative techniques for document-level aspect-based sentiment analysis. Capturing the context in which an aspect is mentioned is crucial. Existing models have focused on relatively short reviews and may fail to consider distant contextual information. This is especially so in longer documents where an aspect may be referred to in multiple ways across dispersed sentences. This work introduces a hierarchical Transformer-based architecture that encodes information at different levels of granularity with attention aggregation mechanisms to learn the local and global aspect-specific document representations. For empirical validation, we curate two datasets of long documents: one on social issues, and another covering various topics involving trust-related issues. Experimental results show that the proposed architecture outperforms state-of-the-art methods for document-level aspect-based sentiment classification. We also demonstrate the potential applicability of our approach for long document trust prediction.
[ "Yan, Zehong", "Hsu, Wynne", "Lee, Mong-Li", "Bartram-Shaw, David" ]
Modeling Complex Interactions in Long Documents for Aspect-Based Sentiment Analysis
wassa-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.4.bib
@inproceedings{schafer-etal-2024-hierarchical, title = "Hierarchical Adversarial Correction to Mitigate Identity Term Bias in Toxicity Detection", author = {Sch{\"a}fer, Johannes and Heid, Ulrich and Klinger, Roman}, editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.4", pages = "35--51", abstract = "Corpora that form the foundation for toxicity detection contain expressions that are typically directed against a target individual or group, e.g., people of a specific gender or ethnicity. Prior work has shown that the target identity mention can constitute a confounding variable. As an example, a model might learn that Christians are always mentioned in the context of hate speech. This misguided focus can lead to a limited generalization to newly emerging targets that are not found in the training data. In this paper, we hypothesize and subsequently show that this issue can be mitigated by considering targets on different levels of specificity. We distinguish levels of (1) the existence of a target, (2) a class (e.g., that the target is a religious group), or (3) a specific target group (e.g., Christians or Muslims). We define a target label hierarchy based on these three levels and then exploit this hierarchy in an adversarial correction for the lowest level (i.e. (3)) while maintaining some basic target features. This approach does not lower the toxicity detection performance but increases the generalization to targets not being available at training time.", }
Corpora that form the foundation for toxicity detection contain expressions that are typically directed against a target individual or group, e.g., people of a specific gender or ethnicity. Prior work has shown that the target identity mention can constitute a confounding variable. As an example, a model might learn that Christians are always mentioned in the context of hate speech. This misguided focus can lead to a limited generalization to newly emerging targets that are not found in the training data. In this paper, we hypothesize and subsequently show that this issue can be mitigated by considering targets on different levels of specificity. We distinguish levels of (1) the existence of a target, (2) a class (e.g., that the target is a religious group), or (3) a specific target group (e.g., Christians or Muslims). We define a target label hierarchy based on these three levels and then exploit this hierarchy in an adversarial correction for the lowest level (i.e. (3)) while maintaining some basic target features. This approach does not lower the toxicity detection performance but increases the generalization to targets not being available at training time.
[ "Sch{\\\"a}fer, Johannes", "Heid, Ulrich", "Klinger, Roman" ]
Hierarchical Adversarial Correction to Mitigate Identity Term Bias in Toxicity Detection
wassa-1.4
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.4/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.5.bib
@inproceedings{ushio-camacho-collados-2024-systematic, title = "A Systematic Analysis on the Temporal Generalization of Language Models in Social Media", author = "Ushio, Asahi and Camacho-Collados, Jose", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.5", pages = "52--62", abstract = "In machine learning, temporal shifts occur when there are differences between training and test splits in terms of time. For streaming data such as news or social media, models are commonly trained on a fixed corpus from a certain period of time, and they can become obsolete due to the dynamism and evolving nature of online content. This paper focuses on temporal shifts in social media and, in particular, Twitter. We propose a unified evaluation scheme to assess the performance of language models (LMs) under temporal shift on standard social media tasks. LMs are tested on five diverse social media NLP tasks under different temporal settings, which revealed two important findings: (i) the decrease in performance under temporal shift is consistent across different models for entity-focused tasks such as named entity recognition or disambiguation, and hate speech detection, but not significant in the other tasks analysed (i.e., topic and sentiment classification); and (ii) continuous pre-training on the test period does not improve the temporal adaptability of LMs.", }
In machine learning, temporal shifts occur when there are differences between training and test splits in terms of time. For streaming data such as news or social media, models are commonly trained on a fixed corpus from a certain period of time, and they can become obsolete due to the dynamism and evolving nature of online content. This paper focuses on temporal shifts in social media and, in particular, Twitter. We propose a unified evaluation scheme to assess the performance of language models (LMs) under temporal shift on standard social media tasks. LMs are tested on five diverse social media NLP tasks under different temporal settings, which revealed two important findings: (i) the decrease in performance under temporal shift is consistent across different models for entity-focused tasks such as named entity recognition or disambiguation, and hate speech detection, but not significant in the other tasks analysed (i.e., topic and sentiment classification); and (ii) continuous pre-training on the test period does not improve the temporal adaptability of LMs.
[ "Ushio, Asahi", "Camacho-Collados, Jose" ]
A Systematic Analysis on the Temporal Generalization of Language Models in Social Media
wassa-1.5
Poster
2405.13017
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.5/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.6.bib
@inproceedings{smid-etal-2024-llama, title = "{LL}a{MA}-Based Models for Aspect-Based Sentiment Analysis", author = "{\v{S}}m{\'\i}d, Jakub and Priban, Pavel and Kral, Pavel", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.6", pages = "63--70", abstract = "While large language models (LLMs) show promise for various tasks, their performance in compound aspect-based sentiment analysis (ABSA) tasks lags behind fine-tuned models. However, the potential of LLMs fine-tuned for ABSA remains unexplored. This paper examines the capabilities of open-source LLMs fine-tuned for ABSA, focusing on LLaMA-based models. We evaluate the performance across four tasks and eight English datasets, finding that the fine-tuned Orca 2 model surpasses state-of-the-art results in all tasks. However, all models struggle in zero-shot and few-shot scenarios compared to fully fine-tuned ones. Additionally, we conduct error analysis to identify challenges faced by fine-tuned models.", }
While large language models (LLMs) show promise for various tasks, their performance in compound aspect-based sentiment analysis (ABSA) tasks lags behind fine-tuned models. However, the potential of LLMs fine-tuned for ABSA remains unexplored. This paper examines the capabilities of open-source LLMs fine-tuned for ABSA, focusing on LLaMA-based models. We evaluate the performance across four tasks and eight English datasets, finding that the fine-tuned Orca 2 model surpasses state-of-the-art results in all tasks. However, all models struggle in zero-shot and few-shot scenarios compared to fully fine-tuned ones. Additionally, we conduct error analysis to identify challenges faced by fine-tuned models.
[ "{\\v{S}}m{\\'\\i}d, Jakub", "Priban, Pavel", "Kral, Pavel" ]
LLaMA-Based Models for Aspect-Based Sentiment Analysis
wassa-1.6
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.6/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.7.bib
@inproceedings{antypas-etal-2024-multi, title = "A Multi-Faceted {NLP} Analysis of Misinformation Spreaders in {T}witter", author = "Antypas, Dimosthenis and Preece, Alun and Camacho-Collados, Jose", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.7", pages = "71--83", abstract = "Social media is an integral part of the daily life of an increasingly large number of people worldwide. Used for entertainment, communication and news updates, it constitutes a source of information that has been extensively used to study human behaviour. Unfortunately, the open nature of social media platforms along with the difficult task of supervising their content has led to a proliferation of misinformation posts. In this paper, we aim to identify the textual differences between the profiles of users that share misinformation from questionable sources and those that do not. Our goal is to better understand user behaviour in order to be better equipped to combat this issue. To this end, we identify Twitter (X) accounts of potential misinformation spreaders and apply transformer models specialised in social media to extract characteristics such as sentiment, emotion, topic and presence of hate speech. Our results indicate that, while there may be some differences between the behaviour of users that share misinformation and those that do not, there are no large differences when it comes to the type of content shared.", }
Social media is an integral part of the daily life of an increasingly large number of people worldwide. Used for entertainment, communication and news updates, it constitutes a source of information that has been extensively used to study human behaviour. Unfortunately, the open nature of social media platforms along with the difficult task of supervising their content has led to a proliferation of misinformation posts. In this paper, we aim to identify the textual differences between the profiles of users that share misinformation from questionable sources and those that do not. Our goal is to better understand user behaviour in order to be better equipped to combat this issue. To this end, we identify Twitter (X) accounts of potential misinformation spreaders and apply transformer models specialised in social media to extract characteristics such as sentiment, emotion, topic and presence of hate speech. Our results indicate that, while there may be some differences between the behaviour of users that share misinformation and those that do not, there are no large differences when it comes to the type of content shared.
[ "Antypas, Dimosthenis", "Preece, Alun", "Camacho-Collados, Jose" ]
A Multi-Faceted NLP Analysis of Misinformation Spreaders in Twitter
wassa-1.7
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.7/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.8.bib
@inproceedings{ronningstad-etal-2024-entity, title = "Entity-Level Sentiment: More than the Sum of Its Parts", author = "R{\o}nningstad, Egil and Klinger, Roman and Velldal, Erik and {\O}vrelid, Lilja", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.8", pages = "84--96", abstract = "In sentiment analysis of longer texts, there may be a variety of topics discussed, of entities mentioned, and of sentiments expressed regarding each entity. We find a lack of studies exploring how such texts express their sentiment towards each entity of interest, and how these sentiments can be modelled. In order to better understand how sentiment regarding persons and organizations (each entity in our scope) is expressed in longer texts, we have collected a dataset of expert annotations where the overall sentiment regarding each entity is identified, together with the sentence-level sentiment for these entities separately. We show that the reader{'}s perceived sentiment regarding an entity often differs from an arithmetic aggregation of sentiments at the sentence level. Only 70{\%} of the positive and 55{\%} of the negative entities receive a correct overall sentiment label when we aggregate the (human-annotated) sentiment labels for the sentences where the entity is mentioned. Our dataset reveals the complexity of entity-specific sentiment in longer texts, and allows for more precise modelling and evaluation of such sentiment expressions.", }
In sentiment analysis of longer texts, there may be a variety of topics discussed, of entities mentioned, and of sentiments expressed regarding each entity. We find a lack of studies exploring how such texts express their sentiment towards each entity of interest, and how these sentiments can be modelled. In order to better understand how sentiment regarding persons and organizations (each entity in our scope) is expressed in longer texts, we have collected a dataset of expert annotations where the overall sentiment regarding each entity is identified, together with the sentence-level sentiment for these entities separately. We show that the reader{'}s perceived sentiment regarding an entity often differs from an arithmetic aggregation of sentiments at the sentence level. Only 70{\%} of the positive and 55{\%} of the negative entities receive a correct overall sentiment label when we aggregate the (human-annotated) sentiment labels for the sentences where the entity is mentioned. Our dataset reveals the complexity of entity-specific sentiment in longer texts, and allows for more precise modelling and evaluation of such sentiment expressions.
[ "R{\\o}nningstad, Egil", "Klinger, Roman", "Velldal, Erik", "{\\O}vrelid, Lilja" ]
Entity-Level Sentiment: More than the Sum of Its Parts
wassa-1.8
Poster
2407.03916
[ "https://github.com/ltgoslo/ELSA" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.8/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.9.bib
@inproceedings{raza-etal-2024-mbias, title = "{MBIAS}: Mitigating Bias in Large Language Models While Retaining Context", author = "Raza, Shaina and Raval, Ananya and Chatrath, Veronica", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.9", pages = "97--111", abstract = "The deployment of Large Language Models (LLMs) in diverse applications necessitates an assurance of safety without compromising the contextual integrity of the generated content. Traditional approaches, including safety-specific fine-tuning or adversarial testing, often yield safe outputs at the expense of contextual meaning. This can result in a diminished capacity to handle nuanced aspects of bias and toxicity, such as underrepresentation or negative portrayals across various demographics. To address these challenges, we introduce MBIAS, an LLM framework carefully instruction fine-tuned on a custom dataset designed specifically for safety interventions. MBIAS is designed to significantly reduce biases and toxic elements in LLM outputs while preserving the main information. This work also details our further use of LLMs: as annotator under human supervision and as evaluator of generated content. Empirical analysis reveals that MBIAS achieves a reduction in bias and toxicity by over 30{\%} in standard evaluations, and by more than 90{\%} in diverse demographic tests, highlighting the robustness of our approach. We make the dataset and the fine-tuned MBIAS model available to the research community for further investigation and to ensure reproducibility. The code for this project can be accessed here https://github.com/shainarazavi/MBIAS.", }
The deployment of Large Language Models (LLMs) in diverse applications necessitates an assurance of safety without compromising the contextual integrity of the generated content. Traditional approaches, including safety-specific fine-tuning or adversarial testing, often yield safe outputs at the expense of contextual meaning. This can result in a diminished capacity to handle nuanced aspects of bias and toxicity, such as underrepresentation or negative portrayals across various demographics. To address these challenges, we introduce MBIAS, an LLM framework carefully instruction fine-tuned on a custom dataset designed specifically for safety interventions. MBIAS is designed to significantly reduce biases and toxic elements in LLM outputs while preserving the main information. This work also details our further use of LLMs: as annotator under human supervision and as evaluator of generated content. Empirical analysis reveals that MBIAS achieves a reduction in bias and toxicity by over 30{\%} in standard evaluations, and by more than 90{\%} in diverse demographic tests, highlighting the robustness of our approach. We make the dataset and the fine-tuned MBIAS model available to the research community for further investigation and to ensure reproducibility. The code for this project can be accessed here https://github.com/shainarazavi/MBIAS.
[ "Raza, Shaina", "Raval, Ananya", "Chatrath, Veronica" ]
MBIAS: Mitigating Bias in Large Language Models While Retaining Context
wassa-1.9
Poster
2405.11290
[ "https://github.com/shainarazavi/mbias" ]
https://huggingface.co/papers/2405.11290
3
1
0
3
https://aclanthology.org/2024.wassa-1.9/
[ "newsmediabias/MBIAS" ]
[]
[]
1
https://aclanthology.org/2024.wassa-1.10.bib
@inproceedings{ohagi-2024-polarization, title = "Polarization of Autonomous Generative {AI} Agents Under Echo Chambers", author = "Ohagi, Masaya", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.10", pages = "112--124", abstract = "Online social networks often create echo chambers where people only hear opinions reinforcing their beliefs. An echo chamber often generates polarization, leading to conflicts between people with radical opinions. The echo chamber has been viewed as a human-specific problem, but this implicit assumption is becoming less reasonable as large language models, such as ChatGPT, acquire social abilities. In response to this situation, we investigated the potential for polarization to occur among a group of autonomous AI agents based on generative language models in an echo chamber environment. We had AI agents discuss specific topics and analyzed how the group{'}s opinions changed as the discussion progressed. As a result, we found that the group of agents based on ChatGPT tended to become polarized in echo chamber environments. The analysis of opinion transitions shows that this result is caused by ChatGPT{'}s high prompt understanding ability to update its opinion by considering its own and surrounding agents{'} opinions. We conducted additional experiments to investigate under what specific conditions AI agents tended to polarize. As a result, we identified factors that influence polarization, such as the agent{'}s persona.", }
Online social networks often create echo chambers where people only hear opinions reinforcing their beliefs. An echo chamber often generates polarization, leading to conflicts between people with radical opinions. The echo chamber has been viewed as a human-specific problem, but this implicit assumption is becoming less reasonable as large language models, such as ChatGPT, acquire social abilities. In response to this situation, we investigated the potential for polarization to occur among a group of autonomous AI agents based on generative language models in an echo chamber environment. We had AI agents discuss specific topics and analyzed how the group{'}s opinions changed as the discussion progressed. As a result, we found that the group of agents based on ChatGPT tended to become polarized in echo chamber environments. The analysis of opinion transitions shows that this result is caused by ChatGPT{'}s high prompt understanding ability to update its opinion by considering its own and surrounding agents{'} opinions. We conducted additional experiments to investigate under what specific conditions AI agents tended to polarize. As a result, we identified factors that influence polarization, such as the agent{'}s persona.
[ "Ohagi, Masaya" ]
Polarization of Autonomous Generative AI Agents Under Echo Chambers
wassa-1.10
Poster
2402.12212
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.10/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.11.bib
@inproceedings{przybyla-etal-2024-know, title = "Know Thine Enemy: Adaptive Attacks on Misinformation Detection Using Reinforcement Learning", author = "Przyby{\l}a, Piotr and McGill, Euan and Saggion, Horacio", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.11", pages = "125--140", abstract = "We present XARELLO: a generator of adversarial examples for testing the robustness of text classifiers based on reinforcement learning. Our solution is adaptive: it learns from previous successes and failures in order to better adjust to the vulnerabilities of the attacked model. This reflects the behaviour of persistent and experienced attackers, who are common in the misinformation-spreading environment. We evaluate our approach using several victim classifiers and credibility-assessment tasks, showing it generates better-quality examples with fewer queries, and is especially effective against modern LLMs. We also perform a qualitative analysis to understand the language patterns in the misinformation text that play a role in the attacks.", }
We present XARELLO: a generator of adversarial examples for testing the robustness of text classifiers based on reinforcement learning. Our solution is adaptive: it learns from previous successes and failures in order to better adjust to the vulnerabilities of the attacked model. This reflects the behaviour of persistent and experienced attackers, who are common in the misinformation-spreading environment. We evaluate our approach using several victim classifiers and credibility-assessment tasks, showing it generates better-quality examples with fewer queries, and is especially effective against modern LLMs. We also perform a qualitative analysis to understand the language patterns in the misinformation text that play a role in the attacks.
[ "Przyby{\\l}a, Piotr", "McGill, Euan", "Saggion, Horacio" ]
Know Thine Enemy: Adaptive Attacks on Misinformation Detection Using Reinforcement Learning
wassa-1.11
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.11/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.12.bib
@inproceedings{zhu-etal-2024-model, title = "The Model Arena for Cross-lingual Sentiment Analysis: A Comparative Study in the Era of Large Language Models", author = "Zhu, Xiliang and Gardiner, Shayna and Rold{\'a}n, Tere and Rossouw, David", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.12", pages = "141--152", abstract = "Sentiment analysis serves as a pivotal component in Natural Language Processing (NLP). Advancements in multilingual pre-trained models such as XLM-R and mT5 have contributed to the increasing interest in cross-lingual sentiment analysis. The recent emergence of Large Language Models (LLMs) has significantly advanced general NLP tasks; however, the capability of such LLMs in cross-lingual sentiment analysis has not been fully studied. This work undertakes an empirical analysis to compare the cross-lingual transfer capability of public Small Multilingual Language Models (SMLMs) like XLM-R against English-centric LLMs such as Llama-3, in the context of sentiment analysis across English, Spanish, French and Chinese. Our findings reveal that among public models, SMLMs exhibit superior zero-shot cross-lingual performance relative to LLMs. However, in few-shot cross-lingual settings, public LLMs demonstrate an enhanced adaptive potential. In addition, we observe that proprietary GPT-3.5 and GPT-4 lead in zero-shot cross-lingual capability, but are outpaced by public models in few-shot scenarios.", }
Sentiment analysis serves as a pivotal component in Natural Language Processing (NLP). Advancements in multilingual pre-trained models such as XLM-R and mT5 have contributed to the increasing interest in cross-lingual sentiment analysis. The recent emergence of Large Language Models (LLMs) has significantly advanced general NLP tasks; however, the capability of such LLMs in cross-lingual sentiment analysis has not been fully studied. This work undertakes an empirical analysis to compare the cross-lingual transfer capability of public Small Multilingual Language Models (SMLMs) like XLM-R against English-centric LLMs such as Llama-3, in the context of sentiment analysis across English, Spanish, French and Chinese. Our findings reveal that among public models, SMLMs exhibit superior zero-shot cross-lingual performance relative to LLMs. However, in few-shot cross-lingual settings, public LLMs demonstrate an enhanced adaptive potential. In addition, we observe that proprietary GPT-3.5 and GPT-4 lead in zero-shot cross-lingual capability, but are outpaced by public models in few-shot scenarios.
[ "Zhu, Xiliang", "Gardiner, Shayna", "Rold{\\'a}n, Tere", "Rossouw, David" ]
The Model Arena for Cross-lingual Sentiment Analysis: A Comparative Study in the Era of Large Language Models
wassa-1.12
Poster
2406.19358
[ "" ]
https://huggingface.co/papers/2406.19358
0
1
0
4
https://aclanthology.org/2024.wassa-1.12/
[]
[]
[]
1
https://aclanthology.org/2024.wassa-1.13.bib
@inproceedings{wehrli-etal-2024-guiding, title = "Guiding Sentiment Analysis with Hierarchical Text Clustering: Analyzing the {G}erman {X}/{T}witter Discourse on Face Masks in the 2020 {COVID}-19 Pandemic", author = "Wehrli, Silvan and Ezekannagha, Chisom and Hattab, Georges and Boender, Tamara and Arnrich, Bert and Irrgang, Christopher", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.13", pages = "153--167", abstract = "Social media are a critical component of the information ecosystem during public health crises. Understanding the public discourse is essential for effective communication and misinformation mitigation. Computational methods can aid these efforts through online social listening. We combined hierarchical text clustering and sentiment analysis to examine the face mask-wearing discourse in Germany during the COVID-19 pandemic using a dataset of 353,420 German X (formerly Twitter) posts from 2020. For sentiment analysis, we annotated a subsample of the data to train a neural network for classifying the sentiments of posts (neutral, negative, or positive). In combination with clustering, this approach uncovered sentiment patterns of different topics and their subtopics, reflecting the online public response to mask mandates in Germany. We show that our approach can be used to examine long-term narratives and sentiment dynamics and to identify specific topics that explain peaks of interest in the social media discourse.", }
Social media are a critical component of the information ecosystem during public health crises. Understanding the public discourse is essential for effective communication and misinformation mitigation. Computational methods can aid these efforts through online social listening. We combined hierarchical text clustering and sentiment analysis to examine the face mask-wearing discourse in Germany during the COVID-19 pandemic using a dataset of 353,420 German X (formerly Twitter) posts from 2020. For sentiment analysis, we annotated a subsample of the data to train a neural network for classifying the sentiments of posts (neutral, negative, or positive). In combination with clustering, this approach uncovered sentiment patterns of different topics and their subtopics, reflecting the online public response to mask mandates in Germany. We show that our approach can be used to examine long-term narratives and sentiment dynamics and to identify specific topics that explain peaks of interest in the social media discourse.
[ "Wehrli, Silvan", "Ezekannagha, Chisom", "Hattab, Georges", "Boender, Tamara", "Arnrich, Bert", "Irrgang, Christopher" ]
Guiding Sentiment Analysis with Hierarchical Text Clustering: Analyzing the German X/Twitter Discourse on Face Masks in the 2020 COVID-19 Pandemic
wassa-1.13
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.13/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.14.bib
@inproceedings{etienne-etal-2024-emotion, title = "Emotion Identification for {F}rench in Written Texts: Considering Modes of Emotion Expression as a Step Towards Text Complexity Analysis", author = "{\'E}tienne, Aline and Battistelli, Delphine and Lecorv{\'e}, Gw{\'e}nol{\'e}", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.14", pages = "168--185", abstract = "The objective of this paper is to predict (A) whether a sentence in a written text expresses an emotion, (B) the mode(s) in which the emotion is expressed, (C) whether it is basic or complex, and (D) its emotional category. One of our major contributions, in addition to a dataset and a model, is to integrate the fact that an emotion can be expressed in different modes: from a direct mode, essentially lexicalized, to a more indirect mode, where emotions will only be suggested, a mode that NLP approaches generally don{'}t take into account. The scope is on written texts, i.e. it does not focus on conversational or multi-modal data. In this context, modes of expression are seen as a factor towards the automatic analysis of complexity in texts. Experiments on French texts show acceptable results compared to the human annotators{'} agreement to predict the mode and category, and outperforming results compared to using a large language model with in-context learning (i.e. no fine-tuning) on all tasks. Dataset and model can be downloaded on HuggingFace: https://huggingface.co/TextToKids .", }
The objective of this paper is to predict (A) whether a sentence in a written text expresses an emotion, (B) the mode(s) in which the emotion is expressed, (C) whether it is basic or complex, and (D) its emotional category. One of our major contributions, in addition to a dataset and a model, is to integrate the fact that an emotion can be expressed in different modes: from a direct mode, essentially lexicalized, to a more indirect mode, where emotions will only be suggested, a mode that NLP approaches generally don{'}t take into account. The scope is on written texts, i.e. it does not focus on conversational or multi-modal data. In this context, modes of expression are seen as a factor towards the automatic analysis of complexity in texts. Experiments on French texts show acceptable results compared to the human annotators{'} agreement to predict the mode and category, and outperforming results compared to using a large language model with in-context learning (i.e. no fine-tuning) on all tasks. Dataset and model can be downloaded on HuggingFace: https://huggingface.co/TextToKids .
[ "{\\'E}tienne, Aline", "Battistelli, Delphine", "Lecorv{\\'e}, Gw{\\'e}nol{\\'e}" ]
Emotion Identification for French in Written Texts: Considering Modes of Emotion Expression as a Step Towards Text Complexity Analysis
wassa-1.14
Poster
[ "" ]
https://huggingface.co/papers/2405.14385
1
0
0
3
https://aclanthology.org/2024.wassa-1.14/
[ "TextToKids/CamemBERT-base-EmoTextToKids" ]
[ "TextToKids/EmoTextToKids-sentences" ]
[]
1
https://aclanthology.org/2024.wassa-1.15.bib
@inproceedings{feldkamp-etal-2024-comparing, title = "Comparing Tools for Sentiment Analysis of {D}anish Literature from Hymns to Fairy Tales: Low-Resource Language and Domain Challenges", author = "Feldkamp, Pascale and Kostkan, Jan and Overgaard, Ea and Jacobsen, Mia and Bizzoni, Yuri", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.15", pages = "186--199", abstract = "While Sentiment Analysis has become increasingly central in computational approaches to literary texts, the literary domain still poses important challenges for the detection of textual sentiment due to its highly complex use of language and devices - from subtle humor to poetic imagery. Furthermore these challenges are only further amplified in low-resource language and domain settings. In this paper we investigate the application and efficacy of different Sentiment Analysis tools on Danish literary texts, using historical fairy tales and religious hymns as our datasets. The scarcity of linguistic resources for Danish and the historical context of the data further compounds the challenges for the tools. We compare human annotations to the continuous valence scores of both transformer- and dictionary-based Sentiment Analysis methods to assess their performance, seeking to understand how distinct methods handle the language of Danish prose and poetry.", }
While Sentiment Analysis has become increasingly central in computational approaches to literary texts, the literary domain still poses important challenges for the detection of textual sentiment due to its highly complex use of language and devices - from subtle humor to poetic imagery. Furthermore these challenges are only further amplified in low-resource language and domain settings. In this paper we investigate the application and efficacy of different Sentiment Analysis tools on Danish literary texts, using historical fairy tales and religious hymns as our datasets. The scarcity of linguistic resources for Danish and the historical context of the data further compounds the challenges for the tools. We compare human annotations to the continuous valence scores of both transformer- and dictionary-based Sentiment Analysis methods to assess their performance, seeking to understand how distinct methods handle the language of Danish prose and poetry.
[ "Feldkamp, Pascale", "Kostkan, Jan", "Overgaard, Ea", "Jacobsen, Mia", "Bizzoni, Yuri" ]
Comparing Tools for Sentiment Analysis of Danish Literature from Hymns to Fairy Tales: Low-Resource Language and Domain Challenges
wassa-1.15
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.15/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.16.bib
@inproceedings{steel-ruths-2024-multi, title = "Multi-Target User Stance Discovery on {R}eddit", author = "Steel, Benjamin and Ruths, Derek", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.16", pages = "200--214", abstract = "We consider how to credibly and reliably assess the opinions of individuals using their social media posts. To this end, this paper makes three contributions. First, we assemble a workflow and approach to applying modern natural language processing (NLP) methods to multi-target user stance detection in the wild. Second, we establish why the multi-target modeling of user stance is qualitatively more complicated than uni-target user-stance detection. Finally, we validate our method by showing how multi-dimensional measurement of user opinions not only reproduces known opinion polling results, but also enables the study of opinion dynamics at high levels of temporal and semantic resolution.", }
We consider how to credibly and reliably assess the opinions of individuals using their social media posts. To this end, this paper makes three contributions. First, we assemble a workflow and approach to applying modern natural language processing (NLP) methods to multi-target user stance detection in the wild. Second, we establish why the multi-target modeling of user stance is qualitatively more complicated than uni-target user-stance detection. Finally, we validate our method by showing how multi-dimensional measurement of user opinions not only reproduces known opinion polling results, but also enables the study of opinion dynamics at high levels of temporal and semantic resolution.
[ "Steel, Benjamin", "Ruths, Derek" ]
Multi-Target User Stance Discovery on Reddit
wassa-1.16
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.16/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.17.bib
@inproceedings{shokri-etal-2024-subjectivity, title = "Subjectivity Detection in {E}nglish News using Large Language Models", author = "Shokri, Mohammad and Sharma, Vivek and Filatova, Elena and Jain, Shweta and Levitan, Sarah", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.17", pages = "215--226", abstract = "Trust in media has reached a historical low as consumers increasingly doubt the credibility of the news they encounter. This growing skepticism is exacerbated by the prevalence of opinion-driven articles, which can influence readers{'} beliefs to align with the authors{'} viewpoints. In response to this trend, this study examines the expression of opinions in news by detecting subjective and objective language. We conduct an analysis of the subjectivity present in various news datasets and evaluate how different language models detect subjectivity and generalize to out-of-distribution data. We also investigate the use of in-context learning (ICL) within large language models (LLMs) and propose a straightforward prompting method that outperforms standard ICL and chain-of-thought (CoT) prompts.", }
Trust in media has reached a historical low as consumers increasingly doubt the credibility of the news they encounter. This growing skepticism is exacerbated by the prevalence of opinion-driven articles, which can influence readers{'} beliefs to align with the authors{'} viewpoints. In response to this trend, this study examines the expression of opinions in news by detecting subjective and objective language. We conduct an analysis of the subjectivity present in various news datasets and evaluate how different language models detect subjectivity and generalize to out-of-distribution data. We also investigate the use of in-context learning (ICL) within large language models (LLMs) and propose a straightforward prompting method that outperforms standard ICL and chain-of-thought (CoT) prompts.
[ "Shokri, Mohammad", "Sharma, Vivek", "Filatova, Elena", "Jain, Shweta", "Levitan, Sarah" ]
Subjectivity Detection in English News using Large Language Models
wassa-1.17
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.17/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.18.bib
@inproceedings{alhamed-etal-2024-monitoring, title = "Monitoring Depression Severity and Symptoms in User-Generated Content: An Annotation Scheme and Guidelines", author = "Alhamed, Falwah and Bendayan, Rebecca and Ive, Julia and Specia, Lucia", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.18", pages = "227--233", abstract = "Depression is a highly prevalent condition recognized by the World Health Organization as a leading contributor to global disability. Many people suffering from depression express their thoughts and feelings using social media, which thus becomes a source of data for research in this domain. However, existing annotation schemes tailored to studying depression symptoms in social media data remain limited. Reliable and valid annotation guidelines are crucial for accurately measuring mental health conditions for those studies. This paper addresses this gap by presenting a novel depression annotation scheme and guidelines for detecting depression symptoms and their severity in social media text. Our approach leverages validated depression questionnaires and incorporates the expertise of psychologists and psychiatrists during scheme refinement. The resulting annotation scheme achieves high inter-rater agreement, demonstrating its potential for suitable depression assessment in social media contexts.", }
Depression is a highly prevalent condition recognized by the World Health Organization as a leading contributor to global disability. Many people suffering from depression express their thoughts and feelings using social media, which thus becomes a source of data for research in this domain. However, existing annotation schemes tailored to studying depression symptoms in social media data remain limited. Reliable and valid annotation guidelines are crucial for accurately measuring mental health conditions for those studies. This paper addresses this gap by presenting a novel depression annotation scheme and guidelines for detecting depression symptoms and their severity in social media text. Our approach leverages validated depression questionnaires and incorporates the expertise of psychologists and psychiatrists during scheme refinement. The resulting annotation scheme achieves high inter-rater agreement, demonstrating its potential for suitable depression assessment in social media contexts.
[ "Alhamed, Falwah", "Bendayan, Rebecca", "Ive, Julia", "Specia, Lucia" ]
Monitoring Depression Severity and Symptoms in User-Generated Content: An Annotation Scheme and Guidelines
wassa-1.18
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.18/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.19.bib
@inproceedings{etori-gini-2024-rideke, title = "{R}ide{KE}: Leveraging Low-resource {T}witter User-generated Content for Sentiment and Emotion Detection on Code-switched {RHS} Dataset.", author = "Etori, Naome and Gini, Maria", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.19", pages = "234--249", abstract = "Social media has become a crucial open-access platform enabling individuals to freely express opinions and share experiences. These platforms contain user-generated content facilitating instantaneous communication and feedback. However, leveraging low-resource language data from Twitter can be challenging due to the scarcity and poor quality of content with significant variations in language use, such as slang and code-switching. Automatically identifying tweets in low-resource languages can also be challenging because Twitter primarily supports high-resource languages; low-resource languages often lack robust linguistic and contextual support. This paper analyzes Kenyan code-switched data from Twitter using four transformer-based pretrained models for sentiment and emotion classification tasks using supervised and semi-supervised methods. We detail the methodology behind data collection, the annotation procedure, and the challenges encountered during the data curation phase. Our results show that XLM-R outperforms other models; for sentiment analysis, XLM-R supervised model achieves the highest accuracy (69.2{\%}) and F1 score (66.1{\%}), XLM-R semi-supervised (67.2{\%} accuracy, 64.1{\%} F1 score). In emotion analysis, DistilBERT supervised leads in accuracy (59.8{\%}) and F1 score (31{\%}), mBERT semi-supervised (59{\%} accuracy, 26.5{\%} F1 score). AfriBERTa models show the lowest accuracy and F1 scores. This indicates that the semi-supervised method{'}s performance is constrained by the small labeled dataset.", }
Social media has become a crucial open-access platform enabling individuals to freely express opinions and share experiences. These platforms contain user-generated content facilitating instantaneous communication and feedback. However, leveraging low-resource language data from Twitter can be challenging due to the scarcity and poor quality of content with significant variations in language use, such as slang and code-switching. Automatically identifying tweets in low-resource languages can also be challenging because Twitter primarily supports high-resource languages; low-resource languages often lack robust linguistic and contextual support. This paper analyzes Kenyan code-switched data from Twitter using four transformer-based pretrained models for sentiment and emotion classification tasks using supervised and semi-supervised methods. We detail the methodology behind data collection, the annotation procedure, and the challenges encountered during the data curation phase. Our results show that XLM-R outperforms other models; for sentiment analysis, XLM-R supervised model achieves the highest accuracy (69.2{\%}) and F1 score (66.1{\%}), XLM-R semi-supervised (67.2{\%} accuracy, 64.1{\%} F1 score). In emotion analysis, DistilBERT supervised leads in accuracy (59.8{\%}) and F1 score (31{\%}), mBERT semi-supervised (59{\%} accuracy, 26.5{\%} F1 score). AfriBERTa models show the lowest accuracy and F1 scores. This indicates that the semi-supervised method{'}s performance is constrained by the small labeled dataset.
[ "Etori, Naome", "Gini, Maria" ]
RideKE: Leveraging Low-resource Twitter User-generated Content for Sentiment and Emotion Detection on Code-switched RHS Dataset.
wassa-1.19
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.19/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.20.bib
@inproceedings{dzienisiewicz-etal-2024-polygraph, title = "{POL}ygraph: {P}olish Fake News Dataset", author = "Dzienisiewicz, Daniel and Grali{\'n}ski, Filip and Jab{\l}o{\'n}ski, Piotr and Kubis, Marek and Sk{\'o}rzewski, Pawe{\l} and Wierzchon, Piotr", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.20", pages = "250--263", abstract = "This paper presents the POLygraph dataset, a unique resource for fake news detection in Polish. The dataset, created by an interdisciplinary team, is composed of two parts: the {``}fake-or-not{''} dataset with 11,360 pairs of news articles (identified by their URLs) and corresponding labels, and the {``}fake-they-say{''} dataset with 5,082 news articles (identified by their URLs) and tweets commenting on them. Unlike existing datasets, POLygraph encompasses a variety of approaches from source literature, providing a comprehensive resource for fake news detection. The data was collected through manual annotation by expert and non-expert annotators. The project also developed a software tool that uses advanced machine learning techniques to analyze the data and determine content authenticity. The tool and dataset are expected to benefit various entities, from public sector institutions to publishers and fact-checking organizations. Further dataset exploration will foster fake news detection and potentially stimulate the implementation of similar models in other languages. The paper focuses on the creation and composition of the dataset, so it does not include a detailed evaluation of the software tool for content authenticity analysis, which is planned at a later stage of the project.", }
This paper presents the POLygraph dataset, a unique resource for fake news detection in Polish. The dataset, created by an interdisciplinary team, is composed of two parts: the {``}fake-or-not{''} dataset with 11,360 pairs of news articles (identified by their URLs) and corresponding labels, and the {``}fake-they-say{''} dataset with 5,082 news articles (identified by their URLs) and tweets commenting on them. Unlike existing datasets, POLygraph encompasses a variety of approaches from source literature, providing a comprehensive resource for fake news detection. The data was collected through manual annotation by expert and non-expert annotators. The project also developed a software tool that uses advanced machine learning techniques to analyze the data and determine content authenticity. The tool and dataset are expected to benefit various entities, from public sector institutions to publishers and fact-checking organizations. Further dataset exploration will foster fake news detection and potentially stimulate the implementation of similar models in other languages. The paper focuses on the creation and composition of the dataset, so it does not include a detailed evaluation of the software tool for content authenticity analysis, which is planned at a later stage of the project.
[ "Dzienisiewicz, Daniel", "Grali{\\'n}ski, Filip", "Jab{\\l}o{\\'n}ski, Piotr", "Kubis, Marek", "Sk{\\'o}rzewski, Pawe{\\l}", "Wierzchon, Piotr" ]
POLygraph: Polish Fake News Dataset
wassa-1.20
Poster
2407.01393
[ "" ]
https://huggingface.co/papers/2407.01393
0
0
0
6
https://aclanthology.org/2024.wassa-1.20/
[]
[]
[]
1
https://aclanthology.org/2024.wassa-1.21.bib
@inproceedings{dasgupta-sinha-2024-exploring, title = "Exploring Language Models to Analyze Market Demand Sentiments from News", author = "Dasgupta, Tirthankar and Sinha, Manjira", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.21", pages = "264--272", abstract = "Obtaining demand trends for products is an essential aspect of supply chain planning. It helps in generating scenarios for simulation before actual demands start pouring in. Presently, experts obtain this number manually from different News sources. In this paper, we have presented methods that can automate the information acquisition process. We have presented a joint framework that performs information extraction and sentiment analysis to acquire demand related information from business text documents. The proposed system leverages a TwinBERT-based deep neural network model to first extract product information for which demand is associated and then identify the respective sentiment polarity. The articles are also subjected to causal analytics, that, together yield rich contextual information about reasons for rise or fall of demand of various products. The enriched information is targeted for the decision-makers, analysts and knowledge workers. We have exhaustively evaluated our proposed models with datasets curated and annotated for two different domains namely, automobile sector and housing. The proposed model outperforms the existing baseline systems.", }
Obtaining demand trends for products is an essential aspect of supply chain planning. It helps in generating scenarios for simulation before actual demands start pouring in. Presently, experts obtain this number manually from different News sources. In this paper, we have presented methods that can automate the information acquisition process. We have presented a joint framework that performs information extraction and sentiment analysis to acquire demand related information from business text documents. The proposed system leverages a TwinBERT-based deep neural network model to first extract product information for which demand is associated and then identify the respective sentiment polarity. The articles are also subjected to causal analytics, that, together yield rich contextual information about reasons for rise or fall of demand of various products. The enriched information is targeted for the decision-makers, analysts and knowledge workers. We have exhaustively evaluated our proposed models with datasets curated and annotated for two different domains namely, automobile sector and housing. The proposed model outperforms the existing baseline systems.
[ "Dasgupta, Tirthankar", "Sinha, Manjira" ]
Exploring Language Models to Analyze Market Demand Sentiments from News
wassa-1.21
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.21/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.22.bib
@inproceedings{furniturewala-etal-2024-impact, title = "Impact of Decoding Methods on Human Alignment of Conversational {LLM}s", author = "Furniturewala, Shaz and Jaidka, Kokil and Sharma, Yashvardhan", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.22", pages = "273--279", abstract = "To be included into chatbot systems, Large language models (LLMs) must be aligned with human conversational conventions. However, being trained mainly on web-scraped data gives existing LLMs a voice closer to informational text than actual human speech. In this paper, we examine the effect of decoding methods on the alignment between LLM-generated and human conversations, including Beam Search, Top K Sampling, and Nucleus Sampling. We present new measures of alignment in substance, style, and psychometric orientation, and experiment with two conversation datasets. Our results provide subtle insights: better alignment is attributed to fewer beams in Beam Search and lower values of P in Nucleus Sampling. We also find that task-oriented and open-ended datasets perform differently in terms of alignment, indicating the significance of taking into account the context of the interaction.", }
To be included into chatbot systems, Large language models (LLMs) must be aligned with human conversational conventions. However, being trained mainly on web-scraped data gives existing LLMs a voice closer to informational text than actual human speech. In this paper, we examine the effect of decoding methods on the alignment between LLM-generated and human conversations, including Beam Search, Top K Sampling, and Nucleus Sampling. We present new measures of alignment in substance, style, and psychometric orientation, and experiment with two conversation datasets. Our results provide subtle insights: better alignment is attributed to fewer beams in Beam Search and lower values of P in Nucleus Sampling. We also find that task-oriented and open-ended datasets perform differently in terms of alignment, indicating the significance of taking into account the context of the interaction.
[ "Furniturewala, Shaz", "Jaidka, Kokil", "Sharma, Yashvardhan" ]
Impact of Decoding Methods on Human Alignment of Conversational LLMs
wassa-1.22
Poster
2407.19526
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.22/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.23.bib
@inproceedings{fujikawa-etal-2024-loneliness, title = "Loneliness Episodes: A {J}apanese Dataset for Loneliness Detection and Analysis", author = "Fujikawa, Naoya and Toan, Nguyen and Ito, Kazuhiro and Wakamiya, Shoko and Aramaki, Eiji", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.23", pages = "280--293", abstract = "Loneliness, a significant public health concern, is closely connected to both physical and mental well-being. Hence, detection and intervention for individuals experiencing loneliness are crucial. Identifying loneliness in text is straightforward when it is explicitly stated but challenging when it is implicit. Detecting implicit loneliness requires a manually annotated dataset because whereas explicit loneliness can be detected using keywords, implicit loneliness cannot be. However, there are no freely available datasets with clear annotation guidelines for implicit loneliness. In this study, we construct a freely accessible Japanese loneliness dataset with annotation guidelines grounded in the psychological definition of loneliness. This dataset covers loneliness intensity and the contributing factors of loneliness. We train two models to classify whether loneliness is expressed and the intensity of loneliness. The model classifying loneliness versus non-loneliness achieves an F1-score of 0.833, but the model for identifying the intensity of loneliness has a low F1-score of 0.400, which is likely due to label imbalance and a shortage of a certain label in the dataset. We validate performance in another domain, specifically X (formerly Twitter), and observe a decrease. In addition, we propose improvement suggestions for domain adaptation.", }
Loneliness, a significant public health concern, is closely connected to both physical and mental well-being. Hence, detection and intervention for individuals experiencing loneliness are crucial. Identifying loneliness in text is straightforward when it is explicitly stated but challenging when it is implicit. Detecting implicit loneliness requires a manually annotated dataset because whereas explicit loneliness can be detected using keywords, implicit loneliness cannot be. However, there are no freely available datasets with clear annotation guidelines for implicit loneliness. In this study, we construct a freely accessible Japanese loneliness dataset with annotation guidelines grounded in the psychological definition of loneliness. This dataset covers loneliness intensity and the contributing factors of loneliness. We train two models to classify whether loneliness is expressed and the intensity of loneliness. The model classifying loneliness versus non-loneliness achieves an F1-score of 0.833, but the model for identifying the intensity of loneliness has a low F1-score of 0.400, which is likely due to label imbalance and a shortage of a certain label in the dataset. We validate performance in another domain, specifically X (formerly Twitter), and observe a decrease. In addition, we propose improvement suggestions for domain adaptation.
[ "Fujikawa, Naoya", "Toan, Nguyen", "Ito, Kazuhiro", "Wakamiya, Shoko", "Aramaki, Eiji" ]
Loneliness Episodes: A Japanese Dataset for Loneliness Detection and Analysis
wassa-1.23
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.23/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.24.bib
@inproceedings{hayashi-etal-2024-estimation, title = "Estimation of Happiness Changes through Longitudinal Analysis of Employees{'} Texts", author = "Hayashi, Junko and Ito, Kazuhiro and Manabe, Masae and Watanabe, Yasushi and Nakayama, Masataka and Uchida, Yukiko and Wakamiya, Shoko and Aramaki, Eiji", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.24", pages = "294--304", abstract = "Measuring happiness as a determinant of well-being is increasingly recognized as crucial. While previous studies have utilized free-text descriptions to estimate happiness on a broad scale, limited research has focused on tracking individual fluctuations in happiness over time owing to the challenges associated with longitudinal data collection. This study addresses this issue by obtaining longitudinal data from two workplaces over two and six months, respectively. Subsequently, the data is used to construct a happiness estimation model and assess individual happiness levels. Evaluation of the model performance using correlation coefficients shows variability in the correlation values among individuals. Notably, the model performs satisfactorily in estimating 9 of the 11 users{'} happiness scores, with a correlation coefficient of 0.4 or higher. To investigate the factors affecting the model performance, we examine the relationship between the model performance and variables such as sentence length, lexical diversity, and personality traits. Correlations are observed between these features and model performance.", }
Measuring happiness as a determinant of well-being is increasingly recognized as crucial. While previous studies have utilized free-text descriptions to estimate happiness on a broad scale, limited research has focused on tracking individual fluctuations in happiness over time owing to the challenges associated with longitudinal data collection. This study addresses this issue by obtaining longitudinal data from two workplaces over two and six months, respectively. Subsequently, the data is used to construct a happiness estimation model and assess individual happiness levels. Evaluation of the model performance using correlation coefficients shows variability in the correlation values among individuals. Notably, the model performs satisfactorily in estimating 9 of the 11 users{'} happiness scores, with a correlation coefficient of 0.4 or higher. To investigate the factors affecting the model performance, we examine the relationship between the model performance and variables such as sentence length, lexical diversity, and personality traits. Correlations are observed between these features and model performance.
[ "Hayashi, Junko", "Ito, Kazuhiro", "Manabe, Masae", "Watanabe, Yasushi", "Nakayama, Masataka", "Uchida, Yukiko", "Wakamiya, Shoko", "Aramaki, Eiji" ]
Estimation of Happiness Changes through Longitudinal Analysis of Employees' Texts
wassa-1.24
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.24/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.25.bib
@inproceedings{savinova-hoek-2024-subjectivity, title = "Subjectivity Theory vs. Speaker Intuitions: Explaining the Results of a Subjectivity Regressor Trained on Native Speaker Judgements", author = "Savinova, Elena and Hoek, Jet", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.25", pages = "305--315", abstract = {In this paper, we address the issue of explainability in a transformer-based subjectivity regressor trained on native English speakers{'} judgements. The main goal of this work is to test how the regressor{'}s predictions, and therefore native speakers{'} intuitions, relate to theoretical accounts of subjectivity. We approach this goal using two methods: a top-down manual selection of theoretically defined subjectivity features and a bottom-up extraction of top subjective and objective features using the LIME explanation method. The explainability of the subjectivity regressor is evaluated on a British news dataset containing sentences taken from social media news posts and from articles on the websites of the same news outlets. Both methods provide converging evidence that theoretically defined subjectivity features, such as emoji, evaluative adjectives, exclamations, questions, intensifiers, and first person pronouns, are prominent predictors of subjectivity scores. Thus, our findings show that the predictions of the regressor, and therefore native speakers{'} perceptions of subjectivity, align with subjectivity theory. However, an additional comparison of the effects of different subjectivity features in author text and the text of cited sources reveals that the distinction between author and source subjectivity might not be as salient for na{\"\i}ve speakers as it is in the theory.}, }
In this paper, we address the issue of explainability in a transformer-based subjectivity regressor trained on native English speakers{'} judgements. The main goal of this work is to test how the regressor{'}s predictions, and therefore native speakers{'} intuitions, relate to theoretical accounts of subjectivity. We approach this goal using two methods: a top-down manual selection of theoretically defined subjectivity features and a bottom-up extraction of top subjective and objective features using the LIME explanation method. The explainability of the subjectivity regressor is evaluated on a British news dataset containing sentences taken from social media news posts and from articles on the websites of the same news outlets. Both methods provide converging evidence that theoretically defined subjectivity features, such as emoji, evaluative adjectives, exclamations, questions, intensifiers, and first person pronouns, are prominent predictors of subjectivity scores. Thus, our findings show that the predictions of the regressor, and therefore native speakers{'} perceptions of subjectivity, align with subjectivity theory. However, an additional comparison of the effects of different subjectivity features in author text and the text of cited sources reveals that the distinction between author and source subjectivity might not be as salient for na{\"\i}ve speakers as it is in the theory.
[ "Savinova, Elena", "Hoek, Jet" ]
Subjectivity Theory vs. Speaker Intuitions: Explaining the Results of a Subjectivity Regressor Trained on Native Speaker Judgements
wassa-1.25
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.25/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.26.bib
@inproceedings{soni-etal-2024-comparing, title = "Comparing Pre-trained Human Language Models: Is it Better with Human Context as Groups, Individual Traits, or Both?", author = "Soni, Nikita and Balasubramanian, Niranjan and Schwartz, H. and Hovy, Dirk", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.26", pages = "316--328", abstract = "Pre-trained language models consider the context of neighboring words and documents but lack any author context of the human generating the text. However, language depends on the author{'}s states, traits, social, situational, and environmental attributes, collectively referred to as human context (Soni et al., 2024). Human-centered natural language processing requires incorporating human context into language models. Currently, two methods exist: pre-training with 1) group-wise attributes (e.g., over-45-year-olds) or 2) individual traits. Group attributes are simple but coarse {---} not all 45-year-olds write the same way {---} while individual traits allow for more personalized representations, but require more complex modeling and data. It is unclear which approach benefits what tasks. We compare pre-training models with human context via 1) group attributes, 2) individual users, and 3) a combined approach on five user- and document-level tasks. Our results show that there is no best approach, but that human-centered language modeling holds avenues for different methods.", }
Pre-trained language models consider the context of neighboring words and documents but lack any author context of the human generating the text. However, language depends on the author{'}s states, traits, social, situational, and environmental attributes, collectively referred to as human context (Soni et al., 2024). Human-centered natural language processing requires incorporating human context into language models. Currently, two methods exist: pre-training with 1) group-wise attributes (e.g., over-45-year-olds) or 2) individual traits. Group attributes are simple but coarse {---} not all 45-year-olds write the same way {---} while individual traits allow for more personalized representations, but require more complex modeling and data. It is unclear which approach benefits what tasks. We compare pre-training models with human context via 1) group attributes, 2) individual users, and 3) a combined approach on five user- and document-level tasks. Our results show that there is no best approach, but that human-centered language modeling holds avenues for different methods.
[ "Soni, Nikita", "Balasubramanian, Niranjan", "Schwartz, H.", "Hovy, Dirk" ]
Comparing Pre-trained Human Language Models: Is it Better with Human Context as Groups, Individual Traits, or Both?
wassa-1.26
Poster
2401.12492
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.26/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.27.bib
@inproceedings{juros-etal-2024-llms, title = "{LLM}s for Targeted Sentiment in News Headlines: Exploring the Descriptive-Prescriptive Dilemma", author = "Juro{\v{s}}, Jana and Majer, Laura and Snajder, Jan", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.27", pages = "329--343", abstract = "News headlines often evoke sentiment by intentionally portraying entities in particular ways, making targeted sentiment analysis (TSA) of headlines a worthwhile but difficult task. Due to its subjectivity, creating TSA datasets can involve various annotation paradigms, from descriptive to prescriptive, either encouraging or limiting subjectivity. LLMs are a good fit for TSA due to their broad linguistic and world knowledge and in-context learning abilities, yet their performance depends on prompt design. In this paper, we compare the accuracy of state-of-the-art LLMs and fine-tuned encoder models for TSA of news headlines using descriptive and prescriptive datasets across several languages. Exploring the descriptive{--}prescriptive continuum, we analyze how performance is affected by prompt prescriptiveness, ranging from plain zero-shot to elaborate few-shot prompts. Finally, we evaluate the ability of LLMs to quantify uncertainty via calibration error and comparison to human label variation. We find that LLMs outperform fine-tuned encoders on descriptive datasets, while calibration and F1-score generally improve with increased prescriptiveness, yet the optimal level varies.", }
News headlines often evoke sentiment by intentionally portraying entities in particular ways, making targeted sentiment analysis (TSA) of headlines a worthwhile but difficult task. Due to its subjectivity, creating TSA datasets can involve various annotation paradigms, from descriptive to prescriptive, either encouraging or limiting subjectivity. LLMs are a good fit for TSA due to their broad linguistic and world knowledge and in-context learning abilities, yet their performance depends on prompt design. In this paper, we compare the accuracy of state-of-the-art LLMs and fine-tuned encoder models for TSA of news headlines using descriptive and prescriptive datasets across several languages. Exploring the descriptive{--}prescriptive continuum, we analyze how performance is affected by prompt prescriptiveness, ranging from plain zero-shot to elaborate few-shot prompts. Finally, we evaluate the ability of LLMs to quantify uncertainty via calibration error and comparison to human label variation. We find that LLMs outperform fine-tuned encoders on descriptive datasets, while calibration and F1-score generally improve with increased prescriptiveness, yet the optimal level varies.
[ "Juro{\\v{s}}, Jana", "Majer, Laura", "Snajder, Jan" ]
LLMs for Targeted Sentiment in News Headlines: Exploring the Descriptive-Prescriptive Dilemma
wassa-1.27
Poster
2403.00418
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.27/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.28.bib
@inproceedings{sharma-sirts-2024-context, title = "Context is Important in Depressive Language: A Study of the Interaction Between the Sentiments and Linguistic Markers in {R}eddit Discussions", author = "Sharma, Neha and Sirts, Kairit", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.28", pages = "344--361", abstract = "Research exploring linguistic markers in individuals with depression has demonstrated that language usage can serve as an indicator of mental health. This study investigates the impact of discussion topic as context on linguistic markers and emotional expression in depression, using a Reddit dataset to explore interaction effects. Contrary to common findings, our sentiment analysis revealed a broader range of emotional intensity in depressed individuals, with both higher negative and positive sentiments than controls. This pattern was driven by posts containing no emotion words, revealing the limitations of lexicon-based approaches in capturing the full emotional context. We observed several interesting results demonstrating the importance of contextual analyses. For instance, the use of 1st person singular pronouns and words related to anger and sadness correlated with increased positive sentiments, whereas a higher rate of present-focused words was associated with more negative sentiments. Our findings highlight the importance of discussion contexts while interpreting the language used in depression, revealing that the emotional intensity and meaning of linguistic markers can vary based on the topic of discussion.", }
Research exploring linguistic markers in individuals with depression has demonstrated that language usage can serve as an indicator of mental health. This study investigates the impact of discussion topic as context on linguistic markers and emotional expression in depression, using a Reddit dataset to explore interaction effects. Contrary to common findings, our sentiment analysis revealed a broader range of emotional intensity in depressed individuals, with both higher negative and positive sentiments than controls. This pattern was driven by posts containing no emotion words, revealing the limitations of lexicon-based approaches in capturing the full emotional context. We observed several interesting results demonstrating the importance of contextual analyses. For instance, the use of 1st person singular pronouns and words related to anger and sadness correlated with increased positive sentiments, whereas a higher rate of present-focused words was associated with more negative sentiments. Our findings highlight the importance of discussion contexts while interpreting the language used in depression, revealing that the emotional intensity and meaning of linguistic markers can vary based on the topic of discussion.
[ "Sharma, Neha", "Sirts, Kairit" ]
Context is Important in Depressive Language: A Study of the Interaction Between the Sentiments and Linguistic Markers in Reddit Discussions
wassa-1.28
Poster
2405.18061
[ "https://github.com/nehasharma666/depression" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.28/
[]
[]
[]
0
https://aclanthology.org/2024.wassa-1.29.bib
@inproceedings{kurniawan-etal-2024-aggregate, title = "To Aggregate or Not to Aggregate. That is the Question: A Case Study on Annotation Subjectivity in Span Prediction", author = "Kurniawan, Kemal and Mistica, Meladel and Baldwin, Timothy and Lau, Jey Han", editor = "De Clercq, Orph{\'e}e and Barriere, Valentin and Barnes, Jeremy and Klinger, Roman and Sedoc, Jo{\~a}o and Tafreshi, Shabnam", booktitle = "Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wassa-1.29", pages = "362--368", abstract = "This paper explores the task of automatic prediction of text spans in a legal problem description that support a legal area label. We use a corpus of problem descriptions written by laypeople in English that is annotated by practising lawyers. Inherent subjectivity exists in our task because legal area categorisation is a complex task, and lawyers often have different views on a problem. Experiments show that training on majority-voted spans outperforms training on disaggregated ones.", }
This paper explores the task of automatic prediction of text spans in a legal problem description that support a legal area label. We use a corpus of problem descriptions written by laypeople in English that is annotated by practising lawyers. Inherent subjectivity exists in our task because legal area categorisation is a complex task, and lawyers often have different views on a problem. Experiments show that training on majority-voted spans outperforms training on disaggregated ones.
[ "Kurniawan, Kemal", "Mistica, Meladel", "Baldwin, Timothy", "Lau, Jey Han" ]
To Aggregate or Not to Aggregate. That is the Question: A Case Study on Annotation Subjectivity in Span Prediction
wassa-1.29
Poster
2408.02257
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.wassa-1.29/
[]
[]
[]
0