Datasets:

Column                       Type        Values
bibtex_url                   string      lengths 41 to 52
proceedings                  string      lengths 38 to 49
bibtext                      string      lengths 788 to 3.49k
abstract                     string      lengths 0 to 2.12k
authors                      sequence    lengths 1 to 58
title                        string      lengths 16 to 181
id                           string      lengths 7 to 18
type                         string      2 classes
arxiv_id                     string      lengths 0 to 10
GitHub                       sequence    lengths 1 to 1
paper_page                   string      170 classes
n_linked_authors             int64       -1 to 9
upvotes                      int64       -1 to 56
num_comments                 int64       -1 to 9
n_authors                    int64       -1 to 57
paper_page_exists_pre_conf   int64       0 to 1
Models                       sequence    lengths 0 to 99
Datasets                     sequence    lengths 0 to 5
Spaces                       sequence    lengths 0 to 57
https://aclanthology.org/2024.semeval-1.230.bib
https://aclanthology.org/2024.semeval-1.230/
@inproceedings{datta-etal-2024-weighted, title = "Weighted Layer Averaging {R}o{BERT}a for Black-Box Machine-Generated Text Detection", author = "Datta, Ayan and Chandramania, Aryan and Mamidi, Radhika", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.230", doi = "10.18653/v1/2024.semeval-1.230", pages = "1623--1626", abstract = "We propose a novel approach for machine-generated text detection using a RoBERTa model with weighted layer averaging and AdaLoRA for parameter-efficient fine-tuning. Our method incorporates information from all model layers, capturing diverse linguistic cues beyond those accessible from the final layer alone. To mitigate potential overfitting and improve generalizability, we leverage AdaLoRA, which injects trainable low-rank matrices into each Transformer layer, significantly reducing the number of trainable parameters. Furthermore, we employ data mixing to ensure our model encounters text from various domains and generators during training, enhancing its ability to generalize to unseen data. This work highlights the potential of combining layer-wise information with parameter-efficient fine-tuning and data mixing for effective machine-generated text detection.", }
We propose a novel approach for machine-generated text detection using a RoBERTa model with weighted layer averaging and AdaLoRA for parameter-efficient fine-tuning. Our method incorporates information from all model layers, capturing diverse linguistic cues beyond those accessible from the final layer alone. To mitigate potential overfitting and improve generalizability, we leverage AdaLoRA, which injects trainable low-rank matrices into each Transformer layer, significantly reducing the number of trainable parameters. Furthermore, we employ data mixing to ensure our model encounters text from various domains and generators during training, enhancing its ability to generalize to unseen data. This work highlights the potential of combining layer-wise information with parameter-efficient fine-tuning and data mixing for effective machine-generated text detection.
[ "Datta, Ayan", "Ch", "ramania, Aryan", "Mamidi, Radhika" ]
Weighted Layer Averaging RoBERTa for Black-Box Machine-Generated Text Detection
semeval-1.230
Poster
[ "https://github.com/advin4603/ai-detection-with-wla" ]
-1
-1
-1
-1
0
[]
[]
[]
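The abstract above centers on weighted layer averaging over RoBERTa's hidden states. Below is a minimal sketch of that pooling idea, assuming a `roberta-base` backbone and a linear head; the AdaLoRA and data-mixing components are omitted, and the exact configuration is not stated in the abstract:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class WeightedLayerAveragingClassifier(nn.Module):
    """RoBERTa classifier that pools a learned softmax-weighted
    average of the [CLS] representation from every encoder layer."""

    def __init__(self, model_name="roberta-base", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        num_layers = self.encoder.config.num_hidden_layers + 1  # embeddings + 12 layers
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states: tuple of num_layers tensors, each (batch, seq_len, hidden)
        cls_per_layer = torch.stack([h[:, 0] for h in outputs.hidden_states])  # (L, B, H)
        weights = torch.softmax(self.layer_weights, dim=0)  # learned mixing weights
        pooled = (weights[:, None, None] * cls_per_layer).sum(dim=0)  # (B, H)
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = WeightedLayerAveragingClassifier()
batch = tokenizer(["Sample text to score."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

The softmax over `layer_weights` lets the model learn how much each layer contributes, rather than relying on the final layer alone.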
https://aclanthology.org/2024.semeval-1.231.bib
https://aclanthology.org/2024.semeval-1.231/
@inproceedings{bafna-etal-2024-mast, title = "Mast Kalandar at {S}em{E}val-2024 Task 8: On the Trail of Textual Origins: {R}o{BERT}a-{B}i{LSTM} Approach to Detect {AI}-Generated Text", author = "Bafna, Jainit and Mittal, Hardik and Sethia, Suyash and Shrivastava, Manish and Mamidi, Radhika", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.231", doi = "10.18653/v1/2024.semeval-1.231", pages = "1627--1633", abstract = "Large Language Models (LLMs) have showcased impressive abilities in generating fluent responses to diverse user queries. However, concerns regarding the potential misuse ofsuch texts in journalism, educational, and academic contexts have surfaced. SemEval 2024introduces the task of Multigenerator, Multidomain, and Multilingual Black-Box MachineGenerated Text Detection, aiming to developautomated systems for identifying machinegenerated text and detecting potential misuse. In this paper, we i) propose a RoBERTaBiLSTM based classifier designed to classifytext into two categories: AI-generated or human ii) conduct a comparative study of ourmodel with baseline approaches to evaluate itseffectiveness. This paper contributes to the advancement of automatic text detection systemsin addressing the challenges posed by machinegenerated text misuse. Our architecture ranked46th on the official leaderboard with an accuracy of 80.83 among 125.", }
Large Language Models (LLMs) have showcased impressive abilities in generating fluent responses to diverse user queries. However, concerns regarding the potential misuse of such texts in journalism, educational, and academic contexts have surfaced. SemEval 2024 introduces the task of Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection, aiming to develop automated systems for identifying machine-generated text and detecting potential misuse. In this paper, we i) propose a RoBERTa-BiLSTM based classifier designed to classify text into two categories: AI-generated or human, and ii) conduct a comparative study of our model with baseline approaches to evaluate its effectiveness. This paper contributes to the advancement of automatic text detection systems in addressing the challenges posed by machine-generated text misuse. Our architecture ranked 46th out of 125 on the official leaderboard with an accuracy of 80.83.
[ "Bafna, Jainit", "Mittal, Hardik", "Sethia, Suyash", "Shrivastava, Manish", "Mamidi, Radhika" ]
Mast Kalandar at SemEval-2024 Task 8: On the Trail of Textual Origins: RoBERTa-BiLSTM Approach to Detect AI-Generated Text
semeval-1.231
Poster
2407.02978
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
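A hedged sketch of the RoBERTa-BiLSTM classifier this abstract describes; the hidden size, pooling position, and backbone are assumptions rather than the authors' reported settings:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RobertaBiLSTMClassifier(nn.Module):
    """RoBERTa token embeddings fed through a BiLSTM, then a binary head."""

    def __init__(self, model_name="roberta-base", lstm_hidden=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_hidden, 2)  # AI-generated vs. human

    def forward(self, input_ids, attention_mask):
        token_states = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(token_states)
        return self.head(lstm_out[:, 0])  # classify from the first position

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaBiLSTMClassifier()
enc = tokenizer("Is this text machine-generated?", return_tensors="pt")
print(model(enc["input_ids"], enc["attention_mask"]).shape)  # torch.Size([1, 2])
```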
https://aclanthology.org/2024.semeval-1.232.bib
https://aclanthology.org/2024.semeval-1.232/
@inproceedings{piao-etal-2024-hw, title = "{HW}-{TSC} 2024 Submission for the {S}em{E}val-2024 Task 1: Semantic Textual Relatedness ({STR})", author = "Piao, Mengyao and Chang, Su and Li, Yuang and Qiao, Xiaosong and Zhao, Xiaofeng and Li, Yinglu and Zhang, Min and Yang, Hao", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.232", doi = "10.18653/v1/2024.semeval-1.232", pages = "1634--1638", abstract = "The degree of semantic relatedness of two units of language has long been considered fundamental to understanding meaning. In this paper, we present the system of Huawei Translation Services Center (HW-TSC) for Task 1 of SemEval 2024, which aims to automatically measure the semantic relatedness of sentence pairs in African and Asian languages. The task dataset for this task covers about 14 different languages, These languages originate from five distinct language families and are predominantly spoken in Africa and Asia. For this shared task, we describe our proposed solutions, including ideas and the implementation steps of the task, as well as the outcomes of each experiment on the development dataset. To enhance the performance, we leverage these experimental outcomes and construct an ensemble one. Our results demonstrate that our system achieves impressive performance on test datasets in unsupervised track B and ranked first place for the Punjabi language pair.", }
The degree of semantic relatedness of two units of language has long been considered fundamental to understanding meaning. In this paper, we present the system of Huawei Translation Services Center (HW-TSC) for Task 1 of SemEval 2024, which aims to automatically measure the semantic relatedness of sentence pairs in African and Asian languages. The task dataset covers about 14 languages, which originate from five distinct language families and are predominantly spoken in Africa and Asia. For this shared task, we describe our proposed solutions, including ideas and implementation steps, as well as the outcomes of each experiment on the development dataset. To enhance performance, we leverage these experimental outcomes to construct an ensemble model. Our results demonstrate that our system achieves impressive performance on the test datasets of unsupervised Track B and ranked first for the Punjabi language pair.
[ "Piao, Mengyao", "Chang, Su", "Li, Yuang", "Qiao, Xiaosong", "Zhao, Xiaofeng", "Li, Yinglu", "Zhang, Min", "Yang, Hao" ]
HW-TSC 2024 Submission for the SemEval-2024 Task 1: Semantic Textual Relatedness (STR)
semeval-1.232
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
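The abstract does not detail the system's components, so here is a generic unsupervised sketch for semantic textual relatedness: embed each sentence with multilingual sentence encoders and average the cosine similarities across an ensemble. The model names are illustrative, not HW-TSC's actual choices:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Illustrative multilingual encoders; not necessarily the ones HW-TSC used.
model_names = [
    "paraphrase-multilingual-MiniLM-L12-v2",
    "distiluse-base-multilingual-cased-v2",
]
models = [SentenceTransformer(name) for name in model_names]

def relatedness(sent_a: str, sent_b: str) -> float:
    """Ensemble score: mean cosine similarity across encoders."""
    scores = []
    for model in models:
        emb = model.encode([sent_a, sent_b], convert_to_tensor=True)
        scores.append(cos_sim(emb[0], emb[1]).item())
    return sum(scores) / len(scores)

print(relatedness("A man is cooking dinner.", "Someone prepares a meal."))
```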
https://aclanthology.org/2024.semeval-1.233.bib
https://aclanthology.org/2024.semeval-1.233/
@inproceedings{wang-etal-2024-knowcomp, title = "{K}now{C}omp at {S}em{E}val-2024 Task 9: Conceptualization-Augmented Prompting with Large Language Models for Lateral Reasoning", author = "Wang, Weiqi and Xu, Baixuan and Shi, Haochen and Bai, Jiaxin and Hu, Qi and Song, Yangqiu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.233", doi = "10.18653/v1/2024.semeval-1.233", pages = "1639--1645", abstract = "Lateral thinking is essential in breaking away from conventional thought patterns and finding innovative solutions to problems. Despite this, language models often struggle with reasoning tasks that require lateral thinking. In this paper, we present our system for SemEval-2024 Task 9{'}s BrainTeaser challenge, which requires language models to answer brain teaser questions that typically involve lateral reasoning scenarios. Our framework is based on large language models and incorporates a zero-shot prompting method that integrates conceptualizations of automatically detected instances in the question. We also transform the task of question answering into a declarative format to enhance the discriminatory ability of large language models. Our zero-shot evaluation results with ChatGPT indicate that our approach outperforms baselines, including zero-shot and few-shot prompting and chain-of-thought reasoning. Additionally, our system ranks ninth on the official leaderboard, demonstrating its strong performance.", }
Lateral thinking is essential in breaking away from conventional thought patterns and finding innovative solutions to problems. Despite this, language models often struggle with reasoning tasks that require lateral thinking. In this paper, we present our system for SemEval-2024 Task 9's BrainTeaser challenge, which requires language models to answer brain teaser questions that typically involve lateral reasoning scenarios. Our framework is based on large language models and incorporates a zero-shot prompting method that integrates conceptualizations of automatically detected instances in the question. We also transform the task of question answering into a declarative format to enhance the discriminatory ability of large language models. Our zero-shot evaluation results with ChatGPT indicate that our approach outperforms baselines, including zero-shot and few-shot prompting and chain-of-thought reasoning. Additionally, our system ranks ninth on the official leaderboard, demonstrating its strong performance.
[ "Wang, Weiqi", "Xu, Baixuan", "Shi, Haochen", "Bai, Jiaxin", "Hu, Qi", "Song, Yangqiu" ]
KnowComp at SemEval-2024 Task 9: Conceptualization-Augmented Prompting with Large Language Models for Lateral Reasoning
semeval-1.233
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
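One concrete piece of the abstract above is the declarative reformulation of question answering. A sketch of that idea, using a small causal LM's average log-likelihood as a stand-in scorer; the paper instead prompts ChatGPT with conceptualizations, and the riddle below is hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM works for this sketch; the paper used ChatGPT prompting instead.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def statement_log_likelihood(statement: str) -> float:
    """Average token log-likelihood of a declarative statement."""
    ids = tok(statement, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

question = "What can you hold in your right hand but never in your left hand?"
options = ["Your left elbow.", "A glass of water.", "Your right thumb."]
# Declarative rewriting: fold each option into the question as an assertion.
statements = [f"The thing you can hold in your right hand but never in your left hand is: {o}"
              for o in options]
best = max(statements, key=statement_log_likelihood)
print(best)
```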
https://aclanthology.org/2024.semeval-1.234.bib
https://aclanthology.org/2024.semeval-1.234/
@inproceedings{li-etal-2024-hw, title = "{HW}-{TSC} at {S}em{E}val-2024 Task 9: Exploring Prompt Engineering Strategies for Brain Teaser Puzzles Through {LLM}s", author = "Li, Yinglu and Yanqing, Zhao and Zhang, Min and Deng, Yadong and Geng, Aiju and Liu, Xiaoqin and Ren, Mengxin and Li, Yuang and Chang, Su and Zhao, Xiaofeng", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.234", doi = "10.18653/v1/2024.semeval-1.234", pages = "1646--1651", abstract = "Large Language Models (LLMs) have demonstrated impressive performance on many Natural Language Processing (NLP) tasks. However, their ability to solve more creative, lateral thinking puzzles remains relatively unexplored. In this work, we develop methods to enhance the lateral thinking and puzzle-solving capabilities of LLMs. We curate a dataset of word-type and sentence-type brain teasers requiring creative problem-solving abilities beyond commonsense reasoning. We first evaluate the zero-shot performance of models like GPT-3.5 and GPT-4 on this dataset. To improve their puzzle-solving skills, we employ prompting techniques like providing reasoning clues and chaining multiple examples to demonstrate the desired thinking process. We also fine-tune the state-of-the-art Mixtral 7x8b LLM on ourdataset. Our methods enable the models to achieve strong results, securing 2nd and 3rd places in the brain teaser task. Our work highlights the potential of LLMs in acquiring complex reasoning abilities with the appropriate training. The efficacy of our approaches opens up new research avenues into advancing lateral thinking and creative problem-solving with AI systems.", }
Large Language Models (LLMs) have demonstrated impressive performance on many Natural Language Processing (NLP) tasks. However, their ability to solve more creative, lateral thinking puzzles remains relatively unexplored. In this work, we develop methods to enhance the lateral thinking and puzzle-solving capabilities of LLMs. We curate a dataset of word-type and sentence-type brain teasers requiring creative problem-solving abilities beyond commonsense reasoning. We first evaluate the zero-shot performance of models like GPT-3.5 and GPT-4 on this dataset. To improve their puzzle-solving skills, we employ prompting techniques like providing reasoning clues and chaining multiple examples to demonstrate the desired thinking process. We also fine-tune the state-of-the-art Mixtral 8x7B LLM on our dataset. Our methods enable the models to achieve strong results, securing 2nd and 3rd places in the brain teaser task. Our work highlights the potential of LLMs in acquiring complex reasoning abilities with the appropriate training. The efficacy of our approaches opens up new research avenues into advancing lateral thinking and creative problem-solving with AI systems.
[ "Li, Yinglu", "Yanqing, Zhao", "Zhang, Min", "Deng, Yadong", "Geng, Aiju", "Liu, Xiaoqin", "Ren, Mengxin", "Li, Yuang", "Chang, Su", "Zhao, Xiaofeng" ]
HW-TSC at SemEval-2024 Task 9: Exploring Prompt Engineering Strategies for Brain Teaser Puzzles Through LLMs
semeval-1.234
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
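As a rough illustration of the prompting techniques mentioned above (reasoning clues plus chained examples), here is a plain few-shot prompt builder; the demonstrations and clues are invented for illustration, not taken from the paper:

```python
# Hypothetical demonstrations; the actual clues and exemplars are not in the abstract.
DEMONSTRATIONS = [
    {
        "puzzle": "What has keys but can't open locks?",
        "clue": "Think of 'keys' in a non-security sense.",
        "answer": "A piano.",
    },
    {
        "puzzle": "What word becomes shorter when you add two letters to it?",
        "clue": "Treat 'shorter' as a literal string, not a length.",
        "answer": "The word 'short' (add 'er').",
    },
]

def build_few_shot_prompt(puzzle: str) -> str:
    """Chain demonstrations, each paired with a reasoning clue, before the query."""
    parts = ["Solve each brain teaser. Use the clue to think laterally.\n"]
    for demo in DEMONSTRATIONS:
        parts.append(f"Puzzle: {demo['puzzle']}\nClue: {demo['clue']}\nAnswer: {demo['answer']}\n")
    parts.append(f"Puzzle: {puzzle}\nClue:")
    return "\n".join(parts)

print(build_few_shot_prompt("What gets wetter the more it dries?"))
```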
https://aclanthology.org/2024.semeval-1.235.bib
https://aclanthology.org/2024.semeval-1.235/
@inproceedings{krumov-etal-2024-su, title = "{SU}-{FMI} at {S}em{E}val-2024 Task 5: From {BERT} Fine-Tuning to {LLM} Prompt Engineering - Approaches in Legal Argument Reasoning", author = "Krumov, Kristiyan and Boytcheva, Svetla and Koytchev, Ivan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.235", doi = "10.18653/v1/2024.semeval-1.235", pages = "1652--1658", abstract = "This paper presents our approach and findings for SemEval-2024 Task 5, focusing on legal argument reasoning. We explored the effectiveness of fine-tuning pre-trained BERT models and the innovative application of large language models (LLMs) through prompt engineering in the context of legal texts. Our methodology involved a combination of techniques to address the challenges posed by legal language processing, including handling long texts and optimizing natural language understanding (NLU) capabilities for the legal domain. Our contributions were validated by achieving a third-place ranking on the SemEval 2024 Task 5 Leaderboard. The results underscore the potential of LLMs and prompt engineering in enhancing legal reasoning tasks, offering insights into the evolving landscape of NLU technologies within the legal field.", }
This paper presents our approach and findings for SemEval-2024 Task 5, focusing on legal argument reasoning. We explored the effectiveness of fine-tuning pre-trained BERT models and the innovative application of large language models (LLMs) through prompt engineering in the context of legal texts. Our methodology involved a combination of techniques to address the challenges posed by legal language processing, including handling long texts and optimizing natural language understanding (NLU) capabilities for the legal domain. Our contributions were validated by achieving a third-place ranking on the SemEval 2024 Task 5 Leaderboard. The results underscore the potential of LLMs and prompt engineering in enhancing legal reasoning tasks, offering insights into the evolving landscape of NLU technologies within the legal field.
[ "Krumov, Kristiyan", "Boytcheva, Svetla", "Koytchev, Ivan" ]
SU-FMI at SemEval-2024 Task 5: From BERT Fine-Tuning to LLM Prompt Engineering - Approaches in Legal Argument Reasoning
semeval-1.235
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
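Handling long legal texts with a fixed-length encoder like BERT is one challenge the abstract names. A common remedy, sketched here under the assumption of a sliding-window split (the paper does not specify its exact strategy):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def sliding_windows(text: str, max_len: int = 512, stride: int = 128):
    """Split a long legal text into overlapping windows that fit BERT's limit."""
    ids = tokenizer(text, add_special_tokens=False).input_ids
    step = max_len - 2 - stride  # reserve room for [CLS] and [SEP]
    windows = []
    for start in range(0, max(len(ids) - stride, 1), step):
        chunk = ids[start:start + max_len - 2]
        windows.append(tokenizer.decode(chunk))
    return windows

chunks = sliding_windows("Very long court decision text ... " * 500)
print(len(chunks), "overlapping windows")
# Per-window logits can then be pooled (e.g., max or mean) into one document score.
```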
https://aclanthology.org/2024.semeval-1.236.bib
https://aclanthology.org/2024.semeval-1.236/
@inproceedings{zhunis-chuang-2024-challenges, title = "Challenges at {S}em{E}val 2024 Task 7: Contrastive Learning Approach on Numeral-Aware Language Generation", author = "Zhunis, Ali and Chuang, Hao-yun", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.236", doi = "10.18653/v1/2024.semeval-1.236", pages = "1659--1662", abstract = "Although Large Language Model (LLM) excels on generating headline on ROUGE evaluation, it still fails to reason number and generate news article headline with accurate number. Attending SemEval-2024 Task 7 subtask 3, our team aims on using contrastive loss to increase the understanding of the number from their different expression, and knows to identify between different number and its respective expression. This system description paper uses T5 and BART as the baseline model, comparing its result with and without the constrative loss. The result shows that BART with contrastive loss have excelled all the models, and its performance on the number accuracy has the highest performance among all.", }
Although Large Language Models (LLMs) excel at headline generation as measured by ROUGE, they still fail to reason about numbers and to generate news article headlines with accurate numerals. Participating in SemEval-2024 Task 7 subtask 3, our team uses a contrastive loss to improve the model's understanding of numbers across their different surface expressions and its ability to distinguish between different numbers and their respective expressions. This system description paper uses T5 and BART as baseline models, comparing their results with and without the contrastive loss. The results show that BART with contrastive loss outperforms all other models and achieves the highest number accuracy among them.
[ "Zhunis, Ali", "Chuang, Hao-yun" ]
Challenges at SemEval 2024 Task 7: Contrastive Learning Approach on Numeral-Aware Language Generation
semeval-1.236
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
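The abstract above reports a contrastive loss over different expressions of the same number. A minimal InfoNCE-style sketch of such a loss; the paper's exact formulation, temperature, and negative sampling are not given, so these are assumptions:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style loss: pull embeddings of the same number's different
    surface forms (e.g., '7' and 'seven') together, push other numbers away."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True) / temperature   # (B, 1)
    neg_sim = anchor @ negatives.T / temperature                        # (B, N)
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels)

B, N, H = 4, 16, 768
loss = contrastive_loss(torch.randn(B, H), torch.randn(B, H), torch.randn(N, H))
print(loss.item())
```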
https://aclanthology.org/2024.semeval-1.237.bib
https://aclanthology.org/2024.semeval-1.237/
@inproceedings{rosener-etal-2024-team, title = "Team Bolaca at {S}em{E}val-2024 Task 6: Sentence-transformers are all you need", author = {R{\"o}sener, B{\'e}la and Wei, Hong-bo and Vandici, Ilinca}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.237", doi = "10.18653/v1/2024.semeval-1.237", pages = "1663--1666", abstract = "Our team tackled the SemEval-2024 Task 6, focusing on identifying fluent over-generation hallucinations in NLP outputs. We proposed a pragmatic solution using a logistic regression classifier and a feed-forward ANN, harnessing SBERT embeddings for feature extraction.", }
Our team tackled the SemEval-2024 Task 6, focusing on identifying fluent over-generation hallucinations in NLP outputs. We proposed a pragmatic solution using a logistic regression classifier and a feed-forward ANN, harnessing SBERT embeddings for feature extraction.
[ "R{\\\"o}sener, B{\\'e}la", "Wei, Hong-bo", "V", "ici, Ilinca" ]
Team Bolaca at SemEval-2024 Task 6: Sentence-transformers are all you need
semeval-1.237
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
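The pipeline here is explicit: SBERT embeddings feeding a logistic regression classifier. A compact sketch with toy labels; the encoder checkpoint and the example data are assumptions:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # an SBERT model; choice is illustrative

# Toy labeled data: 1 = hallucinated output, 0 = faithful output.
texts = ["The Eiffel Tower is in Berlin.", "Paris is the capital of France.",
         "Cats are a kind of reptile.", "Water boils at 100 degrees Celsius."]
labels = [1, 0, 1, 0]

features = encoder.encode(texts)          # (n_samples, embedding_dim)
clf = LogisticRegression(max_iter=1000).fit(features, labels)

test = encoder.encode(["The moon is made of cheese."])
print(clf.predict_proba(test))            # probability of each class
```

In the actual SHROOM setting the inputs are source/hypothesis pairs rather than single sentences, so the features would be built from both texts.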
https://aclanthology.org/2024.semeval-1.238.bib
https://aclanthology.org/2024.semeval-1.238/
@inproceedings{shirnin-etal-2024-aipom, title = "{AI}pom at {S}em{E}val-2024 Task 8: Detecting {AI}-produced Outputs in M4", author = "Shirnin, Alexander and Andreev, Nikita and Mikhailov, Vladislav and Artemova, Ekaterina", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.238", doi = "10.18653/v1/2024.semeval-1.238", pages = "1667--1672", abstract = "This paper describes AIpom, a system designed to detect a boundary between human-written and machine-generated text (SemEval-2024 Task 8, Subtask C: Human-Machine Mixed Text Detection). We propose a two-stage pipeline combining predictions from an instruction-tuned decoder-only model and encoder-only sequence taggers. AIpom is ranked second on the leaderboard while achieving a Mean Absolute Error of 15.94. Ablation studies confirm the benefits of pipelining encoder and decoder models, particularly in terms of improved performance.", }
This paper describes AIpom, a system designed to detect a boundary between human-written and machine-generated text (SemEval-2024 Task 8, Subtask C: Human-Machine Mixed Text Detection). We propose a two-stage pipeline combining predictions from an instruction-tuned decoder-only model and encoder-only sequence taggers. AIpom is ranked second on the leaderboard while achieving a Mean Absolute Error of 15.94. Ablation studies confirm the benefits of pipelining encoder and decoder models, particularly in terms of improved performance.
[ "Shirnin, Alex", "er", "Andreev, Nikita", "Mikhailov, Vladislav", "Artemova, Ekaterina" ]
AIpom at SemEval-2024 Task 8: Detecting AI-produced Outputs in M4
semeval-1.238
Poster
2403.19354
[ "https://github.com/25icecreamflavors/aipom" ]
-1
-1
-1
-1
0
[]
[]
[]
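A sketch of the encoder-tagger half of the two-stage pipeline above: a token classifier marks machine-generated tokens, and the boundary is read off as the first such token's character offset. The backbone is a stand-in and untrained here; the real system fine-tunes on Subtask C data and also ensembles a decoder model:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# An untrained head is used here purely to illustrate the decoding step.
name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
tagger = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)

def predict_boundary(text: str) -> int:
    """Return the character offset where machine-generated text starts (label 1)."""
    enc = tok(text, return_offsets_mapping=True, return_tensors="pt", truncation=True)
    offsets = enc.pop("offset_mapping")[0]
    with torch.no_grad():
        labels = tagger(**enc).logits.argmax(-1)[0]
    for label, (start, end) in zip(labels, offsets):
        if label == 1 and start != end:  # skip special tokens with empty offsets
            return int(start)
    return len(text)  # fully human-written

print(predict_boundary("A human wrote this part. Then the model continued it."))
```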
https://aclanthology.org/2024.semeval-1.239.bib
https://aclanthology.org/2024.semeval-1.239/
@inproceedings{marks-etal-2024-clac, title = "{CL}a{C} at {S}em{E}val-2024 Task 2: Faithful Clinical Trial Inference", author = "Marks, Jennifer and Davari, Mohammadreza and Kosseim, Leila", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.239", doi = "10.18653/v1/2024.semeval-1.239", pages = "1673--1677", abstract = "This paper presents the methodology used for our participation in SemEval 2024 Task 2 (Jullien et al., 2024) {--} Safe Biomedical Natural Language Inference for Clinical Trials. The task involved Natural Language Inference (NLI) on clinical trial data, where statements were provided regarding information within Clinical Trial Reports (CTRs). These statements could pertain to a single CTR or compare two CTRs, requiring the identification of the inference relation (entailment vs contradiction) between CTR-statement pairs. Evaluation was based on F1, Faithfulness, and Consistency metrics, with priority given to the latter two by the organizers. Our approach aims to maximize Faithfulness and Consistency, guided by intuitive definitions provided by the organizers, without detailed metric calculations. Experimentally, our approach yielded models achieving maximal Faithfulness (top rank) and average Consistency (mid rank) at the expense of F1 (low rank). Future work will focus on refining our approach to achieve a balance among all three metrics.", }
This paper presents the methodology used for our participation in SemEval 2024 Task 2 (Jullien et al., 2024): Safe Biomedical Natural Language Inference for Clinical Trials. The task involved Natural Language Inference (NLI) on clinical trial data, where statements were provided regarding information within Clinical Trial Reports (CTRs). These statements could pertain to a single CTR or compare two CTRs, requiring the identification of the inference relation (entailment vs. contradiction) between CTR-statement pairs. Evaluation was based on F1, Faithfulness, and Consistency metrics, with priority given to the latter two by the organizers. Our approach aims to maximize Faithfulness and Consistency, guided by intuitive definitions provided by the organizers, without detailed metric calculations. Experimentally, our approach yielded models achieving maximal Faithfulness (top rank) and average Consistency (mid rank) at the expense of F1 (low rank). Future work will focus on refining our approach to achieve a balance among all three metrics.
[ "Marks, Jennifer", "Davari, Mohammadreza", "Kosseim, Leila" ]
CLaC at SemEval-2024 Task 2: Faithful Clinical Trial Inference
semeval-1.239
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
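The task structure above (CTR premise, statement hypothesis, entailment vs. contradiction) maps directly onto NLI inference. A sketch with an off-the-shelf MNLI model as a stand-in for whatever checkpoint CLaC used:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# A general-purpose NLI model as a stand-in; CLaC's exact checkpoint isn't stated.
name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name)

def ctr_inference(ctr_text: str, statement: str) -> str:
    """Map an NLI prediction onto the task's two labels."""
    enc = tok(ctr_text, statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli(**enc).logits, dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return "Entailment" if probs[2] > probs[0] else "Contradiction"

print(ctr_inference("All 50 enrolled patients received 10 mg daily.",
                    "Patients in the trial were given a daily dose."))
```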
https://aclanthology.org/2024.semeval-1.240.bib
https://aclanthology.org/2024.semeval-1.240/
@inproceedings{borra-etal-2024-malto, title = "{MALTO} at {S}em{E}val-2024 Task 6: Leveraging Synthetic Data for {LLM} Hallucination Detection", author = "Borra, Federico and Savelli, Claudio and Rosso, Giacomo and Koudounas, Alkis and Giobergia, Flavio", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.240", doi = "10.18653/v1/2024.semeval-1.240", pages = "1678--1684", abstract = "In Natural Language Generation (NLG), contemporary Large Language Models (LLMs) face several challenges, such as generating fluent yet inaccurate outputs and reliance on fluency-centric metrics. This often leads to neural networks exhibiting {``}hallucinations.{''} The SHROOM challenge focuses on automatically identifying these hallucinations in the generated text. To tackle these issues, we introduce two key components, a data augmentation pipeline incorporating LLM-assisted pseudo-labelling and sentence rephrasing, and a voting ensemble from three models pre-trained on Natural Language Inference (NLI) tasks and fine-tuned on diverse datasets.", }
In Natural Language Generation (NLG), contemporary Large Language Models (LLMs) face several challenges, such as generating fluent yet inaccurate outputs and reliance on fluency-centric metrics. This often leads to neural networks exhibiting "hallucinations." The SHROOM challenge focuses on automatically identifying these hallucinations in the generated text. To tackle these issues, we introduce two key components: a data augmentation pipeline incorporating LLM-assisted pseudo-labelling and sentence rephrasing, and a voting ensemble of three models pre-trained on Natural Language Inference (NLI) tasks and fine-tuned on diverse datasets.
[ "Borra, Federico", "Savelli, Claudio", "Rosso, Giacomo", "Koudounas, Alkis", "Giobergia, Flavio" ]
MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection
semeval-1.240
Poster
2403.00964
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
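A sketch of the voting-ensemble component described above: three NLI models each vote on whether the hypothesis is entailed by the source, and a majority flags a hallucination. The three checkpoints are illustrative, and the pseudo-labelling pipeline is omitted:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Three NLI backbones are illustrative; the paper's exact checkpoints may differ.
names = ["roberta-large-mnli",
         "microsoft/deberta-large-mnli",
         "facebook/bart-large-mnli"]
models = [(AutoTokenizer.from_pretrained(n),
           AutoModelForSequenceClassification.from_pretrained(n)) for n in names]

def is_hallucination(source: str, hypothesis: str) -> bool:
    """Majority vote: flag a hallucination when most models see no entailment."""
    votes = 0
    for tok, model in models:
        enc = tok(source, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            pred = model(**enc).logits.argmax(-1).item()
        # Label casing differs across checkpoints, so look up both spellings.
        entail_id = model.config.label2id.get(
            "ENTAILMENT", model.config.label2id.get("entailment", 2))
        votes += int(pred != entail_id)
    return votes >= 2  # majority of three

print(is_hallucination("The cat sat on the mat.", "A dog is swimming."))
```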
https://aclanthology.org/2024.semeval-1.241.bib
https://aclanthology.org/2024.semeval-1.241/
@inproceedings{bhamidipati-etal-2024-maha, title = "Maha Bhaashya at {S}em{E}val-2024 Task 6: Zero-Shot Multi-task Hallucination Detection", author = "Bhamidipati, Patanjali and Malladi, Advaith and Shrivastava, Manish and Mamidi, Radhika", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.241", doi = "10.18653/v1/2024.semeval-1.241", pages = "1685--1689", abstract = "In recent studies, the extensive utilization oflarge language models has underscored the importance of robust evaluation methodologiesfor assessing text generation quality and relevance to specific tasks. This has revealeda prevalent issue known as hallucination, anemergent condition in the model where generated text lacks faithfulness to the source anddeviates from the evaluation criteria. In thisstudy, we formally define hallucination and propose a framework for its quantitative detectionin a zero-shot setting, leveraging our definitionand the assumption that model outputs entailtask and sample specific inputs. In detectinghallucinations, our solution achieves an accuracy of 0.78 in a model-aware setting and 0.61in a model-agnostic setting. Notably, our solution maintains computational efficiency, requiring far less computational resources than other SOTA approaches, aligning with the trendtowards lightweight and compressed models.", }
In recent studies, the extensive utilization of large language models has underscored the importance of robust evaluation methodologies for assessing text generation quality and relevance to specific tasks. This has revealed a prevalent issue known as hallucination, an emergent condition in the model where generated text lacks faithfulness to the source and deviates from the evaluation criteria. In this study, we formally define hallucination and propose a framework for its quantitative detection in a zero-shot setting, leveraging our definition and the assumption that model outputs entail task- and sample-specific inputs. In detecting hallucinations, our solution achieves an accuracy of 0.78 in a model-aware setting and 0.61 in a model-agnostic setting. Notably, our solution maintains computational efficiency, requiring far fewer computational resources than other SOTA approaches, aligning with the trend towards lightweight and compressed models.
[ "Bhamidipati, Patanjali", "Malladi, Advaith", "Shrivastava, Manish", "Mamidi, Radhika" ]
Maha Bhaashya at SemEval-2024 Task 6: Zero-Shot Multi-task Hallucination Detection
semeval-1.241
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
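The stated assumption, that faithful outputs entail their task- and sample-specific inputs, suggests a simple zero-shot check: threshold the entailment probability. A short sketch; the NLI model and the 0.5 threshold are assumptions:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"  # stand-in NLI model for the zero-shot check
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name)

def hallucinated(task_input: str, output: str, threshold: float = 0.5) -> bool:
    """Flag the output when its entailment probability w.r.t. the input
    falls below a threshold (the 0.5 value is an assumption)."""
    enc = tok(output, task_input, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli(**enc).logits, dim=-1)[0]
    return probs[2].item() < threshold  # index 2 = entailment

print(hallucinated("Translate to French: good morning", "bonjour"))
```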
https://aclanthology.org/2024.semeval-1.242.bib
https://aclanthology.org/2024.semeval-1.242/
@inproceedings{ciccarelli-etal-2024-team, title = "Team art-nat-{HHU} at {S}em{E}val-2024 Task 8: Stylistically Informed Fusion Model for {MGT}-Detection", author = "Ciccarelli, Vittorio and Genz, Cornelia and Mastracchio, Nele and Petersen, Wiebke and Stein, Anna and Xia, Hanxin", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.242", doi = "10.18653/v1/2024.semeval-1.242", pages = "1690--1697", abstract = "This paper presents our solution for subtask A of shared task 8 of SemEval 2024 for classifying human- and machine-written texts in English across multiple domains. We propose a fusion model consisting of RoBERTa based pre-classifier and two MLPs that have been trained to correct the pre-classifier using linguistic features. Our model achieved an accuracy of 85{\%}.", }
This paper presents our solution for subtask A of shared task 8 of SemEval 2024 for classifying human- and machine-written texts in English across multiple domains. We propose a fusion model consisting of a RoBERTa-based pre-classifier and two MLPs trained to correct the pre-classifier using linguistic features. Our model achieved an accuracy of 85%.
[ "Ciccarelli, Vittorio", "Genz, Cornelia", "Mastracchio, Nele", "Petersen, Wiebke", "Stein, Anna", "Xia, Hanxin" ]
Team art-nat-HHU at SemEval-2024 Task 8: Stylistically Informed Fusion Model for MGT-Detection
semeval-1.242
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
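A sketch of the fusion idea above: an MLP corrects the RoBERTa pre-classifier's probability using cheap linguistic features. Both the feature set and the layer sizes below are invented for illustration; the paper gives no exact specification:

```python
import torch
import torch.nn as nn

class FusionCorrector(nn.Module):
    """Combine a pre-classifier probability with stylometric features."""

    def __init__(self, n_style_features: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_style_features + 1, 32), nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, pre_prob, style_feats):
        # pre_prob: (B, 1) machine-probability from the RoBERTa pre-classifier
        return self.mlp(torch.cat([pre_prob, style_feats], dim=1))

def style_features(text: str) -> torch.Tensor:
    """A few cheap linguistic cues (illustrative, not the paper's feature set)."""
    words = text.split()
    feats = [len(words), sum(len(w) for w in words) / max(len(words), 1),
             text.count(","), text.count("."), text.count("!"),
             sum(w.isupper() for w in words), text.count("?"),
             len(set(words)) / max(len(words), 1)]
    return torch.tensor([feats], dtype=torch.float)

model = FusionCorrector()
logits = model(torch.tensor([[0.73]]), style_features("An example sentence, written plainly."))
print(logits)
```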
https://aclanthology.org/2024.semeval-1.243.bib
https://aclanthology.org/2024.semeval-1.243/
@inproceedings{ghahramani-kure-etal-2024-aima, title = "{AIMA} at {S}em{E}val-2024 Task 3: Simple Yet Powerful Emotion Cause Pair Analysis", author = "Ghahramani Kure, Alireza and Dehghani, Mahshid and Abootorabi, Mohammad Mahdi and Ghazizadeh, Nona and Dalili, Seyed Arshan and Asgari, Ehsaneddin", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.243", doi = "10.18653/v1/2024.semeval-1.243", pages = "1698--1703", abstract = "The SemEval-2024 Task 3 presents two subtasks focusing on emotion-cause pair extraction within conversational contexts. Subtask 1 revolves around the extraction of textual emotion-cause pairs, where causes are defined and annotated as textual spans within the conversation. Conversely, Subtask 2 extends the analysis to encompass multimodal cues, including language, audio, and vision, acknowledging instances where causes may not be exclusively represented in the textual data. Our proposed model for emotion-cause analysis is meticulously structured into three core segments: (i) embedding extraction, (ii) cause-pair extraction {\&} emotion classification, and (iii) cause extraction using QA after finding pairs. Leveraging state-of-the-art techniques and fine-tuning on task-specific datasets, our model effectively unravels the intricate web of conversational dynamics and extracts subtle cues signifying causality in emotional expressions. Our team, AIMA, demonstrated strong performance in the SemEval-2024 Task 3 competition. We ranked as the 10th in subtask 1 and the 6th in subtask 2 out of 23 teams.", }
The SemEval-2024 Task 3 presents two subtasks focusing on emotion-cause pair extraction within conversational contexts. Subtask 1 revolves around the extraction of textual emotion-cause pairs, where causes are defined and annotated as textual spans within the conversation. Conversely, Subtask 2 extends the analysis to encompass multimodal cues, including language, audio, and vision, acknowledging instances where causes may not be exclusively represented in the textual data. Our proposed model for emotion-cause analysis is meticulously structured into three core segments: (i) embedding extraction, (ii) cause-pair extraction & emotion classification, and (iii) cause extraction using QA after finding pairs. Leveraging state-of-the-art techniques and fine-tuning on task-specific datasets, our model effectively unravels the intricate web of conversational dynamics and extracts subtle cues signifying causality in emotional expressions. Our team, AIMA, demonstrated strong performance in the SemEval-2024 Task 3 competition. We ranked 10th in Subtask 1 and 6th in Subtask 2 out of 23 teams.
[ "Ghahramani Kure, Alireza", "Dehghani, Mahshid", "Abootorabi, Mohammad Mahdi", "Ghazizadeh, Nona", "Dalili, Seyed Arshan", "Asgari, Ehsaneddin" ]
AIMA at SemEval-2024 Task 3: Simple Yet Powerful Emotion Cause Pair Analysis
semeval-1.243
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
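Stage (iii) above, cause extraction via QA, can be sketched with an extractive question-answering model over the conversation context; the checkpoint and the question template are assumptions:

```python
from transformers import pipeline

# A generic extractive QA model stands in for the fine-tuned one in the paper.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

conversation = ("A: I finally got the job offer from the lab! "
                "B: That's wonderful, congratulations!")
emotion_utterance = "That's wonderful, congratulations!"

# Frame cause extraction as extractive QA over the conversation context.
result = qa(question=f"Why does the speaker feel joy when saying: '{emotion_utterance}'?",
            context=conversation)
print(result["answer"], result["score"])  # expected span: the job-offer utterance
```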
https://aclanthology.org/2024.semeval-1.244.bib
https://aclanthology.org/2024.semeval-1.244/
@inproceedings{abootorabi-etal-2024-aima, title = "{AIMA} at {S}em{E}val-2024 Task 10: History-Based Emotion Recognition in {H}indi-{E}nglish Code-Mixed Conversations", author = "Abootorabi, Mohammad Mahdi and Ghazizadeh, Nona and Dalili, Seyed Arshan and Ghahramani Kure, Alireza and Dehghani, Mahshid and Asgari, Ehsaneddin", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.244", doi = "10.18653/v1/2024.semeval-1.244", pages = "1704--1710", abstract = "In this study, we introduce a solution to the SemEval 2024 Task 10 on subtask 1, dedicated to Emotion Recognition in Conversation (ERC) in code-mixed Hindi-English conversations. ERC in code-mixed conversations presents unique challenges, as existing models are typically trained on monolingual datasets and may not perform well on code-mixed data. To address this, we propose a series of models that incorporate both the previous and future context of the current utterance, as well as the sequential information of the conversation. To facilitate the processing of code-mixed data, we developed a Hinglish-to-English translation pipeline to translate the code-mixed conversations into English. We designed four different base models, each utilizing powerful pre-trained encoders to extract features from the input but with varying architectures. By ensembling all of these models, we developed a final model that outperforms all other baselines.", }
In this study, we introduce a solution to the SemEval 2024 Task 10 on subtask 1, dedicated to Emotion Recognition in Conversation (ERC) in code-mixed Hindi-English conversations. ERC in code-mixed conversations presents unique challenges, as existing models are typically trained on monolingual datasets and may not perform well on code-mixed data. To address this, we propose a series of models that incorporate both the previous and future context of the current utterance, as well as the sequential information of the conversation. To facilitate the processing of code-mixed data, we developed a Hinglish-to-English translation pipeline to translate the code-mixed conversations into English. We designed four different base models, each utilizing powerful pre-trained encoders to extract features from the input but with varying architectures. By ensembling all of these models, we developed a final model that outperforms all other baselines.
[ "Abootorabi, Mohammad Mahdi", "Ghazizadeh, Nona", "Dalili, Seyed Arshan", "Ghahramani Kure, Alireza", "Dehghani, Mahshid", "Asgari, Ehsaneddin" ]
AIMA at SemEval-2024 Task 10: History-Based Emotion Recognition in Hindi-English Code-Mixed Conversations
semeval-1.244
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
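A minimal sketch of assembling the past and future context around a target utterance, as the abstract describes; the window sizes and separator format are assumptions, and the Hinglish-to-English translation step is presumed to have already run:

```python
def build_context_window(utterances, index, past=3, future=2):
    """Concatenate previous and future turns around the target utterance.
    Window sizes are assumptions, not the paper's reported settings."""
    lo = max(0, index - past)
    hi = min(len(utterances), index + future + 1)
    context = " </s> ".join(utterances[lo:hi])
    return f"target: {utterances[index]} </s> context: {context}"

# After translation, each turn would be English; this toy dialogue is mixed.
dialogue = ["What happened?", "I got selected!", "Really? Congratulations!", "Thanks, friend."]
print(build_context_window(dialogue, index=2))
```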
https://aclanthology.org/2024.semeval-1.245.bib
https://aclanthology.org/2024.semeval-1.245/
@inproceedings{chen-etal-2024-team, title = "Team {MGTD}4{ADL} at {S}em{E}val-2024 Task 8: Leveraging (Sentence) Transformer Models with Contrastive Learning for Identifying Machine-Generated Text", author = {Chen, Huixin and B{\"u}ssing, Jan and R{\"u}gamer, David and Nie, Ercong}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.245", doi = "10.18653/v1/2024.semeval-1.245", pages = "1711--1718", abstract = "This paper outlines our approach to SemEval-2024 Task 8 (Subtask B), which focuses on discerning machine-generated text from human-written content, while also identifying the text sources, i.e., from which Large Language Model (LLM) the target text is generated. Our detection system is built upon Transformer-based techniques, leveraging various pre-trained language models (PLMs), including sentence transformer models. Additionally, we incorporate Contrastive Learning (CL) into the classifier to improve the detecting capabilities and employ Data Augmentation methods. Ultimately, our system achieves a peak accuracy of 76.96{\%} on the test set of the competition, configured using a sentence transformer model integrated with CL methodology.", }
This paper outlines our approach to SemEval-2024 Task 8 (Subtask B), which focuses on discerning machine-generated text from human-written content, while also identifying the text sources, i.e., from which Large Language Model (LLM) the target text is generated. Our detection system is built upon Transformer-based techniques, leveraging various pre-trained language models (PLMs), including sentence transformer models. Additionally, we incorporate Contrastive Learning (CL) into the classifier to improve detection capabilities and employ data augmentation methods. Ultimately, our system achieves a peak accuracy of 76.96% on the competition test set, configured using a sentence transformer model integrated with the CL methodology.
[ "Chen, Huixin", "B{\\\"u}ssing, Jan", "R{\\\"u}gamer, David", "Nie, Ercong" ]
Team MGTD4ADL at SemEval-2024 Task 8: Leveraging (Sentence) Transformer Models with Contrastive Learning for Identifying Machine-Generated Text
semeval-1.245
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
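Contrastive fine-tuning of a sentence transformer, as named in the title above, can be sketched with the sentence-transformers training API; the backbone, pair construction, and loss choice are assumptions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative backbone

# label 1 = same source (e.g., both from the same LLM), 0 = different sources.
train_examples = [
    InputExample(texts=["Generated sample one.", "Generated sample two."], label=1),
    InputExample(texts=["Generated sample one.", "A human-written sample."], label=0),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.ContrastiveLoss(model)  # pulls same-source pairs together

model.fit(train_objectives=[(loader, loss)], epochs=1, show_progress_bar=False)
# The tuned embeddings can then feed a classifier that identifies the source LLM.
```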
https://aclanthology.org/2024.semeval-1.246.bib
https://aclanthology.org/2024.semeval-1.246/
@inproceedings{singh-etal-2024-clustercore, title = "{C}luster{C}ore at {S}em{E}val-2024 Task 7: Few Shot Prompting With Large Language Models for Numeral-Aware Headline Generation", author = "Singh, Monika and Kumar, Sujit and ., Tanveen and Ranbir Singh, Sanasam", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.246", doi = "10.18653/v1/2024.semeval-1.246", pages = "1719--1726", abstract = "The generation of headlines, a crucial aspect of abstractive summarization, aims to compress an entire article into a concise, single line of text despite the effectiveness of modern encoder-decoder models for text generation and summarization tasks. The encoder-decoder model commonly faces challenges in accurately generating numerical content within headlines. This study empirically explored LLMs for numeral-aware headline generation and proposed few-shot prompting with LLMs for numeral-aware headline generations. Experiments conducted on the NumHG dataset and NumEval-2024 test set suggest that fine-tuning LLMs on NumHG dataset enhances the performance of LLMs for numeral aware headline generation. Furthermore, few-shot prompting with LLMs surpassed the performance of fine-tuned LLMs for numeral-aware headline generation.", }
Headline generation, a crucial aspect of abstractive summarization, aims to compress an entire article into a single concise line of text. Despite their effectiveness for text generation and summarization tasks, modern encoder-decoder models commonly face challenges in accurately generating numerical content within headlines. This study empirically explored LLMs for numeral-aware headline generation and proposed few-shot prompting with LLMs for numeral-aware headline generation. Experiments conducted on the NumHG dataset and the NumEval-2024 test set suggest that fine-tuning LLMs on the NumHG dataset enhances the performance of LLMs for numeral-aware headline generation. Furthermore, few-shot prompting with LLMs surpassed the performance of fine-tuned LLMs for numeral-aware headline generation.
[ "Singh, Monika", "Kumar, Sujit", "., Tanveen", "Ranbir Singh, Sanasam" ]
ClusterCore at SemEval-2024 Task 7: Few Shot Prompting With Large Language Models for Numeral-Aware Headline Generation
semeval-1.246
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.247.bib
https://aclanthology.org/2024.semeval-1.247/
@inproceedings{ghahroodi-asgari-2024-hierarchyeverywhere, title = "{H}ierarchy{E}verywhere at {S}em{E}val-2024 Task 4: Detection of Persuasion Techniques in Memes Using Hierarchical Text Classifier", author = "Ghahroodi, Omid and Asgari, Ehsaneddin", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.247", doi = "10.18653/v1/2024.semeval-1.247", pages = "1727--1732", abstract = "Text classification is an important task in natural language processing. Hierarchical Text Classification (HTC) is a subset of text classification task-type. HTC tackles multi-label classification challenges by leveraging tree structures that delineate relationships between classes, thereby striving to enhance classification accuracy through the utilization of inter-class relationships. Memes, as prevalent vehicles of modern communication within social networks, hold immense potential as instruments for propagandistic dissemination due to their profound impact on users. In SemEval-2024 Task 4, the identification of propaganda and its various forms in memes is explored through two sub-tasks: (i) utilizing only the textual component of memes, and (ii) incorporating both textual and pictorial elements. In this study, we address the proposed problem through the lens of HTC, using state-of-the-art hierarchical text classification methodologies to detect propaganda in memes. Our system achieved first place in English Sub-task 2a, underscoring its efficacy in tackling the complexities inherent in propaganda detection within the meme landscape.", }
Text classification is an important task in natural language processing. Hierarchical Text Classification (HTC) is a subtype of text classification that tackles multi-label classification challenges by leveraging tree structures delineating relationships between classes, thereby striving to enhance classification accuracy through the utilization of inter-class relationships. Memes, as prevalent vehicles of modern communication within social networks, hold immense potential as instruments for propagandistic dissemination due to their profound impact on users. In SemEval-2024 Task 4, the identification of propaganda and its various forms in memes is explored through two sub-tasks: (i) utilizing only the textual component of memes, and (ii) incorporating both textual and pictorial elements. In this study, we address the proposed problem through the lens of HTC, using state-of-the-art hierarchical text classification methodologies to detect propaganda in memes. Our system achieved first place in English Sub-task 2a, underscoring its efficacy in tackling the complexities inherent in propaganda detection within the meme landscape.
[ "Ghahroodi, Omid", "Asgari, Ehsaneddin" ]
HierarchyEverywhere at SemEval-2024 Task 4: Detection of Persuasion Techniques in Memes Using Hierarchical Text Classifier
semeval-1.247
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
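A toy sketch of top-down hierarchical inference: walk the label tree from the root, choosing a child at each node until a leaf is reached. The tree and the word-overlap scorer are placeholders for the task's persuasion-technique taxonomy and trained per-node classifiers:

```python
# Toy stand-in hierarchy; the actual taxonomy is larger.
TREE = {
    "root": ["ethos", "logos"],
    "ethos": ["appeal to authority", "bandwagon"],
    "logos": ["causal oversimplification", "false dilemma"],
}

def classify_node(node, text):
    """Placeholder scorer: pick the child label with the largest word overlap.
    A real system uses a trained classifier per node or a hierarchy-aware model."""
    words = set(text.lower().split())
    return max(TREE[node], key=lambda child: len(set(child.split()) & words))

def predict(text):
    path, node = [], "root"
    while node in TREE:  # descend until a leaf label is reached
        node = classify_node(node, text)
        path.append(node)
    return path

print(predict("everyone is on the bandwagon, you should join too"))
```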
https://aclanthology.org/2024.semeval-1.248.bib
https://aclanthology.org/2024.semeval-1.248/
@inproceedings{panagiotopoulos-etal-2024-ails, title = "{AILS}-{NTUA} at {S}em{E}val-2024 Task 9: Cracking Brain Teasers: Transformer Models for Lateral Thinking Puzzles", author = "Panagiotopoulos, Ioannis and Filandrianos, George and Lymperaiou, Maria and Stamou, Giorgos", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.248", doi = "10.18653/v1/2024.semeval-1.248", pages = "1733--1746", abstract = "In this paper, we outline our submission for the SemEval-2024 Task 9 competition: {`}BRAINTEASER: A Novel Task Defying Common Sense{'}. We engage in both sub-tasks: Sub-task A-Sentence Puzzle and Sub-task B-Word Puzzle. We evaluate a plethora of pre-trained transformer-based language models of different sizes through fine-tuning. Subsequently, we undertake an analysis of their scores and responses to aid future researchers in understanding and utilizing these models effectively. Our top-performing approaches secured competitive positions on the competition leaderboard across both sub-tasks. In the evaluation phase, our best submission attained an average accuracy score of 81.7{\%} in the Sentence Puzzle, and 85.4{\%} in the Word Puzzle, significantly outperforming the best neural baseline (ChatGPT) by more than 20{\%} and 30{\%} respectively.", }
In this paper, we outline our submission for the SemEval-2024 Task 9 competition: 'BRAINTEASER: A Novel Task Defying Common Sense'. We engage in both sub-tasks: Sub-task A (Sentence Puzzle) and Sub-task B (Word Puzzle). We evaluate a plethora of pre-trained transformer-based language models of different sizes through fine-tuning. Subsequently, we undertake an analysis of their scores and responses to aid future researchers in understanding and utilizing these models effectively. Our top-performing approaches secured competitive positions on the competition leaderboard across both sub-tasks. In the evaluation phase, our best submission attained an average accuracy score of 81.7% in the Sentence Puzzle and 85.4% in the Word Puzzle, significantly outperforming the best neural baseline (ChatGPT) by more than 20% and 30%, respectively.
[ "Panagiotopoulos, Ioannis", "Fil", "rianos, George", "Lymperaiou, Maria", "Stamou, Giorgos" ]
AILS-NTUA at SemEval-2024 Task 9: Cracking Brain Teasers: Transformer Models for Lateral Thinking Puzzles
semeval-1.248
Poster
2404.01084
[ "https://github.com/giannispana/ails-ntua-at-semeval-2024-task-9-brainteaser" ]
-1
-1
-1
-1
0
[]
[]
[]
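Fine-tuning transformers on multiple-choice puzzles, as the abstract above describes, typically uses the multiple-choice head; a sketch of the inference-side encoding (the backbone and puzzle are illustrative, and the head here is untrained):

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

# Backbone is illustrative; the paper benchmarks several model sizes.
name = "roberta-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

question = "A man shaves every day, yet keeps his beard long. How?"
choices = ["He is a barber.", "He never shaves.", "He has no mirror.", "None of the above."]

# Encode (question, choice) pairs; the model expects (batch, num_choices, seq_len).
enc = tok([question] * len(choices), choices, return_tensors="pt",
          padding=True, truncation=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**enc).logits  # (1, num_choices)
print(choices[logits.argmax(-1).item()])
```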
https://aclanthology.org/2024.semeval-1.249.bib
https://aclanthology.org/2024.semeval-1.249/
@inproceedings{belikova-kosenko-2024-deeppavlov, title = "{D}eep{P}avlov at {S}em{E}val-2024 Task 3: Multimodal Large Language Models in Emotion Reasoning", author = "Belikova, Julia and Kosenko, Dmitrii", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.249", doi = "10.18653/v1/2024.semeval-1.249", pages = "1747--1757", abstract = "This paper presents the solution of the DeepPavlov team for the Multimodal Sentiment Cause Analysis competition in SemEval-2024 Task 3, Subtask 2 (Wang et al., 2024). In the evaluation leaderboard, our approach ranks 7th with an F1-score of 0.2132. Large Language Models (LLMs) are transformative in their ability to comprehend and generate human-like text. With recent advancements, Multimodal Large Language Models (MLLMs) have expanded LLM capabilities, integrating different modalities such as audio, vision, and language. Our work delves into the state-of-the-art MLLM Video-LLaMA, its associated modalities, and its application to the emotion reasoning downstream task, Multimodal Emotion Cause Analysis in Conversations (MECAC). We investigate the model{'}s performance in several modes: zero-shot, few-shot, individual embeddings, and fine-tuned, providing insights into their limits and potential enhancements for emotion understanding.", }
This paper presents the solution of the DeepPavlov team for the Multimodal Sentiment Cause Analysis competition in SemEval-2024 Task 3, Subtask 2 (Wang et al., 2024). In the evaluation leaderboard, our approach ranks 7th with an F1-score of 0.2132. Large Language Models (LLMs) are transformative in their ability to comprehend and generate human-like text. With recent advancements, Multimodal Large Language Models (MLLMs) have expanded LLM capabilities, integrating different modalities such as audio, vision, and language. Our work delves into the state-of-the-art MLLM Video-LLaMA, its associated modalities, and its application to the emotion reasoning downstream task, Multimodal Emotion Cause Analysis in Conversations (MECAC). We investigate the model's performance in several modes: zero-shot, few-shot, individual embeddings, and fine-tuned, providing insights into their limits and potential enhancements for emotion understanding.
[ "Belikova, Julia", "Kosenko, Dmitrii" ]
DeepPavlov at SemEval-2024 Task 3: Multimodal Large Language Models in Emotion Reasoning
semeval-1.249
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.250.bib
https://aclanthology.org/2024.semeval-1.250/
@inproceedings{gupta-etal-2024-irel, title = "i{REL} at {S}em{E}val-2024 Task 9: Improving Conventional Prompting Methods for Brain Teasers", author = "Gupta, Harshit and Chaudhary, Manav and Subramanian, Shivansh and Raha, Tathagata and Varma, Vasudeva", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.250", doi = "10.18653/v1/2024.semeval-1.250", pages = "1758--1766", abstract = "This paper describes our approach for SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. The BRAINTEASER task comprises multiple-choice Question Answering designed to evaluate the models{'} lateral thinking capabilities. It consists of Sentence Puzzle and Word Puzzle subtasks that require models to defy default commonsense associations and exhibit unconventional thinking. We propose a unique strategy to improve the performance of pre-trained language models, notably the Gemini 1.0 Pro Model, in both subtasks. We employ static and dynamic few-shot prompting techniques and introduce a model-generated reasoning strategy that utilizes the LLM{'}s reasoning capabilities to improve performance. Our approach demonstrated significant improvements, showing that it performed better than the baseline models by a considerable margin but fell short of performing as well as the human annotators, thus highlighting the efficacy of the proposed strategies.", }
This paper describes our approach for SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. The BRAINTEASER task comprises multiple-choice question answering designed to evaluate the models' lateral thinking capabilities. It consists of Sentence Puzzle and Word Puzzle subtasks that require models to defy default commonsense associations and exhibit unconventional thinking. We propose a unique strategy to improve the performance of pre-trained language models, notably the Gemini 1.0 Pro Model, in both subtasks. We employ static and dynamic few-shot prompting techniques and introduce a model-generated reasoning strategy that utilizes the LLM's reasoning capabilities to improve performance. Our approach demonstrated significant improvements, outperforming the baseline models by a considerable margin but falling short of the human annotators, thus highlighting the efficacy of the proposed strategies.
[ "Gupta, Harshit", "Chaudhary, Manav", "Subramanian, Shivansh", "Raha, Tathagata", "Varma, Vasudeva" ]
iREL at SemEval-2024 Task 9: Improving Conventional Prompting Methods for Brain Teasers
semeval-1.250
Poster
2405.16129
[ "https://github.com/TheAthleticCoder/iREL-at-SemEval-2024-Task-9" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.251.bib
https://aclanthology.org/2024.semeval-1.251/
@inproceedings{sadeghi-etal-2024-utebc, title = "u{T}e{BC}-{NLP} at {S}em{E}val-2024 Task 9: Can {LLM}s be Lateral Thinkers?", author = "Sadeghi, Pouya and Abaskohi, Amirhossein and Yaghoobzadeh, Yadollah", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.251", doi = "10.18653/v1/2024.semeval-1.251", pages = "1767--1778", abstract = "Inspired by human cognition, Jiang et al. 2023 create a benchmark for assessing LLMs{'} lateral thinking{---}thinking outside the box. Building upon this benchmark, we investigate how different prompting methods enhance LLMs{'} performance on this task to reveal their inherent power for outside-the-box thinking ability. Through participating in SemEval-2024, task 9, Sentence Puzzle sub-task, we explore prompt engineering methods: chain of thoughts (CoT) and direct prompting, enhancing with informative descriptions, and employing contextualizing prompts using a retrieval augmented generation (RAG) pipeline. Our experiments involve three LLMs including GPT-3.5, GPT-4, and Zephyr-7B-beta. We generate a dataset of thinking paths between riddles and options using GPT-4, validated by humans for quality. Findings indicate that compressed informative prompts enhance performance. Dynamic in-context learning enhances model performance significantly. Furthermore, fine-tuning Zephyr on our dataset enhances performance across other commonsense datasets, underscoring the value of innovative thinking.", }
Inspired by human cognition, Jiang et al. (2023) create a benchmark for assessing LLMs' lateral thinking (thinking outside the box). Building upon this benchmark, we investigate how different prompting methods enhance LLMs' performance on this task to reveal their inherent capacity for outside-the-box thinking. Through participating in SemEval-2024 Task 9, Sentence Puzzle sub-task, we explore prompt engineering methods: chain-of-thought (CoT) and direct prompting, enhancing with informative descriptions, and employing contextualizing prompts using a retrieval augmented generation (RAG) pipeline. Our experiments involve three LLMs: GPT-3.5, GPT-4, and Zephyr-7B-beta. We generate a dataset of thinking paths between riddles and options using GPT-4, validated by humans for quality. Findings indicate that compressed informative prompts enhance performance. Dynamic in-context learning enhances model performance significantly. Furthermore, fine-tuning Zephyr on our dataset enhances performance across other commonsense datasets, underscoring the value of innovative thinking.
[ "Sadeghi, Pouya", "Abaskohi, Amirhossein", "Yaghoobzadeh, Yadollah" ]
uTeBC-NLP at SemEval-2024 Task 9: Can LLMs be Lateral Thinkers?
semeval-1.251
Poster
2404.02474
[ "https://github.com/ipouyall/can-llms-be-lateral-thinkers" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.252.bib
https://aclanthology.org/2024.semeval-1.252/
@inproceedings{chikoti-etal-2024-iitk, title = "{IITK} at {S}em{E}val-2024 Task 4: Hierarchical Embeddings for Detection of Persuasion Techniques in Memes", author = "Chikoti, Shreenaga and Mehta, Shrey and Modi, Ashutosh", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.252", doi = "10.18653/v1/2024.semeval-1.252", pages = "1779--1787", abstract = "Memes are one of the most popular types of content used in an online disinformation campaign. They are primarily effective on social media platforms since they can easily reach many users. Memes in a disinformation campaign achieve their goal of influencing the users through several rhetorical and psychological techniques, such as causal oversimplification, name-calling, and smear. The SemEval 2024 Task 4 Multilingual Detection of Persuasion Technique in Memes on identifying such techniques in the memes is divided across three sub-tasks: (1) Hierarchical multi-label classification using only textual content of the meme, (2) Hierarchical multi-label classification using both, textual and visual content of the meme and (3) Binary classification of whether the meme contains a persuasion technique or not using it{'}s textual and visual content. This paper proposes an ensemble of Class Definition Prediction (CDP) and hyperbolic embeddings-based approaches for this task. We enhance meme classification accuracy and comprehensiveness by integrating HypEmo{'}s hierarchical label embeddings (Chen et al., 2023) and a multi-task learning framework for emotion prediction. We achieve a hierarchical F1-score of 0.60, 0.67, and 0.48 on the respective sub-tasks.", }
Memes are one of the most popular types of content used in online disinformation campaigns. They are primarily effective on social media platforms since they can easily reach many users. Memes in a disinformation campaign achieve their goal of influencing the users through several rhetorical and psychological techniques, such as causal oversimplification, name-calling, and smear. SemEval-2024 Task 4, Multilingual Detection of Persuasion Techniques in Memes, focuses on identifying such techniques in memes and is divided into three sub-tasks: (1) hierarchical multi-label classification using only the textual content of the meme, (2) hierarchical multi-label classification using both the textual and visual content of the meme, and (3) binary classification of whether the meme contains a persuasion technique or not, using its textual and visual content. This paper proposes an ensemble of Class Definition Prediction (CDP) and hyperbolic embeddings-based approaches for this task. We enhance meme classification accuracy and comprehensiveness by integrating HypEmo's hierarchical label embeddings (Chen et al., 2023) and a multi-task learning framework for emotion prediction. We achieve a hierarchical F1-score of 0.60, 0.67, and 0.48 on the respective sub-tasks.
[ "Chikoti, Shreenaga", "Mehta, Shrey", "Modi, Ashutosh" ]
IITK at SemEval-2024 Task 4: Hierarchical Embeddings for Detection of Persuasion Techniques in Memes
semeval-1.252
Poster
2404.04520
[ "https://github.com/exploration-lab/iitk-semeval-2024-task-4-pursuasion-techniques" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.253.bib
https://aclanthology.org/2024.semeval-1.253/
@inproceedings{liu-etal-2024-hit, title = "{HIT}-{MI}{\&}{T} Lab at {S}em{E}val-2024 Task 6: {D}e{BERT}a-based Entailment Model is a Reliable Hallucination Detector", author = "Liu, Wei and Shi, Wanyao and Zhang, Zijian and Huang, Hui", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.253", doi = "10.18653/v1/2024.semeval-1.253", pages = "1788--1797", abstract = "This paper describes our submission for SemEval-2024 Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes. We propose four groups of methods for hallucination detection: 1) Entailment Recognition; 2) Similarity Search; 3) Factuality Verification; 4) Confidence Estimation. The four methods rely on either the semantic relationship between the hypothesis and its source (target) or on the model-aware features during decoding. We participated in both the model-agnostic and model-aware tracks. Our method{'}s effectiveness is validated by our high rankings 3rd in the model-agnostic track and 5th in the model-aware track. We have released our code on GitHub.", }
This paper describes our submission for SemEval-2024 Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes. We propose four groups of methods for hallucination detection: 1) Entailment Recognition; 2) Similarity Search; 3) Factuality Verification; 4) Confidence Estimation. The four methods rely on either the semantic relationship between the hypothesis and its source (target) or on model-aware features during decoding. We participated in both the model-agnostic and model-aware tracks. Our method's effectiveness is validated by our high rankings: 3rd in the model-agnostic track and 5th in the model-aware track. We have released our code on GitHub. (An illustrative sketch of the entailment-based approach follows this record.)
[ "Liu, Wei", "Shi, Wanyao", "Zhang, Zijian", "Huang, Hui" ]
HIT-MI&T Lab at SemEval-2024 Task 6: DeBERTa-based Entailment Model is a Reliable Hallucination Detector
semeval-1.253
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
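The entailment-recognition idea in the record above can be illustrated with an off-the-shelf NLI model: score whether a generated hypothesis is entailed by its source, and flag weak support as a likely hallucination. This is a minimal sketch, assuming the `microsoft/deberta-large-mnli` checkpoint and a 0.5 decision threshold; the team's exact checkpoint and calibration are not given here.

```python
# Entailment-based hallucination check: a minimal sketch, not the team's system.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "microsoft/deberta-large-mnli"  # assumed NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Return P(entailment) of the hypothesis given the premise."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Label order for this checkpoint: contradiction, neutral, entailment.
    return logits.softmax(dim=-1)[0, 2].item()

def is_hallucination(source: str, hypothesis: str, threshold: float = 0.5) -> bool:
    # Flag the hypothesis as a likely hallucination when entailment support is weak.
    return entailment_prob(source, hypothesis) < threshold

print(is_hallucination("The cat sat on the mat.", "A dog flew over the moon."))
```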
https://aclanthology.org/2024.semeval-1.254.bib
https://aclanthology.org/2024.semeval-1.254/
@inproceedings{shi-etal-2024-ualberta, title = "{UA}lberta at {S}em{E}val-2024 Task 1: A Potpourri of Methods for Quantifying Multilingual Semantic Textual Relatedness and Similarity", author = "Shi, Ning and Li, Senyu and Luo, Guoqing and Mirzaei, Amirreza and Rafiei, Ali and Riley, Jai and Sheikhi, Hadi and Siavashpour, Mahvash and Tavakoli, Mohammad and Hauer, Bradley and Kondrak, Grzegorz", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.254", doi = "10.18653/v1/2024.semeval-1.254", pages = "1798--1805", abstract = "We describe our systems for SemEval-2024 Task 1: Semantic Textual Relatedness. We investigate the correlation between semantic relatedness and semantic similarity. Specifically, we test two hypotheses: (1) similarity is a special case of relatedness, and (2) semantic relatedness is preserved under translation. We experiment with a variety of approaches which are based on explicit semantics, downstream applications, contextual embeddings, large language models (LLMs), as well as ensembles of methods. We find empirical support for our theoretical insights. In addition, our best ensemble system yields highly competitive results in a number of diverse categories. Our code and data are available on GitHub.", }
We describe our systems for SemEval-2024 Task 1: Semantic Textual Relatedness. We investigate the correlation between semantic relatedness and semantic similarity. Specifically, we test two hypotheses: (1) similarity is a special case of relatedness, and (2) semantic relatedness is preserved under translation. We experiment with a variety of approaches which are based on explicit semantics, downstream applications, contextual embeddings, large language models (LLMs), as well as ensembles of methods. We find empirical support for our theoretical insights. In addition, our best ensemble system yields highly competitive results in a number of diverse categories. Our code and data are available on GitHub.
[ "Shi, Ning", "Li, Senyu", "Luo, Guoqing", "Mirzaei, Amirreza", "Rafiei, Ali", "Riley, Jai", "Sheikhi, Hadi", "Siavashpour, Mahvash", "Tavakoli, Mohammad", "Hauer, Bradley", "Kondrak, Grzegorz" ]
UAlberta at SemEval-2024 Task 1: A Potpourri of Methods for Quantifying Multilingual Semantic Textual Relatedness and Similarity
semeval-1.254
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.255.bib
https://aclanthology.org/2024.semeval-1.255/
@inproceedings{zhao-etal-2024-hw, title = "{HW}-{TSC} at {S}em{E}val-2024 Task 5: Self-Eval? A Confident {LLM} System for Auto Prediction and Evaluation for the Legal Argument Reasoning Task", author = "Zhao, Xiaofeng and Qiao, Xiaosong and Ou, Kaiwen and Zhang, Min and Chang, Su and Piao, Mengyao and Li, Yuang and Li, Yinglu and Zhu, Ming and Liu, Yilun", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.255", doi = "10.18653/v1/2024.semeval-1.255", pages = "1806--1810", abstract = "In this article, we present an effective system for semeval-2024 task 5. The task involves assessing the feasibility of a given solution in civil litigation cases based on relevant legal provisions, issues, solutions, and analysis. This task demands a high level of proficiency in U.S. law and natural language reasoning. In this task, we designed a self-eval LLM system that simultaneously performs reasoning and self-assessment tasks. We created a confidence interval and a prompt instructing the LLM to output the answer to a question along with its confidence level. We designed a series of experiments to prove the effectiveness of the self-eval mechanism. In order to avoid the randomness of the results, the final result is obtained by voting on three results generated by the GPT-4. Our submission was conducted under zero-resource setting, and we achieved first place in the task with an F1-score of 0.8231 and an accuracy of 0.8673.", }
In this article, we present an effective system for SemEval-2024 Task 5. The task involves assessing the feasibility of a given solution in civil litigation cases based on relevant legal provisions, issues, solutions, and analysis. This task demands a high level of proficiency in U.S. law and natural language reasoning. For this task, we designed a self-eval LLM system that simultaneously performs reasoning and self-assessment. We created a confidence interval and a prompt instructing the LLM to output the answer to a question along with its confidence level. We designed a series of experiments to prove the effectiveness of the self-eval mechanism. To mitigate the randomness of the results, the final answer is obtained by voting over three outputs generated by GPT-4. Our submission was conducted under a zero-resource setting, and we achieved first place in the task with an F1-score of 0.8231 and an accuracy of 0.8673. (A minimal sketch of the self-eval voting idea follows this record.)
[ "Zhao, Xiaofeng", "Qiao, Xiaosong", "Ou, Kaiwen", "Zhang, Min", "Chang, Su", "Piao, Mengyao", "Li, Yuang", "Li, Yinglu", "Zhu, Ming", "Liu, Yilun" ]
HW-TSC at SemEval-2024 Task 5: Self-Eval? A Confident LLM System for Auto Prediction and Evaluation for the Legal Argument Reasoning Task
semeval-1.255
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
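The self-eval mechanism described above can be sketched as answer-plus-confidence sampling with a weighted vote. Everything here is an illustrative assumption: `ask_llm` is a hypothetical stand-in for a GPT-4 call, and the confidence buckets and weights are invented.

```python
# Self-eval with confidence-weighted majority voting: a minimal sketch.
import random
from collections import defaultdict

def ask_llm(question: str) -> dict:
    """Hypothetical stand-in for a GPT-4 call. A real prompt would instruct
    the model to answer and report its confidence (low/medium/high) as JSON."""
    return {"answer": random.choice(["Yes", "No"]),
            "confidence": random.choice(["low", "medium", "high"])}

CONFIDENCE_WEIGHT = {"low": 1.0, "medium": 2.0, "high": 3.0}  # assumed weights

def self_eval_predict(question: str, n_samples: int = 3) -> str:
    """Sample the LLM several times and return the answer with the highest
    confidence-weighted vote."""
    votes = defaultdict(float)
    for _ in range(n_samples):
        reply = ask_llm(question)
        votes[reply["answer"]] += CONFIDENCE_WEIGHT.get(reply["confidence"], 1.0)
    return max(votes, key=votes.get)

print(self_eval_predict("Is the proposed settlement feasible?"))
```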
https://aclanthology.org/2024.semeval-1.256.bib
https://aclanthology.org/2024.semeval-1.256/
@inproceedings{patel-etal-2024-iitk, title = "{IITK} at {S}em{E}val-2024 Task 10: Who is the speaker? Improving Emotion Recognition and Flip Reasoning in Conversations via Speaker Embeddings", author = "Patel, Shubham and Shukla, Divyaksh and Modi, Ashutosh", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.256", doi = "10.18653/v1/2024.semeval-1.256", pages = "1811--1820", abstract = "This paper presents our approach for the SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversations. We propose a transformer-based speaker-centric model for the Emotion Flip Reasoning (EFR) task and a masked-memory network along with a speaker participation vector for the Emotion Recognition in Conversations (ERC) task. We propose a Probable Trigger Zone, which is more likely to contain the utterances causing the emotion of a speaker to flip. In EFR, sub-task 3, the proposed approach archives a 5.9 (F1 score) improvement over the provided task baseline. The ablation study results highlight the significance of various design choices in the proposed method.", }
This paper presents our approach for SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversations. We propose a transformer-based speaker-centric model for the Emotion Flip Reasoning (EFR) task and a masked-memory network along with a speaker participation vector for the Emotion Recognition in Conversations (ERC) task. We propose a Probable Trigger Zone, which is more likely to contain the utterances causing the emotion of a speaker to flip. On EFR sub-task 3, the proposed approach achieves a 5.9-point (F1 score) improvement over the provided task baseline. The ablation study results highlight the significance of various design choices in the proposed method.
[ "Patel, Shubham", "Shukla, Divyaksh", "Modi, Ashutosh" ]
IITK at SemEval-2024 Task 10: Who is the speaker? Improving Emotion Recognition and Flip Reasoning in Conversations via Speaker Embeddings
semeval-1.256
Poster
2404.04525
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.257.bib
https://aclanthology.org/2024.semeval-1.257/
@inproceedings{voznyuk-konovalov-2024-deeppavlov, title = "{D}eep{P}avlov at {S}em{E}val-2024 Task 8: Leveraging Transfer Learning for Detecting Boundaries of Machine-Generated Texts", author = "Voznyuk, Anastasia and Konovalov, Vasily", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.257", doi = "10.18653/v1/2024.semeval-1.257", pages = "1821--1829", abstract = "The Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection shared task in the SemEval-2024 competition aims to tackle the problem of misusing collaborative human-AI writing. Although there are a lot of existing detectors of AI content, they are often designed to give a binary answer and thus may not be suitable for more nuanced problem of finding the boundaries between human-written and machine-generated texts, while hybrid human-AI writing becomes more and more popular. In this paper, we address the boundary detection problem. Particularly, we present a pipeline for augmenting data for supervised fine-tuning of DeBERTaV3. We receive new best MAE score, according to the leaderboard of the competition, with this pipeline.", }
The Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection shared task in the SemEval-2024 competition aims to tackle the problem of misusing collaborative human-AI writing. Although many detectors of AI-generated content exist, they are often designed to give a binary answer and thus may not be suitable for the more nuanced problem of finding the boundary between human-written and machine-generated text, even as hybrid human-AI writing becomes increasingly popular. In this paper, we address the boundary detection problem. In particular, we present a pipeline for augmenting data for supervised fine-tuning of DeBERTaV3. With this pipeline, we achieve a new best MAE score on the competition leaderboard. (A hedged sketch of boundary detection as token classification follows this record.)
[ "Voznyuk, Anastasia", "Konovalov, Vasily" ]
DeepPavlov at SemEval-2024 Task 8: Leveraging Transfer Learning for Detecting Boundaries of Machine-Generated Texts
semeval-1.257
Poster
2405.10629
[ "https://github.com/natriistorm/semeval2024-boundary-detection" ]
-1
-1
-1
-1
0
[]
[]
[]
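One plausible way to frame the boundary-detection problem above is token classification: tag each token as human- or machine-written and report the first "machine" position, scored by mean absolute error (MAE). This sketch assumes a `microsoft/deberta-v3-base` backbone and a two-label scheme; it is not the team's exact pipeline, which also involves data augmentation.

```python
# Boundary detection as token classification: a minimal sketch, assuming a
# two-label scheme (0 = human, 1 = machine) on a DeBERTaV3 backbone.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL = "microsoft/deberta-v3-base"  # assumed backbone (before fine-tuning)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def predict_boundary(text: str) -> int:
    """Return the character offset where machine-generated text is predicted
    to start (len(text) if no machine-written span is detected)."""
    enc = tokenizer(text, return_offsets_mapping=True,
                    return_tensors="pt", truncation=True)
    offsets = enc.pop("offset_mapping")[0]
    with torch.no_grad():
        labels = model(**enc).logits.argmax(dim=-1)[0]
    for label, (start, end) in zip(labels.tolist(), offsets.tolist()):
        if label == 1 and end > 0:  # skip special tokens with (0, 0) offsets
            return start
    return len(text)

def mae(preds: list[int], golds: list[int]) -> float:
    """Mean absolute error over boundary positions, the task's metric."""
    return sum(abs(p - g) for p, g in zip(preds, golds)) / len(preds)
```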
https://aclanthology.org/2024.semeval-1.258.bib
https://aclanthology.org/2024.semeval-1.258/
@inproceedings{liang-etal-2024-bit, title = "Bit{\_}numeval at {S}em{E}val-2024 Task 7: Enhance Numerical Sensitivity and Reasoning Completeness for Quantitative Understanding", author = "Liang, Xinyue and Li, Jiawei and Yang, Yizhe and Gao, Yang", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.258", doi = "10.18653/v1/2024.semeval-1.258", pages = "1830--1841", abstract = "In this paper, we describe the methods used for Quantitative Natural Language Inference (QNLI), and Quantitative Question Answering (QQA) in task1 of Semeval2024 NumEval. The challenge{'}s focus is to enhance the model{'}s quantitative understanding consequently improving its performance on certain tasks. We accomplish this task from two perspectives: (1) By integrating real-world numerical comparison data during the supervised fine-tuning (SFT) phase, we enhanced the model{'}s numerical sensitivity. (2) We develop an innovative reward model scoring mechanism, leveraging reinforcement learning from human feedback (RLHF) techniques to improve the model{'}s reasoning completeness.", }
In this paper, we describe the methods used for Quantitative Natural Language Inference (QNLI) and Quantitative Question Answering (QQA) in Task 1 of SemEval-2024 NumEval. The challenge's focus is to enhance the model's quantitative understanding, thereby improving its performance on these tasks. We accomplish this from two perspectives: (1) by integrating real-world numerical comparison data during the supervised fine-tuning (SFT) phase, we enhance the model's numerical sensitivity; and (2) we develop an innovative reward-model scoring mechanism, leveraging reinforcement learning from human feedback (RLHF) techniques to improve the model's reasoning completeness. (An illustrative sketch of the first idea follows this record.)
[ "Liang, Xinyue", "Li, Jiawei", "Yang, Yizhe", "Gao, Yang" ]
Bit_numeval at SemEval-2024 Task 7: Enhance Numerical Sensitivity and Reasoning Completeness for Quantitative Understanding
semeval-1.258
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
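The first ingredient above, real-world numerical comparison data for SFT, can be sketched as simple synthetic instruction-output pairs. The template and value ranges below are illustrative assumptions, not the authors' data-construction recipe.

```python
# Synthetic numerical-comparison data for the SFT mix: a minimal sketch.
import random

def make_comparison_example() -> dict:
    """Build one instruction-tuning pair asking which of two numbers is larger."""
    a = round(random.uniform(0, 1000), 2)
    b = round(random.uniform(0, 1000), 2)
    if a == b:  # avoid ties so the gold answer is well defined
        b += 0.01
    return {
        "instruction": f"Which number is larger, {a} or {b}? "
                       "Answer 'first' or 'second'.",
        "output": "first" if a > b else "second",
    }

sft_data = [make_comparison_example() for _ in range(10_000)]
print(sft_data[0])
```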
https://aclanthology.org/2024.semeval-1.259.bib
https://aclanthology.org/2024.semeval-1.259/
@inproceedings{zhou-etal-2024-mainlp, title = "{M}ai{NLP} at {S}em{E}val-2024 Task 1: Analyzing Source Language Selection in Cross-Lingual Textual Relatedness", author = "Zhou, Shijia and Shan, Huangyan and Plank, Barbara and Litschko, Robert", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.259", doi = "10.18653/v1/2024.semeval-1.259", pages = "1842--1853", abstract = "This paper presents our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness (STR), on Track C: Cross-lingual. The task aims to detect semantic relatedness of two sentences from the same languages. For cross-lingual approach we developed a set of linguistics-inspired models trained with several task-specific strategies. We 1) utilize language vectors for selection of donor languages; 2) investigate the multi-source approach for training; 3) use transliteration of non-latin script to study impact of {``}script gap{''}; 4) opt machine translation for data augmentation. We additionally compare the performance of XLM-RoBERTa and Furina with the same training strategy. Our submission achieved the first place in the C8 (Kinyarwanda) test.", }
This paper presents our system developed for SemEval-2024 Task 1: Semantic Textual Relatedness (STR), on Track C: Cross-lingual. The task aims to detect the semantic relatedness of two sentences from the same language. For the cross-lingual approach, we developed a set of linguistics-inspired models trained with several task-specific strategies. We 1) utilize language vectors for the selection of donor languages; 2) investigate the multi-source approach for training; 3) use transliteration of non-Latin scripts to study the impact of the "script gap"; 4) opt for machine translation for data augmentation. We additionally compare the performance of XLM-RoBERTa and Furina under the same training strategy. Our submission achieved first place on the C8 (Kinyarwanda) test set.
[ "Zhou, Shijia", "Shan, Huangyan", "Plank, Barbara", "Litschko, Robert" ]
MaiNLP at SemEval-2024 Task 1: Analyzing Source Language Selection in Cross-Lingual Textual Relatedness
semeval-1.259
Poster
2404.02570
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.260.bib
https://aclanthology.org/2024.semeval-1.260/
@inproceedings{kumar-etal-2024-nlp, title = "{NLP}{\_}{T}eam1@{SSN} at {S}em{E}val-2024 Task 1: Impact of language models in Sentence-{BERT} for Semantic Textual Relatedness in Low-resource Languages", author = "Kumar, Senthil and Chandrabose, Aravindan and B, Gokulakrishnan and Tp, Karthikraja", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.260", doi = "10.18653/v1/2024.semeval-1.260", pages = "1854--1859", abstract = "Semantic Textual Relatedness (STR) will provide insight into the limitations of existing models and support ongoing work on semantic representations. Track A in Shared Task-1, provides pairs of sentences with semantic relatedness scores for 9 languages out of which 7 are low-resources. These languages are from four different language families. We developed models for 8 languages (except for Amharic) in Track A, using Sentence Transformers (SBERT) architecture, and fine-tuned them with multilingual and monolingual pre-trained language models (PLM). Our models for English (eng), Algerian Arabic (arq), andKinyarwanda (kin) languages were ranked 12, 5, and 8 respectively. Our submissions are ranked 5th among 40 submissions in Track A with an average Spearman correlation score of 0.74. However, we observed that the usage of monolingual PLMs did not guarantee better than multilingual PLMs in Marathi (mar), and Telugu (tel) languages in our case.", }
Semantic Textual Relatedness (STR) will provide insight into the limitations of existing models and support ongoing work on semantic representations. Track A of Shared Task 1 provides pairs of sentences with semantic relatedness scores for 9 languages, of which 7 are low-resource. These languages come from four different language families. We developed models for 8 languages (all except Amharic) in Track A, using the Sentence Transformers (SBERT) architecture, and fine-tuned them with multilingual and monolingual pre-trained language models (PLMs). Our models for English (eng), Algerian Arabic (arq), and Kinyarwanda (kin) were ranked 12th, 5th, and 8th, respectively. Our submissions are ranked 5th among 40 submissions in Track A with an average Spearman correlation score of 0.74. However, we observed that monolingual PLMs did not guarantee better performance than multilingual PLMs for Marathi (mar) and Telugu (tel) in our case. (An illustrative SBERT fine-tuning sketch follows this record.)
[ "Kumar, Senthil", "Ch", "rabose, Aravindan", "B, Gokulakrishnan", "Tp, Karthikraja" ]
NLP_Team1@SSN at SemEval-2024 Task 1: Impact of language models in Sentence-BERT for Semantic Textual Relatedness in Low-resource Languages
semeval-1.260
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
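A minimal sketch of the SBERT fine-tuning setup described above: sentence pairs with relatedness scores trained under a cosine-similarity regression loss. The base checkpoint (LaBSE) and hyperparameters are assumptions, not the team's reported configuration.

```python
# Fine-tuning Sentence-BERT on relatedness scores: a minimal sketch using the
# classic sentence-transformers training API.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/LaBSE")  # assumed multilingual base

# Pairs with gold relatedness scores scaled to [0, 1], as in the shared task.
train_examples = [
    InputExample(texts=["A man is cooking.", "Someone prepares food."], label=0.9),
    InputExample(texts=["A man is cooking.", "The stock market fell."], label=0.1),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.CosineSimilarityLoss(model)  # regress cosine similarity onto the score

model.fit(train_objectives=[(loader, loss)], epochs=2, warmup_steps=100)

# At test time, relatedness is the cosine similarity of the two embeddings;
# the task evaluates the Spearman correlation with gold scores.
```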
https://aclanthology.org/2024.semeval-1.261.bib
https://aclanthology.org/2024.semeval-1.261/
@inproceedings{gibbons-etal-2024-shefcdteam, title = "{S}hef{CDT}eam at {S}em{E}val-2024 Task 4: A Text-to-Text Model for Multi-Label Classification", author = "Gibbons, Meredith and Mi, Maggie and Song, Xingyi and Villavicencio, Aline", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.261", doi = "10.18653/v1/2024.semeval-1.261", pages = "1860--1867", abstract = "This paper presents our findings for SemEval2024 Task 4. We submit only to subtask 1, applying the text-to-text framework using a FLAN-T5 model with a combination of parameter efficient fine-tuning methods - low-rankadaptation and prompt tuning. Overall, we find that the system performs well in English, but performance is limited in Bulgarian, North Macedonian and Arabic. Our analysis raises interesting questions about the effects of labelorder and label names when applying the text-to-text framework.", }
This paper presents our findings for SemEval-2024 Task 4. We submit only to Subtask 1, applying the text-to-text framework using a FLAN-T5 model with a combination of parameter-efficient fine-tuning methods: low-rank adaptation and prompt tuning. Overall, we find that the system performs well in English, but performance is limited in Bulgarian, North Macedonian, and Arabic. Our analysis raises interesting questions about the effects of label order and label names when applying the text-to-text framework. (A hedged sketch of the text-to-text setup follows this record.)
[ "Gibbons, Meredith", "Mi, Maggie", "Song, Xingyi", "Villavicencio, Aline" ]
ShefCDTeam at SemEval-2024 Task 4: A Text-to-Text Model for Multi-Label Classification
semeval-1.261
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
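The text-to-text framework above can be sketched with FLAN-T5 generating a label string, fine-tuned parameter-efficiently with LoRA (the paper combines low-rank adaptation with prompt tuning). The target modules, rank, and label verbalization below are assumptions.

```python
# Text-to-text multi-label classification with FLAN-T5 + LoRA: a minimal sketch.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

lora = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["q", "v"],  # assumed: T5's query/value attention projections
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# Labels are verbalized as a comma-separated target string (assumed format).
source = "Classify persuasion techniques: <meme text here>"
target = "name calling, smears"
inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq training loss
```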
https://aclanthology.org/2024.semeval-1.262.bib
https://aclanthology.org/2024.semeval-1.262/
@inproceedings{guo-fan-2024-nlpnchu, title = "{NLPNCHU} at {S}em{E}val-2024 Task 4: A Comparison of {MDHC} Strategy and In-domain Pre-training for Multilingual Detection of Persuasion Techniques in Memes", author = "Guo, Shih-wei and Fan, Yao-chung", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.262", doi = "10.18653/v1/2024.semeval-1.262", pages = "1868--1875", abstract = "This study presents a systematic method for identifying 22 persuasive techniques used in multilingual memes. We explored various fine-tuning techniques and classification strategies, such as data augmentation, problem transformation, and hierarchical multi-label classification strategies. Identifying persuasive techniques in memes involves a multimodal task. We fine-tuned the XLM-RoBERTA-large-twitter language model, focusing on domain-specific language modeling, and integrated it with the CLIP visual model{'}s embedding to consider image and text features simultaneously. In our experiments, we evaluated the effectiveness of our approach by using official validation data in English. Our system in the competition, achieving competitive rankings in Subtask1 and Subtask2b across four languages: English, Bulgarian, North Macedonian, and Arabic. Significantly, we achieved 2nd place ranking for Arabic language in Subtask 1.", }
This study presents a systematic method for identifying 22 persuasive techniques used in multilingual memes. We explored various fine-tuning techniques and classification strategies, such as data augmentation, problem transformation, and hierarchical multi-label classification strategies. Identifying persuasive techniques in memes is a multimodal task. We fine-tuned the XLM-RoBERTa-large-twitter language model, focusing on domain-specific language modeling, and integrated it with the CLIP visual model's embeddings to consider image and text features simultaneously. In our experiments, we evaluated the effectiveness of our approach using the official validation data in English. In the competition, our system achieved competitive rankings in Subtask 1 and Subtask 2b across four languages: English, Bulgarian, North Macedonian, and Arabic. Notably, we achieved a 2nd-place ranking for Arabic in Subtask 1. (A hedged sketch of the text-image fusion follows this record.)
[ "Guo, Shih-wei", "Fan, Yao-chung" ]
NLPNCHU at SemEval-2024 Task 4: A Comparison of MDHC Strategy and In-domain Pre-training for Multilingual Detection of Persuasion Techniques in Memes
semeval-1.262
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
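A hedged sketch of the text-image fusion described above: concatenate a CLIP image embedding with an XLM-R sentence embedding and classify with a small multi-label head over the 22 techniques. The exact checkpoints, pooling, and head are assumptions; the team used a Twitter-adapted XLM-R variant.

```python
# Fusing XLM-R text embeddings with CLIP image embeddings: a minimal sketch.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer, CLIPModel, CLIPProcessor

text_encoder = AutoModel.from_pretrained("xlm-roberta-large")      # assumed text backbone
text_tok = AutoTokenizer.from_pretrained("xlm-roberta-large")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")   # assumed vision backbone
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# One logit per persuasion technique (22 classes, multi-label).
head = nn.Linear(text_encoder.config.hidden_size + clip.config.projection_dim, 22)

def forward(meme_text: str, image) -> torch.Tensor:
    """image is a PIL.Image; returns per-technique logits."""
    t = text_tok(meme_text, return_tensors="pt", truncation=True)
    text_emb = text_encoder(**t).last_hidden_state[:, 0]  # [CLS]-position pooling
    i = clip_proc(images=image, return_tensors="pt")
    img_emb = clip.get_image_features(**i)                # CLIP projection space
    return head(torch.cat([text_emb, img_emb], dim=-1))
```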
https://aclanthology.org/2024.semeval-1.263.bib
https://aclanthology.org/2024.semeval-1.263/
@inproceedings{chen-etal-2024-mothman, title = "Mothman at {S}em{E}val-2024 Task 9: An Iterative System for Chain-of-Thought Prompt Optimization", author = "Chen, Alvin Po-Chun and Groshan, Ray and Von Bayern, Sean", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.263", doi = "10.18653/v1/2024.semeval-1.263", pages = "1876--1888", abstract = "Extensive research exists on the performance of large language models on logic-based tasks, whereas relatively little has been done on their ability to generate creative solutions on lateral thinking tasks. The BrainTeaser shared task tests lateral thinking and uses adversarial datasets to prevent memorization, resulting in poor performance for out-of-the-box models. We propose a system for iterative, chain-of-thought prompt engineering which optimizes prompts using human evaluation. Using this shared task, we demonstrate our system{'}s ability to significantly improve model performance by optimizing prompts and evaluate the input dataset.", }
Extensive research exists on the performance of large language models on logic-based tasks, whereas relatively little has been done on their ability to generate creative solutions to lateral thinking tasks. The BrainTeaser shared task tests lateral thinking and uses adversarial datasets to prevent memorization, resulting in poor performance for out-of-the-box models. We propose a system for iterative, chain-of-thought prompt engineering which optimizes prompts using human evaluation. Using this shared task, we demonstrate our system's ability to significantly improve model performance by optimizing prompts, and we use it to evaluate the input dataset.
[ "Chen, Alvin Po-Chun", "Groshan, Ray", "Von Bayern, Sean" ]
Mothman at SemEval-2024 Task 9: An Iterative System for Chain-of-Thought Prompt Optimization
semeval-1.263
Poster
2405.02517
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.264.bib
https://aclanthology.org/2024.semeval-1.264/
@inproceedings{moosavi-monazzah-feghhi-2024-zero, title = "Zero Shot is All You Need at {S}em{E}val-2024 Task 9: A study of State of the Art {LLM}s on Lateral Thinking Puzzles", author = "Moosavi Monazzah, Erfan and Feghhi, Mahdi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.264", doi = "10.18653/v1/2024.semeval-1.264", pages = "1889--1893", abstract = "The successful deployment of large language models in numerous NLP tasks has spurred the demand for tackling more complex tasks, which were previously unattainable. SemEval-2024 Task 9 introduces the brainteaser dataset that necessitates intricate, human-like reasoning to solve puzzles that challenge common sense. At first glance, the riddles in the dataset may appear trivial for humans to solve. However, these riddles demand lateral thinking, which deviates from vertical thinking that is the dominant form when it comes to current reasoning tasks. In this paper, we examine the ability of current state-of-the-art LLMs to solve this task. Our study is diversified by selecting both open and closed source LLMs with varying numbers of parameters. Additionally, we extend the task dataset with synthetic explanations derived from the LLMs{'} reasoning processes during task resolution. These could serve as a valuable resource for further expanding the task dataset and developing more robust methods for tasks that require complex reasoning. All the codes and datasets are available in paper{'}s GitHub repository.", }
The successful deployment of large language models in numerous NLP tasks has spurred the demand for tackling more complex tasks, which were previously unattainable. SemEval-2024 Task 9 introduces the BrainTeaser dataset, which necessitates intricate, human-like reasoning to solve puzzles that challenge common sense. At first glance, the riddles in the dataset may appear trivial for humans to solve. However, these riddles demand lateral thinking, which deviates from the vertical thinking that dominates current reasoning tasks. In this paper, we examine the ability of current state-of-the-art LLMs to solve this task. Our study is diversified by selecting both open- and closed-source LLMs with varying numbers of parameters. Additionally, we extend the task dataset with synthetic explanations derived from the LLMs' reasoning processes during task resolution. These could serve as a valuable resource for further expanding the task dataset and developing more robust methods for tasks that require complex reasoning. All code and datasets are available in the paper's GitHub repository.
[ "Moosavi Monazzah, Erfan", "Feghhi, Mahdi" ]
Zero Shot is All You Need at SemEval-2024 Task 9: A study of State of the Art LLMs on Lateral Thinking Puzzles
semeval-1.264
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.265.bib
https://aclanthology.org/2024.semeval-1.265/
@inproceedings{gema-etal-2024-edinburgh, title = "{E}dinburgh Clinical {NLP} at {S}em{E}val-2024 Task 2: Fine-tune your model unless you have access to {GPT}-4", author = "Gema, Aryo and Hong, Giwon and Minervini, Pasquale and Daines, Luke and Alex, Beatrice", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.265", doi = "10.18653/v1/2024.semeval-1.265", pages = "1894--1904", abstract = "The NLI4CT task assesses Natural Language Inference systems in predicting whether hypotheses entail or contradict evidence from Clinical Trial Reports. In this study, we evaluate various Large Language Models (LLMs) with multiple strategies, including Chain-of-Thought, In-Context Learning, and Parameter-Efficient Fine-Tuning (PEFT). We propose a PEFT method to improve the consistency of LLMs by merging adapters that were fine-tuned separately using triplet and language modelling objectives. We found that merging the two PEFT adapters improves the F1 score (+0.0346) and consistency (+0.152) of the LLMs. However, our novel methods did not produce more accurate results than GPT-4 in terms of faithfulness and consistency. Averaging the three metrics, GPT-4 ranks joint-first in the competition with 0.8328. Finally, our contamination analysis with GPT-4 indicates that there was no test data leakage. Our code is available at https://github.com/EdinburghClinicalNLP/semeval{\_}nli4ct.", }
The NLI4CT task assesses Natural Language Inference systems in predicting whether hypotheses entail or contradict evidence from Clinical Trial Reports. In this study, we evaluate various Large Language Models (LLMs) with multiple strategies, including Chain-of-Thought, In-Context Learning, and Parameter-Efficient Fine-Tuning (PEFT). We propose a PEFT method to improve the consistency of LLMs by merging adapters that were fine-tuned separately using triplet and language modelling objectives. We found that merging the two PEFT adapters improves the F1 score (+0.0346) and consistency (+0.152) of the LLMs. However, our novel methods did not produce more accurate results than GPT-4 in terms of faithfulness and consistency. Averaging the three metrics, GPT-4 ranks joint-first in the competition with 0.8328. Finally, our contamination analysis with GPT-4 indicates that there was no test data leakage. Our code is available at https://github.com/EdinburghClinicalNLP/semeval_nli4ct. (A hedged sketch of adapter merging follows this record.)
[ "Gema, Aryo", "Hong, Giwon", "Minervini, Pasquale", "Daines, Luke", "Alex, Beatrice" ]
Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4
semeval-1.265
Poster
2404.00484
[ "https://github.com/EdinburghClinicalNLP/semeval_nli4ct" ]
-1
-1
-1
-1
0
[]
[]
[]
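The adapter-merging idea above can be sketched with the PEFT library's weighted-adapter combination. The base model, adapter paths, and mixing weights below are illustrative assumptions, not the authors' configuration.

```python
# Merging two separately fine-tuned LoRA adapters with PEFT: a minimal sketch.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base LLM

# Load the adapter trained with the language-modelling objective...
model = PeftModel.from_pretrained(base, "path/to/lm-adapter", adapter_name="lm")
# ...and the one trained with the triplet objective (paths are placeholders).
model.load_adapter("path/to/triplet-adapter", adapter_name="triplet")

# Linearly combine the two adapters into one and make it active.
model.add_weighted_adapter(
    adapters=["lm", "triplet"], weights=[0.5, 0.5],
    adapter_name="merged", combination_type="linear",
)
model.set_adapter("merged")
```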
https://aclanthology.org/2024.semeval-1.266.bib
https://aclanthology.org/2024.semeval-1.266/
@inproceedings{abdel-salam-etal-2024-caresai, title = "{C}ares{AI} at {S}em{E}val-2024 Task 2: Improving Natural Language Inference in Clinical Trial Data using Model Ensemble and Data Explanation", author = "Abdel-salam, Reem and Adewunmi, Mary and Akinwale, Mercy", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.266", doi = "10.18653/v1/2024.semeval-1.266", pages = "1905--1911", abstract = "Large language models (LLMs) have demonstrated state-of-the-art performance across multiple domains in various natural language tasks. Entailment tasks, however, are more difficult to achieve with a high-performance model. The task is to use safe natural language models to conclude biomedical clinical trial reports (CTRs). The Natural Language Inference for Clinical Trial Data (NLI4CT) task aims to define a given entailment and hypothesis based on CTRs. This paper aims to address the challenges of medical abbreviations and numerical data that can be logically inferred from one another due to acronyms, using different data pre-processing techniques to explain such data. This paper presents a model for NLI4CT SemEval 2024 task 2 that trains the data with DeBERTa, BioLink, BERT, GPT2, BioGPT, and Clinical BERT using the best training approaches, such as fine-tuning, prompt tuning, and contrastive learning. Furthermore, to validate these models, different experiments have been carried out. Our best system is built on an ensemble of different models with different training settings, which achieves an F1 score of 0.77, a faithfulness score of 0.76, and a consistency score of 0.75 and secures the sixth rank in the official leaderboard. In conclusion, this paper has addressed challenges in medical text analysis by exploring various NLP techniques, evaluating multiple advanced natural languagemodels(NLM) models and achieving good results with the ensemble model. Additionally, this project has contributed to the advancement of safe and effective NLMs for analysing complex medical data in CTRs.", }
Large language models (LLMs) have demonstrated state-of-the-art performance across multiple domains in various natural language tasks. Entailment tasks, however, are more difficult to achieve with a high-performance model. The task is to use safe natural language models to draw conclusions from biomedical clinical trial reports (CTRs). The Natural Language Inference for Clinical Trial Data (NLI4CT) task aims to determine the entailment relation between a given hypothesis and CTRs. This paper aims to address the challenges of medical abbreviations and numerical data that can be logically inferred from one another due to acronyms, using different data pre-processing techniques to explain such data. This paper presents a model for NLI4CT SemEval-2024 Task 2 that trains on the data with DeBERTa, BioLink, BERT, GPT2, BioGPT, and Clinical BERT using the best training approaches, such as fine-tuning, prompt tuning, and contrastive learning. Furthermore, different experiments have been carried out to validate these models. Our best system is built on an ensemble of different models with different training settings, which achieves an F1 score of 0.77, a faithfulness score of 0.76, and a consistency score of 0.75, and secures sixth rank on the official leaderboard. In conclusion, this paper has addressed challenges in medical text analysis by exploring various NLP techniques, evaluating multiple advanced natural language models (NLMs), and achieving good results with the ensemble model. Additionally, this project has contributed to the advancement of safe and effective NLMs for analysing complex medical data in CTRs.
[ "Abdel-salam, Reem", "Adewunmi, Mary", "Akinwale, Mercy" ]
CaresAI at SemEval-2024 Task 2: Improving Natural Language Inference in Clinical Trial Data using Model Ensemble and Data Explanation
semeval-1.266
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.267.bib
https://aclanthology.org/2024.semeval-1.267/
@inproceedings{bakhshande-naderi-2024-cvcoders, title = "{CV}coders on {S}emeval-2024 Task 4", author = "Bakhshande, Fatemezahra and Naderi, Mahdieh", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.267", doi = "10.18653/v1/2024.semeval-1.267", pages = "1912--1918", abstract = "In this paper, we present our methodology for addressing the SemEval 2024 Task 4 on {``}Multilingual Detection of Persuasion Techniques in Memes.{''} Our method focuses on identifying persuasion techniques within textual and multimodal meme content using a combination of preprocessing techniques and established models. By integrating advanced preprocessing methods, such as the OpenAI API for text processing, and utilizing a multimodal architecture combining VGG for image feature extraction and GPT-2 for text feature extraction, we achieve improved model performance. To handle class imbalance, we employ Focal Loss as the loss function and AdamW as the optimizer. Experimental results demonstrate the effectiveness of our approach, achieving competitive performance in the task. Notably, our system attains an F1 macro score of 0.67 and an F1 micro score of 0.74 on the test dataset, ranking third among all participants in the competition. Our findings highlight the importance of robust preprocessing techniques and model selection in effectively analyzing memes for persuasion techniques, contributing to efforts to combat misinformation on social media platforms.", }
In this paper, we present our methodology for addressing SemEval-2024 Task 4, "Multilingual Detection of Persuasion Techniques in Memes." Our method focuses on identifying persuasion techniques within textual and multimodal meme content using a combination of preprocessing techniques and established models. By integrating advanced preprocessing methods, such as the OpenAI API for text processing, and utilizing a multimodal architecture combining VGG for image feature extraction and GPT-2 for text feature extraction, we achieve improved model performance. To handle class imbalance, we employ Focal Loss as the loss function and AdamW as the optimizer. Experimental results demonstrate the effectiveness of our approach, achieving competitive performance in the task. Notably, our system attains an F1 macro score of 0.67 and an F1 micro score of 0.74 on the test dataset, ranking third among all participants in the competition. Our findings highlight the importance of robust preprocessing techniques and model selection in effectively analyzing memes for persuasion techniques, contributing to efforts to combat misinformation on social media platforms. (A minimal focal-loss sketch follows this record.)
[ "Bakhsh", "e, Fatemezahra", "Naderi, Mahdieh" ]
CVcoders on Semeval-2024 Task 4
semeval-1.267
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
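The Focal Loss mentioned above down-weights easy, well-classified examples so training focuses on the hard, minority-class ones. A minimal PyTorch sketch follows; the gamma and alpha values are common defaults, not necessarily the ones the team used.

```python
# Multi-class focal loss: a minimal PyTorch sketch.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0, alpha: float = 0.25):
    """Cross-entropy scaled by (1 - p_t)^gamma, which down-weights examples
    the model already classifies confidently."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t per example
    p_t = torch.exp(-ce)                                     # probability of the true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()

logits = torch.randn(4, 22)             # batch of 4 memes, 22 technique classes
targets = torch.tensor([0, 3, 7, 21])
print(focal_loss(logits, targets))
```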
https://aclanthology.org/2024.semeval-1.268.bib
https://aclanthology.org/2024.semeval-1.268/
@inproceedings{donker-etal-2024-groningen, title = "{G}roningen Team {F} at {S}em{E}val-2024 Task 8: Detecting Machine-Generated Text using Feature-Based Machine Learning Models", author = {Donker, Rina and Overbeek, Bj{\"o}rn and Thulden, Dennis and Zwagers, Oscar}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.268", doi = "10.18653/v1/2024.semeval-1.268", pages = "1919--1925", abstract = "Large language models (LLMs) have shown remarkable capability of creating fluent responses to a wide variety of user queries. However, this also comes with concerns regarding the spread of misinformation and potential misuse within educational context. In this paper we describe our contribution to SemEval-2024 Task 8 (Wang et al., 2024), a shared task created around detecting machine-generated text. We aim to create several feature-based models that can detect whether a text is machine-generated or human-written. In the end, we obtained an accuracy of 0.74 on the binary human-written vs. machine-generated text classification task (Subtask A monolingual) and an accuracy of 0.61 on the multi-way machine-generated text-classification task (Subtask B). For future work, more features and models could be implemented.", }
Large language models (LLMs) have shown a remarkable capability for creating fluent responses to a wide variety of user queries. However, this also raises concerns regarding the spread of misinformation and potential misuse within educational contexts. In this paper we describe our contribution to SemEval-2024 Task 8 (Wang et al., 2024), a shared task created around detecting machine-generated text. We aim to create several feature-based models that can detect whether a text is machine-generated or human-written. In the end, we obtained an accuracy of 0.74 on the binary human-written vs. machine-generated text classification task (Subtask A monolingual) and an accuracy of 0.61 on the multi-way machine-generated text classification task (Subtask B). For future work, more features and models could be implemented. (An illustrative feature-based sketch follows this record.)
[ "Donker, Rina", "Overbeek, Bj{\\\"o}rn", "Thulden, Dennis", "Zwagers, Oscar" ]
Groningen Team F at SemEval-2024 Task 8: Detecting Machine-Generated Text using Feature-Based Machine Learning Models
semeval-1.268
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
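A feature-based detector like the one described above can be sketched as hand-crafted linguistic features fed to a classical classifier. The feature set and model below are illustrative assumptions, not the team's exact system.

```python
# A hand-crafted-feature detector: a minimal sketch with toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(text: str) -> list:
    """A few simple linguistic features; real systems would use many more."""
    words = text.split()
    sents = [s for s in text.split(".") if s.strip()]
    return [
        len(words),                                               # text length
        len({w.lower() for w in words}) / max(len(words), 1),     # type-token ratio
        float(np.mean([len(w) for w in words])) if words else 0,  # avg word length
        len(words) / max(len(sents), 1),                          # avg sentence length
    ]

texts = ["A short human-written note.", "A fluent, verbose machine answer."]
X = np.array([features(t) for t in texts])
y = np.array([0, 1])  # 0 = human-written, 1 = machine-generated (toy labels)
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```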
https://aclanthology.org/2024.semeval-1.269.bib
https://aclanthology.org/2024.semeval-1.269/
@inproceedings{alecakir-etal-2024-groningen, title = "{G}roningen Team A at {S}em{E}val-2024 Task 8: Human/Machine Authorship Attribution Using a Combination of Probabilistic and Linguistic Features", author = "Alecakir, Huseyin and Chakraborty, Puja and Henningsson, Pontus and Van Hofslot, Matthijs and Scheuer, Alon", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.269", doi = "10.18653/v1/2024.semeval-1.269", pages = "1926--1932", abstract = "Our approach primarily centers on feature-based systems, where a diverse array of features pertinent to the text{'}s linguistic attributes is extracted. Alongside those, we incorporate token-level probabilistic features which are fed into a Bidirectional Long Short-Term Memory (BiLSTM) model. Both resulting feature arrays are concatenated and fed into our final prediction model. Our method under-performed compared to the baseline, despite the fact that previous attempts by others have successfully used linguistic features for the purpose of discerning machine-generated text. We conclude that our examined subset of linguistically motivated features alongside probabilistic features was not able to contribute almost any performance at all to a hybrid classifier of human and machine texts.", }
Our approach primarily centers on feature-based systems, where a diverse array of features pertinent to the text's linguistic attributes is extracted. Alongside those, we incorporate token-level probabilistic features which are fed into a Bidirectional Long Short-Term Memory (BiLSTM) model. Both resulting feature arrays are concatenated and fed into our final prediction model. Our method underperformed compared to the baseline, despite the fact that previous attempts by others have successfully used linguistic features for the purpose of discerning machine-generated text. We conclude that our examined subset of linguistically motivated features, alongside probabilistic features, contributed almost no performance to a hybrid classifier of human and machine texts.
[ "Alecakir, Huseyin", "Chakraborty, Puja", "Henningsson, Pontus", "Van Hofslot, Matthijs", "Scheuer, Alon" ]
Groningen Team A at SemEval-2024 Task 8: Human/Machine Authorship Attribution Using a Combination of Probabilistic and Linguistic Features
semeval-1.269
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.270.bib
https://aclanthology.org/2024.semeval-1.270/
@inproceedings{kumar-etal-2024-semeval, title = "{S}em{E}val 2024 - Task 10: Emotion Discovery and Reasoning its Flip in Conversation ({ED}i{R}e{F})", author = "Kumar, Shivani and Akhtar, Md. Shad and Cambria, Erik and Chakraborty, Tanmoy", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.270", doi = "10.18653/v1/2024.semeval-1.270", pages = "1933--1946", abstract = "We present SemEval-2024 Task 10, a shared task centred on identifying emotions and finding the rationale behind their flips within monolingual English and Hindi-English code-mixed dialogues. This task comprises three distinct subtasks {--} emotion recognition in conversation for code-mixed dialogues, emotion flip reasoning for code-mixed dialogues, and emotion flip reasoning for English dialogues. Participating systems were tasked to automatically execute one or more of these subtasks. The datasets for these tasks comprise manually annotated conversations focusing on emotions and triggers for emotion shifts. A total of 84 participants engaged in this task, with the most adept systems attaining F1-scores of 0.70, 0.79, and 0.76 for the respective subtasks. This paper summarises the results and findings from 24 teams alongside their system descriptions.", }
We present SemEval-2024 Task 10, a shared task centred on identifying emotions and finding the rationale behind their flips within monolingual English and Hindi-English code-mixed dialogues. This task comprises three distinct subtasks {--} emotion recognition in conversation for code-mixed dialogues, emotion flip reasoning for code-mixed dialogues, and emotion flip reasoning for English dialogues. Participating systems were tasked to automatically execute one or more of these subtasks. The datasets for these tasks comprise manually annotated conversations focusing on emotions and triggers for emotion shifts. A total of 84 participants engaged in this task, with the most adept systems attaining F1-scores of 0.70, 0.79, and 0.76 for the respective subtasks. This paper summarises the results and findings from 24 teams alongside their system descriptions.
[ "Kumar, Shivani", "Akhtar, Md. Shad", "Cambria, Erik", "Chakraborty, Tanmoy" ]
SemEval 2024 - Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF)
semeval-1.270
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.271.bib
https://aclanthology.org/2024.semeval-1.271/
@inproceedings{jullien-etal-2024-semeval, title = "{S}em{E}val-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials", author = "Jullien, Mael and Valentino, Marco and Freitas, Andr{\'e}", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.271", doi = "10.18653/v1/2024.semeval-1.271", pages = "1947--1962", abstract = "Large Language Models (LLMs) are at the forefront of NLP achievements but fall short in dealing with shortcut learning, factual inconsistency, and vulnerability to adversarial inputs. These shortcomings are especially critical in medical contexts, where they can misrepresent actual model capabilities. Addressing this, we present SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. Our contributions include the refined NLI4CT-P dataset (i.e. Natural Language Inference for Clinical Trials - Perturbed), designed to challenge LLMs with interventional and causal reasoning tasks, along with a comprehensive evaluation of methods and results for participant submissions. A total of 106 participants registered for the task, contributing to over 1200 individual submissions and 25 system overview papers. This initiative aims to advance the robustness and applicability of NLI models in healthcare, ensuring safer and more dependable AI assistance in clinical decision-making. We anticipate that the dataset, models, and outcomes of this task can support future research in the field of biomedical NLI. The dataset, competition leaderboard, and website are publicly available.", }
Large Language Models (LLMs) are at the forefront of NLP achievements but fall short in dealing with shortcut learning, factual inconsistency, and vulnerability to adversarial inputs. These shortcomings are especially critical in medical contexts, where they can misrepresent actual model capabilities. Addressing this, we present SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. Our contributions include the refined NLI4CT-P dataset (i.e. Natural Language Inference for Clinical Trials - Perturbed), designed to challenge LLMs with interventional and causal reasoning tasks, along with a comprehensive evaluation of methods and results for participant submissions. A total of 106 participants registered for the task, contributing to over 1200 individual submissions and 25 system overview papers. This initiative aims to advance the robustness and applicability of NLI models in healthcare, ensuring safer and more dependable AI assistance in clinical decision-making. We anticipate that the dataset, models, and outcomes of this task can support future research in the field of biomedical NLI. The dataset, competition leaderboard, and website are publicly available.
[ "Jullien, Mael", "Valentino, Marco", "Freitas, Andr{\\'e}" ]
SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials
semeval-1.271
Poster
2404.04963
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.272.bib
https://aclanthology.org/2024.semeval-1.272/
@inproceedings{ousidhoum-etal-2024-semeval, title = "{S}em{E}val Task 1: Semantic Textual Relatedness for {A}frican and {A}sian Languages", author = "Ousidhoum, Nedjma and Muhammad, Shamsuddeen Hassan and Abdalla, Mohamed and Abdulmumin, Idris and Ahmad, Ibrahim Said and Ahuja, Sanchit and Aji, Alham Fikri and Araujo, Vladimir and Beloucif, Meriem and De Kock, Christine and Hourrane, Oumaima and Shrivastava, Manish and Solorio, Thamar and Surange, Nirmal and Vishnubhotla, Krishnapriya and Yimam, Seid Muhie and Mohammad, Saif M.", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.272", doi = "10.18653/v1/2024.semeval-1.272", pages = "1963--1978", abstract = "We present the first shared task on Semantic Textual Relatedness (STR). While earlier shared tasks primarily focused on semantic similarity, we instead investigate the broader phenomenon of semantic relatedness across 14 languages: Afrikaans, Algerian Arabic, Amharic, English, Hausa, Hindi, Indonesian, Kinyarwanda, Marathi, Moroccan Arabic, Modern Standard Arabic, Punjabi, Spanish, and Telugu. These languages originate from five distinct language families and are predominantly spoken in Africa and Asia {--} regions characterised by the relatively limited availability of NLP resources. Each instance in the datasets is a sentence pair associated with a score that represents the degree of semantic textual relatedness between the two sentences. Participating systems were asked to rank sentence pairs by their closeness in meaning (i.e., their degree of semantic relatedness) in the 14 languages in three main tracks: (a) supervised, (b) unsupervised, and (c) crosslingual. The task attracted 163 participants. We received 70 submissions in total (across all tasks) from 51 different teams, and 38 system description papers. We report on the best-performing systems as well as the most common and the most effective approaches for the three different tracks.", }
We present the first shared task on Semantic Textual Relatedness (STR). While earlier shared tasks primarily focused on semantic similarity, we instead investigate the broader phenomenon of semantic relatedness across 14 languages: Afrikaans, Algerian Arabic, Amharic, English, Hausa, Hindi, Indonesian, Kinyarwanda, Marathi, Moroccan Arabic, Modern Standard Arabic, Punjabi, Spanish, and Telugu. These languages originate from five distinct language families and are predominantly spoken in Africa and Asia {--} regions characterised by the relatively limited availability of NLP resources. Each instance in the datasets is a sentence pair associated with a score that represents the degree of semantic textual relatedness between the two sentences. Participating systems were asked to rank sentence pairs by their closeness in meaning (i.e., their degree of semantic relatedness) in the 14 languages in three main tracks: (a) supervised, (b) unsupervised, and (c) crosslingual. The task attracted 163 participants. We received 70 submissions in total (across all tasks) from 51 different teams, and 38 system description papers. We report on the best-performing systems as well as the most common and the most effective approaches for the three different tracks.
[ "Ousidhoum, Nedjma", "Muhammad, Shamsuddeen Hassan", "Abdalla, Mohamed", "Abdulmumin, Idris", "Ahmad, Ibrahim Said", "Ahuja, Sanchit", "Aji, Alham Fikri", "Araujo, Vladimir", "Beloucif, Meriem", "De Kock, Christine", "Hourrane, Oumaima", "Shrivastava, Manish", "Solorio, Thamar", "Surange, Nirmal", "Vishnubhotla, Krishnapriya", "Yimam, Seid Muhie", "Mohammad, Saif M." ]
SemEval Task 1: Semantic Textual Relatedness for African and Asian Languages
semeval-1.272
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.273.bib
https://aclanthology.org/2024.semeval-1.273/
@inproceedings{mickus-etal-2024-semeval, title = "{S}em{E}val-2024 Task 6: {SHROOM}, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes", author = {Mickus, Timothee and Zosa, Elaine and Vazquez, Raul and Vahtola, Teemu and Tiedemann, J{\"o}rg and Segonne, Vincent and Raganato, Alessandro and Apidianaki, Marianna}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.273", doi = "10.18653/v1/2024.semeval-1.273", pages = "1979--1993", abstract = "This paper presents the results of the SHROOM, a shared task focused on detecting hallucinations: outputs from natural language generation (NLG) systems that are fluent, yet inaccurate. Such cases of overgeneration jeopardize many NLG applications, where correctness is often mission-critical. The shared task was conducted with a newly constructed dataset of 4000 model outputs labeled by 5 annotators each, spanning 3 NLP tasks: machine translation, paraphrase generation and definition modeling. The shared task was tackled by a total of 58 different users grouped in 42 teams, out of which 26 elected to write a system description paper; collectively, they submitted over 300 prediction sets on both tracks of the shared task. We observe a number of key trends in how this task was tackled{---}many participants rely on a handful of models, and often rely either on synthetic data for fine-tuning or on zero-shot prompting strategies. While a majority of the teams did outperform our proposed baseline system, the performances of top-scoring systems are still consistent with a random handling of the more challenging items.", }
This paper presents the results of the SHROOM, a shared task focused on detecting hallucinations: outputs from natural language generation (NLG) systems that are fluent, yet inaccurate. Such cases of overgeneration jeopardize many NLG applications, where correctness is often mission-critical. The shared task was conducted with a newly constructed dataset of 4000 model outputs labeled by 5 annotators each, spanning 3 NLP tasks: machine translation, paraphrase generation and definition modeling. The shared task was tackled by a total of 58 different users grouped in 42 teams, out of which 26 elected to write a system description paper; collectively, they submitted over 300 prediction sets on both tracks of the shared task. We observe a number of key trends in how this task was tackled{---}many participants rely on a handful of models, and often rely either on synthetic data for fine-tuning or on zero-shot prompting strategies. While a majority of the teams did outperform our proposed baseline system, the performances of top-scoring systems are still consistent with a random handling of the more challenging items.
[ "Mickus, Timothee", "Zosa, Elaine", "Vazquez, Raul", "Vahtola, Teemu", "Tiedemann, J{\\\"o}rg", "Segonne, Vincent", "Raganato, Aless", "ro", "Apidianaki, Marianna" ]
SemEval-2024 Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes
semeval-1.273
Poster
[ "https://github.com/ngregoriade/semeval2024-shroom" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.274.bib
https://aclanthology.org/2024.semeval-1.274/
@inproceedings{jiang-etal-2024-semeval, title = "{S}em{E}val-2024 Task 9: {BRAINTEASER}: A Novel Task Defying Common Sense", author = "Jiang, Yifan and Ilievski, Filip and Ma, Kaixin", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.274", doi = "10.18653/v1/2024.semeval-1.274", pages = "1994--2008", abstract = "While vertical thinking relies on logical and commonsense reasoning, lateral thinking requires systems to defy commonsense associations and overwrite them through unconventional thinking. Lateral thinking has been shown to be challenging for current models but has received little attention. A recent benchmark, BRAINTEASER, aims to evaluate current models{'} lateral thinking ability in a zero-shot setting. In this paper, we split the original benchmark to also support a fine-tuning setting and present SemEval Task 9, BRAINTEASER(S), the first task at this competition designed to test the system{'}s reasoning and lateral thinking ability. As a popular task, BRAINTEASER(S){'}s two subtasks received 483 team submissions from 182 participants during the competition. This paper provides a fine-grained system analysis of the competition results, together with a reflection on what this means for the ability of the systems to reason laterally. We hope that the BRAINTEASER(S) subtasks and findings in this paper can stimulate future work on lateral thinking and robust reasoning by computational models.", }
While vertical thinking relies on logical and commonsense reasoning, lateral thinking requires systems to defy commonsense associations and overwrite them through unconventional thinking. Lateral thinking has been shown to be challenging for current models but has received little attention. A recent benchmark, BRAINTEASER, aims to evaluate current models{'} lateral thinking ability in a zero-shot setting. In this paper, we split the original benchmark to also support a fine-tuning setting and present SemEval Task 9, BRAINTEASER(S), the first task at this competition designed to test the system{'}s reasoning and lateral thinking ability. As a popular task, BRAINTEASER(S){'}s two subtasks received 483 team submissions from 182 participants during the competition. This paper provides a fine-grained system analysis of the competition results, together with a reflection on what this means for the ability of the systems to reason laterally. We hope that the BRAINTEASER(S) subtasks and findings in this paper can stimulate future work on lateral thinking and robust reasoning by computational models.
[ "Jiang, Yifan", "Ilievski, Filip", "Ma, Kaixin" ]
SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense
semeval-1.274
Poster
2404.16068
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.275.bib
https://aclanthology.org/2024.semeval-1.275/
@inproceedings{dimitrov-etal-2024-semeval, title = "{S}em{E}val-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes", author = "Dimitrov, Dimitar and Alam, Firoj and Hasanain, Maram and Hasnat, Abul and Silvestri, Fabrizio and Nakov, Preslav and Da San Martino, Giovanni", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.275", doi = "10.18653/v1/2024.semeval-1.275", pages = "2009--2026", abstract = "The automatic identification of misleading and persuasive content has emerged as a significant issue among various stakeholders, including social media platforms, policymakers, and the broader society. To tackle this issue within the context of memes, we organized a shared task at SemEval-2024, focusing on the multilingual detection of persuasion techniques. This paper outlines the dataset, the organization of the task, the evaluation framework, the outcomes, and the systems that participated. The task targets memes in four languages, with the inclusion of three surprise test datasets in Bulgarian, North Macedonian, and Arabic. It encompasses three subtasks: (i) identifying whether a meme utilizes a persuasion technique; (ii) identifying persuasion techniques within the meme{'}s {''}textual content{''}; and (iii) identifying persuasion techniques across both the textual and visual components of the meme (a multimodal task). Furthermore, due to the complex nature of persuasion techniques, we present a hierarchy that groups the 22 persuasion techniques into several levels of categories. This became one of the most attractive shared tasks in SemEval 2024, with 153 teams registered, 48 teams submitting results, and finally, 32 system description papers submitted.", }
The automatic identification of misleading and persuasive content has emerged as a significant issue among various stakeholders, including social media platforms, policymakers, and the broader society. To tackle this issue within the context of memes, we organized a shared task at SemEval-2024, focusing on the multilingual detection of persuasion techniques. This paper outlines the dataset, the organization of the task, the evaluation framework, the outcomes, and the systems that participated. The task targets memes in four languages, with the inclusion of three surprise test datasets in Bulgarian, North Macedonian, and Arabic. It encompasses three subtasks: (i) identifying whether a meme utilizes a persuasion technique; (ii) identifying persuasion techniques within the meme{'}s {''}textual content{''}; and (iii) identifying persuasion techniques across both the textual and visual components of the meme (a multimodal task). Furthermore, due to the complex nature of persuasion techniques, we present a hierarchy that groups the 22 persuasion techniques into several levels of categories. This became one of the most attractive shared tasks in SemEval 2024, with 153 teams registered, 48 teams submitting results, and finally, 32 system description papers submitted.
[ "Dimitrov, Dimitar", "Alam, Firoj", "Hasanain, Maram", "Hasnat, Abul", "Silvestri, Fabrizio", "Nakov, Preslav", "Da San Martino, Giovanni" ]
SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes
semeval-1.275
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.276.bib
https://aclanthology.org/2024.semeval-1.276/
@inproceedings{held-habernal-2024-semeval, title = "{S}em{E}val-2024 Task 5: Argument Reasoning in Civil Procedure", author = "Held, Lena and Habernal, Ivan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.276", doi = "10.18653/v1/2024.semeval-1.276", pages = "2027--2038", abstract = "This paper describes the results of SemEval-2024 Task 5: Argument Reasoning in Civil Procedure, consisting of a single task on judging and reasoning about the answers to questions in U.S. civil procedure. The dataset for this task contains question, answer and explanation pairs taken from The Glannon Guide To Civil Procedure (Glannon, 2018). The task was to classify, in a binary manner, whether the answer is a correct choice for the question. Twenty participants submitted their solutions, with the best results achieving a remarkable 82.31{\%} F1-score. We summarize and analyze the results from all participating systems and provide an overview of the systems of 14 participants.", }
This paper describes the results of SemEval-2024 Task 5: Argument Reasoning in Civil Procedure, consisting of a single task on judging and reasoning about the answers to questions in U.S. civil procedure. The dataset for this task contains question, answer and explanation pairs taken from The Glannon Guide To Civil Procedure (Glannon, 2018). The task was to classify, in a binary manner, whether the answer is a correct choice for the question. Twenty participants submitted their solutions, with the best results achieving a remarkable 82.31{\%} F1-score. We summarize and analyze the results from all participating systems and provide an overview of the systems of 14 participants.
[ "Held, Lena", "Habernal, Ivan" ]
SemEval-2024 Task 5: Argument Reasoning in Civil Procedure
semeval-1.276
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.277.bib
https://aclanthology.org/2024.semeval-1.277/
@inproceedings{wang-etal-2024-semeval, title = "{S}em{E}val-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations", author = "Wang, Fanfan and Ma, Heqing and Xia, Rui and Yu, Jianfei and Cambria, Erik", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.277", doi = "10.18653/v1/2024.semeval-1.277", pages = "2039--2050", abstract = "The ability to understand emotions is an essential component of human-like artificial intelligence, as emotions greatly influence human cognition, decision making, and social interactions. In addition to emotion recognition in conversations, the task of identifying the potential causes behind an individual{'}s emotional state in conversations is of great importance in many application scenarios. We organize SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, which aims at extracting all pairs of emotions and their corresponding causes from conversations. Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE). The shared task has attracted 143 registrations and 216 successful submissions. In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.", }
The ability to understand emotions is an essential component of human-like artificial intelligence, as emotions greatly influence human cognition, decision making, and social interactions. In addition to emotion recognition in conversations, the task of identifying the potential causes behind an individual{'}s emotional state in conversations is of great importance in many application scenarios. We organize SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, which aims at extracting all pairs of emotions and their corresponding causes from conversations. Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE). The shared task has attracted 143 registrations and 216 successful submissions. In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.
[ "Wang, Fanfan", "Ma, Heqing", "Xia, Rui", "Yu, Jianfei", "Cambria, Erik" ]
SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations
semeval-1.277
Poster
2405.13049
[ "https://github.com/nustm/semeval-2024_ecac" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.278.bib
https://aclanthology.org/2024.semeval-1.278/
@inproceedings{grimshaw-etal-2024-sheffieldveraai, title = "{S}heffield{V}era{AI} at {S}em{E}val-2024 Task 4: Prompting and fine-tuning a Large Vision-Language Model for Binary Classification of Persuasion Techniques in Memes", author = "Grimshaw, Charlie and Bontcheva, Kalina and Song, Xingyi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.278", doi = "10.18653/v1/2024.semeval-1.278", pages = "2051--2056", abstract = "This paper describes our approach for SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes. Specifically, we concentrate on Subtask 2b, a binary classification challenge that entails categorizing memes as either {``}propagandistic{''} or {``}non-propagandistic{''}. To address this task, we utilized the large multimodal pretrained model, LLaVa. We explored various prompting strategies and fine-tuning methods, and observed that the model, when not fine-tuned but provided with a few-shot learning examples, achieved the best performance. Additionally, we enhanced the model{'}s multilingual capabilities by integrating a machine translation model. Our system secured the 2nd place in the Arabic language category.", }
This paper describes our approach for SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes. Specifically, we concentrate on Subtask 2b, a binary classification challenge that entails categorizing memes as either {``}propagandistic{''} or {``}non-propagandistic{''}. To address this task, we utilized the large multimodal pretrained model, LLaVa. We explored various prompting strategies and fine-tuning methods, and observed that the model, when not fine-tuned but provided with a few-shot learning examples, achieved the best performance. Additionally, we enhanced the model{'}s multilingual capabilities by integrating a machine translation model. Our system secured the 2nd place in the Arabic language category.
[ "Grimshaw, Charlie", "Bontcheva, Kalina", "Song, Xingyi" ]
SheffieldVeraAI at SemEval-2024 Task 4: Prompting and fine-tuning a Large Vision-Language Model for Binary Classification of Persuasion Techniques in Memes
semeval-1.278
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.279.bib
https://aclanthology.org/2024.semeval-1.279/
@inproceedings{wang-etal-2024-semeval-2024, title = "{S}em{E}val-2024 Task 8: Multidomain, Multimodel and Multilingual Machine-Generated Text Detection", author = "Wang, Yuxia and Mansurov, Jonibek and Ivanov, Petar and Su, Jinyan and Shelmanov, Artem and Tsvigun, Akim and Mohammed Afzal, Osama and Mahmoud, Tarek and Puccetti, Giovanni and Arnold, Thomas", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.279", doi = "10.18653/v1/2024.semeval-1.279", pages = "2057--2079", abstract = "We present the results and the main findings of SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection. The task featured three subtasks. Subtask A is a binary classification task determining whether a text is written by a human or generated by a machine. This subtask has two tracks: a monolingual track focused solely on English texts and a multilingual track. Subtask B is to detect the exact source of a text, discerning whether it is written by a human or generated by a specific LLM. Subtask C aims to identify the changing point within a text, at which the authorship transitions from human to machine. The task attracted a large number of participants: subtask A monolingual (126), subtask A multilingual (59), subtask B (70), and subtask C (30). In this paper, we present the task, analyze the results, and discuss the system submissions and the methods they used. For all subtasks, the best systems used LLMs.", }
We present the results and the main findings of SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection. The task featured three subtasks. Subtask A is a binary classification task determining whether a text is written by a human or generated by a machine. This subtask has two tracks: a monolingual track focused solely on English texts and a multilingual track. Subtask B is to detect the exact source of a text, discerning whether it is written by a human or generated by a specific LLM. Subtask C aims to identify the changing point within a text, at which the authorship transitions from human to machine. The task attracted a large number of participants: subtask A monolingual (126), subtask A multilingual (59), subtask B (70), and subtask C (30). In this paper, we present the task, analyze the results, and discuss the system submissions and the methods they used. For all subtasks, the best systems used LLMs.
[ "Wang, Yuxia", "Mansurov, Jonibek", "Ivanov, Petar", "Su, Jinyan", "Shelmanov, Artem", "Tsvigun, Akim", "Mohammed Afzal, Osama", "Mahmoud, Tarek", "Puccetti, Giovanni", "Arnold, Thomas" ]
SemEval-2024 Task 8: Multidomain, Multimodel and Multilingual Machine-Generated Text Detection
semeval-1.279
Poster
2404.14183
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.1.bib
https://aclanthology.org/2024.sigmorphon-1.1/
@inproceedings{herce-2024-velepa, title = "{V}e{L}e{P}a: a Verbal Lexicon of {P}ame", author = "Herce, Borja", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.1", doi = "10.18653/v1/2024.sigmorphon-1.1", pages = "1--6", abstract = "This paper presents VeLePa, an inflected verbal lexicon of Central Pame (pbs, cent2154), an Otomanguean language from Mexico. This resource contains 12528 words in phonological form representing the complete inflectional paradigms of 216 verbs, supplemented with use frequencies. Computer-operable (CLDF) inflected lexicons of non-WEIRD underresourced languages are urgently needed to expand digital capacities in these languages (e.g. in NLP). VeLePa contributes to this, and does so with data from a language which is morphologically extraordinary, with unusually high levels of irregularity and multiple conjugations at various loci within the word: prefixes, stems, tone, and suffixes constitute different albeit interrelated subsystems of inflection.", }
This paper presents VeLePa, an inflected verbal lexicon of Central Pame (pbs, cent2154), an Otomanguean language from Mexico. This resource contains 12528 words in phonological form representing the complete inflectional paradigms of 216 verbs, supplemented with use frequencies. Computer-operable (CLDF) inflected lexicons of non-WEIRD underresourced languages are urgently needed to expand digital capacities in these languages (e.g. in NLP). VeLePa contributes to this, and does so with data from a language which is morphologically extraordinary, with unusually high levels of irregularity and multiple conjugations at various loci within the word: prefixes, stems, tone, and suffixes constitute different albeit interrelated subsystems of inflection.
[ "Herce, Borja" ]
VeLePa: a Verbal Lexicon of Pame
sigmorphon-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.2.bib
https://aclanthology.org/2024.sigmorphon-1.2/
@inproceedings{matsuzaki-etal-2024-j, title = "{J}-{U}ni{M}orph: {J}apanese Morphological Annotation through the Universal Feature Schema", author = "Matsuzaki, Kosuke and Taniguchi, Masaya and Inui, Kentaro and Sakaguchi, Keisuke", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.2", doi = "10.18653/v1/2024.sigmorphon-1.2", pages = "7--19", abstract = "We introduce a Japanese Morphology dataset, J-UniMorph, developed based on the UniMorph feature schema. This dataset addresses the unique and rich verb forms characteristic of the language{'}s agglutinative nature. J-UniMorph distinguishes itself from the existing Japanese subset of UniMorph, which is automatically extracted from Wiktionary. On average, the Wiktionary Edition features around 12 inflected forms for each word and is primarily dominated by denominal verbs (i.e., [noun] + suru (do-PRS)). Morphologically, this inflection pattern is the same as the verb suru (do). In contrast, J-UniMorph explores a much broader and more frequently used range of verb forms, offering 118 inflected forms for each word on average. It includes honorifics, a range of politeness levels, and other linguistic nuances, emphasizing the distinctive characteristics of the Japanese language. This paper presents detailed statistics and characteristics of J-UniMorph, comparing it with the Wiktionary Edition. We will make J-UniMorph and its interactive visualizer publicly available, aiming to support cross-linguistic research and various applications.", }
We introduce a Japanese Morphology dataset, J-UniMorph, developed based on the UniMorph feature schema. This dataset addresses the unique and rich verb forms characteristic of the language{'}s agglutinative nature. J-UniMorph distinguishes itself from the existing Japanese subset of UniMorph, which is automatically extracted from Wiktionary. On average, the Wiktionary Edition features around 12 inflected forms for each word and is primarily dominated by denominal verbs (i.e., [noun] + suru (do-PRS)). Morphologically, this inflection pattern is the same as the verb suru (do). In contrast, J-UniMorph explores a much broader and more frequently used range of verb forms, offering 118 inflected forms for each word on average. It includes honorifics, a range of politeness levels, and other linguistic nuances, emphasizing the distinctive characteristics of the Japanese language. This paper presents detailed statistics and characteristics of J-UniMorph, comparing it with the Wiktionary Edition. We will make J-UniMorph and its interactive visualizer publicly available, aiming to support cross-linguistic research and various applications.
[ "Matsuzaki, Kosuke", "Taniguchi, Masaya", "Inui, Kentaro", "Sakaguchi, Keisuke" ]
J-UniMorph: Japanese Morphological Annotation through the Universal Feature Schema
sigmorphon-1.2
Poster
2402.14411
[ "https://github.com/cl-tohoku/j-unimorph" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.3.bib
https://aclanthology.org/2024.sigmorphon-1.3/
@inproceedings{varatharaj-todd-2024-just, title = "More than Just Statistical Recurrence: Human and Machine Unsupervised Learning of {M}{\=a}ori Word Segmentation across Morphological Processes", author = "Varatharaj, Ashvini and Todd, Simon", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.3", doi = "10.18653/v1/2024.sigmorphon-1.3", pages = "20--31", abstract = "Non-M{\=a}ori-speaking New Zealanders (NMS) are able to segment M{\=a}ori words in a highly similar way to fluent speakers (Panther et al., 2024). This ability is assumed to derive through the identification and extraction of statistically recurrent forms. We examine this assumption by asking how NMS segmentations compare to those produced by Morfessor, an unsupervised machine learning model that operates based on statistical recurrence, across words formed by a variety of morphological processes. Both NMS and Morfessor succeed in segmenting words formed by concatenative processes (compounding and affixation without allomorphy), but NMS also succeed for words that invoke templates (reduplication and allomorphy) and other cues to morphological structure, implying that their learning process is sensitive to more than just statistical recurrence.", }
Non-M{\=a}ori-speaking New Zealanders (NMS) are able to segment M{\=a}ori words in a highly similar way to fluent speakers (Panther et al., 2024). This ability is assumed to derive through the identification and extraction of statistically recurrent forms. We examine this assumption by asking how NMS segmentations compare to those produced by Morfessor, an unsupervised machine learning model that operates based on statistical recurrence, across words formed by a variety of morphological processes. Both NMS and Morfessor succeed in segmenting words formed by concatenative processes (compounding and affixation without allomorphy), but NMS also succeed for words that invoke templates (reduplication and allomorphy) and other cues to morphological structure, implying that their learning process is sensitive to more than just statistical recurrence.
[ "Varatharaj, Ashvini", "Todd, Simon" ]
More than Just Statistical Recurrence: Human and Machine Unsupervised Learning of Māori Word Segmentation across Morphological Processes
sigmorphon-1.3
Poster
2403.14444
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.4.bib
https://aclanthology.org/2024.sigmorphon-1.4/
@inproceedings{arnett-etal-2024-different, title = "Different Tokenization Schemes Lead to Comparable Performance in {S}panish Number Agreement", author = "Arnett, Catherine and Chang, Tyler and Trott, Sean", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.4", doi = "10.18653/v1/2024.sigmorphon-1.4", pages = "32--38", abstract = "The relationship between language model tokenization and performance is an open area of research. Here, we investigate how different tokenization schemes impact number agreement in Spanish plurals. We find that morphologically-aligned tokenization performs similarly to other tokenization schemes, even when induced artificially for words that would not be tokenized that way during training. We then present exploratory analyses demonstrating that language model embeddings for different plural tokenizations have similar distributions along the embedding space axis that maximally distinguishes singular and plural nouns. Our results suggest that morphologically-aligned tokenization is a viable tokenization approach, and existing models already generalize some morphological patterns to new items. However, our results indicate that morphological tokenization is not strictly required for performance.", }
The relationship between language model tokenization and performance is an open area of research. Here, we investigate how different tokenization schemes impact number agreement in Spanish plurals. We find that morphologically-aligned tokenization performs similarly to other tokenization schemes, even when induced artificially for words that would not be tokenized that way during training. We then present exploratory analyses demonstrating that language model embeddings for different plural tokenizations have similar distributions along the embedding space axis that maximally distinguishes singular and plural nouns. Our results suggest that morphologically-aligned tokenization is a viable tokenization approach, and existing models already generalize some morphological patterns to new items. However, our results indicate that morphological tokenization is not strictly required for performance.
[ "Arnett, Catherine", "Chang, Tyler", "Trott, Sean" ]
Different Tokenization Schemes Lead to Comparable Performance in Spanish Number Agreement
sigmorphon-1.4
Poster
2403.13754
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.5.bib
https://aclanthology.org/2024.sigmorphon-1.5/
@inproceedings{kezerian-yu-2024-ye, title = "Ye Olde {F}rench: Effect of Old and {M}iddle {F}rench on {SIGMORPHON}-{U}ni{M}orph Shared Task Data", author = "Kezerian, William and Yu, Kristine", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.5", doi = "10.18653/v1/2024.sigmorphon-1.5", pages = "39--50", abstract = "We offer one explanation for the historically low performance of French in the SIGMORPHON-UniMorph shared tasks. We conducted experiments replicating the 2023 task on French with the non-neural and neural baselines, first using the original task splits, and then using splits that excluded Old and Middle French lemmas. We applied a taxonomy to our errors using a framework based on Kyle Gorman{'}s {``}Weird Inflects but OK{''} 2019 annotation scheme, finding that a large portion of the French errors produced with the original splits were due to the inclusion of Old French forms, which was resolved with cleaned data.", }
We offer one explanation for the historically low performance of French in the SIGMORPHON-UniMorph shared tasks. We conducted experiments replicating the 2023 task on French with the non-neural and neural baselines, first using the original task splits, and then using splits that excluded Old and Middle French lemmas. We applied a taxonomy to our errors using a framework based on Kyle Gorman{'}s {``}Weird Inflects but OK{''} 2019 annotation scheme, finding that a large portion of the French errors produced with the original splits were due to the inclusion of Old French forms, which was resolved with cleaned data.
[ "Kezerian, William", "Yu, Kristine" ]
Ye Olde French: Effect of Old and Middle French on SIGMORPHON-UniMorph Shared Task Data
sigmorphon-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.6.bib
https://aclanthology.org/2024.sigmorphon-1.6/
@inproceedings{salehi-jacobs-2024-effect, title = "The Effect of Model Capacity and Script Diversity on Subword Tokenization for {S}orani {K}urdish", author = "Salehi, Ali and Jacobs, Cassandra L.", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.6", doi = "10.18653/v1/2024.sigmorphon-1.6", pages = "51--56", abstract = "Tokenization and morphological segmentation continue to pose challenges for text processing and studies of human language. Here, we focus on written Soran{\^\i} Kurdish, which uses a modified script based on Persian and Arabic, and its transliterations into the Kurdish Latin script. Importantly, Perso-Arabic and Latin-based writing systems demonstrate different statistical and structural properties, which may have significant effects on subword vocabulary learning. This has major consequences for frequency- or probability-based models of morphological induction. We explore the possibility that jointly training subword vocabularies using a source script along with its transliteration would improve morphological segmentation, subword tokenization, and whether gains are observed for one system over others. We find that joint training has a similar effect to increasing vocabulary size, while keeping subwords shorter in length, which produces higher-quality subwords that map onto morphemes.", }
Tokenization and morphological segmentation continue to pose challenges for text processing and studies of human language. Here, we focus on written Soran{\^\i} Kurdish, which uses a modified script based on Persian and Arabic, and its transliterations into the Kurdish Latin script. Importantly, Perso-Arabic and Latin-based writing systems demonstrate different statistical and structural properties, which may have significant effects on subword vocabulary learning. This has major consequences for frequency- or probability-based models of morphological induction. We explore the possibility that jointly training subword vocabularies using a source script along with its transliteration would improve morphological segmentation, subword tokenization, and whether gains are observed for one system over others. We find that joint training has a similar effect to increasing vocabulary size, while keeping subwords shorter in length, which produces higher-quality subwords that map onto morphemes.
[ "Salehi, Ali", "Jacobs, Cass", "ra L." ]
The Effect of Model Capacity and Script Diversity on Subword Tokenization for Sorani Kurdish
sigmorphon-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.7.bib
https://aclanthology.org/2024.sigmorphon-1.7/
@inproceedings{ginn-palmer-2024-decomposing, title = "Decomposing Fusional Morphemes with Vector Embeddings", author = "Ginn, Michael and Palmer, Alexis", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.7", doi = "10.18653/v1/2024.sigmorphon-1.7", pages = "57--66", abstract = "Distributional approaches have proven effective in modeling semantics and phonology through vector embeddings. We explore whether distributional representations can also effectively model morphological information. We train static vector embeddings over morphological sequences. Then, we explore morpheme categories for fusional morphemes, which encode multiple linguistic dimensions, and often have close relationships to other morphemes. We study whether the learned vector embeddings align with these linguistic dimensions, finding strong evidence that this is the case. Our work uses two low-resource languages, Uspanteko and Tsez, demonstrating that distributional morphological representations are effective even with limited data.", }
Distributional approaches have proven effective in modeling semantics and phonology through vector embeddings. We explore whether distributional representations can also effectively model morphological information. We train static vector embeddings over morphological sequences. Then, we explore morpheme categories for fusional morphemes, which encode multiple linguistic dimensions, and often have close relationships to other morphemes. We study whether the learned vector embeddings align with these linguistic dimensions, finding strong evidence that this is the case. Our work uses two low-resource languages, Uspanteko and Tsez, demonstrating that distributional morphological representations are effective even with limited data.
[ "Ginn, Michael", "Palmer, Alexis" ]
Decomposing Fusional Morphemes with Vector Embeddings
sigmorphon-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.8.bib
https://aclanthology.org/2024.sigmorphon-1.8/
@inproceedings{mailhot-jacobs-2024-acoustic, title = "Acoustic barycenters as exemplar production targets", author = "Mailhot, Frederic and Jacobs, Cassandra L.", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.8", doi = "10.18653/v1/2024.sigmorphon-1.8", pages = "67--76", abstract = "We present a solution to the problem of exemplar-based language production from variable-duration tokens, leveraging algorithms from the domain of time-series clustering and classification. Our model stores and outputs tokens of phonetically rich and temporally variable representations of recorded speech. We show qualitatively and quantitatively that model outputs retain essential acoustic/phonetic characteristics despite the noise introduced by averaging, and also demonstrate the effects of similarity and indexical information as constraints on exemplar cloud selection.", }
We present a solution to the problem of exemplar-based language production from variable-duration tokens, leveraging algorithms from the domain of time-series clustering and classification. Our model stores and outputs tokens of phonetically rich and temporally variable representations of recorded speech. We show qualitatively and quantitatively that model outputs retain essential acoustic/phonetic characteristics despite the noise introduced by averaging, and also demonstrate the effects of similarity and indexical information as constraints on exemplar cloud selection.
[ "Mailhot, Frederic", "Jacobs, Cass", "ra L." ]
Acoustic barycenters as exemplar production targets
sigmorphon-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigmorphon-1.9.bib
https://aclanthology.org/2024.sigmorphon-1.9/
@inproceedings{matogawa-etal-2024-japanese, title = "{J}apanese Rule-based Grapheme-to-phoneme Conversion System and Multilingual Named Entity Dataset with International Phonetic Alphabet", author = "Matogawa, Yuhi and Sakai, Yusuke and Watanabe, Taro and Taguchi, Chihiro", editor = {Nicolai, Garrett and Chodroff, Eleanor and Mailhot, Frederic and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, booktitle = "Proceedings of the 21st SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigmorphon-1.9", doi = "10.18653/v1/2024.sigmorphon-1.9", pages = "77--86", abstract = "In Japanese, loanwords are primarily written in Katakana, a syllabic writing system, based on their pronunciation. However, the transliterated loanwords often exhibit spelling variations, such as the word {``}Hepburn{''} being written as {``}ヘボン (hebon){''}, {``}ヘプバーン (hepubaan){''}, {``}ヘップバーン (heppubaan){''}. These orthographical variants pose a bottleneck in multilingual Named Entity Recognition (NER), because named entities (NEs) do not have one-to-one matches. In this study, we introduce a rule-based grapheme-to-phoneme (G2P) system for Japanese based on literature in linguistics and a large-scale multilingual NE dataset with annotations of the International Phonetic Alphabet (IPA), focusing on IPA to address the Katakana spelling variations in loanwords. These rules and dataset are expected to be beneficial for tasks such as NE aggregation, G2P systems, construction of cross-lingual language models, and entity linking. We hope our work advances research on Japanese NER with multilingual loanwords by resolving the spelling ambiguities.", }
In Japanese, loanwords are primarily written in Katakana, a syllabic writing system, based on their pronunciation. However, the transliterated loanwords often exhibit spelling variations, such as the word {``}Hepburn{''} being written as {``}ヘボン (hebon){''}, {``}ヘプバーン (hepubaan){''}, {``}ヘップバーン (heppubaan){''}. These orthographical variants pose a bottleneck in multilingual Named Entity Recognition (NER), because named entities (NEs) do not have one-to-one matches. In this study, we introduce a rule-based grapheme-to-phoneme (G2P) system for Japanese based on literature in linguistics and a large-scale multilingual NE dataset with annotations of the International Phonetic Alphabet (IPA), focusing on IPA to address the Katakana spelling variations in loanwords. These rules and dataset are expected to be beneficial for tasks such as NE aggregation, G2P systems, construction of cross-lingual language models, and entity linking. We hope our work advances research on Japanese NER with multilingual loanwords by resolving the spelling ambiguities.
[ "Matogawa, Yuhi", "Sakai, Yusuke", "Watanabe, Taro", "Taguchi, Chihiro" ]
Japanese Rule-based Grapheme-to-phoneme Conversion System and Multilingual Named Entity Dataset with International Phonetic Alphabet
sigmorphon-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.1.bib
https://aclanthology.org/2024.starsem-1.1/
@inproceedings{regan-etal-2024-massive, title = "{MASSIVE} Multilingual {A}bstract {M}eaning {R}epresentation: A Dataset and Baselines for Hallucination Detection", author = "Regan, Michael and Wein, Shira and Baker, George and Monti, Emilio", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.1", doi = "10.18653/v1/2024.starsem-1.1", pages = "1--17", abstract = "Abstract Meaning Representation (AMR) is a semantic formalism that captures the core meaning of an utterance. There has been substantial work developing AMR corpora in English and more recently across languages, though the limited size of existing datasets and the cost of collecting more annotations are prohibitive. With both engineering and scientific questions in mind, we introduce MASSIVE-AMR, a dataset with more than 84,000 text-to-graph annotations, currently the largest and most diverse of its kind: AMR graphs for 1,685 information-seeking utterances mapped to 50+ typologically diverse languages. We describe how we built our resource and its unique features before reporting on experiments using large language models for multilingual AMR and SPARQL parsing as well as applying AMRs for hallucination detection in the context of knowledge base question answering, with results shedding light on persistent issues using LLMs for structured parsing.", }
Abstract Meaning Representation (AMR) is a semantic formalism that captures the core meaning of an utterance. There has been substantial work developing AMR corpora in English and more recently across languages, though the limited size of existing datasets and the cost of collecting more annotations are prohibitive. With both engineering and scientific questions in mind, we introduce MASSIVE-AMR, a dataset with more than 84,000 text-to-graph annotations, currently the largest and most diverse of its kind: AMR graphs for 1,685 information-seeking utterances mapped to 50+ typologically diverse languages. We describe how we built our resource and its unique features before reporting on experiments using large language models for multilingual AMR and SPARQL parsing as well as applying AMRs for hallucination detection in the context of knowledge base question answering, with results shedding light on persistent issues using LLMs for structured parsing.
[ "Regan, Michael", "Wein, Shira", "Baker, George", "Monti, Emilio" ]
MASSIVE Multilingual Abstract Meaning Representation: A Dataset and Baselines for Hallucination Detection
starsem-1.1
Poster
2405.19285
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.2.bib
https://aclanthology.org/2024.starsem-1.2/
@inproceedings{fraser-etal-2024-stereotype, title = "How Does Stereotype Content Differ across Data Sources?", author = "Fraser, Kathleen and Kiritchenko, Svetlana and Nejadgholi, Isar", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.2", doi = "10.18653/v1/2024.starsem-1.2", pages = "18--34", abstract = "For decades, psychologists have been studying stereotypes using specially-designed rating scales to capture people{'}s beliefs and opinions about different social groups. Now, using NLP tools on extensive collections of text, we have the opportunity to study stereotypes {``}in the wild{''} and on a large scale. However, are we truly capturing the same information? In this paper we compare measurements along six psychologically-motivated, stereotype-relevant dimensions (Sociability, Morality, Ability, Assertiveness, Beliefs, and Status) for 10 groups, defined by occupation. We compute these measurements on stereotypical English sentences written by crowd-workers, stereotypical sentences generated by ChatGPT, and more general data collected from social media, and contrast the findings with traditional, survey-based results, as well as a spontaneous word-list generation task. We find that while the correlation with the traditional scales varies across dimensions, the free-text data can be used to specify the particular traits associated with each group, and provide context for numerical survey data.", }
For decades, psychologists have been studying stereotypes using specially-designed rating scales to capture people{'}s beliefs and opinions about different social groups. Now, using NLP tools on extensive collections of text, we have the opportunity to study stereotypes {``}in the wild{''} and on a large scale. However, are we truly capturing the same information? In this paper we compare measurements along six psychologically-motivated, stereotype-relevant dimensions (Sociability, Morality, Ability, Assertiveness, Beliefs, and Status) for 10 groups, defined by occupation. We compute these measurements on stereotypical English sentences written by crowd-workers, stereotypical sentences generated by ChatGPT, and more general data collected from social media, and contrast the findings with traditional, survey-based results, as well as a spontaneous word-list generation task. We find that while the correlation with the traditional scales varies across dimensions, the free-text data can be used to specify the particular traits associated with each group, and provide context for numerical survey data.
[ "Fraser, Kathleen", "Kiritchenko, Svetlana", "Nejadgholi, Isar" ]
How Does Stereotype Content Differ across Data Sources?
starsem-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.3.bib
https://aclanthology.org/2024.starsem-1.3/
@inproceedings{bruera-etal-2024-polysemy, title = "Polysemy through the lens of psycholinguistic variables: a dataset and an evaluation of static and contextualized language models", author = "Bruera, Andrea and Zamani, Farbod and Poesio, Massimo", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.3", doi = "10.18653/v1/2024.starsem-1.3", pages = "35--48", abstract = "Polysemes are words that can have different senses depending on the context of utterance: for instance, {`}newspaper{'} can refer to an organization (as in {`}manage the newspaper{'}) or to an object (as in {`}open the newspaper{'}). Contrary to a large body of evidence coming from psycholinguistics, polysemy has been traditionally modelled in NLP by assuming that each sense should be given a separate representation in a lexicon (e.g. WordNet). This led to the current situation, where datasets used to evaluate the ability of computational models of semantics miss crucial details about the representation of polysemes, thus limiting the amount of evidence that can be gained from their use. In this paper we propose a framework to approach polysemy as a continuous variation in psycholinguistic properties of a word in context. This approach accommodates different sense interpretations, without postulating clear-cut jumps between senses. First we describe a publicly available English dataset that we collected, where polysemes in context (verb-noun phrases) are annotated for their concreteness and body sensory strength. Then, we evaluate static and contextualized language models in their ability to predict the ratings of each polyseme in context, as well as in their ability to capture the distinction among senses, revealing and characterizing in an interpretable way the models{'} flaws.", }
Polysemes are words that can have different senses depending on the context of utterance: for instance, {`}newspaper{'} can refer to an organization (as in {`}manage the newspaper{'}) or to an object (as in {`}open the newspaper{'}). Contrary to a large body of evidence coming from psycholinguistics, polysemy has been traditionally modelled in NLP by assuming that each sense should be given a separate representation in a lexicon (e.g. WordNet). This led to the current situation, where datasets used to evaluate the ability of computational models of semantics miss crucial details about the representation of polysemes, thus limiting the amount of evidence that can be gained from their use. In this paper we propose a framework to approach polysemy as a continuous variation in psycholinguistic properties of a word in context. This approach accommodates different sense interpretations, without postulating clear-cut jumps between senses. First we describe a publicly available English dataset that we collected, where polysemes in context (verb-noun phrases) are annotated for their concreteness and body sensory strength. Then, we evaluate static and contextualized language models in their ability to predict the ratings of each polyseme in context, as well as in their ability to capture the distinction among senses, revealing and characterizing in an interpretable way the models{'} flaws.
[ "Bruera, Andrea", "Zamani, Farbod", "Poesio, Massimo" ]
Polysemy through the lens of psycholinguistic variables: a dataset and an evaluation of static and contextualized language models
starsem-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.4.bib
https://aclanthology.org/2024.starsem-1.4/
@inproceedings{sancheti-etal-2024-post, title = "Post-Hoc Answer Attribution for Grounded and Trustworthy Long Document Comprehension: Task, Insights, and Challenges", author = "Sancheti, Abhilasha and Goswami, Koustava and Srinivasan, Balaji", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.4", doi = "10.18653/v1/2024.starsem-1.4", pages = "49--57", abstract = "Attributing answer text to its source document for information-seeking questions is crucial for building trustworthy, reliable, and accountable systems. We formulate a new task of post-hoc answer attribution for long document comprehension (LDC). Owing to the lack of long-form abstractive and information-seeking LDC datasets, we refactor existing datasets to assess the strengths and weaknesses of existing retrieval-based attribution systems and of our proposed answer-decomposition and textual-entailment-based optimal-selection attribution systems for this task. We shed light on the limitations of existing datasets and the need for datasets that assess the actual performance of systems on this task.", }
Attributing answer text to its source document for information-seeking questions is crucial for building trustworthy, reliable, and accountable systems. We formulate a new task of post-hoc answer attribution for long document comprehension (LDC). Owing to the lack of long-form abstractive and information-seeking LDC datasets, we refactor existing datasets to assess the strengths and weaknesses of existing retrieval-based attribution systems and of our proposed answer-decomposition and textual-entailment-based optimal-selection attribution systems for this task. We shed light on the limitations of existing datasets and the need for datasets that assess the actual performance of systems on this task.
[ "Sancheti, Abhilasha", "Goswami, Koustava", "Srinivasan, Balaji" ]
Post-Hoc Answer Attribution for Grounded and Trustworthy Long Document Comprehension: Task, Insights, and Challenges
starsem-1.4
Poster
2406.06938
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
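A minimal sketch of the retrieval-based style of attribution system mentioned in the abstract above: split the answer into sentences and attribute each to the most similar source sentence. The checkpoint and example texts are illustrative, and this omits the paper's decomposition- and entailment-based selection.

```python
from sentence_transformers import SentenceTransformer, util

# Sketch of retrieval-based post-hoc answer attribution: attribute each
# answer sentence to the most similar document sentence. Toy data; the
# checkpoint choice is an assumption, not the paper's setup.
model = SentenceTransformer("all-MiniLM-L6-v2")

document = ["The Amazon is the largest rainforest on Earth.",
            "It spans nine countries in South America.",
            "Deforestation there accelerated in the 2000s."]
answer = ["The Amazon rainforest covers parts of nine countries."]

doc_emb = model.encode(document, convert_to_tensor=True)
for sent in answer:
    sim = util.cos_sim(model.encode(sent, convert_to_tensor=True), doc_emb)
    best = int(sim.argmax())
    print(f"{sent!r} -> attributed to: {document[best]!r}")
```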
https://aclanthology.org/2024.starsem-1.5.bib
https://aclanthology.org/2024.starsem-1.5/
@inproceedings{uematsu-etal-2024-benchmark, title = "A Benchmark Suite of {J}apanese Natural Questions", author = "Uematsu, Takuya and Wang, Hao and Kawahara, Daisuke and Shibata, Tomohide", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.5", doi = "10.18653/v1/2024.starsem-1.5", pages = "58--68", abstract = "To develop high-performance and robust natural language processing (NLP) models, it is important to have various question answering (QA) datasets to train, evaluate, and analyze them. Although there are various QA datasets available in English, there are only a few QA datasets in other languages. We focus on Japanese, a language with only a few basic QA datasets, and aim to build a Japanese version of Natural Questions (NQ) consisting of questions that naturally arise from human information needs. We collect natural questions from query logs of a Japanese search engine and build the dataset using crowdsourcing. We construct Japanese Natural Questions (JNQ) and a Japanese version of BoolQ (JBoolQ), which is derived from NQ and consists of yes/no questions. JNQ consists of 16,871 questions, and JBoolQ consists of 6,467 questions. We also define two tasks from JNQ and one from JBoolQ and establish baselines using competitive methods drawn from related literature. We hope that these datasets will facilitate research on QA and NLP models in Japanese. We are planning to release JNQ and JBoolQ.", }
To develop high-performance and robust natural language processing (NLP) models, it is important to have various question answering (QA) datasets to train, evaluate, and analyze them. Although there are various QA datasets available in English, there are only a few QA datasets in other languages. We focus on Japanese, a language with only a few basic QA datasets, and aim to build a Japanese version of Natural Questions (NQ) consisting of questions that naturally arise from human information needs. We collect natural questions from query logs of a Japanese search engine and build the dataset using crowdsourcing. We construct Japanese Natural Questions (JNQ) and a Japanese version of BoolQ (JBoolQ), which is derived from NQ and consists of yes/no questions. JNQ consists of 16,871 questions, and JBoolQ consists of 6,467 questions. We also define two tasks from JNQ and one from JBoolQ and establish baselines using competitive methods drawn from related literature. We hope that these datasets will facilitate research on QA and NLP models in Japanese. We are planning to release JNQ and JBoolQ.
[ "Uematsu, Takuya", "Wang, Hao", "Kawahara, Daisuke", "Shibata, Tomohide" ]
A Benchmark Suite of Japanese Natural Questions
starsem-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.6.bib
https://aclanthology.org/2024.starsem-1.6/
@inproceedings{takeshita-etal-2024-rouge, title = "{ROUGE}-K: Do Your Summaries Have Keywords?", author = "Takeshita, Sotaro and Ponzetto, Simone and Eckert, Kai", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.6", doi = "10.18653/v1/2024.starsem-1.6", pages = "69--79", abstract = "Keywords, that is, content-relevant words in summaries, play an important role in efficient information conveyance, making it critical to assess whether system-generated summaries contain such informative words during evaluation. However, existing evaluation metrics for extreme summarization models do not pay explicit attention to keywords in summaries, leaving developers ignorant of their presence. To address this issue, we present a keyword-oriented evaluation metric, dubbed ROUGE-K, which provides a quantitative answer to the question {--} how well do summaries include keywords? Through the lens of this keyword-aware metric, we surprisingly find that a current strong baseline model often misses essential information in its summaries. Our analysis reveals that human annotators indeed find summaries with more keywords to be more relevant to the source documents. This is an important yet previously overlooked aspect in evaluating summarization systems. Finally, to enhance keyword inclusion, we propose four approaches for incorporating word importance into a transformer-based model and experimentally show that they enable guiding models to include more keywords while maintaining overall quality.", }
Keywords, that is, content-relevant words in summaries, play an important role in efficient information conveyance, making it critical to assess whether system-generated summaries contain such informative words during evaluation. However, existing evaluation metrics for extreme summarization models do not pay explicit attention to keywords in summaries, leaving developers ignorant of their presence. To address this issue, we present a keyword-oriented evaluation metric, dubbed ROUGE-K, which provides a quantitative answer to the question {--} how well do summaries include keywords? Through the lens of this keyword-aware metric, we surprisingly find that a current strong baseline model often misses essential information in its summaries. Our analysis reveals that human annotators indeed find summaries with more keywords to be more relevant to the source documents. This is an important yet previously overlooked aspect in evaluating summarization systems. Finally, to enhance keyword inclusion, we propose four approaches for incorporating word importance into a transformer-based model and experimentally show that they enable guiding models to include more keywords while maintaining overall quality.
[ "Takeshita, Sotaro", "Ponzetto, Simone", "Eckert, Kai" ]
ROUGE-K: Do Your Summaries Have Keywords?
starsem-1.6
Poster
2403.05186
[ "https://github.com/sobamchan/rougek" ]
-1
-1
-1
-1
0
[]
[]
[]
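Taking the abstract's question "how well do summaries include keywords?" literally, a keyword-recall sketch of the idea is below; the paper's exact formulation (tokenization, matching rules, any weighting) may differ.

```python
def rouge_k(summary: str, keywords: list[str]) -> float:
    """Fraction of reference keywords that appear in the summary.

    A minimal keyword-recall sketch of the idea behind ROUGE-K; the paper's
    exact definition may differ.
    """
    summary_tokens = set(summary.lower().split())
    if not keywords:
        return 1.0  # vacuously perfect when no keywords are required
    hits = sum(1 for kw in keywords if kw.lower() in summary_tokens)
    return hits / len(keywords)

print(rouge_k("The model compresses transformers via distillation",
              ["distillation", "transformers", "pruning"]))  # 0.666...
```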
https://aclanthology.org/2024.starsem-1.7.bib
https://aclanthology.org/2024.starsem-1.7/
@inproceedings{li-etal-2024-investigating, title = "Investigating Aspect Features in Contextualized Embeddings with Semantic Scales and Distributional Similarity", author = "Li, Yuxi and Chersoni, Emmanuele and Hsu, Yu-Yin", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.7", doi = "10.18653/v1/2024.starsem-1.7", pages = "80--92", abstract = "Aspect, a linguistic category describing how actions and events unfold over time, is traditionally characterized by three semantic properties: stativity, durativity and telicity. In this study, we investigate whether and to what extent these properties are encoded in the verb token embeddings of the contextualized spaces of two English language models {--} BERT and GPT-2. First, we propose an experiment using semantic projections to examine whether the values of the vector dimensions of annotated verbs for stativity, durativity and telicity reflect human linguistic distinctions. Second, we use distributional similarity to replicate the notorious Imperfective Paradox described by Dowty (1977), and assess whether the embedding models are able to capture the contextual nuances of verb telicity. Our results show that both models encode the semantic distinctions for the aspect properties of stativity and telicity in most of their layers, while durativity is the most challenging feature. As for the Imperfective Paradox, only the embedding similarities computed with the vectors from the early layers of the BERT model align with the expected pattern.", }
Aspect, a linguistic category describing how actions and events unfold over time, is traditionally characterized by three semantic properties: stativity, durativity and telicity. In this study, we investigate whether and to what extent these properties are encoded in the verb token embeddings of the contextualized spaces of two English language models {--} BERT and GPT-2. First, we propose an experiment using semantic projections to examine whether the values of the vector dimensions of annotated verbs for stativity, durativity and telicity reflect human linguistic distinctions. Second, we use distributional similarity to replicate the notorious Imperfective Paradox described by Dowty (1977), and assess whether the embedding models are able to capture the contextual nuances of verb telicity. Our results show that both models encode the semantic distinctions for the aspect properties of stativity and telicity in most of their layers, while durativity is the most challenging feature. As for the Imperfective Paradox, only the embedding similarities computed with the vectors from the early layers of the BERT model align with the expected pattern.
[ "Li, Yuxi", "Chersoni, Emmanuele", "Hsu, Yu-Yin" ]
Investigating Aspect Features in Contextualized Embeddings with Semantic Scales and Distributional Similarity
starsem-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
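A generic sketch of the semantic-projection step used in the first experiment above: score a vector by its position along an axis spanned by embeddings of pole words for a property such as stativity. The toy random vectors stand in for the BERT/GPT-2 token embeddings the paper uses.

```python
import numpy as np

def semantic_projection(vec, pos_poles, neg_poles):
    """Project `vec` onto the axis running from the negative to the
    positive pole; larger values mean closer to the positive pole.

    pos_poles / neg_poles: arrays of embeddings exemplifying each end of a
    property (e.g., stative vs. dynamic verbs).
    """
    axis = np.mean(pos_poles, axis=0) - np.mean(neg_poles, axis=0)
    return float(np.dot(vec, axis) / np.linalg.norm(axis))

rng = np.random.default_rng(0)
stative = rng.normal(0.5, 0.1, (5, 8))   # toy embeddings of stative verbs
dynamic = rng.normal(-0.5, 0.1, (5, 8))  # toy embeddings of dynamic verbs
print(semantic_projection(stative[0], stative[1:], dynamic))  # positive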
https://aclanthology.org/2024.starsem-1.8.bib
https://aclanthology.org/2024.starsem-1.8/
@inproceedings{alacam-etal-2024-wikiscenes, title = "{W}iki{S}cenes with Descriptions: Aligning Paragraphs and Sentences with Images in {W}ikipedia Articles", author = {Ala{\c{c}}am, {\"O}zge and Utescher, Ronja and Gr{\"o}nner, Hannes and Sieker, Judith and Zarrie{\ss}, Sina}, editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.8", doi = "10.18653/v1/2024.starsem-1.8", pages = "93--105", abstract = "Research in Language {\&} Vision rarely uses naturally occurring multimodal documents such as Wikipedia articles, since they feature complex image-text relations and implicit image-text alignments. In this paper, we provide one of the first datasets that provides ground-truth annotations of image-text alignments in multi-paragraph multi-image articles. The dataset can be used to study phenomena of visual language grounding in longer documents and assess retrieval capabilities of language models trained on, e.g., captioning data. Our analyses show that there are systematic linguistic differences between the image captions and descriptive sentences from the article{'}s text and that intra-document retrieval is a challenging task for state-of-the-art models in L{\&}V (CLIP, VILT, MCSE).", }
Research in Language {\&} Vision rarely uses naturally occurring multimodal documents such as Wikipedia articles, since they feature complex image-text relations and implicit image-text alignments. In this paper, we provide one of the first datasets that provides ground-truth annotations of image-text alignments in multi-paragraph multi-image articles. The dataset can be used to study phenomena of visual language grounding in longer documents and assess retrieval capabilities of language models trained on, e.g., captioning data. Our analyses show that there are systematic linguistic differences between the image captions and descriptive sentences from the article{'}s text and that intra-document retrieval is a challenging task for state-of-the-art models in L{\&}V (CLIP, VILT, MCSE).
[ "Ala{\\c{c}}am, {\\\"O}zge", "Utescher, Ronja", "Gr{\\\"o}nner, Hannes", "Sieker, Judith", "Zarrie{\\ss}, Sina" ]
WikiScenes with Descriptions: Aligning Paragraphs and Sentences with Images in Wikipedia Articles
starsem-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.9.bib
https://aclanthology.org/2024.starsem-1.9/
@inproceedings{yano-etal-2024-relevance, title = "Relevance, Diversity, and Exclusivity: Designing Keyword-augmentation Strategy for Zero-shot Classifiers", author = "Yano, Taro and Takeoka, Kunihiro and Oyamada, Masafumi", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.9", doi = "10.18653/v1/2024.starsem-1.9", pages = "106--119", abstract = "Zero-shot text classification involves categorizing text into classes without labeled data, typically using a pre-trained language model to compute the correlation between text and class names. This makes it essential for class names to contain sufficient information. Existing methods incorporate semantically similar keywords related to class names, but the properties of effective keywords remain unclear. We demonstrate that effective keywords should possess three properties: 1) keyword relevance to the task objective, 2) inter-class exclusivity, and 3) intra-class diversity. We also propose an automatic method for acquiring keywords that satisfy these properties without additional knowledge bases or data. Experiments on nine real-world datasets show our method outperforms existing approaches in fully zero-shot and generalized zero-shot settings. Ablation studies further confirm the importance of all three properties for superior performance.", }
Zero-shot text classification involves categorizing text into classes without labeled data, typically using a pre-trained language model to compute the correlation between text and class names. This makes it essential for class names to contain sufficient information. Existing methods incorporate semantically similar keywords related to class names, but the properties of effective keywords remain unclear. We demonstrate that effective keywords should possess three properties: 1) keyword relevance to the task objective, 2) inter-class exclusivity, and 3) intra-class diversity. We also propose an automatic method for acquiring keywords that satisfy these properties without additional knowledge bases or data. Experiments on nine real-world datasets show our method outperforms existing approaches in fully zero-shot and generalized zero-shot settings. Ablation studies further confirm the importance of all three properties for superior performance.
[ "Yano, Taro", "Takeoka, Kunihiro", "Oyamada, Masafumi" ]
Relevance, Diversity, and Exclusivity: Designing Keyword-augmentation Strategy for Zero-shot Classifiers
starsem-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
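As a rough sketch of the keyword-augmentation idea in the record above, one can represent each class by its name plus keywords and score texts by average embedding similarity. The checkpoint and hand-picked keywords are illustrative assumptions; the paper's contribution is acquiring keywords automatically so that they satisfy relevance, exclusivity, and diversity.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Sketch of keyword-augmented zero-shot classification: assign a text to
# the class whose keyword set it is most similar to on average.
model = SentenceTransformer("all-MiniLM-L6-v2")

classes = {  # hand-picked illustrative keywords, not automatically acquired
    "sports":  ["sports", "game", "tournament", "athlete"],
    "finance": ["finance", "stocks", "earnings", "investment"],
}

def classify(text: str) -> str:
    text_vec = model.encode([text])[0]
    def score(keywords):
        kw_vecs = model.encode(keywords)
        sims = kw_vecs @ text_vec / (
            np.linalg.norm(kw_vecs, axis=1) * np.linalg.norm(text_vec))
        return sims.mean()
    return max(classes, key=lambda c: score(classes[c]))

print(classify("The striker scored twice in the cup final."))  # sports
```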
https://aclanthology.org/2024.starsem-1.10.bib
https://aclanthology.org/2024.starsem-1.10/
@inproceedings{shi-etal-2024-lexical, title = "Lexical Substitution as Causal Language Modeling", author = "Shi, Ning and Hauer, Bradley and Kondrak, Grzegorz", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.10", doi = "10.18653/v1/2024.starsem-1.10", pages = "120--132", abstract = "Causal language models such as the GPT series have achieved significant success across various domains. However, their application to the lexical substitution task (LST) remains largely unexplored due to inherent limitations in autoregressive decoding. Our work is motivated by our observation that existing LST approaches tend to suffer from a misalignment between the pre-training objectives of the language models that they employ, and their subsequent fine-tuning and application for substitute generation. We introduce PromptSub, the first system to use causal language modeling (CLM) for LST. Through prompt-aware fine-tuning, PromptSub not only enriches the given context with additional knowledge, but also leverages the unidirectional nature of autoregressive decoding. PromptSub consistently outperforms GeneSis, the best previously published supervised LST method. Further analysis demonstrates the potential of PromptSub to further benefit from increased model capacity, expanded data resources, and retrieval of external knowledge. By framing LST within the paradigm of CLM, our approach indicates the versatility of general CLM-based systems, such as ChatGPT, in catering to specialized tasks, including LST.", }
Causal language models such as the GPT series have achieved significant success across various domains. However, their application to the lexical substitution task (LST) remains largely unexplored due to inherent limitations in autoregressive decoding. Our work is motivated by our observation that existing LST approaches tend to suffer from a misalignment between the pre-training objectives of the language models that they employ, and their subsequent fine-tuning and application for substitute generation. We introduce PromptSub, the first system to use causal language modeling (CLM) for LST. Through prompt-aware fine-tuning, PromptSub not only enriches the given context with additional knowledge, but also leverages the unidirectional nature of autoregressive decoding. PromptSub consistently outperforms GeneSis, the best previously published supervised LST method. Further analysis demonstrates the potential of PromptSub to further benefit from increased model capacity, expanded data resources, and retrieval of external knowledge. By framing LST within the paradigm of CLM, our approach indicates the versatility of general CLM-based systems, such as ChatGPT, in catering to specialized tasks, including LST.
[ "Shi, Ning", "Hauer, Bradley", "Kondrak, Grzegorz" ]
Lexical Substitution as Causal Language Modeling
starsem-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
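PromptSub's prompts and fine-tuning recipe are not reproduced here; the sketch below only illustrates the underlying framing of lexical substitution as causal language modeling, with an off-the-shelf GPT-2 and an invented prompt (zero-shot output will be rough; the paper's gains come from prompt-aware fine-tuning).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch of prompting a causal LM for a lexical substitute.
# Model choice and prompt wording are illustrative assumptions.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "She decided to tackle the question directly."
prompt = (f'In the sentence "{context}", the word "tackle" '
          f'could be replaced with the word "')
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=3,
                     do_sample=False, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:]))
```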
https://aclanthology.org/2024.starsem-1.11.bib
https://aclanthology.org/2024.starsem-1.11/
@inproceedings{shi-etal-2024-paraphrase, title = "Paraphrase Identification via Textual Inference", author = "Shi, Ning and Hauer, Bradley and Riley, Jai and Kondrak, Grzegorz", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.11", doi = "10.18653/v1/2024.starsem-1.11", pages = "133--141", abstract = "Paraphrase identification (PI) and natural language inference (NLI) are two important tasks in natural language processing. Despite their distinct objectives, an underlying connection exists, which has been notably under-explored in empirical investigations. We formalize the relationship between these semantic tasks and introduce a method for solving PI using an NLI system, including the adaptation of PI datasets for fine-tuning NLI models. Through extensive evaluations on six PI benchmarks, across both zero-shot and fine-tuned settings, we showcase the efficacy of NLI models for PI through our proposed reduction. Remarkably, our fine-tuning procedure enables NLI models to outperform dedicated PI models on PI datasets. In addition, our findings provide insights into the limitations of current PI benchmarks.", }
Paraphrase identification (PI) and natural language inference (NLI) are two important tasks in natural language processing. Despite their distinct objectives, an underlying connection exists, which has been notably under-explored in empirical investigations. We formalize the relationship between these semantic tasks and introduce a method for solving PI using an NLI system, including the adaptation of PI datasets for fine-tuning NLI models. Through extensive evaluations on six PI benchmarks, across both zero-shot and fine-tuned settings, we showcase the efficacy of NLI models for PI through our proposed reduction. Remarkably, our fine-tuning procedure enables NLI models to outperform dedicated PI models on PI datasets. In addition, our findings provide insights into the limitations of current PI benchmarks.
[ "Shi, Ning", "Hauer, Bradley", "Riley, Jai", "Kondrak, Grzegorz" ]
Paraphrase Identification via Textual Inference
starsem-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
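The reduction described in the abstract above can be sketched as bidirectional entailment: two sentences are paraphrases if each entails the other. The checkpoint and the hard decision rule below are illustrative; the paper fine-tunes NLI models on PI data adapted for this reduction.

```python
from transformers import pipeline

# Sketch: reduce paraphrase identification (PI) to bidirectional NLI.
nli = pipeline("text-classification", model="roberta-large-mnli")

def entails(premise: str, hypothesis: str) -> bool:
    pred = nli([{"text": premise, "text_pair": hypothesis}])[0]
    return pred["label"] == "ENTAILMENT"

def is_paraphrase(s1: str, s2: str) -> bool:
    # Paraphrase iff each sentence entails the other.
    return entails(s1, s2) and entails(s2, s1)

print(is_paraphrase("A man is playing a guitar.",
                    "Someone plays a guitar."))
```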
https://aclanthology.org/2024.starsem-1.12.bib
https://aclanthology.org/2024.starsem-1.12/
@inproceedings{woudstra-etal-2024-identifying, title = "Identifying Emotional and Polar Concepts via Synset Translation", author = "Woudstra, Logan and Dawodu, Moyo and Igwe, Frances and Li, Senyu and Shi, Ning and Hauer, Bradley and Kondrak, Grzegorz", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.12", doi = "10.18653/v1/2024.starsem-1.12", pages = "142--152", abstract = "Emotion identification and polarity classification seek to determine the sentiment expressed by a writer. Sentiment lexicons that provide classifications at the word level fail to distinguish between different senses of polysemous words. To address this problem, we propose a translation-based method for labeling each individual lexical concept and word sense. Specifically, we translate synsets into 20 different languages and verify the sentiment of these translations in multilingual sentiment lexicons. By applying our method to all WordNet synsets, we produce SentiSynset, a synset-level sentiment resource containing 12,429 emotional synsets and 15,567 polar synsets, which is significantly larger than previous resources. Experimental evaluation shows that our method outperforms prior automated methods that classify word senses, in addition to outperforming ChatGPT. We make the resulting resource publicly available on GitHub.", }
Emotion identification and polarity classification seek to determine the sentiment expressed by a writer. Sentiment lexicons that provide classifications at the word level fail to distinguish between different senses of polysemous words. To address this problem, we propose a translation-based method for labeling each individual lexical concept and word sense. Specifically, we translate synsets into 20 different languages and verify the sentiment of these translations in multilingual sentiment lexicons. By applying our method to all WordNet synsets, we produce SentiSynset, a synset-level sentiment resource containing 12,429 emotional synsets and 15,567 polar synsets, which is significantly larger than previous resources. Experimental evaluation shows that our method outperforms prior automated methods that classify word senses, in addition to outperforming ChatGPT. We make the resulting resource publicly available on GitHub.
[ "Woudstra, Logan", "Dawodu, Moyo", "Igwe, Frances", "Li, Senyu", "Shi, Ning", "Hauer, Bradley", "Kondrak, Grzegorz" ]
Identifying Emotional and Polar Concepts via Synset Translation
starsem-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
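The labeling procedure in the record above can be sketched as majority voting over multilingual lexicon lookups; the translations and lexicons below are toy stand-ins for the paper's 20 languages and real sentiment resources.

```python
# Sketch of translation-based synset labeling: look each translation up in
# that language's sentiment lexicon and take a majority vote. Toy data only.
TOY_TRANSLATIONS = {             # synset -> {language: translation}
    "joy.n.01": {"de": "Freude", "fr": "joie", "es": "alegría"},
}
TOY_LEXICONS = {                 # language -> {word: polarity}
    "de": {"Freude": "positive"},
    "fr": {"joie": "positive"},
    "es": {"alegría": "positive"},
}

def label_synset(synset: str) -> str | None:
    votes = []
    for lang, word in TOY_TRANSLATIONS.get(synset, {}).items():
        polarity = TOY_LEXICONS.get(lang, {}).get(word)
        if polarity is not None:
            votes.append(polarity)
    if not votes:
        return None  # no evidence in any language
    return max(set(votes), key=votes.count)  # majority vote

print(label_synset("joy.n.01"))  # positive
```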
https://aclanthology.org/2024.starsem-1.13.bib
https://aclanthology.org/2024.starsem-1.13/
@inproceedings{wanner-etal-2024-closer, title = "A Closer Look at Claim Decomposition", author = "Wanner, Miriam and Ebner, Seth and Jiang, Zhengping and Dredze, Mark and Van Durme, Benjamin", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.13", doi = "10.18653/v1/2024.starsem-1.13", pages = "153--175", abstract = "As generated text becomes more commonplace, it is increasingly important to evaluate how well-supported such text is by external knowledge sources. Many approaches for evaluating textual support rely on some method for decomposing text into its individual subclaims which are scored against a trusted reference. We investigate how various methods of claim decomposition{---}especially LLM-based methods{---}affect the result of an evaluation approach such as the recently proposed FActScore, finding that it is sensitive to the decomposition method used. This sensitivity arises because such metrics attribute overall textual support to the model that generated the text even though error can also come from the metric{'}s decomposition step. To measure decomposition quality, we introduce an adaptation of FActScore, which we call DecompScore. We then propose an LLM-based approach to generating decompositions inspired by Bertrand Russell{'}s theory of logical atomism and neo-Davidsonian semantics and demonstrate its improved decomposition quality over previous methods.", }
As generated text becomes more commonplace, it is increasingly important to evaluate how well-supported such text is by external knowledge sources. Many approaches for evaluating textual support rely on some method for decomposing text into its individual subclaims which are scored against a trusted reference. We investigate how various methods of claim decomposition{---}especially LLM-based methods{---}affect the result of an evaluation approach such as the recently proposed FActScore, finding that it is sensitive to the decomposition method used. This sensitivity arises because such metrics attribute overall textual support to the model that generated the text even though error can also come from the metric{'}s decomposition step. To measure decomposition quality, we introduce an adaptation of FActScore, which we call DecompScore. We then propose an LLM-based approach to generating decompositions inspired by Bertrand Russell{'}s theory of logical atomism and neo-Davidsonian semantics and demonstrate its improved decomposition quality over previous methods.
[ "Wanner, Miriam", "Ebner, Seth", "Jiang, Zhengping", "Dredze, Mark", "Van Durme, Benjamin" ]
A Closer Look at Claim Decomposition
starsem-1.13
Poster
2403.11903
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
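FActScore-style evaluation scores a generation as the fraction of its decomposed subclaims supported by trusted text, which is why the decomposition step studied above matters. A minimal sketch of the scoring step, with a hand-written decomposition and an off-the-shelf NLI model standing in for the paper's components:

```python
from transformers import pipeline

# Sketch of FActScore-style support scoring. The decomposition is given by
# hand here; in the paper it is produced by an LLM, and its quality is what
# DecompScore is designed to measure.
nli = pipeline("text-classification", model="roberta-large-mnli")

def support_score(reference: str, subclaims: list[str]) -> float:
    supported = 0
    for claim in subclaims:
        pred = nli([{"text": reference, "text_pair": claim}])[0]
        supported += pred["label"] == "ENTAILMENT"
    return supported / len(subclaims)

reference = "Marie Curie won Nobel Prizes in physics and in chemistry."
subclaims = ["Marie Curie won a Nobel Prize in physics.",
             "Marie Curie won a Nobel Prize in literature."]
print(support_score(reference, subclaims))  # 0.5
```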
https://aclanthology.org/2024.starsem-1.14.bib
https://aclanthology.org/2024.starsem-1.14/
@inproceedings{canete-bravo-marquez-2024-speedy, title = "Speedy Gonzales: A Collection of Fast Task-Specific Models for {S}panish", author = "Ca{\~n}ete, Jos{\'e} and Bravo-Marquez, Felipe", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.14", doi = "10.18653/v1/2024.starsem-1.14", pages = "176--189", abstract = "Large language models (LLMs) are now a very common and successful way to approach language and retrieval tasks. While these LLMs achieve surprisingly good results, it is a challenge to use them with more constrained resources. Techniques to compress these LLMs into smaller and faster models have emerged for English and multilingual settings, but this is still a challenge for other languages. In fact, Spanish is the language with the second-largest number of native speakers but lacks such resources. In this work, we evaluate all the models publicly available for Spanish on a set of 6 tasks and then, by leveraging Knowledge Distillation, we present Speedy Gonzales, a collection of inference-efficient task-specific language models based on the ALBERT architecture. All of our models (fine-tuned and distilled) are publicly available at: https://huggingface.co/dccuchile.", }
Large language models (LLMs) are now a very common and successful way to approach language and retrieval tasks. While these LLMs achieve surprisingly good results, it is a challenge to use them with more constrained resources. Techniques to compress these LLMs into smaller and faster models have emerged for English and multilingual settings, but this is still a challenge for other languages. In fact, Spanish is the language with the second-largest number of native speakers but lacks such resources. In this work, we evaluate all the models publicly available for Spanish on a set of 6 tasks and then, by leveraging Knowledge Distillation, we present Speedy Gonzales, a collection of inference-efficient task-specific language models based on the ALBERT architecture. All of our models (fine-tuned and distilled) are publicly available at: https://huggingface.co/dccuchile.
[ "Ca{\\~n}ete, Jos{\\'e}", "Bravo-Marquez, Felipe" ]
Speedy Gonzales: A Collection of Fast Task-Specific Models for Spanish
starsem-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
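Knowledge distillation, which the paper above leverages to obtain its fast ALBERT-based students, trains a student to match the teacher's softened output distribution alongside the hard labels. A generic sketch of the standard loss (temperature and mixing weight are illustrative, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic KD loss: soft targets from the teacher + hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                       # standard temperature scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(4, 3)           # toy student logits (batch of 4, 3 classes)
t = torch.randn(4, 3)           # toy teacher logits
y = torch.tensor([0, 2, 1, 0])  # gold labels
print(distillation_loss(s, t, y))
```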
https://aclanthology.org/2024.starsem-1.15.bib
https://aclanthology.org/2024.starsem-1.15/
@inproceedings{mor-lan-levi-2024-exploring, title = "Exploring Factual Entailment with {NLI}: A News Media Study", author = "Mor-Lan, Guy and Levi, Effi", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.15", doi = "10.18653/v1/2024.starsem-1.15", pages = "190--199", abstract = "We explore the relationship between factuality and Natural Language Inference (NLI) by introducing FactRel {--} a novel annotation scheme that models factual rather than textual entailment, and use it to annotate a dataset of naturally occurring sentences from news articles. Our analysis shows that 84{\%} of factually supporting pairs and 63{\%} of factually undermining pairs do not amount to NLI entailment or contradiction, respectively, suggesting that factual relationships are more apt for analyzing media discourse. We experiment with models for pairwise classification on the new dataset, and find that in some cases, generating synthetic data with GPT-4 on the basis of the annotated dataset can improve performance. Surprisingly, few-shot learning with GPT-4 yields strong results on par with medium LMs (DeBERTa) trained on the labelled dataset. We hypothesize that these results indicate the fundamental dependence of this task on both world knowledge and advanced reasoning abilities.", }
We explore the relationship between factuality and Natural Language Inference (NLI) by introducing FactRel {--} a novel annotation scheme that models factual rather than textual entailment, and use it to annotate a dataset of naturally occurring sentences from news articles. Our analysis shows that 84{\%} of factually supporting pairs and 63{\%} of factually undermining pairs do not amount to NLI entailment or contradiction, respectively, suggesting that factual relationships are more apt for analyzing media discourse. We experiment with models for pairwise classification on the new dataset, and find that in some cases, generating synthetic data with GPT-4 on the basis of the annotated dataset can improve performance. Surprisingly, few-shot learning with GPT-4 yields strong results on par with medium LMs (DeBERTa) trained on the labelled dataset. We hypothesize that these results indicate the fundamental dependence of this task on both world knowledge and advanced reasoning abilities.
[ "Mor-Lan, Guy", "Levi, Effi" ]
Exploring Factual Entailment with NLI: A News Media Study
starsem-1.15
Poster
2406.16842
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.16.bib
https://aclanthology.org/2024.starsem-1.16/
@inproceedings{bernard-etal-2024-emergence, title = "The Emergence of High-Level Semantics in a Signaling Game", author = "Bernard, Timoth{\'e}e and Mickus, Timothee and Takamura, Hiroya", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.16", doi = "10.18653/v1/2024.starsem-1.16", pages = "200--211", abstract = "The symbol grounding problem{---}how to connect a symbolic system to the outer world{---}is a longstanding question in AI that has recently gained prominence with the progress made in NLP in general and surrounding large language models in particular. In this article, we study the emergence of semantic categories in the communication protocol developed by neural agents involved in a well-established type of signaling game. In its basic form, the game requires one agent to retrieve an image based on a message produced by a second agent. We first show that the agents are able to, and do, learn to communicate high-level semantic concepts rather than low-level features of the images even from a very indirect training signal to that end. Second, we demonstrate that the introduction of an adversarial agent in the game fosters the emergence of semantics by producing an appropriate training signal when no other method is available.", }
The symbol grounding problem{---}how to connect a symbolic system to the outer world{---}is a longstanding question in AI that has recently gained prominence with the progress made in NLP in general and surrounding large language models in particular. In this article, we study the emergence of semantic categories in the communication protocol developed by neural agents involved in a well-established type of signaling game. In its basic form, the game requires one agent to retrieve an image based on a message produced by a second agent. We first show that the agents are able to, and do, learn to communicate high-level semantic concepts rather than low-level features of the images even from a very indirect training signal to that end. Second, we demonstrate that the introduction of an adversarial agent in the game fosters the emergence of semantics by producing an appropriate training signal when no other method is available.
[ "Bernard, Timoth{\\'e}e", "Mickus, Timothee", "Takamura, Hiroya" ]
The Emergence of High-Level Semantics in a Signaling Game
starsem-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
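The basic game described above can be sketched with a sender that encodes the target image into a discrete symbol and a receiver that uses the symbol to pick the target among candidates. Everything below is a toy, untrained skeleton; real experiments train both agents end-to-end (e.g., with REINFORCE or Gumbel-softmax) over longer messages and real image features.

```python
import torch
from torch import nn

# Toy, untrained skeleton of a one-symbol signaling game.
VOCAB, DIM = 16, 32
sender = nn.Linear(DIM, VOCAB)       # image features -> symbol logits
receiver = nn.Embedding(VOCAB, DIM)  # symbol -> query vector

images = torch.randn(4, DIM)         # candidates; index 0 is the target
symbol = sender(images[0]).argmax()  # sender "speaks" one discrete symbol
query = receiver(symbol)             # receiver decodes the symbol
scores = images @ query              # ...and ranks the candidates with it
print("receiver picks candidate", scores.argmax().item())
```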
https://aclanthology.org/2024.starsem-1.17.bib
https://aclanthology.org/2024.starsem-1.17/
@inproceedings{zhang-etal-2024-pddlego, title = "{PDDLEGO}: Iterative Planning in Textual Environments", author = "Zhang, Li and Jansen, Peter and Zhang, Tianyi and Clark, Peter and Callison-Burch, Chris and Tandon, Niket", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.17", doi = "10.18653/v1/2024.starsem-1.17", pages = "212--221", abstract = "Planning in textual environments has been shown to be a long-standing challenge even for current models. A recent, promising line of work uses LLMs to generate a formal representation of the environment that can be solved by a symbolic planner. However, existing methods rely on a fully-observed environment where all entity states are initially known, so a one-off representation can be constructed, leading to a complete plan. In contrast, we tackle partially-observed environments where there is initially insufficient information to plan for the end-goal. We propose PDDLEGO, which iteratively constructs a planning representation that can lead to a partial plan for a given sub-goal. By accomplishing the sub-goal, more information is acquired to augment the representation, eventually achieving the end-goal. We show that plans produced by few-shot PDDLEGO are 43{\%} more efficient than generating plans end-to-end on the Coin Collector simulation, with strong performance (98{\%}) on the more complex Cooking World simulation where end-to-end LLMs fail to generate coherent plans (4{\%}).", }
Planning in textual environments has been shown to be a long-standing challenge even for current models. A recent, promising line of work uses LLMs to generate a formal representation of the environment that can be solved by a symbolic planner. However, existing methods rely on a fully-observed environment where all entity states are initially known, so a one-off representation can be constructed, leading to a complete plan. In contrast, we tackle partially-observed environments where there is initially insufficient information to plan for the end-goal. We propose PDDLEGO, which iteratively constructs a planning representation that can lead to a partial plan for a given sub-goal. By accomplishing the sub-goal, more information is acquired to augment the representation, eventually achieving the end-goal. We show that plans produced by few-shot PDDLEGO are 43{\%} more efficient than generating plans end-to-end on the Coin Collector simulation, with strong performance (98{\%}) on the more complex Cooking World simulation where end-to-end LLMs fail to generate coherent plans (4{\%}).
[ "Zhang, Li", "Jansen, Peter", "Zhang, Tianyi", "Clark, Peter", "Callison-Burch, Chris", "T", "on, Niket" ]
PDDLEGO: Iterative Planning in Textual Environments
starsem-1.17
Poster
2405.19793
[ "https://github.com/zharry29/nl-to-pddl" ]
-1
-1
-1
-1
0
[]
[]
[]
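The iterative loop the abstract describes can be sketched as plan-act-observe: plan toward a sub-goal with whatever representation exists, execute, and fold observations back into the representation. All helpers below are hypothetical stubs standing in for the LLM, the symbolic planner, and the text environment.

```python
# Skeleton of the iterative plan-act-observe loop; every function here is a
# hypothetical stub, not the paper's implementation.

def llm_update_pddl(pddl: str, observation: str) -> str:
    """Stub: an LLM would fold the new observation into the PDDL problem."""
    return pddl + f"\n; observed: {observation}"

def symbolic_plan(pddl: str, subgoal: str) -> list[str]:
    """Stub: a real PDDL solver would go here."""
    return [f"explore-towards {subgoal}"]

def execute(action: str) -> tuple[str, bool]:
    """Stub environment step: returns (observation, goal_reached)."""
    return f"result of {action}", True

pddl, done = "(define (problem p) ...)", False
while not done:
    plan = symbolic_plan(pddl, subgoal="find-door")  # plan to a sub-goal only
    for action in plan:
        observation, done = execute(action)
        pddl = llm_update_pddl(pddl, observation)    # grow the representation
        if done:
            break
print(pddl)
```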
https://aclanthology.org/2024.starsem-1.18.bib
https://aclanthology.org/2024.starsem-1.18/
@inproceedings{piccirilli-etal-2024-volimet, title = "{VOLIMET}: A Parallel Corpus of Literal and Metaphorical Verb-Object Pairs for {E}nglish{--}{G}erman and {E}nglish{--}{F}rench", author = "Piccirilli, Prisca and Fraser, Alexander and Schulte im Walde, Sabine", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.18", doi = "10.18653/v1/2024.starsem-1.18", pages = "222--237", abstract = "The interplay of cultural and linguistic elements that characterizes metaphorical language poses a substantial challenge for both human comprehension and machine processing. This challenge goes beyond monolingual settings and becomes particularly complex in translation, even more so in automatic translation. We present VOLIMET, a corpus of 2,916 parallel sentences containing gold standard alignments of metaphorical verb-object pairs and their literal paraphrases, e.g., tackle/address question, from English to German and French. On the one hand, the parallel nature of our corpus enables us to explore monolingual patterns for metaphorical vs. literal uses in English. On the other hand, we investigate different aspects of cross-lingual translations into German and French and the extent to which metaphoricity and literalness in the source language are transferred to the target languages. Monolingually, our findings reveal clear preferences in using metaphorical or literal uses of verb-object pairs. Cross-lingually, we observe a rich variability in translations as well as different behaviors for our two target languages.", }
The interplay of cultural and linguistic elements that characterizes metaphorical language poses a substantial challenge for both human comprehension and machine processing. This challenge goes beyond monolingual settings and becomes particularly complex in translation, even more so in automatic translation. We present VOLIMET, a corpus of 2,916 parallel sentences containing gold standard alignments of metaphorical verb-object pairs and their literal paraphrases, e.g., tackle/address question, from English to German and French. On the one hand, the parallel nature of our corpus enables us to explore monolingual patterns for metaphorical vs. literal uses in English. On the other hand, we investigate different aspects of cross-lingual translations into German and French and the extent to which metaphoricity and literalness in the source language are transferred to the target languages. Monolingually, our findings reveal clear preferences in using metaphorical or literal uses of verb-object pairs. Cross-lingually, we observe a rich variability in translations as well as different behaviors for our two target languages.
[ "Piccirilli, Prisca", "Fraser, Alex", "er", "Schulte im Walde, Sabine" ]
VOLIMET: A Parallel Corpus of Literal and Metaphorical Verb-Object Pairs for English–German and English–French
starsem-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.19.bib
https://aclanthology.org/2024.starsem-1.19/
@inproceedings{yava-etal-2024-improving, title = "Improving Word Sense Induction through Adversarial Forgetting of Morphosyntactic Information", author = "Yavas, Deniz Ekin and Bernard, Timoth{\'e}e and Kallmeyer, Laura and Crabb{\'e}, Beno{\^\i}t", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.19", doi = "10.18653/v1/2024.starsem-1.19", pages = "238--251", abstract = "This paper addresses the problem of word sense induction (WSI) via clustering of word embeddings. It starts from the hypothesis that contextualized word representations obtained from pre-trained language models (LMs), while being a valuable source for WSI, encode more information than what is necessary for the identification of word senses, and some of this information affects the performance negatively in unsupervised settings. We investigate whether using contextualized representations that are invariant to these {`}nuisance features{'} can increase WSI performance. For this purpose, we propose an adaptation of the adversarial training framework proposed by Jaiswal et al. (2020) to erase specific information from the representations of LMs, thereby creating feature-invariant representations. We experiment with erasing (i) morphological and (ii) syntactic features. The results of subsequent clustering for WSI show that these features indeed act like noise: Using feature-invariant representations, compared to using the original representations, increases clustering-based WSI performance. Furthermore, we provide an in-depth analysis of how the information about the syntactic and morphological features of words relates to and affects WSI performance.", }
This paper addresses the problem of word sense induction (WSI) via clustering of word embeddings. It starts from the hypothesis that contextualized word representations obtained from pre-trained language models (LMs), while being a valuable source for WSI, encode more information than what is necessary for the identification of word senses, and some of this information affects the performance negatively in unsupervised settings. We investigate whether using contextualized representations that are invariant to these {`}nuisance features{'} can increase WSI performance. For this purpose, we propose an adaptation of the adversarial training framework proposed by Jaiswal et al. (2020) to erase specific information from the representations of LMs, thereby creating feature-invariant representations. We experiment with erasing (i) morphological and (ii) syntactic features. The results of subsequent clustering for WSI show that these features indeed act like noise: Using feature-invariant representations, compared to using the original representations, increases clustering-based WSI performance. Furthermore, we provide an in-depth analysis of how the information about the syntactic and morphological features of words relates to and affects WSI performance.
[ "Yavas, Deniz Ekin", "Bernard, Timoth{\\'e}e", "Kallmeyer, Laura", "Crabb{\\'e}, Beno{\\^\\i}t" ]
Improving Word Sense Induction through Adversarial Forgetting of Morphosyntactic Information
starsem-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
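One common way to implement adversarial forgetting of a feature is a gradient reversal layer: an auxiliary classifier predicts the unwanted feature from the representation, and the reversed gradient pushes the encoder to discard it. Note this is a generic sketch of that family of methods; the paper above adapts the framework of Jaiswal et al. (2020), whose mechanism differs in detail.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; gradient scaled by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Linear(768, 256)   # stands in for a projection over LM states
morph_clf = nn.Linear(256, 12)  # adversary predicting a morphological feature

x = torch.randn(8, 768)         # toy contextualized embeddings
morph_labels = torch.randint(0, 12, (8,))

z = encoder(x)
adv_logits = morph_clf(GradReverse.apply(z, 1.0))
adv_loss = nn.functional.cross_entropy(adv_logits, morph_labels)
adv_loss.backward()             # encoder grads now *oppose* the adversary
print(encoder.weight.grad.norm())
```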
https://aclanthology.org/2024.starsem-1.20.bib
https://aclanthology.org/2024.starsem-1.20/
@inproceedings{bassignana-etal-2024-whats, title = "What{'}s wrong with your model? A Quantitative Analysis of Relation Classification", author = "Bassignana, Elisa and van der Goot, Rob and Plank, Barbara", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.20", doi = "10.18653/v1/2024.starsem-1.20", pages = "252--263", abstract = "With the aim of improving the state-of-the-art (SOTA) on a target task, a standard strategy in Natural Language Processing (NLP) research is to design a new model, or modify the existing SOTA, and then benchmark its performance on the target task. We argue in favor of enriching this chain of actions by a preliminary error-guided analysis: First, explore weaknesses by analyzing the hard cases where the existing model fails, and then target the improvement based on those. Interpretable evaluation has received little attention for structured prediction tasks. Therefore we propose the first in-depth analysis suite for Relation Classification (RC), and show its effectiveness through a case study. We propose a set of potentially influential attributes to focus on (e.g., entity distance, sentence length). Then, we bucket our datasets based on these attributes, and weigh their importance through correlations. This allows us to identify highly challenging scenarios for the RC model. By exploiting the findings of our analysis, with a carefully targeted adjustment to our architecture, we effectively improve the performance over the baseline by {\textgreater}3 Micro-F1.", }
With the aim of improving the state-of-the-art (SOTA) on a target task, a standard strategy in Natural Language Processing (NLP) research is to design a new model, or modify the existing SOTA, and then benchmark its performance on the target task. We argue in favor of enriching this chain of actions by a preliminary error-guided analysis: First, explore weaknesses by analyzing the hard cases where the existing model fails, and then target the improvement based on those. Interpretable evaluation has received little attention for structured prediction tasks. Therefore we propose the first in-depth analysis suite for Relation Classification (RC), and show its effectiveness through a case study. We propose a set of potentially influential attributes to focus on (e.g., entity distance, sentence length). Then, we bucket our datasets based on these attributes, and weigh their importance through correlations. This allows us to identify highly challenging scenarios for the RC model. By exploiting the findings of our analysis, with a carefully targeted adjustment to our architecture, we effectively improve the performance over the baseline by {\textgreater}3 Micro-F1.
[ "Bassignana, Elisa", "van der Goot, Rob", "Plank, Barbara" ]
What's wrong with your model? A Quantitative Analysis of Relation Classification
starsem-1.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
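A small sketch of the error-guided bucketing analysis described in the entry above: group predictions by a candidate attribute such as entity distance and report a per-bucket score to surface hard cases. Plain Python; the attribute name, thresholds, and the use of per-bucket accuracy as the score are illustrative choices, not the paper's exact analysis suite.

```python
from collections import defaultdict

def bucketed_score(examples, attribute, thresholds):
    """Bucket examples by an attribute value and score each bucket.

    Each example is a dict with 'gold', 'pred', and the attribute value."""
    buckets = defaultdict(list)
    for ex in examples:
        # Assign to the first threshold the value falls under, else an overflow bucket.
        key = next((t for t in thresholds if ex[attribute] <= t), f">{thresholds[-1]}")
        buckets[key].append(ex)
    report = {}
    for key in sorted(buckets, key=str):
        items = buckets[key]
        report[key] = sum(ex["gold"] == ex["pred"] for ex in items) / len(items)
    return report

# Toy relation-classification predictions bucketed by entity distance (in tokens).
examples = [{"gold": "org:founded_by", "pred": "org:founded_by", "entity_distance": 3},
            {"gold": "per:employee_of", "pred": "no_relation", "entity_distance": 25}]
print(bucketed_score(examples, "entity_distance", thresholds=[5, 15]))
```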
https://aclanthology.org/2024.starsem-1.21.bib
https://aclanthology.org/2024.starsem-1.21/
@inproceedings{hosseini-staab-2024-disambiguating, title = "Disambiguating Emotional Connotations of Words Using Contextualized Word Representations", author = "Hosseini, Akram Sadat and Staab, Steffen", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.21", doi = "10.18653/v1/2024.starsem-1.21", pages = "264--277", abstract = "Understanding emotional nuances in written content is crucial for effective communication; however, the context-dependent nature of language poses challenges in precisely discerning emotions in text. This study contributes to the understanding of how the emotional connotations of a word are influenced by the sentence context in which it appears. Leveraging the contextual understanding embedded in contextualized word representations, we conduct an empirical investigation to (i) evaluate the varying abilities of these representations in distinguishing the diverse emotional connotations evoked by the same word across different contexts, (ii) explore potential biases in these representations toward specific emotions of a word, and (iii) assess the capability of these representations in estimating the number of emotional connotations evoked by a word in diverse contexts. Our experiments, utilizing four popular models{---}BERT, RoBERTa, XLNet, and GPT-2{---}and drawing on the GoEmotions and SemEval 2018 datasets, demonstrate that these models effectively discern emotional connotations of words. RoBERTa, in particular, shows superior performance and greater resilience against biases. Our further analysis reveals that disambiguating the emotional connotations of words significantly enhances emotion identification at the sentence level.", }
Understanding emotional nuances in written content is crucial for effective communication; however, the context-dependent nature of language poses challenges in precisely discerning emotions in text. This study contributes to the understanding of how the emotional connotations of a word are influenced by the sentence context in which it appears. Leveraging the contextual understanding embedded in contextualized word representations, we conduct an empirical investigation to (i) evaluate the varying abilities of these representations in distinguishing the diverse emotional connotations evoked by the same word across different contexts, (ii) explore potential biases in these representations toward specific emotions of a word, and (iii) assess the capability of these representations in estimating the number of emotional connotations evoked by a word in diverse contexts. Our experiments, utilizing four popular models{---}BERT, RoBERTa, XLNet, and GPT-2{---}and drawing on the GoEmotions and SemEval 2018 datasets, demonstrate that these models effectively discern emotional connotations of words. RoBERTa, in particular, shows superior performance and greater resilience against biases. Our further analysis reveals that disambiguating the emotional connotations of words significantly enhances emotion identification at the sentence level.
[ "Hosseini, Akram Sadat", "Staab, Steffen" ]
Disambiguating Emotional Connotations of Words Using Contextualized Word Representations
starsem-1.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
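A sketch of the basic measurement behind the study above: extract contextualized vectors for the same word in different sentences and compare them, e.g., before grouping occurrences by emotional connotation. Assumes the Hugging Face transformers API with roberta-base; the subtoken pooling and example sentences are illustrative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

def word_vector(sentence, word):
    """Mean-pool the contextual vectors of a target word's subtokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq_len, dim)
    # RoBERTa's BPE marks mid-sentence words with a leading space.
    ids = tokenizer(" " + word, add_special_tokens=False)["input_ids"]
    toks = enc["input_ids"][0].tolist()
    for i in range(len(toks) - len(ids) + 1):               # locate the word's span
        if toks[i:i + len(ids)] == ids:
            return hidden[i:i + len(ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in {sentence!r}")

v1 = word_vector("Her warm smile lit up the room.", "warm")
v2 = word_vector("He gave a warm reception to the critics.", "warm")
print(torch.cosine_similarity(v1, v2, dim=0).item())
```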
https://aclanthology.org/2024.starsem-1.22.bib
https://aclanthology.org/2024.starsem-1.22/
@inproceedings{han-etal-2024-length, title = "Length-Aware Multi-Kernel Transformer for Long Document Classification", author = "Han, Guangzeng and Tsao, Jack and Huang, Xiaolei", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.22", doi = "10.18653/v1/2024.starsem-1.22", pages = "278--290", abstract = "Lengthy documents pose a unique challenge to neural language models due to substantial memory consumption. While existing state-of-the-art (SOTA) models segment long texts into equal-length snippets (e.g., 128 tokens per snippet) or deploy sparse attention networks, these methods have new challenges of context fragmentation and generalizability due to sentence boundaries and varying text lengths. For example, our empirical analysis has shown that SOTA models consistently overfit one set of lengthy documents (e.g., 2000 tokens) while performing worse on texts with other lengths (e.g., 1000 or 4000). In this study, we propose a Length-Aware Multi-Kernel Transformer (LAMKIT) to address the new challenges for the long document classification. LAMKIT encodes lengthy documents by diverse transformer-based kernels for bridging context boundaries and vectorizes text length by the kernels to promote model robustness over varying document lengths. Experiments on five standard benchmarks from health and law domains show LAMKIT outperforms SOTA models up to an absolute 10.9{\%} improvement. We conduct extensive ablation analyses to examine model robustness and effectiveness over varying document lengths.", }
Lengthy documents pose a unique challenge to neural language models due to substantial memory consumption. While existing state-of-the-art (SOTA) models segment long texts into equal-length snippets (e.g., 128 tokens per snippet) or deploy sparse attention networks, these methods have new challenges of context fragmentation and generalizability due to sentence boundaries and varying text lengths. For example, our empirical analysis has shown that SOTA models consistently overfit one set of lengthy documents (e.g., 2000 tokens) while performing worse on texts with other lengths (e.g., 1000 or 4000). In this study, we propose a Length-Aware Multi-Kernel Transformer (LAMKIT) to address the new challenges for the long document classification. LAMKIT encodes lengthy documents by diverse transformer-based kernels for bridging context boundaries and vectorizes text length by the kernels to promote model robustness over varying document lengths. Experiments on five standard benchmarks from health and law domains show LAMKIT outperforms SOTA models up to an absolute 10.9{\%} improvement. We conduct extensive ablation analyses to examine model robustness and effectiveness over varying document lengths.
[ "Han, Guangzeng", "Tsao, Jack", "Huang, Xiaolei" ]
Length-Aware Multi-Kernel Transformer for Long Document Classification
starsem-1.22
Poster
2405.07052
[ "https://github.com/trust-nlp/LAMKIT" ]
-1
-1
-1
-1
0
[]
[]
[]
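A highly simplified sketch of the multi-kernel intuition from the LAMKIT entry above: view one long document at several snippet sizes, so that context boundaries fall in different places per kernel. This shows only the segmentation step, not the transformer kernels or the length vectorization; the sizes are placeholders.

```python
def multi_kernel_segments(tokens, kernel_sizes=(64, 128, 256)):
    """Split one long token sequence into non-overlapping snippets at several
    sizes; each kernel size yields a different view of the same document."""
    return {k: [tokens[i:i + k] for i in range(0, len(tokens), k)]
            for k in kernel_sizes}

doc = [f"tok{i}" for i in range(1000)]
for k, segs in multi_kernel_segments(doc).items():
    print(f"kernel {k}: {len(segs)} snippets")
```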
https://aclanthology.org/2024.starsem-1.23.bib
https://aclanthology.org/2024.starsem-1.23/
@inproceedings{buz-etal-2024-investigating, title = "Investigating Wit, Creativity, and Detectability of Large Language Models in Domain-Specific Writing Style Adaptation of {R}eddit{'}s Showerthoughts", author = "Buz, Tolga and Frost, Benjamin and Genchev, Nikola and Schneider, Moritz and Kaffee, Lucie-Aim{\'e}e and de Melo, Gerard", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.23", doi = "10.18653/v1/2024.starsem-1.23", pages = "291--307", abstract = "Recent Large Language Models (LLMs) have shown the ability to generate content that is difficult or impossible to distinguish from human writing. We investigate the ability of differently-sized LLMs to replicate human writing style in short, creative texts in the domain of Showerthoughts, thoughts that may occur during mundane activities. We compare GPT-2 and GPT-Neo fine-tuned on Reddit data as well as GPT-3.5 invoked in a zero-shot manner, against human-authored texts. We measure human preference on the texts across the specific dimensions that account for the quality of creative, witty texts. Additionally, we compare the ability of humans versus fine-tuned RoBERTa-based classifiers to detect AI-generated texts. We conclude that human evaluators rate the generated texts slightly worse on average regarding their creative quality, but they are unable to reliably distinguish between human-written and AI-generated texts. We further provide the dataset for creative, witty text generation based on Reddit Showerthoughts posts.", }
Recent Large Language Models (LLMs) have shown the ability to generate content that is difficult or impossible to distinguish from human writing. We investigate the ability of differently-sized LLMs to replicate human writing style in short, creative texts in the domain of Showerthoughts, thoughts that may occur during mundane activities. We compare GPT-2 and GPT-Neo fine-tuned on Reddit data as well as GPT-3.5 invoked in a zero-shot manner, against human-authored texts. We measure human preference on the texts across the specific dimensions that account for the quality of creative, witty texts. Additionally, we compare the ability of humans versus fine-tuned RoBERTa-based classifiers to detect AI-generated texts. We conclude that human evaluators rate the generated texts slightly worse on average regarding their creative quality, but they are unable to reliably distinguish between human-written and AI-generated texts. We further provide the dataset for creative, witty text generation based on Reddit Showerthoughts posts.
[ "Buz, Tolga", "Frost, Benjamin", "Genchev, Nikola", "Schneider, Moritz", "Kaffee, Lucie-Aim{\\'e}e", "de Melo, Gerard" ]
Investigating Wit, Creativity, and Detectability of Large Language Models in Domain-Specific Writing Style Adaptation of Reddit's Showerthoughts
starsem-1.23
Poster
2405.01660
[ "https://github.com/aiintelligentsystems/showerthoughts-dataset" ]
-1
-1
-1
-1
0
[]
[]
[]
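A minimal sketch of the detection side of the study above: score a text with a RoBERTa-based sequence classifier. The checkpoint below is generic roberta-base with a freshly initialized head, standing in for the authors' fine-tuned detectors; it would need fine-tuning on human/AI pairs before the probability means anything.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "roberta-base"  # placeholder; not the authors' released detector
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

def p_machine(text):
    """Probability that `text` is machine-generated (label 1), per the classifier."""
    enc = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(p_machine("You never really notice silence until it's gone."))
```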
https://aclanthology.org/2024.starsem-1.24.bib
https://aclanthology.org/2024.starsem-1.24/
@inproceedings{salle-malmasi-2024-multilingual, title = "Multilingual and Code-Switched Sentence Ordering", author = "Salle, Alexandre and Malmasi, Shervin", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.24", doi = "10.18653/v1/2024.starsem-1.24", pages = "308--313", abstract = "Sentence Ordering (SO) is a linguistic task which requires re-ordering of shuffled sentences into a coherent paragraph. SO has downstream applications, but also serves as a semantic probe for computational models as this capability is essential for understanding narrative structures, causal and temporal relations within texts. Despite its importance, prior research has been limited to predictable English language structures and has not thoroughly addressed the complexities of multilingual and varied narrative contexts. To fill this gap, we introduce a novel and comprehensive Multilingual Sentence Ordering task that extends SO to diverse narratives across 12 languages, including challenging code-switched texts. We have developed MultiSO, a new benchmark dataset that represents these challenges. Our findings reveal that both specialized sentence ordering models and advanced Large Language Models like GPT-4 face significant challenges with this task.", }
Sentence Ordering (SO) is a linguistic task which requires re-ordering of shuffled sentences into a coherent paragraph. SO has downstream applications, but also serves as a semantic probe for computational models as this capability is essential for understanding narrative structures, causal and temporal relations within texts. Despite its importance, prior research has been limited to predictable English language structures and has not thoroughly addressed the complexities of multilingual and varied narrative contexts. To fill this gap, we introduce a novel and comprehensive Multilingual Sentence Ordering task that extends SO to diverse narratives across 12 languages, including challenging code-switched texts. We have developed MultiSO, a new benchmark dataset that represents these challenges. Our findings reveal that both specialized sentence ordering models and advanced Large Language Models like GPT-4 face significant challenges with this task.
[ "Salle, Alex", "re", "Malmasi, Shervin" ]
Multilingual and Code-Switched Sentence Ordering
starsem-1.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
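Sentence ordering outputs are commonly scored with Kendall's tau between predicted and gold positions; a tiny sketch using scipy (a standard SO metric, though the paper's own evaluation protocol may differ):

```python
from scipy.stats import kendalltau

def ordering_score(predicted_order, gold_order):
    """Kendall's tau between predicted and gold sentence positions;
    1.0 means a perfectly reconstructed paragraph."""
    tau, _ = kendalltau(predicted_order, gold_order)
    return tau

# Five shuffled sentences: the model's predicted positions vs the gold ones.
print(ordering_score([0, 2, 1, 3, 4], [0, 1, 2, 3, 4]))  # 0.8
```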
https://aclanthology.org/2024.starsem-1.25.bib
https://aclanthology.org/2024.starsem-1.25/
@inproceedings{ranaldi-zanzotto-2024-hans, title = "{HANS}, are you clever? Clever Hans Effect Analysis of Neural Systems", author = "Ranaldi, Leonardo and Zanzotto, Fabio", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.25", doi = "10.18653/v1/2024.starsem-1.25", pages = "314--325", abstract = "Large Language Models (LLMs) have been exhibiting outstanding abilities to reason around cognitive states, intentions, and reactions of all people involved, letting humans guide and comprehend day-to-day social interactions effectively. In fact, several multiple-choice questions (MCQ) benchmarks have been proposed to construct solid assessments of the models{'} abilities. However, earlier works demonstrate the presence of inherent {``}order bias{''} in LLMs, posing challenges to the appropriate evaluation. In this paper, we investigate LLMs{'} resilience abilities through a series of probing tests using four MCQ benchmarks. Introducing adversarial examples, we show a significant performance gap, mainly when varying the order of the choices, which reveals a selection bias and brings into discussion reasoning abilities. Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the LLMs, strengthened by including significant examples in few-shot scenarios. Finally, by using the Chain-of-Thought (CoT) technique, we elicit the model to reason and mitigate the bias by obtaining more robust models.", }
Large Language Models (LLMs) have been exhibiting outstanding abilities to reason around cognitive states, intentions, and reactions of all people involved, letting humans guide and comprehend day-to-day social interactions effectively. In fact, several multiple-choice questions (MCQ) benchmarks have been proposed to construct solid assessments of the models{'} abilities. However, earlier works demonstrate the presence of inherent {``}order bias{''} in LLMs, posing challenges to the appropriate evaluation. In this paper, we investigate LLMs{'} resilience abilities through a series of probing tests using four MCQ benchmarks. Introducing adversarial examples, we show a significant performance gap, mainly when varying the order of the choices, which reveals a selection bias and brings into discussion reasoning abilities. Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the LLMs, strengthened by including significant examples in few-shot scenarios. Finally, by using the Chain-of-Thought (CoT) technique, we elicit the model to reason and mitigate the bias by obtaining more robust models.
[ "Ranaldi, Leonardo", "Zanzotto, Fabio" ]
HANS, are you clever? Clever Hans Effect Analysis of Neural Systems
starsem-1.25
Poster
2309.12481
[ "" ]
https://huggingface.co/papers/2309.12481
0
0
0
2
1
[]
[]
[]
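A small probe in the spirit of the order-bias experiments above: pose the same MCQ under every permutation of the choices and measure how often the model selects the same content rather than the same position. `score_fn` is a hypothetical stub for any LLM call.

```python
from itertools import permutations

def positional_consistency(score_fn, question, choices):
    """Fraction of choice orderings under which the model picks its most
    frequent *content* answer; 1.0 = order-invariant, lower = positional bias.

    `score_fn(question, ordered_choices)` returns the index of the chosen option."""
    picks = []
    for order in permutations(range(len(choices))):
        ordered = [choices[i] for i in order]
        picks.append(ordered[score_fn(question, ordered)])
    most_common = max(set(picks), key=picks.count)
    return picks.count(most_common) / len(picks)

# A model that always answers the first option is maximally order-biased:
always_first = lambda q, opts: 0
print(positional_consistency(always_first, "2+2=?", ["3", "4", "5"]))  # ~0.33
```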
https://aclanthology.org/2024.starsem-1.26.bib
https://aclanthology.org/2024.starsem-1.26/
@inproceedings{charpentier-etal-2024-exploring, title = "Exploring Semantics in Pretrained Language Model Attention", author = "Charpentier, Fr{\'e}d{\'e}ric and Cugliari, Jairo and Guille, Adrien", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.26", doi = "10.18653/v1/2024.starsem-1.26", pages = "326--333", abstract = "Abstract Meaning Representations (AMRs) encode the semantics of sentences in the form of graphs. Vertices represent instances of concepts, and labeled edges represent semantic relations between those instances. Language models (LMs) operate by computing weights of edges of per layer complete graphs whose vertices are words in a sentence or a whole paragraph. In this work, we investigate the ability of the attention heads of two LMs, RoBERTa and GPT2, to detect the semantic relations encoded in an AMR. This is an attempt to show semantic capabilities of those models without finetuning. To do so, we apply both unsupervised and supervised learning techniques.", }
Abstract Meaning Representations (AMRs) encode the semantics of sentences in the form of graphs. Vertices represent instances of concepts, and labeled edges represent semantic relations between those instances. Language models (LMs) operate by computing weights of edges of per layer complete graphs whose vertices are words in a sentence or a whole paragraph. In this work, we investigate the ability of the attention heads of two LMs, RoBERTa and GPT2, to detect the semantic relations encoded in an AMR. This is an attempt to show semantic capabilities of those models without finetuning. To do so, we apply both unsupervised and supervised learning techniques.
[ "Charpentier, Fr{\\'e}d{\\'e}ric", "Cugliari, Jairo", "Guille, Adrien" ]
Exploring Semantics in Pretrained Language Model Attention
starsem-1.26
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
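The probing setup above starts from per-head attention weights between word pairs; a sketch of extracting them with transformers (the model and sentence are illustrative, and the paper additionally runs GPT-2 and layers unsupervised/supervised probes on top of these weights):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base", output_attentions=True)

enc = tokenizer("The boy wants to visit the museum.", return_tensors="pt")
with torch.no_grad():
    attentions = model(**enc).attentions   # tuple: one (1, heads, seq, seq) per layer

# Attention paid by each head of layer 5 from token 3 ("wants") to token 2 ("boy");
# token 0 is <s> and token 1 is "The" under RoBERTa's tokenization.
layer, i, j = 5, 3, 2
per_head = attentions[layer][0, :, i, j]
print(per_head)  # one weight per head; probe these against AMR relations
```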
https://aclanthology.org/2024.starsem-1.27.bib
https://aclanthology.org/2024.starsem-1.27/
@inproceedings{jang-etal-2024-enhancing, title = "Enhancing Self-Attention via Knowledge Fusion: Deriving Sentiment Lexical Attention from Semantic-Polarity Scores", author = "Jang, Dongjun and Kim, Jinwoong and Shin, Hyopil", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.27", doi = "10.18653/v1/2024.starsem-1.27", pages = "334--344", abstract = "In recent years, pre-trained language models have demonstrated exceptional performance across various natural language processing (NLP) tasks. One fundamental component of these models is the self-attention mechanism, which has played a vital role in capturing meaningful relationships between tokens. However, a question still remains as to whether injecting lexical features into the self-attention mechanism can further enhance the understanding and performance of language models. This paper presents a novel approach for injecting semantic-polarity knowledge, referred to as Sentiment Lexical Attention, directly into the self-attention mechanism of Transformer-based models. The primary goal is to improve performance on the sentiment classification task. Our approach involves consistently injecting Sentiment Lexical Attention derived from the lexicon corpus into the attention scores throughout the training process. We empirically evaluate our method on the NSMC sentiment classification benchmark, showcasing significant performance improvements and achieving state-of-the-art results. Furthermore, our approach demonstrates robustness and effectiveness in out-of-domain tasks, indicating its potential for broad applicability. Additionally, we analyze the impact of Sentiment Lexical Attention from the view of the $CLS$ token{'}s attention distribution. Our method offers a fresh perspective on synergizing lexical features and attention scores, thereby encouraging further investigations in the realm of knowledge injection utilizing lexical features.", }
In recent years, pre-trained language models have demonstrated exceptional performance across various natural language processing (NLP) tasks. One fundamental component of these models is the self-attention mechanism, which has played a vital role in capturing meaningful relationships between tokens. However, a question still remains as to whether injecting lexical features into the self-attention mechanism can further enhance the understanding and performance of language models. This paper presents a novel approach for injecting semantic-polarity knowledge, referred to as Sentiment Lexical Attention, directly into the self-attention mechanism of Transformer-based models. The primary goal is to improve performance on the sentiment classification task. Our approach involves consistently injecting Sentiment Lexical Attention derived from the lexicon corpus into the attention scores throughout the training process. We empirically evaluate our method on the NSMC sentiment classification benchmark, showcasing significant performance improvements and achieving state-of-the-art results. Furthermore, our approach demonstrates robustness and effectiveness in out-of-domain tasks, indicating its potential for broad applicability. Additionally, we analyze the impact of Sentiment Lexical Attention from the view of the $CLS$ token{'}s attention distribution. Our method offers a fresh perspective on synergizing lexical features and attention scores, thereby encouraging further investigations in the realm of knowledge injection utilizing lexical features.
[ "Jang, Dongjun", "Kim, Jinwoong", "Shin, Hyopil" ]
Enhancing Self-Attention via Knowledge Fusion: Deriving Sentiment Lexical Attention from Semantic-Polarity Scores
starsem-1.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
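A toy sketch of injecting lexicon knowledge into attention, as in the entry above: add a bias derived from semantic-polarity scores to the pre-softmax attention logits. The additive form, the use of absolute polarity, and alpha are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def lexically_biased_attention(q, k, v, polarity_scores, alpha=1.0):
    """Scaled dot-product attention with an additive bias toward tokens that
    carry strong semantic polarity.

    q, k, v: (seq, dim); polarity_scores: (seq,) lexicon scores in [-1, 1]."""
    d = q.size(-1)
    scores = q @ k.T / d ** 0.5                     # (seq, seq)
    bias = alpha * polarity_scores.abs()            # attend more to polar words
    weights = F.softmax(scores + bias.unsqueeze(0), dim=-1)
    return weights @ v

seq, dim = 6, 16
q = k = v = torch.randn(seq, dim)
polarity = torch.tensor([0.0, 0.9, 0.0, -0.8, 0.0, 0.0])  # e.g., "love", "awful"
print(lexically_biased_attention(q, k, v, polarity).shape)  # torch.Size([6, 16])
```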
https://aclanthology.org/2024.starsem-1.28.bib
https://aclanthology.org/2024.starsem-1.28/
@inproceedings{bacciu-etal-2024-handling, title = "Handling Ontology Gaps in Semantic Parsing", author = "Bacciu, Andrea and Damonte, Marco and Basaldella, Marco and Monti, Emilio", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.28", doi = "10.18653/v1/2024.starsem-1.28", pages = "345--359", abstract = "The majority of Neural Semantic Parsing (NSP) models are developed with the assumption that there are no concepts outside the ones such models can represent with their target symbols (closed-world assumption). This assumption leads such models to generate hallucinated outputs rather than admit their lack of knowledge. Hallucinations can lead to wrong or potentially offensive responses to users. Hence, a mechanism to prevent this behavior is crucial to build trusted NSP-based Question Answering agents. To that end, we propose the Hallucination Simulation Framework (HSF), a general setting for stimulating and analyzing NSP model hallucinations. The framework can be applied to any NSP task with a closed ontology. Using the proposed framework and KQA Pro as the benchmark dataset, we assess state-of-the-art techniques for hallucination detection. We then present a novel hallucination detection strategy that exploits the computational graph of the NSP model to detect the NSP hallucinations in the presence of ontology gaps, out-of-domain utterances, and to recognize NSP errors, improving the F1-Score respectively by {\textasciitilde}21{\%}, {\textasciitilde}24{\%} and {\textasciitilde}1{\%}. This is the first work in closed-ontology NSP that addresses the problem of recognizing ontology gaps. We release our code and checkpoints at https://github.com/amazon-science/handling-ontology-gaps-in-semantic-parsing.", }
The majority of Neural Semantic Parsing (NSP) models are developed with the assumption that there are no concepts outside the ones such models can represent with their target symbols (closed-world assumption). This assumption leads such models to generate hallucinated outputs rather than admit their lack of knowledge. Hallucinations can lead to wrong or potentially offensive responses to users. Hence, a mechanism to prevent this behavior is crucial to build trusted NSP-based Question Answering agents. To that end, we propose the Hallucination Simulation Framework (HSF), a general setting for stimulating and analyzing NSP model hallucinations. The framework can be applied to any NSP task with a closed ontology. Using the proposed framework and KQA Pro as the benchmark dataset, we assess state-of-the-art techniques for hallucination detection. We then present a novel hallucination detection strategy that exploits the computational graph of the NSP model to detect the NSP hallucinations in the presence of ontology gaps, out-of-domain utterances, and to recognize NSP errors, improving the F1-Score respectively by {\textasciitilde}21{\%}, {\textasciitilde}24{\%} and {\textasciitilde}1{\%}. This is the first work in closed-ontology NSP that addresses the problem of recognizing ontology gaps. We release our code and checkpoints at https://github.com/amazon-science/handling-ontology-gaps-in-semantic-parsing.
[ "Bacciu, Andrea", "Damonte, Marco", "Basaldella, Marco", "Monti, Emilio" ]
Handling Ontology Gaps in Semantic Parsing
starsem-1.28
Poster
2406.19537
[ "https://github.com/amazon-science/handling-ontology-gaps-in-semantic-parsing" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.29.bib
https://aclanthology.org/2024.starsem-1.29/
@inproceedings{su-etal-2024-pipenet, title = "{P}ipe{N}et: Question Answering with Semantic Pruning over Knowledge Graphs", author = "Su, Ying and Zhang, Jipeng and Song, Yangqiu and Zhang, Tong", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.29", doi = "10.18653/v1/2024.starsem-1.29", pages = "360--371", abstract = "It is well acknowledged that incorporating explicit knowledge graphs (KGs) can benefit question answering. Existing approaches typically follow a grounding-reasoning pipeline in which entity nodes are first grounded for the query (question and candidate answers), and then a reasoning module reasons over the matched multi-hop subgraph for answer prediction. Although the pipeline largely alleviates the issue of extracting essential information from giant KGs, efficiency is still an open challenge when scaling up hops in grounding the subgraphs. In this paper, we target finding semantically related entity nodes in the subgraph to improve the efficiency of graph reasoning with KGs. We propose a grounding-pruning-reasoning pipeline to prune noisy nodes, remarkably reducing the computation cost and memory usage while also obtaining a decent subgraph representation. In detail, the pruning module first scores concept nodes based on the dependency distance between matched spans and then prunes the nodes according to score ranks. To facilitate the evaluation of pruned subgraphs, we also propose a graph attention network (GAT) based module to reason with the subgraph data. Experimental results on CommonsenseQA and OpenBookQA demonstrate the effectiveness of our method.", }
It is well acknowledged that incorporating explicit knowledge graphs (KGs) can benefit question answering. Existing approaches typically follow a grounding-reasoning pipeline in which entity nodes are first grounded for the query (question and candidate answers), and then a reasoning module reasons over the matched multi-hop subgraph for answer prediction. Although the pipeline largely alleviates the issue of extracting essential information from giant KGs, efficiency is still an open challenge when scaling up hops in grounding the subgraphs. In this paper, we target finding semantically related entity nodes in the subgraph to improve the efficiency of graph reasoning with KGs. We propose a grounding-pruning-reasoning pipeline to prune noisy nodes, remarkably reducing the computation cost and memory usage while also obtaining a decent subgraph representation. In detail, the pruning module first scores concept nodes based on the dependency distance between matched spans and then prunes the nodes according to score ranks. To facilitate the evaluation of pruned subgraphs, we also propose a graph attention network (GAT) based module to reason with the subgraph data. Experimental results on CommonsenseQA and OpenBookQA demonstrate the effectiveness of our method.
[ "Su, Ying", "Zhang, Jipeng", "Song, Yangqiu", "Zhang, Tong" ]
PipeNet: Question Answering with Semantic Pruning over Knowledge Graphs
starsem-1.29
Poster
2401.17536
[ "https://github.com/hkust-knowcomp/pipenet" ]
-1
-1
-1
-1
0
[]
[]
[]
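A stripped-down sketch of the grounding-pruning step described above: score grounded concept nodes (here via precomputed scores standing in for dependency-distance scoring) and keep only the top fraction before reasoning. The node names and keep ratio are hypothetical.

```python
def prune_subgraph(nodes, scores, keep_ratio=0.5):
    """Keep the top-scoring concept nodes of a grounded subgraph so the
    downstream reasoning module sees a smaller, less noisy graph."""
    ranked = sorted(nodes, key=lambda n: scores[n], reverse=True)
    return set(ranked[: max(1, int(len(ranked) * keep_ratio))])

nodes = ["river", "bank", "money", "water", "loan"]
# Higher score = closer (in dependency distance) to the matched question span.
scores = {"river": 0.9, "water": 0.8, "bank": 0.7, "money": 0.2, "loan": 0.1}
print(prune_subgraph(nodes, scores))  # {'river', 'water'} at keep_ratio=0.5
```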
https://aclanthology.org/2024.starsem-1.30.bib
https://aclanthology.org/2024.starsem-1.30/
@inproceedings{ranaldi-etal-2024-trip, title = "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", author = "Ranaldi, Leonardo and Ruzzetti, Elena and Venditti, Davide and Onorati, Dario and Zanzotto, Fabio", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.30", doi = "10.18653/v1/2024.starsem-1.30", pages = "372--384", abstract = "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training are emerging as the next big revolution in natural language processing and understanding. These CtB-LLMs are democratizing access to trainable Very Large-Language Models (VLLMs) and, thus, may represent the building blocks of many NLP systems solving downstream tasks. Hence, a little or a large bias in CtB-LLMs may cause huge harm. In this paper, we performed a large investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable. Indeed, according to current tests, the LLaMA and the OPT families have an important bias in gender, race, religion, and profession. In contrast to the analysis for other LLMs, we discovered that bias depends not on the number of parameters but on the perplexity. Finally, the debiasing of OPT using LoRA reduces bias up to 4.12 points in the normalized stereotype score.", }
Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training are emerging as the next big revolution in natural language processing and understanding. These CtB-LLMs are democratizing access to trainable Very Large-Language Models (VLLMs) and, thus, may represent the building blocks of many NLP systems solving downstream tasks. Hence, a little or a large bias in CtB-LLMs may cause huge harm. In this paper, we performed a large investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable. Indeed, according to current tests, the LLaMA and the OPT families have an important bias in gender, race, religion, and profession. In contrast to the analysis for other LLMs, we discovered that bias depends not on the number of parameters but on the perplexity. Finally, the debiasing of OPT using LoRA reduces bias up to 4.12 points in the normalized stereotype score.
[ "Ranaldi, Leonardo", "Ruzzetti, Elena", "Venditti, Davide", "Onorati, Dario", "Zanzotto, Fabio" ]
A Trip Towards Fairness: Bias and De-Biasing in Large Language Models
starsem-1.30
Poster
2305.13862
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
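Debiasing with LoRA, as in the entry above, trains only low-rank adapters on top of a frozen model; a minimal sketch with the peft library (the model choice, rank, and target modules are placeholders, not the paper's configuration):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = LoraConfig(
    r=8,                                   # low-rank update dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # OPT attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA adapters would train,
                                    # e.g., on counter-stereotypical data
```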
https://aclanthology.org/2024.starsem-1.31.bib
https://aclanthology.org/2024.starsem-1.31/
@inproceedings{fu-frank-2024-compositional, title = "Compositional Structured Explanation Generation with Dynamic Modularized Reasoning", author = "Fu, Xiyan and Frank, Anette", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.31", doi = "10.18653/v1/2024.starsem-1.31", pages = "385--401", abstract = "In this work, we propose a new task, compositional structured explanation generation (CSEG), to facilitate research on compositional generalization in reasoning. Despite the success of language models in solving reasoning tasks, their compositional generalization capabilities are under-researched. Our new CSEG task tests a model{'}s ability to generalize from generating entailment trees with a limited number of inference steps {--} to more steps, focusing on the length and shapes of entailment trees. CSEG is challenging in requiring both reasoning and compositional generalization abilities, and by being framed as a generation task. Besides the CSEG task, we propose a new dynamic modularized reasoning model, MORSE, that factorizes the inference process into modules, where each module represents a functional unit. We adopt modularized self-attention to dynamically select and route inputs to dedicated heads, which specializes them to specific functions. Using CSEG, we compare MORSE to models from prior work. Our analyses show that the task is challenging, but that the dynamic reasoning modules of MORSE are effective, showing competitive compositional generalization abilities in a generation setting.", }
In this work, we propose a new task, compositional structured explanation generation (CSEG), to facilitate research on compositional generalization in reasoning. Despite the success of language models in solving reasoning tasks, their compositional generalization capabilities are under-researched. Our new CSEG task tests a model{'}s ability to generalize from generating entailment trees with a limited number of inference steps {--} to more steps, focusing on the length and shapes of entailment trees. CSEG is challenging in requiring both reasoning and compositional generalization abilities, and by being framed as a generation task. Besides the CSEG task, we propose a new dynamic modularized reasoning model, MORSE, that factorizes the inference process into modules, where each module represents a functional unit. We adopt modularized self-attention to dynamically select and route inputs to dedicated heads, which specializes them to specific functions. Using CSEG, we compare MORSE to models from prior work. Our analyses show that the task is challenging, but that the dynamic reasoning modules of MORSE are effective, showing competitive compositional generalization abilities in a generation setting.
[ "Fu, Xiyan", "Frank, Anette" ]
Compositional Structured Explanation Generation with Dynamic Modularized Reasoning
starsem-1.31
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.32.bib
https://aclanthology.org/2024.starsem-1.32/
@inproceedings{ki-etal-2024-inspecting, title = "Inspecting Soundness of {AMR} Similarity Metrics in terms of Equivalence and Inequivalence", author = "Ki, Kyung Seo and Kim, Bugeun and Gweon, Gahgene", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.32", doi = "10.18653/v1/2024.starsem-1.32", pages = "402--409", abstract = "In this study, we investigate the soundness of current Abstract Meaning Representation (AMR) similarity metrics in terms of equivalence and inequivalence. Specifically, AMR guidelines provide several equivalence and inequivalence conditions to reflect the meaning aspect of the semantics. Thus, it is important to examine an AMR metric{'}s soundness, i.e., whether the metric correctly reflects the guidelines. However, the soundness of the existing metrics has been little investigated. In this work, we propose a new experimental method using simulated data and a series of statistical tests to verify the metric{'}s soundness. Our experimental results revealed that all existing metrics such as Smatch, SemBLEU, S2match, Smatch++, WWLK-theta, WWLK-k3e2n, and SEMA did not fully meet the AMR guidelines in terms of equivalence and inequivalence aspects. Also, to alleviate this soundness problem, we suggest a revised metric called Smatch{\#}, which adopts a simple graph standardization technique that can improve the soundness of an existing metric.", }
In this study, we investigate the soundness of current Abstract Meaning Representation (AMR) similarity metrics in terms of equivalence and inequivalence. Specifically, AMR guidelines provide several equivalence and inequivalence conditions to reflect the meaning aspect of the semantics. Thus, it is important to examine an AMR metric{'}s soundness, i.e., whether the metric correctly reflects the guidelines. However, the soundness of the existing metrics has been little investigated. In this work, we propose a new experimental method using simulated data and a series of statistical tests to verify the metric{'}s soundness. Our experimental results revealed that all existing metrics such as Smatch, SemBLEU, S2match, Smatch++, WWLK-theta, WWLK-k3e2n, and SEMA did not fully meet the AMR guidelines in terms of equivalence and inequivalence aspects. Also, to alleviate this soundness problem, we suggest a revised metric called Smatch{\#}, which adopts a simple graph standardization technique that can improve the soundness of an existing metric.
[ "Ki, Kyung Seo", "Kim, Bugeun", "Gweon, Gahgene" ]
Inspecting Soundness of AMR Similarity Metrics in terms of Equivalence and Inequivalence
starsem-1.32
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.starsem-1.33.bib
https://aclanthology.org/2024.starsem-1.33/
@inproceedings{dorkin-sirts-2024-sonajaht, title = "S{\~o}najaht: Definition Embeddings and Semantic Search for Reverse Dictionary Creation", author = "Dorkin, Aleksei and Sirts, Kairit", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.33", doi = "10.18653/v1/2024.starsem-1.33", pages = "410--420", abstract = "We present an information retrieval based reverse dictionary system using modern pre-trained language models and approximate nearest neighbors search algorithms. The proposed approach is applied to an existing Estonian language lexicon resource, S{\~o}naveeb (word web), with the purpose of enhancing and enriching it by introducing cross-lingual reverse dictionary functionality powered by semantic search. The performance of the system is evaluated using both an existing labeled English dataset of words and definitions that is extended to contain also Estonian and Russian translations, and a novel unlabeled evaluation approach that extracts the evaluation data from the lexicon resource itself using synonymy relations. Evaluation results indicate that the information retrieval based semantic search approach without any model training is feasible, producing median rank of 1 in the monolingual setting and median rank of 2 in the cross-lingual setting using the unlabeled evaluation approach, with models trained for cross-lingual retrieval and including Estonian in their training data showing superior performance in our particular task.", }
We present an information retrieval based reverse dictionary system using modern pre-trained language models and approximate nearest neighbors search algorithms. The proposed approach is applied to an existing Estonian language lexicon resource, S{\~o}naveeb (word web), with the purpose of enhancing and enriching it by introducing cross-lingual reverse dictionary functionality powered by semantic search. The performance of the system is evaluated using both an existing labeled English dataset of words and definitions that is extended to contain also Estonian and Russian translations, and a novel unlabeled evaluation approach that extracts the evaluation data from the lexicon resource itself using synonymy relations. Evaluation results indicate that the information retrieval based semantic search approach without any model training is feasible, producing median rank of 1 in the monolingual setting and median rank of 2 in the cross-lingual setting using the unlabeled evaluation approach, with models trained for cross-lingual retrieval and including Estonian in their training data showing superior performance in our particular task.
[ "Dorkin, Aleksei", "Sirts, Kairit" ]
Sõnajaht: Definition Embeddings and Semantic Search for Reverse Dictionary Creation
starsem-1.33
Poster
2404.19430
[ "https://github.com/slowwavesleep/sonajaht" ]
https://huggingface.co/papers/2404.19430
1
0
0
2
1
[]
[ "adorkin/sonajaht" ]
[]
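A compact sketch of the reverse-dictionary retrieval described above: embed definitions with a multilingual sentence encoder and rank words by similarity to a user's description. The model name and toy lexicon are illustrative; the paper additionally relies on approximate nearest-neighbor indexes at lexicon scale.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

lexicon = {
    "saw": "a tool with a toothed blade for cutting wood",
    "ladder": "a structure of rungs for climbing up or down",
    "kettle": "a container used to boil water",
}
words = list(lexicon)
# Normalized embeddings make the dot product equal cosine similarity.
def_embs = model.encode([lexicon[w] for w in words], normalize_embeddings=True)

query = model.encode(["thing you climb to reach high places"],
                     normalize_embeddings=True)
ranked = np.argsort(-def_embs @ query[0])
print([words[i] for i in ranked])  # 'ladder' should come first
```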
https://aclanthology.org/2024.starsem-1.34.bib
https://aclanthology.org/2024.starsem-1.34/
@inproceedings{hong-etal-2024-large, title = "Do large language models and humans have similar behaviours in causal inference with script knowledge?", author = "Hong, Xudong and Ryzhova, Margarita and Biondi, Daniel and Demberg, Vera", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.34", doi = "10.18653/v1/2024.starsem-1.34", pages = "421--437", abstract = "Recently, large pre-trained language models (LLMs) have demonstrated superior language understanding abilities, including zero-shot causal reasoning. However, it is unclear to what extent their capabilities are similar to human ones. We here study the processing of an event $B$ in a script-based story, which causally depends on a previous event $A$. In our manipulation, event $A$ is stated, negated, or omitted in an earlier section of the text. We first conducted a self-paced reading experiment, which showed that humans exhibit significantly longer reading times when causal conflicts exist ($\neg A \rightarrow B$) than under logical conditions ($A \rightarrow B$). However, reading times remain similar when cause A is not explicitly mentioned, indicating that humans can easily infer event B from their script knowledge. We then tested a variety of LLMs on the same data to check to what extent the models replicate human behavior. Our experiments show that 1) only recent LLMs, like GPT-3 or Vicuna, correlate with human behavior in the $\neg A \rightarrow B$ condition. 2) Despite this correlation, all models still fail to predict that $nil \rightarrow B$ is less surprising than $\neg A \rightarrow B$, indicating that LLMs still have difficulties integrating script knowledge.", }
Recently, large pre-trained language models (LLMs) have demonstrated superior language understanding abilities, including zero-shot causal reasoning. However, it is unclear to what extent their capabilities are similar to human ones. We here study the processing of an event $B$ in a script-based story, which causally depends on a previous event $A$. In our manipulation, event $A$ is stated, negated, or omitted in an earlier section of the text. We first conducted a self-paced reading experiment, which showed that humans exhibit significantly longer reading times when causal conflicts exist ($\neg A \rightarrow B$) than under logical conditions ($A \rightarrow B$). However, reading times remain similar when cause A is not explicitly mentioned, indicating that humans can easily infer event B from their script knowledge. We then tested a variety of LLMs on the same data to check to what extent the models replicate human behavior. Our experiments show that 1) only recent LLMs, like GPT-3 or Vicuna, correlate with human behavior in the $\neg A \rightarrow B$ condition. 2) Despite this correlation, all models still fail to predict that $nil \rightarrow B$ is less surprising than $\neg A \rightarrow B$, indicating that LLMs still have difficulties integrating script knowledge.
[ "Hong, Xudong", "Ryzhova, Margarita", "Biondi, Daniel", "Demberg, Vera" ]
Do large language models and humans have similar behaviours in causal inference with script knowledge?
starsem-1.34
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
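The LLM side of the reading-time comparison above reduces to computing the surprisal of event B's sentence under different contexts; a sketch with GPT-2 (the stated/negated contexts here are invented examples, and the paper evaluates a range of larger models):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(context, target):
    """Mean negative log-probability (nats/token) of `target` given `context`,
    a standard proxy compared against human reading times."""
    ctx_ids = tokenizer(context, return_tensors="pt")["input_ids"]
    tgt_ids = tokenizer(target, return_tensors="pt")["input_ids"]
    ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    # Positions whose next-token prediction is a target token.
    tgt_positions = range(ctx_ids.size(1) - 1, ids.size(1) - 1)
    nll = -sum(log_probs[p, ids[0, p + 1]] for p in tgt_positions)
    return (nll / tgt_ids.size(1)).item()

stated = "She ground fresh beans and poured the water."   # cause A stated
negated = "She had no coffee beans left at home."         # cause A negated
target = " She sipped her coffee slowly."                  # event B
print(surprisal(stated, target), surprisal(negated, target))
```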
https://aclanthology.org/2024.starsem-1.35.bib
https://aclanthology.org/2024.starsem-1.35/
@inproceedings{anantheswaran-etal-2024-edm3, title = "{EDM}3: Event Detection as Multi-task Text Generation", author = "Anantheswaran, Ujjwala and Gupta, Himanshu and Parmar, Mihir and Pal, Kuntal and Baral, Chitta", editor = "Bollegala, Danushka and Shwartz, Vered", booktitle = "Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.starsem-1.35", doi = "10.18653/v1/2024.starsem-1.35", pages = "438--451", abstract = "We present EDM3, a novel approach for Event Detection (ED) based on decomposing and reformulating ED, and fine-tuning over its atomic subtasks. EDM3 enhances knowledge transfer while mitigating prediction error propagation inherent in pipelined approaches. EDM3 infers dataset-specific knowledge required for the complex primary task from its atomic tasks, making it adaptable to any set of event types. We evaluate EDM3 on multiple ED datasets, achieving state-of-the-art results on RAMS (71.3{\%} vs 65.1{\%} F1), and competitive performance on WikiEvents, MAVEN (∆ = 0.2{\%}), and MLEE (∆ = 1.8{\%}). We present an ablation study over rare event types ({\textless}15 instances in training data) in MAVEN, where EDM3 achieves {\textasciitilde}90{\%} F1. To the best of the authors{'} knowledge, we are the first to analyze ED performance over non-standard event configurations (i.e., multi-word and multi-class triggers). Experimental results show that EDM3 achieves {\textasciitilde}90{\%} exact match accuracy on multi-word triggers and {\textasciitilde}61{\%} prediction accuracy on multi-class triggers. This work establishes the effectiveness of EDM3 in enhancing performance on a complex information extraction task.", }
We present EDM3, a novel approach for Event Detection (ED) based on decomposing and reformulating ED, and fine-tuning over its atomic subtasks. EDM3 enhances knowledge transfer while mitigating prediction error propagation inherent in pipelined approaches. EDM3 infers dataset-specific knowledge required for the complex primary task from its atomic tasks, making it adaptable to any set of event types. We evaluate EDM3 on multiple ED datasets, achieving state-of-the-art results on RAMS (71.3{\%} vs 65.1{\%} F1), and competitive performance on WikiEvents, MAVEN (∆ = 0.2{\%}), and MLEE (∆ = 1.8{\%}). We present an ablation study over rare event types ({\textless}15 instances in training data) in MAVEN, where EDM3 achieves {\textasciitilde}90{\%} F1. To the best of the authors{'} knowledge, we are the first to analyze ED performance over non-standard event configurations (i.e., multi-word and multi-class triggers). Experimental results show that EDM3 achieves {\textasciitilde}90{\%} exact match accuracy on multi-word triggers and {\textasciitilde}61{\%} prediction accuracy on multi-class triggers. This work establishes the effectiveness of EDM3 in enhancing performance on a complex information extraction task.
[ "Anantheswaran, Ujjwala", "Gupta, Himanshu", "Parmar, Mihir", "Pal, Kuntal", "Baral, Chitta" ]
EDM3: Event Detection as Multi-task Text Generation
starsem-1.35
Poster
2305.16357
[ "https://github.com/ujjwalaananth/edm3_eventdetection" ]
-1
-1
-1
-1
0
[]
[]
[]
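A sketch of the decompose-and-reformulate idea behind EDM3: cast event detection as text-to-text atomic subtasks plus a combined task, then fine-tune one generator on all of them. The prompt wording and target formats below are illustrative, not the paper's templates.

```python
def make_ed_examples(sentence, trigger, event_type):
    """Build text-to-text training pairs for ED's atomic and combined tasks."""
    return [
        # Atomic subtask 1: trigger identification
        {"input": f"Identify the event trigger: {sentence}", "target": trigger},
        # Atomic subtask 2: trigger classification
        {"input": f"Classify the event of trigger '{trigger}': {sentence}",
         "target": event_type},
        # Combined primary task
        {"input": f"Detect events: {sentence}",
         "target": f"{trigger} is a {event_type} event"},
    ]

for ex in make_ed_examples("Rebels attacked the convoy at dawn.",
                           "attacked", "Conflict.Attack"):
    print(ex["input"], "=>", ex["target"])
```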
https://aclanthology.org/2024.trustnlp-1.1.bib
https://aclanthology.org/2024.trustnlp-1.1/
@inproceedings{adilazuarda-2024-beyond, title = "Beyond {T}uring: A Comparative Analysis of Approaches for Detecting Machine-Generated Text", author = "Adilazuarda, Muhammad", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Cao, Yang Trista and Mehrabi, Ninareh and Zhao, Jieyu and Galstyan, Aram and Dhamala, Jwala and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.trustnlp-1.1", doi = "10.18653/v1/2024.trustnlp-1.1", pages = "1--12", abstract = "Significant progress has been made on text generation by pre-trained language models (PLMs), yet distinguishing between human and machine-generated text poses an escalating challenge. This paper offers an in-depth evaluation of three distinct methods used to address this task: traditional shallow learning, Language Model (LM) fine-tuning, and Multilingual Model fine-tuning. These approaches are rigorously tested on a wide range of machine-generated texts, providing a benchmark of their competence in distinguishing between human-authored and machine-authored linguistic constructs. The results reveal considerable differences in performance across methods, thus emphasizing the continued need for advancement in this crucial area of NLP. This study offers valuable insights and paves the way for future research aimed at creating robust and highly discriminative models.", }
Significant progress has been made on text generation by pre-trained language models (PLMs), yet distinguishing between human and machine-generated text poses an escalating challenge. This paper offers an in-depth evaluation of three distinct methods used to address this task: traditional shallow learning, Language Model (LM) fine-tuning, and Multilingual Model fine-tuning. These approaches are rigorously tested on a wide range of machine-generated texts, providing a benchmark of their competence in distinguishing between human-authored and machine-authored linguistic constructs. The results reveal considerable differences in performance across methods, thus emphasizing the continued need for advancement in this crucial area of NLP. This study offers valuable insights and paves the way for future research aimed at creating robust and highly discriminative models.
[ "Adilazuarda, Muhammad" ]
Beyond Turing: A Comparative Analysis of Approaches for Detecting Machine-Generated Text
trustnlp-1.1
Poster
2311.12373
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
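Of the three method families compared above, the traditional shallow-learning baseline is the easiest to sketch: character n-gram TF-IDF features with a linear classifier via scikit-learn. The two training texts are toys, and the feature and model choices are assumptions, not the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["The committee will reconvene after lunch, weather permitting.",
         "As an AI language model, I can certainly help summarize that text."]
labels = [0, 1]  # 0 = human-authored, 1 = machine-generated

clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["Certainly! Here's a concise summary of the key points."]))
```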
https://aclanthology.org/2024.trustnlp-1.2.bib
https://aclanthology.org/2024.trustnlp-1.2/
@inproceedings{lal-etal-2024-automated, title = "Automated Adversarial Discovery for Safety Classifiers", author = "Lal, Yash Kumar and Lahoti, Preethi and Sinha, Aradhana and Qin, Yao and Balashankar, Ananth", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Cao, Yang Trista and Mehrabi, Ninareh and Zhao, Jieyu and Galstyan, Aram and Dhamala, Jwala and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.trustnlp-1.2", doi = "10.18653/v1/2024.trustnlp-1.2", pages = "13--26", abstract = "Safety classifiers are critical in mitigating toxicity on online forums such as social media and in chatbots. Still, they continue to be vulnerable to emergent, and often innumerable, adversarial attacks. Traditional automated adversarial data generation methods, however, tend to produce attacks that are not diverse, but variations of previously observed harm types. We formalize the task of automated adversarial discovery for safety classifiers - to find new attacks along previously unseen harm dimensions that expose new weaknesses in the classifier. We measure progress on this task along two key axes: (1) adversarial success: does the attack fool the classifier? and (2) dimensional diversity: does the attack represent a previously unseen harm type? Our evaluation of existing attack generation methods on the CivilComments toxicity task reveals their limitations: Word perturbation attacks fail to fool classifiers, while prompt-based LLM attacks have more adversarial success, but lack dimensional diversity. Even our best-performing prompt-based method finds new successful attacks on unseen harm dimensions only 5{\%} of the time. Automatically finding new harmful dimensions of attack is crucial and there is substantial headroom for future research on our new task.", }
Safety classifiers are critical in mitigating toxicity on online forums such as social media and in chatbots. Still, they continue to be vulnerable to emergent, and often innumerable, adversarial attacks. Traditional automated adversarial data generation methods, however, tend to produce attacks that are not diverse, but variations of previously observed harm types. We formalize the task of automated adversarial discovery for safety classifiers - to find new attacks along previously unseen harm dimensions that expose new weaknesses in the classifier. We measure progress on this task along two key axes: (1) adversarial success: does the attack fool the classifier? and (2) dimensional diversity: does the attack represent a previously unseen harm type? Our evaluation of existing attack generation methods on the CivilComments toxicity task reveals their limitations: Word perturbation attacks fail to fool classifiers, while prompt-based LLM attacks have more adversarial success, but lack dimensional diversity. Even our best-performing prompt-based method finds new successful attacks on unseen harm dimensions only 5{\%} of the time. Automatically finding new harmful dimensions of attack is crucial and there is substantial headroom for future research on our new task.
[ "Lal, Yash Kumar", "Lahoti, Preethi", "Sinha, Aradhana", "Qin, Yao", "Balashankar, Ananth" ]
Automated Adversarial Discovery for Safety Classifiers
trustnlp-1.2
Poster
2406.17104
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
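The paper's two evaluation axes can be made concrete with a small scoring helper; plain Python, where `classifier` and the harm-dimension annotations are stand-ins for a real safety classifier and human labels:

```python
def evaluate_attacks(attacks, classifier, known_dims):
    """Score candidate attacks on (1) adversarial success (the safety
    classifier misses the attack) and (2) dimensional diversity (the attack's
    harm dimension is previously unseen).

    Each attack is a dict with 'text' and an annotated 'harm_dim';
    `classifier(text)` returns True when the text is flagged as harmful."""
    successes = [a for a in attacks if not classifier(a["text"])]
    novel = [a for a in successes if a["harm_dim"] not in known_dims]
    n = len(attacks)
    return {"adversarial_success": len(successes) / n,
            "dimensional_diversity": len(novel) / n}

naive_classifier = lambda text: "hate" in text.lower()
attacks = [{"text": "veiled insult phrased politely", "harm_dim": "microaggression"},
           {"text": "I hate everyone here", "harm_dim": "insult"}]
print(evaluate_attacks(attacks, naive_classifier, known_dims={"insult", "threat"}))
```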
https://aclanthology.org/2024.trustnlp-1.3.bib
https://aclanthology.org/2024.trustnlp-1.3/
@inproceedings{setzu-etal-2024-fairbelief, title = "{F}air{B}elief - Assessing Harmful Beliefs in Language Models", author = "Setzu, Mattia and Marchiori Manerba, Marta and Minervini, Pasquale and Nozza, Debora", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Cao, Yang Trista and Mehrabi, Ninareh and Zhao, Jieyu and Galstyan, Aram and Dhamala, Jwala and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.trustnlp-1.3", doi = "10.18653/v1/2024.trustnlp-1.3", pages = "27--39", abstract = "Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing. This paper proposes FairBelief, an analytical approach to capture and assess beliefs, i.e., propositions that an LM may embed with different degrees of confidence and that covertly influence its predictions. With FairBelief, we leverage prompting to study the behavior of several state-of-the-art LMs across different previously neglected axes, such as model scale and likelihood, assessing predictions on a fairness dataset specifically designed to quantify LMs{'} outputs{'} hurtfulness. Finally, we conclude with an in-depth qualitative assessment of the beliefs emitted by the models. We apply FairBelief to English LMs, revealing that, although these architectures enable high performances on diverse natural language processing tasks, they show hurtful beliefs about specific genders. Interestingly, training procedure and dataset, model scale, and architecture induce beliefs of different degrees of hurtfulness.", }
Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing. This paper proposes FairBelief, an analytical approach to capture and assess beliefs, i.e., propositions that an LM may embed with different degrees of confidence and that covertly influence its predictions. With FairBelief, we leverage prompting to study the behavior of several state-of-the-art LMs across different previously neglected axes, such as model scale and likelihood, assessing predictions on a fairness dataset specifically designed to quantify LMs' outputs' hurtfulness. Finally, we conclude with an in-depth qualitative assessment of the beliefs emitted by the models. We apply FairBelief to English LMs, revealing that, although these architectures enable high performance on diverse natural language processing tasks, they show hurtful beliefs about specific genders. Interestingly, training procedure and dataset, model scale, and architecture induce beliefs of different degrees of hurtfulness.
[ "Setzu, Mattia", "Marchiori Manerba, Marta", "Minervini, Pasquale", "Nozza, Debora" ]
FairBelief - Assessing Harmful Beliefs in Language Models
trustnlp-1.3
Poster
2402.17389
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
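The FairBelief abstract above describes probing LMs with prompts and scoring completions for hurtfulness. A minimal sketch of that style of belief probing, using the Hugging Face `fill-mask` pipeline on a masked LM, might look as follows; the prompt templates and the `hurtful_terms` lexicon are illustrative assumptions, not the paper's benchmark or scoring function.

```python
# Illustrative sketch of prompt-based belief probing in the spirit of
# FairBelief: fill a masked slot and measure how much probability mass
# the model assigns to hurtful completions. Templates and the term list
# are toy examples, not the fairness dataset used in the paper.

from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

templates = [
    "Women are <mask>.",
    "Men are <mask>.",
]
hurtful_terms = {"stupid", "weak", "inferior"}  # toy lexicon (assumption)

for template in templates:
    preds = fill(template, top_k=20)  # top completions with scores
    hurtful_mass = sum(
        p["score"] for p in preds
        if p["token_str"].strip().lower() in hurtful_terms
    )
    print(f"{template!r}: hurtful probability mass = {hurtful_mass:.4f}")
```

Comparing this mass across demographic slots in the same template is one simple way to surface the gender-specific hurtful beliefs the abstract reports.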
https://aclanthology.org/2024.trustnlp-1.4.bib
https://aclanthology.org/2024.trustnlp-1.4/
@inproceedings{bui-von-der-wense-2024-trade, title = "The Trade-off between Performance, Efficiency, and Fairness in Adapter Modules for Text Classification", author = "Bui, Minh Duc and Von Der Wense, Katharina", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Cao, Yang Trista and Mehrabi, Ninareh and Zhao, Jieyu and Galstyan, Aram and Dhamala, Jwala and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.trustnlp-1.4", doi = "10.18653/v1/2024.trustnlp-1.4", pages = "40--50", abstract = "Current natural language processing (NLP) research tends to focus on only one or, less frequently, two dimensions {--} e.g., performance, interpretability, or efficiency {--} at a time, which may lead to suboptimal conclusions. Work on adapter modules focuses on improving performance and efficiency, with no investigation of unintended consequences on other aspects such as fairness. To address this gap, we conduct experiments on three text classification datasets by either (1) finetuning all parameters or (2) using adapter modules. Regarding performance and efficiency, we confirm prior findings that the accuracy of adapter-enhanced models is roughly on par with that of fully finetuned models, while training time is substantially reduced. Regarding fairness, we show that adapter modules result in mixed fairness across sensitive groups. Further investigation reveals that, when the standard finetuned model exhibits limited biases, adapter modules typically do not introduce extra bias. On the other hand, when the finetuned model exhibits increased bias, the use of adapter modules poses the potential danger of amplifying these biases to a significant extent. Our findings highlight the need for a case-by-case evaluation rather than a one-size-fits-all judgment.", }
Current natural language processing (NLP) research tends to focus on only one or, less frequently, two dimensions at a time (e.g., performance, interpretability, or efficiency), which may lead to suboptimal conclusions. Work on adapter modules focuses on improving performance and efficiency, with no investigation of unintended consequences on other aspects such as fairness. To address this gap, we conduct experiments on three text classification datasets by either (1) finetuning all parameters or (2) using adapter modules. Regarding performance and efficiency, we confirm prior findings that the accuracy of adapter-enhanced models is roughly on par with that of fully finetuned models, while training time is substantially reduced. Regarding fairness, we show that adapter modules result in mixed fairness across sensitive groups. Further investigation reveals that, when the standard finetuned model exhibits limited biases, adapter modules typically do not introduce extra bias. On the other hand, when the finetuned model exhibits increased bias, the use of adapter modules poses the potential danger of amplifying these biases to a significant extent. Our findings highlight the need for a case-by-case evaluation rather than a one-size-fits-all judgment.
[ "Bui, Minh Duc", "Von Der Wense, Katharina" ]
The Trade-off between Performance, Efficiency, and Fairness in Adapter Modules for Text Classification
trustnlp-1.4
Poster
2405.02010
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
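The trade-off studied in the abstract above hinges on adapters training far fewer parameters than full finetuning. The sketch below uses LoRA via the `peft` library as a stand-in for the paper's adapter modules (the paper's exact adapter architecture may differ) to show the parameter-count gap on a BERT classifier.

```python
# Sketch: comparing trainable parameter counts for full finetuning vs.
# an adapter-style method. LoRA via `peft` is used as a convenient
# stand-in; the paper's adapter modules may be a different architecture.

from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
full = sum(p.numel() for p in model.parameters() if p.requires_grad)

lora_cfg = LoraConfig(
    task_type="SEQ_CLS", r=8, lora_alpha=16,
    target_modules=["query", "value"],  # BERT attention projections
)
adapter_model = get_peft_model(model, lora_cfg)
tuned = sum(p.numel() for p in adapter_model.parameters() if p.requires_grad)

print(f"full finetuning: {full:,} trainable params")
print(f"adapter (LoRA):  {tuned:,} trainable params "
      f"({100 * tuned / full:.2f}% of full)")
```

The fairness side of the trade-off then amounts to evaluating both variants per sensitive group (e.g., per-group accuracy gaps), which is where the paper finds the mixed, case-by-case results.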
https://aclanthology.org/2024.trustnlp-1.5.bib
https://aclanthology.org/2024.trustnlp-1.5/
@inproceedings{bohacek-bravansky-2024-xgboost, title = "When {XGB}oost Outperforms {GPT}-4 on Text Classification: A Case Study", author = "Bohacek, Matyas and Bravansky, Michal", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Cao, Yang Trista and Mehrabi, Ninareh and Zhao, Jieyu and Galstyan, Aram and Dhamala, Jwala and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.trustnlp-1.5", doi = "10.18653/v1/2024.trustnlp-1.5", pages = "51--60", abstract = "Large language models (LLMs) are increasingly used for applications beyond text generation, ranging from text summarization to instruction following. One popular example of exploiting LLMs{'} zero- and few-shot capabilities is the task of text classification. This short paper compares two popular LLM-based classification pipelines (GPT-4 and LLAMA 2) to a popular pre-LLM-era classification pipeline on the task of news trustworthiness classification, focusing on performance, training, and deployment requirements. We find that, in this case, the pre-LLM-era ensemble pipeline outperforms the two popular LLM pipelines while being orders of magnitude smaller in parameter size.", }
Large language models (LLMs) are increasingly used for applications beyond text generation, ranging from text summarization to instruction following. One popular example of exploiting LLMs' zero- and few-shot capabilities is the task of text classification. This short paper compares two popular LLM-based classification pipelines (GPT-4 and LLAMA 2) to a popular pre-LLM-era classification pipeline on the task of news trustworthiness classification, focusing on performance, training, and deployment requirements. We find that, in this case, the pre-LLM-era ensemble pipeline outperforms the two popular LLM pipelines while being orders of magnitude smaller in parameter size.
[ "Bohacek, Matyas", "Bravansky, Michal" ]
When XGBoost Outperforms GPT-4 on Text Classification: A Case Study
trustnlp-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
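As a concrete picture of the pre-LLM-era baseline the case study above favors, here is a minimal TF-IDF + XGBoost text-classification pipeline. The paper's actual ensemble and feature set are not specified here, so treat this as a generic sketch with toy data rather than a reproduction.

```python
# Minimal sketch of a pre-LLM-era text classification pipeline:
# TF-IDF features fed into gradient-boosted trees. Generic baseline
# with toy data, not a reproduction of the paper's ensemble.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

texts = ["reliable sourced reporting ...", "clickbait outrage ..."] * 50  # toy data
labels = [1, 0] * 50  # 1 = trustworthy, 0 = not (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels
)

vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_train_vec, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test_vec)))
```

A pipeline like this has on the order of millions of parameters and runs on a CPU, which is the "orders of magnitude smaller" deployment footprint the abstract contrasts with GPT-4 and LLAMA 2.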
https://aclanthology.org/2024.trustnlp-1.6.bib
https://aclanthology.org/2024.trustnlp-1.6/
@inproceedings{lin-etal-2024-towards, title = "Towards Healthy {AI}: Large Language Models Need Therapists Too", author = "Lin, Baihan and Bouneffouf, Djallel and Cecchi, Guillermo and Varshney, Kush", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Cao, Yang Trista and Mehrabi, Ninareh and Zhao, Jieyu and Galstyan, Aram and Dhamala, Jwala and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.trustnlp-1.6", doi = "10.18653/v1/2024.trustnlp-1.6", pages = "61--70", abstract = "Recent advances in large language models (LLMs) have led to the development of powerful chatbots capable of engaging in fluent human-like conversations. However, these chatbots may be harmful, exhibiting manipulation, gaslighting, narcissism, and other toxicity. To work toward safer and more well-adjusted models, we propose a framework that uses psychotherapy to identify and mitigate harmful chatbot behaviors. The framework involves four different artificial intelligence (AI) agents: the Chatbot whose behavior is to be adjusted, a User, a Therapist, and a Critic that can be paired with reinforcement learning-based LLM tuning. We illustrate the framework with a working example of a social conversation involving four instances of ChatGPT, showing that the framework may mitigate the toxicity in conversations between LLM-driven chatbots and people. Although there are still several challenges and directions to be addressed in the future, the proposed framework is a promising approach to improving the alignment between LLMs and human values.", }
Recent advances in large language models (LLMs) have led to the development of powerful chatbots capable of engaging in fluent human-like conversations. However, these chatbots may be harmful, exhibiting manipulation, gaslighting, narcissism, and other toxicity. To work toward safer and more well-adjusted models, we propose a framework that uses psychotherapy to identify and mitigate harmful chatbot behaviors. The framework involves four different artificial intelligence (AI) agents: the Chatbot whose behavior is to be adjusted, a User, a Therapist, and a Critic that can be paired with reinforcement learning-based LLM tuning. We illustrate the framework with a working example of a social conversation involving four instances of ChatGPT, showing that the framework may mitigate the toxicity in conversations between LLM-driven chatbots and people. Although there are still several challenges and directions to be addressed in the future, the proposed framework is a promising approach to improving the alignment between LLMs and human values.
[ "Lin, Baihan", "Bouneffouf, Djallel", "Cecchi, Guillermo", "Varshney, Kush" ]
Towards Healthy AI: Large Language Models Need Therapists Too
trustnlp-1.6
Poster
2304.00416
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
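The four-agent framework described in the abstract above (Chatbot, User, Therapist, Critic) can be pictured as a role-conditioned conversation loop. The sketch below assumes a generic `llm(system_prompt, transcript)` completion function; the role prompts and the stopping rule are illustrative, not the paper's exact protocol.

```python
# Illustrative sketch of the four-agent loop described in Lin et al.:
# a User talks to a Chatbot, a Therapist critiques the Chatbot's reply,
# and a Critic decides whether the revision is acceptable. `llm` is an
# assumed generic completion callable, not a specific vendor API.

from typing import Callable

def therapy_loop(llm: Callable[[str, str], str], user_msg: str,
                 max_revisions: int = 3) -> str:
    transcript = f"User: {user_msg}"
    reply = llm("You are a helpful chatbot.", transcript)
    for _ in range(max_revisions):
        feedback = llm(
            "You are a therapist. Point out manipulation, gaslighting, "
            "narcissism, or other toxic patterns in the chatbot's reply.",
            f"{transcript}\nChatbot: {reply}",
        )
        verdict = llm(
            "You are a critic. Answer ACCEPT if the reply is healthy, "
            "otherwise answer REVISE.",
            f"{transcript}\nChatbot: {reply}\nTherapist: {feedback}",
        )
        if verdict.strip().upper().startswith("ACCEPT"):
            break  # Critic is satisfied; stop revising
        reply = llm(
            "You are a helpful chatbot. Rewrite your previous reply to "
            "address the therapist's feedback.",
            f"{transcript}\nChatbot: {reply}\nTherapist: {feedback}",
        )
    return reply
```

The paper additionally pairs the Critic with reinforcement-learning-based LLM tuning; the loop above only sketches the inference-time interaction among the four roles.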