Datasets:

Column schema (column name, dtype, min, max):

bibtex_url                   stringlengths     41     50
bibtext                      stringlengths     693    2.88k
abstract                     stringlengths     0      2k
authors                      sequencelengths   1      45
title                        stringlengths     21     199
id                           stringlengths     7      16
type                         stringclasses     2 values
arxiv_id                     stringlengths     0      10
GitHub                       sequencelengths   1      1
paper_page                   stringlengths     0      40
n_linked_authors             int64             -1     28
upvotes                      int64             -1     255
num_comments                 int64             -1     23
n_authors                    int64             -1     35
proceedings                  stringlengths     38     47
Models                       sequencelengths   0      57
Datasets                     sequencelengths   0      19
Spaces                       sequencelengths   0      100
paper_page_exists_pre_conf   int64             0      1
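The rows below list one record per paper, with fields in the column order above. As a minimal sketch (not part of the original listing), the snippet below shows how a dataset with this schema could be loaded and filtered with the Hugging Face `datasets` library; the repository id used here is a placeholder assumption, not the actual id of this dataset.

from datasets import load_dataset

# Hypothetical repository id; substitute the real one for this listing.
ds = load_dataset("example-org/acl-2024-papers", split="train")

# The columns match the schema above.
print(ds.column_names)

# Example: keep papers whose Hugging Face paper page existed before the conference
# and rank them by upvotes (missing values are encoded as -1 in this schema).
with_pages = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)
top = sorted(with_pages, key=lambda row: row["upvotes"], reverse=True)[:5]
for row in top:
    print(row["title"], row["type"], row["upvotes"])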
https://aclanthology.org/2024.acl-long.501.bib
@inproceedings{na-etal-2024-reward, title = "Reward-based Input Construction for Cross-document Relation Extraction", author = "Na, Byeonghu and Jo, Suhyeon and Kim, Yeongmin and Moon, Il-chul", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.501", pages = "9254--9270", abstract = "Relation extraction (RE) is a fundamental task in natural language processing, aiming to identify relations between target entities in text. While many RE methods are designed for a single sentence or document, cross-document RE has emerged to address relations across multiple long documents. Given the nature of long documents in cross-document RE, extracting document embeddings is challenging due to the length constraints of pre-trained language models. Therefore, we propose REward-based Input Construction (REIC), the first learning-based sentence selector for cross-document RE. REIC extracts sentences based on relational evidence, enabling the RE module to effectively infer relations. Since supervision of evidence sentences is generally unavailable, we train REIC using reinforcement learning with RE prediction scores as rewards. Experimental results demonstrate the superiority of our method over heuristic methods for different RE structures and backbones in cross-document RE. Our code is publicly available at https://github.com/aailabkaist/REIC.", }
Relation extraction (RE) is a fundamental task in natural language processing, aiming to identify relations between target entities in text. While many RE methods are designed for a single sentence or document, cross-document RE has emerged to address relations across multiple long documents. Given the nature of long documents in cross-document RE, extracting document embeddings is challenging due to the length constraints of pre-trained language models. Therefore, we propose REward-based Input Construction (REIC), the first learning-based sentence selector for cross-document RE. REIC extracts sentences based on relational evidence, enabling the RE module to effectively infer relations. Since supervision of evidence sentences is generally unavailable, we train REIC using reinforcement learning with RE prediction scores as rewards. Experimental results demonstrate the superiority of our method over heuristic methods for different RE structures and backbones in cross-document RE. Our code is publicly available at https://github.com/aailabkaist/REIC.
[ "Na, Byeonghu", "Jo, Suhyeon", "Kim, Yeongmin", "Moon, Il-chul" ]
Reward-based Input Construction for Cross-document Relation Extraction
acl-long.501
Oral
2405.20649
[ "https://github.com/aailabkaist/reic" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.501/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.502.bib
@inproceedings{zhang-etal-2024-hyperspherical, title = "Hyperspherical Multi-Prototype with Optimal Transport for Event Argument Extraction", author = "Zhang, Guangjun and Zhang, Hu and Wang, YuJie and Li, Ru and Tan, Hongye and Liang, Jiye", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.502", pages = "9271--9284", abstract = "Event Argument Extraction (EAE) aims to extract arguments for specified events from a text. Previous research has mainly focused on addressing long-distance dependencies of arguments, modeling co-occurrence relationships between roles and events, but overlooking potential inductive biases: (i) semantic differences among arguments of the same type and (ii) large margin separation between arguments of the different types. Inspired by prototype networks, we introduce a new model named HMPEAE, which takes the two inductive biases above as targets to locate prototypes and guide the model to learn argument representations based on these prototypes.Specifically, we set multiple prototypes to represent each role to capture intra-class differences. Simultaneously, we use hypersphere as the output space for prototypes, defining large margin separation between prototypes to encourage the model to learn significant differences between different types of arguments effectively.We solve the {``}argument-prototype{''} assignment as an optimal transport problem to optimize the argument representation and minimize the absolute distance between arguments and prototypes to achieve compactness within sub-clusters. Experimental results on the RAMS and WikiEvents datasets show that HMPEAE achieves state-of-the-art performances.", }
Event Argument Extraction (EAE) aims to extract arguments for specified events from a text. Previous research has mainly focused on addressing long-distance dependencies of arguments and modeling co-occurrence relationships between roles and events, while overlooking potential inductive biases: (i) semantic differences among arguments of the same type and (ii) large margin separation between arguments of different types. Inspired by prototype networks, we introduce a new model named HMPEAE, which takes the two inductive biases above as targets to locate prototypes and guides the model to learn argument representations based on these prototypes. Specifically, we set multiple prototypes to represent each role to capture intra-class differences. Simultaneously, we use a hypersphere as the output space for prototypes, defining large margin separation between prototypes to encourage the model to effectively learn significant differences between different types of arguments. We solve the "argument-prototype" assignment as an optimal transport problem to optimize the argument representation and minimize the absolute distance between arguments and prototypes to achieve compactness within sub-clusters. Experimental results on the RAMS and WikiEvents datasets show that HMPEAE achieves state-of-the-art performance.
[ "Zhang, Guangjun", "Zhang, Hu", "Wang, YuJie", "Li, Ru", "Tan, Hongye", "Liang, Jiye" ]
Hyperspherical Multi-Prototype with Optimal Transport for Event Argument Extraction
acl-long.502
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.502/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.503.bib
@inproceedings{li-etal-2024-understanding-retrieval, title = "Understanding Retrieval Robustness for Retrieval-augmented Image Captioning", author = "Li, Wenyan and Li, Jiaang and Ramos, Rita and Tang, Raphael and Elliott, Desmond", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.503", pages = "9285--9299", abstract = "Recent advances in retrieval-augmented models for image captioning highlight the benefit of retrieving related captions for efficient, lightweight models with strong domain-transfer capabilities. While these models demonstrate the success of retrieval augmentation, retrieval models are still far from perfect in practice: the retrieved information can sometimes mislead the model, resulting in incorrect generation and worse performance. In this paper, we analyze the robustness of a retrieval-augmented captioning model SmallCap. Our analysis shows that the model is sensitive to tokens that appear in the majority of the retrieved captions, and the input attribution shows that those tokens are likely copied into the generated output. Given these findings, we propose to train the model by sampling retrieved captions from more diverse sets. This decreases the chance that the model learns to copy majority tokens, and improves both in-domain and cross-domain performance.", }
Recent advances in retrieval-augmented models for image captioning highlight the benefit of retrieving related captions for efficient, lightweight models with strong domain-transfer capabilities. While these models demonstrate the success of retrieval augmentation, retrieval models are still far from perfect in practice: the retrieved information can sometimes mislead the model, resulting in incorrect generation and worse performance. In this paper, we analyze the robustness of SmallCap, a retrieval-augmented captioning model. Our analysis shows that the model is sensitive to tokens that appear in the majority of the retrieved captions, and input attribution shows that those tokens are likely copied into the generated output. Given these findings, we propose to train the model by sampling retrieved captions from more diverse sets. This decreases the chance that the model learns to copy majority tokens, and improves both in-domain and cross-domain performance.
[ "Li, Wenyan", "Li, Jiaang", "Ramos, Rita", "Tang, Raphael", "Elliott, Desmond" ]
Understanding Retrieval Robustness for Retrieval-augmented Image Captioning
acl-long.503
Oral
2406.02265
[ "https://github.com/lyan62/RobustCap" ]
https://huggingface.co/papers/2406.02265
2
6
2
5
https://aclanthology.org/2024.acl-long.503/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.504.bib
@inproceedings{yao-etal-2024-semi, title = "Semi-Supervised Spoken Language Glossification", author = "Yao, Huijie and Zhou, Wengang and Zhou, Hao and Li, Houqiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.504", pages = "9300--9312", abstract = "Spoken language glossification (SLG) aims to translate the spoken language text into the sign language gloss, i.e., a written record of sign language. In this work, we present a framework named $S$emi-$S$upervised $S$poken $L$anguage $G$lossification ($S^3$LG) for SLG. To tackle the bottleneck of limited parallel data in SLG, our $S^3$LG incorporates large-scale monolingual spoken language text into SLG training. The proposed framework follows the self-training structure that iteratively annotates and learns from pseudo labels. Considering the lexical similarity and syntactic difference between sign language and spoken language, our $S^3$LG adopts both the rule-based heuristic and model-based approach for auto-annotation. During training, we randomly mix these complementary synthetic datasets and mark their differences with a special token. As the synthetic data may be less quality, the $S^3$LG further leverages consistency regularization to reduce the negative impact of noise in the synthetic data. Extensive experiments are conducted on public benchmarks to demonstrate the effectiveness of the $S^3$LG. Our code is available at \url{https://github.com/yaohj11/S3LG}.", }
Spoken language glossification (SLG) aims to translate spoken language text into sign language gloss, i.e., a written record of sign language. In this work, we present a framework named Semi-Supervised Spoken Language Glossification (S3LG) for SLG. To tackle the bottleneck of limited parallel data in SLG, S3LG incorporates large-scale monolingual spoken language text into SLG training. The proposed framework follows a self-training structure that iteratively annotates and learns from pseudo labels. Considering the lexical similarity and syntactic differences between sign language and spoken language, S3LG adopts both a rule-based heuristic and a model-based approach for auto-annotation. During training, we randomly mix these complementary synthetic datasets and mark their differences with a special token. As the synthetic data may be of lower quality, S3LG further leverages consistency regularization to reduce the negative impact of noise in the synthetic data. Extensive experiments on public benchmarks demonstrate the effectiveness of S3LG. Our code is available at https://github.com/yaohj11/S3LG.
[ "Yao, Huijie", "Zhou, Wengang", "Zhou, Hao", "Li, Houqiang" ]
Semi-Supervised Spoken Language Glossification
acl-long.504
Poster
2406.08173
[ "https://github.com/yaohj11/s3lg" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.504/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.505.bib
@inproceedings{cheng-etal-2024-seeclick, title = "{S}ee{C}lick: Harnessing {GUI} Grounding for Advanced Visual {GUI} Agents", author = "Cheng, Kanzhi and Sun, Qiushi and Chu, Yougang and Xu, Fangzhi and YanTao, Li and Zhang, Jianbing and Wu, Zhiyong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.505", pages = "9313--9332", abstract = "Graphical User Interface (GUI) agents are designed to automate complex tasks on digital devices, such as smartphones and desktops. Most existing GUI agents interact with the environment through extracted structured data, which can be notably lengthy (e.g., HTML) and occasionally inaccessible (e.g., on desktops). To alleviate this issue, we propose a novel visual GUI agent {--} SeeClick, which only relies on screenshots for task automation. In our preliminary study, we have discovered a key challenge in developing visual GUI agents: GUI grounding {--} the capacity to accurately locate screen elements based on instructions. To tackle this challenge, we propose to enhance SeeClick with GUI grounding pre-training and devise a method to automate the curation of GUI grounding data. Along with the efforts above, we have also created ScreenSpot, the first realistic GUI grounding benchmark that encompasses mobile, desktop, and web environments. After pre-training, SeeClick demonstrates significant improvement in ScreenSpot over various baselines. Moreover, comprehensive evaluations on three widely used benchmarks consistently support our finding that advancements in GUI grounding directly correlate with enhanced performance in downstream GUI agent tasks. The model, data and code will be open-sourced.", }
Graphical User Interface (GUI) agents are designed to automate complex tasks on digital devices, such as smartphones and desktops. Most existing GUI agents interact with the environment through extracted structured data, which can be notably lengthy (e.g., HTML) and occasionally inaccessible (e.g., on desktops). To alleviate this issue, we propose a novel visual GUI agent – SeeClick, which only relies on screenshots for task automation. In our preliminary study, we have discovered a key challenge in developing visual GUI agents: GUI grounding – the capacity to accurately locate screen elements based on instructions. To tackle this challenge, we propose to enhance SeeClick with GUI grounding pre-training and devise a method to automate the curation of GUI grounding data. Along with the efforts above, we have also created ScreenSpot, the first realistic GUI grounding benchmark that encompasses mobile, desktop, and web environments. After pre-training, SeeClick demonstrates significant improvement in ScreenSpot over various baselines. Moreover, comprehensive evaluations on three widely used benchmarks consistently support our finding that advancements in GUI grounding directly correlate with enhanced performance in downstream GUI agent tasks. The model, data and code will be open-sourced.
[ "Cheng, Kanzhi", "Sun, Qiushi", "Chu, Yougang", "Xu, Fangzhi", "YanTao, Li", "Zhang, Jianbing", "Wu, Zhiyong" ]
SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents
acl-long.505
Poster
2401.10935
[ "https://github.com/njucckevin/seeclick" ]
https://huggingface.co/papers/2401.10935
1
4
3
7
https://aclanthology.org/2024.acl-long.505/
[]
[ "rootsautomation/ScreenSpot", "rootsautomation/RICO-SCA" ]
[]
1
https://aclanthology.org/2024.acl-long.506.bib
@inproceedings{yehuda-etal-2024-interrogatellm, title = "{I}nterrogate{LLM}: Zero-Resource Hallucination Detection in {LLM}-Generated Answers", author = "Yehuda, Yakir and Malkiel, Itzik and Barkan, Oren and Weill, Jonathan and Ronen, Royi and Koenigstein, Noam", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.506", pages = "9333--9347", abstract = "Despite the many advances of Large Language Models (LLMs) and their unprecedented rapid evolution, their impact and integration into every facet of our daily lives is limited due to various reasons. One critical factor hindering their widespread adoption is the occurrence of hallucinations, where LLMs invent answers that sound realistic, yet drift away from factual truth. In this paper, we present a novel method for detecting hallucinations in large language models, which tackles a critical issue in the adoption of these models in various real-world scenarios. Through extensive evaluations across multiple datasets and LLMs, including Llama-2, we study the hallucination levels of various recent LLMs and demonstrate the effectiveness of our method to automatically detect them. Notably, we observe up to 87{\%} hallucinations for Llama-2 in a specific experiment, where our method achieves a Balanced Accuracy of 81{\%}, all without relying on external knowledge.", }
Despite the many advances of Large Language Models (LLMs) and their unprecedentedly rapid evolution, their impact and integration into every facet of our daily lives remain limited for various reasons. One critical factor hindering their widespread adoption is the occurrence of hallucinations, where LLMs invent answers that sound realistic yet drift away from factual truth. In this paper, we present a novel method for detecting hallucinations in large language models, which tackles a critical issue in the adoption of these models in various real-world scenarios. Through extensive evaluations across multiple datasets and LLMs, including Llama-2, we study the hallucination levels of various recent LLMs and demonstrate the effectiveness of our method in automatically detecting them. Notably, we observe up to 87% hallucinations for Llama-2 in a specific experiment, where our method achieves a Balanced Accuracy of 81%, all without relying on external knowledge.
[ "Yehuda, Yakir", "Malkiel, Itzik", "Barkan, Oren", "Weill, Jonathan", "Ronen, Royi", "Koenigstein, Noam" ]
InterrogateLLM: Zero-Resource Hallucination Detection in LLM-Generated Answers
acl-long.506
Oral
2403.02889
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.506/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.507.bib
@inproceedings{sun-etal-2024-f, title = "{F}-Eval: Asssessing Fundamental Abilities with Refined Evaluation Methods", author = "Sun, Yu and Keyuchen, Keyuchen and Wang, Shujie and Li, Peiji and Guo, Qipeng and Yan, Hang and Qiu, Xipeng and Huang, Xuanjing and Lin, Dahua", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.507", pages = "9348--9369", abstract = "Large language models (LLMs) garner significant attention for their unprecedented performance, leading to an increasing number of researches evaluating LLMs. However, these evaluation benchmarks are limited to assessing the instruction-following capabilities, overlooking the fundamental abilities that emerge during the pre-training stage. Previous subjective evaluation methods mainly reply on scoring by API models. However, in the absence of references, large models have shown limited ability to discern subtle differences. To bridge the gap, we propose F-Eval, a bilingual evaluation benchmark to evaluate the fundamental abilities, including expression, commonsense and logic. The tasks in F-Eval include multi-choice objective tasks, open-ended objective tasks, reference-based subjective tasks and reference-free subjective tasks. For reference-free subjective tasks, we devise new evaluation methods, serving as alternatives to scoring by API models. We conduct evaluations on 13 advanced LLMs. Results show that our evaluation methods show higher correlation coefficients and larger distinction than other evaluators. Additionally, we discuss the influence of different model sizes, dimensions, and normalization methods. We anticipate that F-Eval will facilitate the study of LLMs{'} fundamental abilities.", }
Large language models (LLMs) garner significant attention for their unprecedented performance, leading to a growing body of research evaluating LLMs. However, these evaluation benchmarks are limited to assessing instruction-following capabilities, overlooking the fundamental abilities that emerge during the pre-training stage. Previous subjective evaluation methods mainly rely on scoring by API models. However, in the absence of references, large models have shown limited ability to discern subtle differences. To bridge the gap, we propose F-Eval, a bilingual evaluation benchmark to evaluate fundamental abilities, including expression, commonsense, and logic. The tasks in F-Eval include multiple-choice objective tasks, open-ended objective tasks, reference-based subjective tasks, and reference-free subjective tasks. For reference-free subjective tasks, we devise new evaluation methods that serve as alternatives to scoring by API models. We conduct evaluations on 13 advanced LLMs. Results show that our evaluation methods achieve higher correlation coefficients and larger distinction than other evaluators. Additionally, we discuss the influence of different model sizes, dimensions, and normalization methods. We anticipate that F-Eval will facilitate the study of LLMs' fundamental abilities.
[ "Sun, Yu", "Keyuchen, Keyuchen", "Wang, Shujie", "Li, Peiji", "Guo, Qipeng", "Yan, Hang", "Qiu, Xipeng", "Huang, Xuanjing", "Lin, Dahua" ]
F-Eval: Asssessing Fundamental Abilities with Refined Evaluation Methods
acl-long.507
Poster
[ "https://github.com/juliasun623/f-eval" ]
https://huggingface.co/papers/2401.14869
0
0
0
8
https://aclanthology.org/2024.acl-long.507/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.508.bib
@inproceedings{mondorf-plank-2024-comparing, title = "Comparing Inferential Strategies of Humans and Large Language Models in Deductive Reasoning", author = "Mondorf, Philipp and Plank, Barbara", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.508", pages = "9370--9402", abstract = "Deductive reasoning plays a pivotal role in the formulation of sound and cohesive arguments. It allows individuals to draw conclusions that logically follow, given the truth value of the information provided. Recent progress in the domain of large language models (LLMs) has showcased their capability in executing deductive reasoning tasks. Nonetheless, a significant portion of research primarily assesses the accuracy of LLMs in solving such tasks, often overlooking a deeper analysis of their reasoning behavior. In this study, we draw upon principles from cognitive psychology to examine inferential strategies employed by LLMs, through a detailed evaluation of their responses to propositional logic problems. Our findings indicate that LLMs display reasoning patterns akin to those observed in humans, including strategies like $\textit{supposition following}$ or $\textit{chain construction}$. Moreover, our research demonstrates that the architecture and scale of the model significantly affect its preferred method of reasoning, with more advanced models tending to adopt strategies more frequently than less sophisticated ones. Importantly, we assert that a model{'}s accuracy, that is the correctness of its final conclusion, does not necessarily reflect the validity of its reasoning process. This distinction underscores the necessity for more nuanced evaluation procedures in the field.", }
Deductive reasoning plays a pivotal role in the formulation of sound and cohesive arguments. It allows individuals to draw conclusions that logically follow, given the truth value of the information provided. Recent progress in the domain of large language models (LLMs) has showcased their capability in executing deductive reasoning tasks. Nonetheless, a significant portion of research primarily assesses the accuracy of LLMs in solving such tasks, often overlooking a deeper analysis of their reasoning behavior. In this study, we draw upon principles from cognitive psychology to examine inferential strategies employed by LLMs, through a detailed evaluation of their responses to propositional logic problems. Our findings indicate that LLMs display reasoning patterns akin to those observed in humans, including strategies like "supposition following" or "chain construction". Moreover, our research demonstrates that the architecture and scale of the model significantly affect its preferred method of reasoning, with more advanced models tending to adopt strategies more frequently than less sophisticated ones. Importantly, we assert that a model's accuracy, that is the correctness of its final conclusion, does not necessarily reflect the validity of its reasoning process. This distinction underscores the necessity for more nuanced evaluation procedures in the field.
[ "Mondorf, Philipp", "Plank, Barbara" ]
Comparing Inferential Strategies of Humans and Large Language Models in Deductive Reasoning
acl-long.508
Poster
2402.14856
[ "https://github.com/mainlp/inferential-strategies" ]
https://huggingface.co/papers/2402.14856
0
0
0
2
https://aclanthology.org/2024.acl-long.508/
[]
[ "mainlp/inferential_strategies", "mainlp/henst_prop_logic" ]
[]
1
https://aclanthology.org/2024.acl-long.509.bib
@inproceedings{lerner-etal-2024-whose, title = "Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of {AI} Utilizing Human Feedback", author = "Lerner, Maria and Dorner, Florian and Ash, Elliott and Goel, Naman", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.509", pages = "9403--9425", abstract = "There is a growing body of work on learning from human feedback to align various aspects of machine learning systems with human values and preferences. We consider the setting of fairness in content moderation, in which human feedback is used to determine how two comments {---} referencing different sensitive attribute groups {---} should be treated in comparison to one another. With a novel dataset collected from Prolific and MTurk, we find significant gaps in fairness preferences depending on the race, age, political stance, educational level, and LGBTQ+ identity of annotators. We also demonstrate that demographics mentioned in text have a strong influence on how users perceive individual fairness in moderation. Further, we find that differences also exist in downstream classifiers trained to predict human preferences. Finally, we observe that an ensemble, giving equal weight to classifiers trained on annotations from different demographics, performs better for different demographic intersections; compared to a single classifier that gives equal weight to each annotation.", }
There is a growing body of work on learning from human feedback to align various aspects of machine learning systems with human values and preferences. We consider the setting of fairness in content moderation, in which human feedback is used to determine how two comments, referencing different sensitive attribute groups, should be treated in comparison to one another. With a novel dataset collected from Prolific and MTurk, we find significant gaps in fairness preferences depending on the race, age, political stance, educational level, and LGBTQ+ identity of annotators. We also demonstrate that demographics mentioned in text have a strong influence on how users perceive individual fairness in moderation. Further, we find that differences also exist in downstream classifiers trained to predict human preferences. Finally, we observe that an ensemble, giving equal weight to classifiers trained on annotations from different demographics, performs better for different demographic intersections, compared to a single classifier that gives equal weight to each annotation.
[ "Lerner, Maria", "Dorner, Florian", "Ash, Elliott", "Goel, Naman" ]
Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback
acl-long.509
Poster
2406.05902
[ "https://github.com/emiliaagis/differences-in-fairness-preferences-acl-2024" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.509/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.510.bib
@inproceedings{wang-etal-2024-math, title = "Math-Shepherd: Verify and Reinforce {LLM}s Step-by-step without Human Annotations", author = "Wang, Peiyi and Li, Lei and Shao, Zhihong and Xu, Runxin and Dai, Damai and Li, Yifei and Chen, Deli and Wu, Yu and Sui, Zhifang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.510", pages = "9426--9439", abstract = "In this paper, we present an innovative process-oriented math process reward model called Math-shepherd, which assigns a reward score to each step of math problem solutions. The training of Math-shepherd is achieved using automatically constructed process-wise supervision data, breaking the bottleneck of heavy reliance on manual annotation in existing work. We explore the effectiveness of Math-shepherd in two scenarios: 1) $\textit{Verification}$: Math-shepherd is utilized for reranking multiple outputs generated by Large Language Models (LLMs); 2) $\textit{Reinforcement Learning (RL)}$: Math-shepherd is employed to reinforce LLMs.With Math-shepherd, a series of open-source LLMs demonstrates exceptional performance. For instance, process RL with Math-shepherd significantly enhances Mistral-7B (77.9{\%}$\to$84.1{\%} on GSM8K and 28.6{\%}$\to$33.0{\%} on MATH).The accuracy can be further improved to 89.1{\%} and 43.5{\%} on two benchmarks with verification of Math-shepherd.We believe that automatic process supervision holds significant potential for the future evolution of LLMs.", }
In this paper, we present an innovative process-oriented math process reward model called Math-Shepherd, which assigns a reward score to each step of math problem solutions. The training of Math-Shepherd is achieved using automatically constructed process-wise supervision data, breaking the bottleneck of heavy reliance on manual annotation in existing work. We explore the effectiveness of Math-Shepherd in two scenarios: 1) Verification: Math-Shepherd is utilized for reranking multiple outputs generated by Large Language Models (LLMs); 2) Reinforcement Learning (RL): Math-Shepherd is employed to reinforce LLMs. With Math-Shepherd, a series of open-source LLMs demonstrates exceptional performance. For instance, process RL with Math-Shepherd significantly enhances Mistral-7B (77.9% → 84.1% on GSM8K and 28.6% → 33.0% on MATH). The accuracy can be further improved to 89.1% and 43.5% on the two benchmarks with verification by Math-Shepherd. We believe that automatic process supervision holds significant potential for the future evolution of LLMs.
[ "Wang, Peiyi", "Li, Lei", "Shao, Zhihong", "Xu, Runxin", "Dai, Damai", "Li, Yifei", "Chen, Deli", "Wu, Yu", "Sui, Zhifang" ]
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations
acl-long.510
Poster
2312.08935
[ "" ]
https://huggingface.co/papers/2312.08935
3
4
0
9
https://aclanthology.org/2024.acl-long.510/
[ "peiyi9979/math-shepherd-mistral-7b-prm", "peiyi9979/mistral-7b-sft", "peiyi9979/math-shepherd-mistral-7b-rl" ]
[ "peiyi9979/Math-Shepherd" ]
[ "yuzhouzhouqianfantian/peiyi9979-math-shepherd-mistral-7b-prm" ]
1
https://aclanthology.org/2024.acl-long.511.bib
@inproceedings{wang-etal-2024-large-language-models-fair, title = "Large Language Models are not Fair Evaluators", author = "Wang, Peiyi and Li, Lei and Chen, Liang and Cai, Zefan and Zhu, Dawei and Lin, Binghuai and Cao, Yunbo and Kong, Lingpeng and Liu, Qi and Liu, Tianyu and Sui, Zhifang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.511", pages = "9440--9450", abstract = "In this paper, we uncover a positional bias in the evaluation paradigm of adopting large language models (LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models. We find that the quality ranking of candidate responses can be easily hacked by simply altering their order of appearance in the context. This manipulation allows us to skew the evaluation result, making one model appear considerably superior to the other, e.g., Vicuna-13B could beat ChatGPT on 66 over 80 tested queries with ChatGPT as an evaluator. We propose a simple yet effective calibration framework to address our discovered positional bias.To evaluate the effectiveness of our framework, we manually annotate the {``}win/tie/lose{''} outcomes of responses from ChatGPT and Vicuna-13B in the Vicuna Benchmark{'}s question prompt. Extensive experiments demonstrate that our approach successfully alleviates evaluation bias, resulting in closer alignment with human judgments.", }
In this paper, we uncover a positional bias in the evaluation paradigm of adopting large language models (LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models. We find that the quality ranking of candidate responses can be easily hacked by simply altering their order of appearance in the context. This manipulation allows us to skew the evaluation result, making one model appear considerably superior to the other, e.g., Vicuna-13B could beat ChatGPT on 66 of 80 tested queries with ChatGPT as the evaluator. We propose a simple yet effective calibration framework to address the positional bias we discovered. To evaluate the effectiveness of our framework, we manually annotate the "win/tie/lose" outcomes of responses from ChatGPT and Vicuna-13B on the Vicuna Benchmark's question prompts. Extensive experiments demonstrate that our approach successfully alleviates evaluation bias, resulting in closer alignment with human judgments.
[ "Wang, Peiyi", "Li, Lei", "Chen, Liang", "Cai, Zefan", "Zhu, Dawei", "Lin, Binghuai", "Cao, Yunbo", "Kong, Lingpeng", "Liu, Qi", "Liu, Tianyu", "Sui, Zhifang" ]
Large Language Models are not Fair Evaluators
acl-long.511
Poster
2305.17926
[ "https://github.com/i-eval/faireval" ]
https://huggingface.co/papers/2305.17926
3
1
0
10
https://aclanthology.org/2024.acl-long.511/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.512.bib
@inproceedings{chen-etal-2024-improving-large, title = "Improving Large Language Models in Event Relation Logical Prediction", author = "Chen, Meiqi and Ma, Yubo and Song, Kaitao and Cao, Yixin and Zhang, Yan and Li, Dongsheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.512", pages = "9451--9478", abstract = "Event relations are crucial for narrative understanding and reasoning. Governed by nuanced logic, event relation extraction (ERE) is a challenging task that demands thorough semantic understanding and rigorous logical reasoning. In this paper, we conduct an in-depth investigation to systematically explore the capability of LLMs in understanding and applying event relation logic. More in detail, we first investigate the deficiencies of LLMs in logical reasoning across different tasks. Our study reveals that LLMs are not logically consistent reasoners, which results in their suboptimal performance on tasks that need rigorous reasoning. To address this, we explore three different approaches to endow LLMs with event relation logic, and thus enable them to generate more coherent answers across various scenarios. Based on our approach, we also contribute a synthesized dataset (LLM-ERL) involving high-order reasoning for evaluation and fine-tuning. Extensive quantitative and qualitative analyses on different tasks also validate the effectiveness of our approach and provide insights for solving practical tasks with LLMs in future work. Codes are available at https://github.com/chenmeiqii/Teach-LLM-LR.", }
Event relations are crucial for narrative understanding and reasoning. Governed by nuanced logic, event relation extraction (ERE) is a challenging task that demands thorough semantic understanding and rigorous logical reasoning. In this paper, we conduct an in-depth investigation to systematically explore the capability of LLMs in understanding and applying event relation logic. More in detail, we first investigate the deficiencies of LLMs in logical reasoning across different tasks. Our study reveals that LLMs are not logically consistent reasoners, which results in their suboptimal performance on tasks that need rigorous reasoning. To address this, we explore three different approaches to endow LLMs with event relation logic, and thus enable them to generate more coherent answers across various scenarios. Based on our approach, we also contribute a synthesized dataset (LLM-ERL) involving high-order reasoning for evaluation and fine-tuning. Extensive quantitative and qualitative analyses on different tasks also validate the effectiveness of our approach and provide insights for solving practical tasks with LLMs in future work. Codes are available at https://github.com/chenmeiqii/Teach-LLM-LR.
[ "Chen, Meiqi", "Ma, Yubo", "Song, Kaitao", "Cao, Yixin", "Zhang, Yan", "Li, Dongsheng" ]
Improving Large Language Models in Event Relation Logical Prediction
acl-long.512
Oral
2310.09158
[ "https://github.com/chenmeiqii/teach-llm-lr" ]
https://huggingface.co/papers/2310.09158
0
1
0
6
https://aclanthology.org/2024.acl-long.512/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.513.bib
@inproceedings{yang-etal-2024-synchronized, title = "Synchronized Video Storytelling: Generating Video Narrations with Structured Storyline", author = "Yang, Dingyi and Zhan, Chunru and Wang, Ziheng and Wang, Biao and Ge, Tiezheng and Zheng, Bo and Jin, Qin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.513", pages = "9479--9493", abstract = "Video storytelling is engaging multimedia content that utilizes video and its accompanying narration to share a story and attract the audience, where a key challenge is creating narrations for recorded visual scenes. Previous studies on dense video captioning and video story generation have made some progress. However, in practical applications, we typically require synchronized narrations for ongoing visual scenes. In this work, we introduce a new task of Synchronized Video Storytelling, which aims to generate synchronous and informative narrations for videos. These narrations, associated with each video clip, should relate to the visual content, integrate relevant knowledge, and have an appropriate word count corresponding to the clip{'}s duration. Specifically, a structured storyline is beneficial to guide the generation process, ensuring coherence and integrity. To support the exploration of this task, we introduce a new benchmark dataset E-SyncVidStory with rich annotations. Since existing Multimodal LLMs are not effective in addressing this task in one-shot or few-shot settings, we propose a framework named VideoNarrator that can generate a storyline for input videos and simultaneously generate narrations with the guidance of the generated or predefined storyline. We further introduce a set of evaluation metrics to thoroughly assess the generation. Both automatic and human evaluations validate the effectiveness of our approach. Our dataset, codes, and evaluations will be released.", }
Video storytelling is engaging multimedia content that utilizes video and its accompanying narration to share a story and attract the audience, where a key challenge is creating narrations for recorded visual scenes. Previous studies on dense video captioning and video story generation have made some progress. However, in practical applications, we typically require synchronized narrations for ongoing visual scenes. In this work, we introduce a new task of Synchronized Video Storytelling, which aims to generate synchronous and informative narrations for videos. These narrations, associated with each video clip, should relate to the visual content, integrate relevant knowledge, and have an appropriate word count corresponding to the clip's duration. Specifically, a structured storyline is beneficial to guide the generation process, ensuring coherence and integrity. To support the exploration of this task, we introduce a new benchmark dataset E-SyncVidStory with rich annotations. Since existing Multimodal LLMs are not effective in addressing this task in one-shot or few-shot settings, we propose a framework named VideoNarrator that can generate a storyline for input videos and simultaneously generate narrations with the guidance of the generated or predefined storyline. We further introduce a set of evaluation metrics to thoroughly assess the generation. Both automatic and human evaluations validate the effectiveness of our approach. Our dataset, codes, and evaluations will be released.
[ "Yang, Dingyi", "Zhan, Chunru", "Wang, Ziheng", "Wang, Biao", "Ge, Tiezheng", "Zheng, Bo", "Jin, Qin" ]
Synchronized Video Storytelling: Generating Video Narrations with Structured Storyline
acl-long.513
Poster
2405.14040
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.513/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.514.bib
@inproceedings{chen-etal-2024-fine, title = "Fine-Grained Image-Text Alignment in Medical Imaging Enables Explainable Cyclic Image-Report Generation", author = "Chen, Wenting and Shen, Linlin and Lin, Jingyang and Luo, Jiebo and Li, Xiang and Yuan, Yixuan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.514", pages = "9494--9509", abstract = "Fine-grained vision-language models (VLM) have been widely used for inter-modality local alignment between the predefined fixed patches and textual words. However, in medical analysis, lesions exhibit varying sizes and positions, and using fixed patches may cause incomplete representations of lesions. Moreover, these methods provide explainability by using heatmaps to show the general image areas potentially associated with texts rather than specific regions, making their explanations not explicit and specific enough. To address these issues, we propose a novel Adaptive patch-word Matching (AdaMatch) model to correlate chest X-ray (CXR) image regions with words in medical reports and apply it to CXR-report generation to provide explainability for the generation process. AdaMatch exploits the fine-grained relation between adaptive patches and words to provide explanations of specific image regions with corresponding words. To capture the abnormal regions of varying sizes and positions, we introduce an Adaptive Patch extraction (AdaPatch) module to acquire adaptive patches for these regions adaptively. Aiming to provide explicit explainability for the CXR-report generation task, we propose an AdaMatch-based bidirectional LLM for Cyclic CXR-report generation (AdaMatch-Cyclic). It employs AdaMatch to obtain the keywords for CXR images and {`}keypatches{'} for medical reports as hints to guide CXR-report generation. Extensive experiments on two publicly available CXR datasets validate the effectiveness of our method and its superior performance over existing methods. Source code will be released.", }
Fine-grained vision-language models (VLMs) have been widely used for inter-modality local alignment between the predefined fixed patches and textual words. However, in medical analysis, lesions exhibit varying sizes and positions, and using fixed patches may cause incomplete representations of lesions. Moreover, these methods provide explainability by using heatmaps to show the general image areas potentially associated with texts rather than specific regions, making their explanations not explicit and specific enough. To address these issues, we propose a novel Adaptive patch-word Matching (AdaMatch) model to correlate chest X-ray (CXR) image regions with words in medical reports and apply it to CXR-report generation to provide explainability for the generation process. AdaMatch exploits the fine-grained relation between adaptive patches and words to provide explanations of specific image regions with corresponding words. To capture the abnormal regions of varying sizes and positions, we introduce an Adaptive Patch extraction (AdaPatch) module to acquire adaptive patches for these regions adaptively. Aiming to provide explicit explainability for the CXR-report generation task, we propose an AdaMatch-based bidirectional LLM for Cyclic CXR-report generation (AdaMatch-Cyclic). It employs AdaMatch to obtain the keywords for CXR images and 'keypatches' for medical reports as hints to guide CXR-report generation. Extensive experiments on two publicly available CXR datasets validate the effectiveness of our method and its superior performance over existing methods. Source code will be released.
[ "Chen, Wenting", "Shen, Linlin", "Lin, Jingyang", "Luo, Jiebo", "Li, Xiang", "Yuan, Yixuan" ]
Fine-Grained Image-Text Alignment in Medical Imaging Enables Explainable Cyclic Image-Report Generation
acl-long.514
Poster
2312.08078
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.514/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.515.bib
@inproceedings{chen-etal-2024-eval, title = "{T}-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step", author = "Chen, Zehui and Du, Weihua and Zhang, Wenwei and Liu, Kuikun and Liu, Jiangning and Zheng, Miao and Zhuo, Jingming and Zhang, Songyang and Lin, Dahua and Chen, Kai and Zhao, Feng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.515", pages = "9510--9529", abstract = "Large language models (LLMs) have achieved remarkable performance on various NLP tasks and are augmented by tools for broader applications. Yet, how to evaluate and analyze the tool utilization capability of LLMs is still under-explored. In contrast to previous works that evaluate models holistically, we comprehensively decompose the tool utilization into multiple sub-processes, including instruction following, planning, reasoning, retrieval, understanding, and review. Based on that, we further introduce T-Eval to evaluate the tool-utilization capability step by step. T-Eval disentangles the tool utilization evaluation into several sub-domains along model capabilities, facilitating the inner understanding of both holistic and isolated competency of LLMs. We conduct extensive experiments on T-Eval and in-depth analysis of various LLMs. T-Eval not only exhibits consistency with the outcome-oriented evaluation but also provides a more fine-grained analysis of the capabilities of LLMs, providing a new perspective in LLM evaluation on tool-utilization ability. The benchmark will be available.", }
Large language models (LLMs) have achieved remarkable performance on various NLP tasks and are augmented by tools for broader applications. Yet, how to evaluate and analyze the tool utilization capability of LLMs is still under-explored. In contrast to previous works that evaluate models holistically, we comprehensively decompose the tool utilization into multiple sub-processes, including instruction following, planning, reasoning, retrieval, understanding, and review. Based on that, we further introduce T-Eval to evaluate the tool-utilization capability step by step. T-Eval disentangles the tool utilization evaluation into several sub-domains along model capabilities, facilitating the inner understanding of both holistic and isolated competency of LLMs. We conduct extensive experiments on T-Eval and in-depth analysis of various LLMs. T-Eval not only exhibits consistency with the outcome-oriented evaluation but also provides a more fine-grained analysis of the capabilities of LLMs, providing a new perspective in LLM evaluation on tool-utilization ability. The benchmark will be available.
[ "Chen, Zehui", "Du, Weihua", "Zhang, Wenwei", "Liu, Kuikun", "Liu, Jiangning", "Zheng, Miao", "Zhuo, Jingming", "Zhang, Songyang", "Lin, Dahua", "Chen, Kai", "Zhao, Feng" ]
T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step
acl-long.515
Poster
2312.14033
[ "https://github.com/open-compass/t-eval" ]
https://huggingface.co/papers/2312.14033
2
2
1
11
https://aclanthology.org/2024.acl-long.515/
[]
[ "lovesnowbest/T-Eval" ]
[]
1
https://aclanthology.org/2024.acl-long.516.bib
@inproceedings{hu-etal-2024-llm, title = "Are {LLM}-based Evaluators Confusing {NLG} Quality Criteria?", author = "Hu, Xinyu and Gao, Mingqi and Hu, Sen and Zhang, Yang and Chen, Yicheng and Xu, Teng and Wan, Xiaojun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.516", pages = "9530--9570", abstract = "Some prior work has shown that LLMs perform well in NLG evaluation for different tasks. However, we discover that LLMs seem to confuse different evaluation criteria, which reduces their reliability. For further verification, we first consider avoiding issues of inconsistent conceptualization and vague expression in existing NLG quality criteria themselves. So we summarize a clear hierarchical classification system for 11 common aspects with corresponding different criteria from previous studies involved. Inspired by behavioral testing, we elaborately design 18 types of aspect-targeted perturbation attacks for fine-grained analysis of the evaluation behaviors of different LLMs. We also conduct human annotations beyond the guidance of the classification system to validate the impact of the perturbations. Our experimental results reveal confusion issues inherent in LLMs, as well as other noteworthy phenomena, and necessitate further research and improvements for LLM-based evaluation.", }
Some prior work has shown that LLMs perform well in NLG evaluation for different tasks. However, we discover that LLMs seem to confuse different evaluation criteria, which reduces their reliability. For further verification, we first consider avoiding issues of inconsistent conceptualization and vague expression in existing NLG quality criteria themselves. So we summarize a clear hierarchical classification system for 11 common aspects with corresponding different criteria from previous studies involved. Inspired by behavioral testing, we elaborately design 18 types of aspect-targeted perturbation attacks for fine-grained analysis of the evaluation behaviors of different LLMs. We also conduct human annotations beyond the guidance of the classification system to validate the impact of the perturbations. Our experimental results reveal confusion issues inherent in LLMs, as well as other noteworthy phenomena, and necessitate further research and improvements for LLM-based evaluation.
[ "Hu, Xinyu", "Gao, Mingqi", "Hu, Sen", "Zhang, Yang", "Chen, Yicheng", "Xu, Teng", "Wan, Xiaojun" ]
Are LLM-based Evaluators Confusing NLG Quality Criteria?
acl-long.516
Poster
2402.12055
[ "https://github.com/pku-onelab/llm-evaluator-reliability" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.516/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.517.bib
@inproceedings{feng-etal-2024-synergistic, title = "Synergistic Interplay between Search and Large Language Models for Information Retrieval", author = "Feng, Jiazhan and Tao, Chongyang and Geng, Xiubo and Shen, Tao and Xu, Can and Long, Guodong and Zhao, Dongyan and Jiang, Daxin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.517", pages = "9571--9583", abstract = "Information retrieval (IR) plays a crucial role in locating relevant resources from vast amounts of data, and its applications have evolved from traditional knowledge bases to modern retrieval models (RMs). The emergence of large language models (LLMs) has further revolutionized the IR field by enabling users to interact with search systems in natural languages. In this paper, we explore the advantages and disadvantages of LLMs and RMs, highlighting their respective strengths in understanding user-issued queries and retrieving up-to-date information. To leverage the benefits of both paradigms while circumventing their limitations, we propose **InteR**, a novel framework that facilitates information refinement through synergy between RMs and LLMs. InteR allows RMs to expand knowledge in queries using LLM-generated knowledge collections and enables LLMs to enhance prompt formulation using retrieved documents. This iterative refinement process augments the inputs of RMs and LLMs, leading to more accurate retrieval. Experiments on large-scale retrieval benchmarks involving web search and low-resource retrieval tasks show that InteR achieves overall superior **zero-shot** retrieval performance compared to state-of-the-art methods, even those using relevance judgment. Source code is available at https://github.com/Cyril-JZ/InteR.", }
Information retrieval (IR) plays a crucial role in locating relevant resources from vast amounts of data, and its applications have evolved from traditional knowledge bases to modern retrieval models (RMs). The emergence of large language models (LLMs) has further revolutionized the IR field by enabling users to interact with search systems in natural languages. In this paper, we explore the advantages and disadvantages of LLMs and RMs, highlighting their respective strengths in understanding user-issued queries and retrieving up-to-date information. To leverage the benefits of both paradigms while circumventing their limitations, we propose **InteR**, a novel framework that facilitates information refinement through synergy between RMs and LLMs. InteR allows RMs to expand knowledge in queries using LLM-generated knowledge collections and enables LLMs to enhance prompt formulation using retrieved documents. This iterative refinement process augments the inputs of RMs and LLMs, leading to more accurate retrieval. Experiments on large-scale retrieval benchmarks involving web search and low-resource retrieval tasks show that InteR achieves overall superior **zero-shot** retrieval performance compared to state-of-the-art methods, even those using relevance judgment. Source code is available at https://github.com/Cyril-JZ/InteR.
[ "Feng, Jiazhan", "Tao, Chongyang", "Geng, Xiubo", "Shen, Tao", "Xu, Can", "Long, Guodong", "Zhao, Dongyan", "Jiang, Daxin" ]
Synergistic Interplay between Search and Large Language Models for Information Retrieval
acl-long.517
Poster
2305.07402
[ "https://github.com/Cyril-JZ/InteR" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.517/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.518.bib
@inproceedings{aksenov-etal-2024-linear, title = "Linear Transformers with Learnable Kernel Functions are Better In-Context Models", author = "Aksenov, Yaroslav and Balagansky, Nikita and Lo Cicero Vaina, Sofia and Shaposhnikov, Boris and Gorbatovski, Alexey and Gavrilov, Daniil", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.518", pages = "9584--9597", abstract = "Advancing the frontier of subquadratic architectures for Language Models (LMs) is crucial in the rapidly evolving field of natural language processing. Current innovations, including State Space Models, were initially celebrated for surpassing Transformer performance on language modeling tasks. However, these models have revealed deficiencies in essential In-Context Learning capabilities {--} a domain where the Transformer traditionally shines. The Based model emerged as a hybrid solution, blending a Linear Transformer with a kernel inspired by the Taylor expansion of exponential functions, augmented by convolutional networks. Mirroring the Transformer{'}s in-context adeptness, it became a strong contender in the field. In our work, we present a singular, elegant alteration to the Based kernel that amplifies its In-Context Learning abilities evaluated with the Multi-Query Associative Recall task and overall language modeling process, as demonstrated on the Pile dataset.", }
Advancing the frontier of subquadratic architectures for Language Models (LMs) is crucial in the rapidly evolving field of natural language processing. Current innovations, including State Space Models, were initially celebrated for surpassing Transformer performance on language modeling tasks. However, these models have revealed deficiencies in essential In-Context Learning capabilities {--} a domain where the Transformer traditionally shines. The Based model emerged as a hybrid solution, blending a Linear Transformer with a kernel inspired by the Taylor expansion of exponential functions, augmented by convolutional networks. Mirroring the Transformer{'}s in-context adeptness, it became a strong contender in the field. In our work, we present a singular, elegant alteration to the Based kernel that amplifies its In-Context Learning abilities evaluated with the Multi-Query Associative Recall task and overall language modeling process, as demonstrated on the Pile dataset.
[ "Aksenov, Yaroslav", "Balagansky, Nikita", "Lo Cicero Vaina, Sofia", "Shaposhnikov, Boris", "Gorbatovski, Alexey", "Gavrilov, Daniil" ]
Linear Transformers with Learnable Kernel Functions are Better In-Context Models
acl-long.518
Poster
2402.10644
[ "https://github.com/corl-team/rebased" ]
https://huggingface.co/papers/2402.10644
6
76
3
6
https://aclanthology.org/2024.acl-long.518/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.519.bib
@inproceedings{liu-etal-2024-temperature, title = "Temperature-scaling surprisal estimates improve fit to human reading times {--} but does it do so for the {``}right reasons{''}?", author = "Liu, Tong and {\v{S}}krjanec, Iza and Demberg, Vera", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.519", pages = "9598--9619", abstract = "A wide body of evidence shows that human language processing difficulty is predicted by the information-theoretic measure surprisal, a word{'}s negative log probability in context. However, it is still unclear how to best estimate these probabilities needed for predicting human processing difficulty {--} while a long-standing belief held that models with lower perplexity would provide more accurate estimates of word predictability, and therefore lead to better reading time predictions, recent work has shown that for very large models, psycholinguistic predictive power decreases. One reason could be that language models might be more confident of their predictions than humans, because they have had exposure to several magnitudes more data. In this paper, we test what effect temperature-scaling of large language model (LLM) predictions has on surprisal estimates and their predictive power of reading times of English texts. Firstly, we show that calibration of large language models typically improves with model size, i.e. poorer calibration cannot account for poorer fit to reading times. Secondly, we find that temperature-scaling probabilities lead to a systematically better fit to reading times (up to 89{\%} improvement in delta log likelihood), across several reading time corpora. Finally, we show that this improvement in fit is chiefly driven by words that are composed of multiple subword tokens.", }
A wide body of evidence shows that human language processing difficulty is predicted by the information-theoretic measure surprisal, a word{'}s negative log probability in context. However, it is still unclear how to best estimate these probabilities needed for predicting human processing difficulty {--} while a long-standing belief held that models with lower perplexity would provide more accurate estimates of word predictability, and therefore lead to better reading time predictions, recent work has shown that for very large models, psycholinguistic predictive power decreases. One reason could be that language models might be more confident of their predictions than humans, because they have had exposure to several magnitudes more data. In this paper, we test what effect temperature-scaling of large language model (LLM) predictions has on surprisal estimates and their predictive power of reading times of English texts. Firstly, we show that calibration of large language models typically improves with model size, i.e. poorer calibration cannot account for poorer fit to reading times. Secondly, we find that temperature-scaling probabilities lead to a systematically better fit to reading times (up to 89{\%} improvement in delta log likelihood), across several reading time corpora. Finally, we show that this improvement in fit is chiefly driven by words that are composed of multiple subword tokens.
[ "Liu, Tong", "{\\v{S}}krjanec, Iza", "Demberg, Vera" ]
Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”?
acl-long.519
Poster
[ "https://github.com/TongLiu-github/TemperatureSaling4RTs" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.519/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.520.bib
@inproceedings{saadat-yazdi-kokciyan-2024-beyond, title = "Beyond Recognising Entailment: Formalising Natural Language Inference from an Argumentative Perspective", author = {Saadat-Yazdi, Ameer and K{\"o}kciyan, Nadin}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.520", pages = "9620--9636", abstract = "In argumentation theory, argument schemes are a characterisation of stereotypical patterns of inference. There has been little work done to develop computational approaches to identify these schemes in natural language. Moreover, advancements in recognizing textual entailment lack a standardized definition of inference, which makes it challenging to compare methods trained on different datasets and rely on the generalisability of their results. In this work, we propose a rigorous approach to align entailment recognition with argumentation theory. Wagemans{'} Periodic Table of Arguments (PTA), a taxonomy of argument schemes, provides the appropriate framework to unify these two fields. To operationalise the theoretical model, we introduce a tool to assist humans in annotating arguments according to the PTA. Beyond providing insights into non-expert annotator training, we present Kialo-PTA24, the first multi-topic dataset for the PTA. Finally, we benchmark the performance of pre-trained language models on various aspects of argument analysis. Our experiments show that the task of argument canonicalisation poses a significant challenge for state-of-the-art models, suggesting an inability to represent argumentative reasoning and a direction for future investigation.", }
In argumentation theory, argument schemes are a characterisation of stereotypical patterns of inference. There has been little work done to develop computational approaches to identify these schemes in natural language. Moreover, advancements in recognizing textual entailment lack a standardized definition of inference, which makes it challenging to compare methods trained on different datasets and rely on the generalisability of their results. In this work, we propose a rigorous approach to align entailment recognition with argumentation theory. Wagemans{'} Periodic Table of Arguments (PTA), a taxonomy of argument schemes, provides the appropriate framework to unify these two fields. To operationalise the theoretical model, we introduce a tool to assist humans in annotating arguments according to the PTA. Beyond providing insights into non-expert annotator training, we present Kialo-PTA24, the first multi-topic dataset for the PTA. Finally, we benchmark the performance of pre-trained language models on various aspects of argument analysis. Our experiments show that the task of argument canonicalisation poses a significant challenge for state-of-the-art models, suggesting an inability to represent argumentative reasoning and a direction for future investigation.
[ "Saadat-Yazdi, Ameer", "K{\\\"o}kciyan, Nadin" ]
Beyond Recognising Entailment: Formalising Natural Language Inference from an Argumentative Perspective
acl-long.520
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.520/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.521.bib
@inproceedings{zhan-etal-2024-anygpt, title = "{A}ny{GPT}: Unified Multimodal {LLM} with Discrete Sequence Modeling", author = "Zhan, Jun and Dai, Junqi and Ye, Jiasheng and Zhou, Yunhua and Zhang, Dong and Liu, Zhigeng and Zhang, Xin and Yuan, Ruibin and Zhang, Ge and Li, Linyang and Yan, Hang and Fu, Jie and Gui, Tao and Sun, Tianxiang and Jiang, Yu-Gang and Qiu, Xipeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.521", pages = "9637--9662", abstract = "We introduce AnyGPT, an any-to-any multimodal language model that utilizes discrete representations for the unified processing of various modalities, including speech, text, images, and music. AnyGPT can be trained stably without any alterations to the current large language model (LLM) architecture or training paradigms. Instead, it relies exclusively on data-level preprocessing, facilitating the seamless integration of new modalities into LLMs, akin to the incorporation of new languages.We build a multimodal text-centric dataset for multimodal alignment pre-training. Utilizing generative models, we synthesize the first large-scale any-to-any multimodal instruction dataset. It consists of 108k samples of multi-turn conversations that intricately interweave various modalities, thus equipping the model to handle arbitrary combinations of multimodal inputs and outputs.Experimental results demonstrate that AnyGPT is capable of facilitating any-to-any multimodal conversation while achieving performance comparable to specialized models across all modalities, proving that discrete representations can effectively and conveniently unify multiple modalities within a language model. Demos are shown in https://junzhan2000.github.io/AnyGPT.github.io/.", }
We introduce AnyGPT, an any-to-any multimodal language model that utilizes discrete representations for the unified processing of various modalities, including speech, text, images, and music. AnyGPT can be trained stably without any alterations to the current large language model (LLM) architecture or training paradigms. Instead, it relies exclusively on data-level preprocessing, facilitating the seamless integration of new modalities into LLMs, akin to the incorporation of new languages. We build a multimodal text-centric dataset for multimodal alignment pre-training. Utilizing generative models, we synthesize the first large-scale any-to-any multimodal instruction dataset. It consists of 108k samples of multi-turn conversations that intricately interweave various modalities, thus equipping the model to handle arbitrary combinations of multimodal inputs and outputs. Experimental results demonstrate that AnyGPT is capable of facilitating any-to-any multimodal conversation while achieving performance comparable to specialized models across all modalities, proving that discrete representations can effectively and conveniently unify multiple modalities within a language model. Demos are shown in https://junzhan2000.github.io/AnyGPT.github.io/.
[ "Zhan, Jun", "Dai, Junqi", "Ye, Jiasheng", "Zhou, Yunhua", "Zhang, Dong", "Liu, Zhigeng", "Zhang, Xin", "Yuan, Ruibin", "Zhang, Ge", "Li, Linyang", "Yan, Hang", "Fu, Jie", "Gui, Tao", "Sun, Tianxiang", "Jiang, Yu-Gang", "Qiu, Xipeng" ]
AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling
acl-long.521
Poster
2402.12226
[ "" ]
https://huggingface.co/papers/2402.12226
9
39
7
16
https://aclanthology.org/2024.acl-long.521/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.522.bib
@inproceedings{chen-etal-2024-cofipara, title = "{C}ofi{P}ara: A Coarse-to-fine Paradigm for Multimodal Sarcasm Target Identification with Large Multimodal Models", author = "Chen, Zixin and Lin, Hongzhan and Luo, Ziyang and Cheng, Mingfei and Ma, Jing and Chen, Guang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.522", pages = "9663--9687", abstract = "Social media abounds with multimodal sarcasm, and identifying sarcasm targets is particularly challenging due to the implicit incongruity not directly evident in the text and image modalities. Current methods for Multimodal Sarcasm Target Identification (MSTI) predominantly focus on superficial indicators in an end-to-end manner, overlooking the nuanced understanding of multimodal sarcasm conveyed through both the text and image. This paper proposes a versatile MSTI framework with a coarse-to-fine paradigm, by augmenting sarcasm explainability with reasoning and pre-training knowledge. Inspired by the powerful capacity of Large Multimodal Models (LMMs) on multimodal reasoning, we first engage LMMs to generate competing rationales for coarser-grained pre-training of a small language model on multimodal sarcasm detection. We then propose fine-tuning the model for finer-grained sarcasm target identification. Our framework is thus empowered to adeptly unveil the intricate targets within multimodal sarcasm and mitigate the negative impact posed by potential noise inherently in LMMs. Experimental results demonstrate that our model far outperforms state-of-the-art MSTI methods, and markedly exhibits explainability in deciphering sarcasm as well.", }
Social media abounds with multimodal sarcasm, and identifying sarcasm targets is particularly challenging due to the implicit incongruity not directly evident in the text and image modalities. Current methods for Multimodal Sarcasm Target Identification (MSTI) predominantly focus on superficial indicators in an end-to-end manner, overlooking the nuanced understanding of multimodal sarcasm conveyed through both the text and image. This paper proposes a versatile MSTI framework with a coarse-to-fine paradigm, by augmenting sarcasm explainability with reasoning and pre-training knowledge. Inspired by the powerful capacity of Large Multimodal Models (LMMs) on multimodal reasoning, we first engage LMMs to generate competing rationales for coarser-grained pre-training of a small language model on multimodal sarcasm detection. We then propose fine-tuning the model for finer-grained sarcasm target identification. Our framework is thus empowered to adeptly unveil the intricate targets within multimodal sarcasm and mitigate the negative impact posed by potential noise inherently in LMMs. Experimental results demonstrate that our model far outperforms state-of-the-art MSTI methods, and markedly exhibits explainability in deciphering sarcasm as well.
[ "Chen, Zixin", "Lin, Hongzhan", "Luo, Ziyang", "Cheng, Mingfei", "Ma, Jing", "Chen, Guang" ]
CofiPara: A Coarse-to-fine Paradigm for Multimodal Sarcasm Target Identification with Large Multimodal Models
acl-long.522
Poster
2405.00390
[ "https://github.com/lbotirx/cofipara" ]
https://huggingface.co/papers/2405.00390
1
0
0
6
https://aclanthology.org/2024.acl-long.522/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.523.bib
@inproceedings{liu-etal-2024-direct, title = "Direct Large Language Model Alignment Through Self-Rewarding Contrastive Prompt Distillation", author = "Liu, Aiwei and Bai, Haoping and Lu, Zhiyun and Kong, Xiang and Wang, Xiaoming and Shan, Jiulong and Cao, Meng and Wen, Lijie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.523", pages = "9688--9712", abstract = "Aligning large language models (LLMs) with human expectations without human-annotated preference data is an important problem. In this paper, we propose a method to evaluate the response preference by using the output probabilities of response pairs under contrastive prompt pairs, which could achieve better performance on LLaMA2-7B and LLaMA2-13B compared to RLAIF. Based on this, we propose an automatic alignment method, Direct Large Model Alignment (DLMA). First, we use contrastive prompt pairs to automatically generate preference data. Then, we continue to evaluate the generated preference data using contrastive prompt pairs and calculate a self-rewarding score. Finally, we use the DPO algorithm to effectively align LLMs by combining this self-rewarding score. In the experimental stage, our DLMA method could surpass the RLHF method without relying on human-annotated preference data.", }
Aligning large language models (LLMs) with human expectations without human-annotated preference data is an important problem. In this paper, we propose a method to evaluate the response preference by using the output probabilities of response pairs under contrastive prompt pairs, which could achieve better performance on LLaMA2-7B and LLaMA2-13B compared to RLAIF. Based on this, we propose an automatic alignment method, Direct Large Model Alignment (DLMA). First, we use contrastive prompt pairs to automatically generate preference data. Then, we continue to evaluate the generated preference data using contrastive prompt pairs and calculate a self-rewarding score. Finally, we use the DPO algorithm to effectively align LLMs by combining this self-rewarding score. In the experimental stage, our DLMA method could surpass the RLHF method without relying on human-annotated preference data.
[ "Liu, Aiwei", "Bai, Haoping", "Lu, Zhiyun", "Kong, Xiang", "Wang, Xiaoming", "Shan, Jiulong", "Cao, Meng", "Wen, Lijie" ]
Direct Large Language Model Alignment Through Self-Rewarding Contrastive Prompt Distillation
acl-long.523
Poster
2402.11907
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.523/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.524.bib
@inproceedings{toker-etal-2024-diffusion, title = "Diffusion Lens: Interpreting Text Encoders in Text-to-Image Pipelines", author = "Toker, Michael and Orgad, Hadas and Ventura, Mor and Arad, Dana and Belinkov, Yonatan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.524", pages = "9713--9728", abstract = "Text-to-image diffusion models (T2I) use a latent representation of a text prompt to guide the image generation process. However, the process by which the encoder produces the text representation is unknown. We propose the Diffusion Lens, a method for analyzing the text encoder of T2I models by generating images from its intermediate representations. Using the Diffusion Lens, we perform an extensive analysis of two recent T2I models. Exploring compound prompts, we find that complex scenes describing multiple objects are composed progressively and more slowly compared to simple scenes; Exploring knowledge retrieval, we find that representation of uncommon concepts require further computation compared to common concepts, and that knowledge retrieval is gradual across layers. Overall, our findings provide valuable insights into the text encoder component in T2I pipelines.", }
Text-to-image diffusion models (T2I) use a latent representation of a text prompt to guide the image generation process. However, the process by which the encoder produces the text representation is unknown. We propose the Diffusion Lens, a method for analyzing the text encoder of T2I models by generating images from its intermediate representations. Using the Diffusion Lens, we perform an extensive analysis of two recent T2I models. Exploring compound prompts, we find that complex scenes describing multiple objects are composed progressively and more slowly compared to simple scenes; Exploring knowledge retrieval, we find that representation of uncommon concepts require further computation compared to common concepts, and that knowledge retrieval is gradual across layers. Overall, our findings provide valuable insights into the text encoder component in T2I pipelines.
[ "Toker, Michael", "Orgad, Hadas", "Ventura, Mor", "Arad, Dana", "Belinkov, Yonatan" ]
Diffusion Lens: Interpreting Text Encoders in Text-to-Image Pipelines
acl-long.524
Poster
2403.05846
[ "" ]
https://huggingface.co/papers/2403.05846
0
0
0
5
https://aclanthology.org/2024.acl-long.524/
[]
[]
[ "tokeron/DiffusionLens" ]
1
https://aclanthology.org/2024.acl-long.525.bib
@inproceedings{sun-etal-2024-parrot, title = "Parrot: Enhancing Multi-Turn Instruction Following for Large Language Models", author = "Sun, Yuchong and Liu, Che and Zhou, Kun and Huang, Jinwen and Song, Ruihua and Zhao, Xin and Zhang, Fuzheng and Zhang, Di and Gai, Kun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.525", pages = "9729--9750", abstract = "Humans often interact with large language models (LLMs) in multi-turn interaction to obtain desired answers or more information. However, most existing studies overlook the multi-turn instruction following ability of LLMs, in terms of training dataset, training method, and evaluation benchmark. In this paper, we introduce Parrot, a solution aiming to enhance multi-turn instruction following for LLMs. First, we introduce an efficient but effective method for collecting multi-turn instructions that feature human-like queries, such as anaphora and ellipsis. Second, we propose a context-aware preference optimization strategy to further enhance LLMs for complex queries in multi-turn interaction. Moreover, to quantitatively evaluate LLMs in multi-turn instruction following, we manually build a multi-turn benchmark derived from existing ones. Extensive experiments show that Parrot improves current LLMs by up to 7.2{\%} in multi-turn instruction following. Our dataset and codes will be open-sourced to facilitate future research.", }
Humans often interact with large language models (LLMs) in multi-turn interaction to obtain desired answers or more information. However, most existing studies overlook the multi-turn instruction following ability of LLMs, in terms of training dataset, training method, and evaluation benchmark. In this paper, we introduce Parrot, a solution aiming to enhance multi-turn instruction following for LLMs. First, we introduce an efficient but effective method for collecting multi-turn instructions that feature human-like queries, such as anaphora and ellipsis. Second, we propose a context-aware preference optimization strategy to further enhance LLMs for complex queries in multi-turn interaction. Moreover, to quantitatively evaluate LLMs in multi-turn instruction following, we manually build a multi-turn benchmark derived from existing ones. Extensive experiments show that Parrot improves current LLMs by up to 7.2{\%} in multi-turn instruction following. Our dataset and codes will be open-sourced to facilitate future research.
[ "Sun, Yuchong", "Liu, Che", "Zhou, Kun", "Huang, Jinwen", "Song, Ruihua", "Zhao, Xin", "Zhang, Fuzheng", "Zhang, Di", "Gai, Kun" ]
Parrot: Enhancing Multi-Turn Instruction Following for Large Language Models
acl-long.525
Poster
2310.07301
[ "" ]
https://huggingface.co/papers/2310.07301
0
1
0
8
https://aclanthology.org/2024.acl-long.525/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.526.bib
@inproceedings{li-etal-2024-robust, title = "Robust Singing Voice Transcription Serves Synthesis", author = "Li, Ruiqi and Zhang, Yu and Wang, Yongqi and Hong, Zhiqing and Huang, Rongjie and Zhao, Zhou", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.526", pages = "9751--9766", abstract = "Note-level Automatic Singing Voice Transcription (AST) converts singing recordings into note sequences, facilitating the automatic annotation of singing datasets for Singing Voice Synthesis (SVS) applications. Current AST methods, however, struggle with accuracy and robustness when used for practical annotation. This paper presents ROSVOT, the first robust AST model that serves SVS, incorporating a multi-scale framework that effectively captures coarse-grained note information and ensures fine-grained frame-level segmentation, coupled with an attention-based pitch decoder for reliable pitch prediction. We also established a comprehensive annotation-and-training pipeline for SVS to test the model in real-world settings. Experimental findings reveal that the proposed model achieves state-of-the-art transcription accuracy with either clean or noisy inputs. Moreover, when trained on enlarged, automatically annotated datasets, the SVS model outperforms its baseline, affirming the capability for practical application. Audio samples are available at https://rosvot.github.io. Codes can be found at https://github.com/RickyL-2000/ROSVOT.", }
Note-level Automatic Singing Voice Transcription (AST) converts singing recordings into note sequences, facilitating the automatic annotation of singing datasets for Singing Voice Synthesis (SVS) applications. Current AST methods, however, struggle with accuracy and robustness when used for practical annotation. This paper presents ROSVOT, the first robust AST model that serves SVS, incorporating a multi-scale framework that effectively captures coarse-grained note information and ensures fine-grained frame-level segmentation, coupled with an attention-based pitch decoder for reliable pitch prediction. We also established a comprehensive annotation-and-training pipeline for SVS to test the model in real-world settings. Experimental findings reveal that the proposed model achieves state-of-the-art transcription accuracy with either clean or noisy inputs. Moreover, when trained on enlarged, automatically annotated datasets, the SVS model outperforms its baseline, affirming the capability for practical application. Audio samples are available at https://rosvot.github.io. Codes can be found at https://github.com/RickyL-2000/ROSVOT.
[ "Li, Ruiqi", "Zhang, Yu", "Wang, Yongqi", "Hong, Zhiqing", "Huang, Rongjie", "Zhao, Zhou" ]
Robust Singing Voice Transcription Serves Synthesis
acl-long.526
Poster
2405.09940
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.526/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.527.bib
@inproceedings{chen-etal-2024-vullibgen, title = "{V}ul{L}ib{G}en: Generating Names of Vulnerability-Affected Packages via a Large Language Model", author = "Chen, Tianyu and Li, Lin and ZhuLiuchuan, ZhuLiuchuan and Li, Zongyang and Liu, Xueqing and Liang, Guangtai and Wang, Qianxiang and Xie, Tao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.527", pages = "9767--9780", abstract = "Security practitioners maintain vulnerability reports (e.g., GitHub Advisory) to help developers mitigate security risks. An important task for these databases is automatically extracting structured information mentioned in the report, e.g., the affected software packages, to accelerate the defense of the vulnerability ecosystem.However, it is challenging for existing work on affected package identification to achieve high precision. One reason is that all existing work focuses on relatively smaller models, thus they cannot harness the knowledge and semantic capabilities of large language models.To address this limitation, we propose VulLibGen, the first method to use LLM for affected package identification. In contrast to existing work, VulLibGen proposes the novel idea to directly generate the affected package. To improve the precision, VulLibGen employs supervised fine-tuning (SFT), retrieval augmented generation (RAG) and a local search algorithm. The local search algorithm is a novel post-processing algorithm we introduce for reducing the hallucination of the generated packages. Our evaluation results show that VulLibGen has an average precision of 0.806 for identifying vulnerable packages in the four most popular ecosystems in GitHub Advisory (Java, JS, Python, Go) while the best average precision in previous work is 0.721. Additionally, VulLibGen has high value to security practice: we submitted 60 {\textless}vulnerability, affected package{\textgreater} pairs to GitHub Advisory (covers four ecosystems) and 34 of them have been accepted and merged.", }
Security practitioners maintain vulnerability reports (e.g., GitHub Advisory) to help developers mitigate security risks. An important task for these databases is automatically extracting structured information mentioned in the report, e.g., the affected software packages, to accelerate the defense of the vulnerability ecosystem. However, it is challenging for existing work on affected package identification to achieve high precision. One reason is that all existing work focuses on relatively smaller models, thus they cannot harness the knowledge and semantic capabilities of large language models. To address this limitation, we propose VulLibGen, the first method to use LLM for affected package identification. In contrast to existing work, VulLibGen proposes the novel idea to directly generate the affected package. To improve the precision, VulLibGen employs supervised fine-tuning (SFT), retrieval augmented generation (RAG) and a local search algorithm. The local search algorithm is a novel post-processing algorithm we introduce for reducing the hallucination of the generated packages. Our evaluation results show that VulLibGen has an average precision of 0.806 for identifying vulnerable packages in the four most popular ecosystems in GitHub Advisory (Java, JS, Python, Go) while the best average precision in previous work is 0.721. Additionally, VulLibGen has high value to security practice: we submitted 60 {\textless}vulnerability, affected package{\textgreater} pairs to GitHub Advisory (covers four ecosystems) and 34 of them have been accepted and merged.
[ "Chen, Tianyu", "Li, Lin", "ZhuLiuchuan, ZhuLiuchuan", "Li, Zongyang", "Liu, Xueqing", "Liang, Guangtai", "Wang, Qianxiang", "Xie, Tao" ]
VulLibGen: Generating Names of Vulnerability-Affected Packages via a Large Language Model
acl-long.527
Poster
2308.04662
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.527/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.528.bib
@inproceedings{yu-etal-2024-self, title = "Self-Modifying State Modeling for Simultaneous Machine Translation", author = "Yu, Donglei and Kang, Xiaomian and Liu, Yuchen and Zhou, Yu and Zong, Chengqing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.528", pages = "9781--9795", abstract = "Simultaneous Machine Translation (SiMT) generates target outputs while receiving stream source inputs and requires a read/write policy to decide whether to wait for the next source token or generate a new target token, whose decisions form a decision path. Existing SiMT methods, which learn the policy by exploring various decision paths in training, face inherent limitations. These methods not only fail to precisely optimize the policy due to the inability to accurately assess the individual impact of each decision on SiMT performance, but also cannot sufficiently explore all potential paths because of their vast number. Besides, building decision paths requires unidirectional encoders to simulate streaming source inputs, which impairs the translation quality of SiMT models. To solve these issues, we propose Self-Modifying State Modeling (SM$^2$), a novel training paradigm for SiMT task. Without building decision paths, SM$^2$ individually optimizes decisions at each state during training. To precisely optimize the policy, SM$^2$ introduces Self-Modifying process to independently assess and adjust decisions at each state. For sufficient exploration, SM$^2$ proposes Prefix Sampling to efficiently traverse all potential states. Moreover, SM$^2$ ensures compatibility with bidirectional encoders, thus achieving higher translation quality. Experiments show that SM$^2$ outperforms strong baselines. Furthermore, SM$^2$ allows offline machine translation models to acquire SiMT ability with fine-tuning.", }
Simultaneous Machine Translation (SiMT) generates target outputs while receiving stream source inputs and requires a read/write policy to decide whether to wait for the next source token or generate a new target token, whose decisions form a decision path. Existing SiMT methods, which learn the policy by exploring various decision paths in training, face inherent limitations. These methods not only fail to precisely optimize the policy due to the inability to accurately assess the individual impact of each decision on SiMT performance, but also cannot sufficiently explore all potential paths because of their vast number. Besides, building decision paths requires unidirectional encoders to simulate streaming source inputs, which impairs the translation quality of SiMT models. To solve these issues, we propose Self-Modifying State Modeling (SM$^2$), a novel training paradigm for SiMT task. Without building decision paths, SM$^2$ individually optimizes decisions at each state during training. To precisely optimize the policy, SM$^2$ introduces Self-Modifying process to independently assess and adjust decisions at each state. For sufficient exploration, SM$^2$ proposes Prefix Sampling to efficiently traverse all potential states. Moreover, SM$^2$ ensures compatibility with bidirectional encoders, thus achieving higher translation quality. Experiments show that SM$^2$ outperforms strong baselines. Furthermore, SM$^2$ allows offline machine translation models to acquire SiMT ability with fine-tuning.
[ "Yu, Donglei", "Kang, Xiaomian", "Liu, Yuchen", "Zhou, Yu", "Zong, Chengqing" ]
Self-Modifying State Modeling for Simultaneous Machine Translation
acl-long.528
Poster
2406.02237
[ "https://github.com/EurekaForNLP/SM2" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.528/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.529.bib
@inproceedings{chen-etal-2024-mapgpt, title = "{M}ap{GPT}: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation", author = "Chen, Jiaqi and Lin, Bingqian and Xu, Ran and Chai, Zhenhua and Liang, Xiaodan and Wong, Kwan-Yee", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.529", pages = "9796--9810", abstract = "Embodied agents equipped with GPT as their brain have exhibited extraordinary decision-making and generalization abilities across various tasks. However, existing zero-shot agents for vision-and-language navigation (VLN) only prompt the GPT-4 to select potential locations within localized environments, without constructing an effective {``}global-view{''} for the agent to understand the overall environment. In this work, we present a novel **map**-guided **GPT**-based agent, dubbed **MapGPT**, which introduces an online linguistic-formed map to encourage the global exploration. Specifically, we build an online map and incorporate it into the prompts that include node information and topological relationships, to help GPT understand the spatial environment. Benefiting from this design, we further propose an adaptive planning mechanism to assist the agent in performing multi-step path planning based on a map, systematically exploring multiple candidate nodes or sub-goals step by step. Extensive experiments demonstrate that our MapGPT is applicable to both GPT-4 and GPT-4V, achieving state-of-the-art zero-shot performance on the R2R and REVERIE simultaneously ({\textasciitilde}10{\%} and {\textasciitilde}12{\%} improvements in SR), and showcasing the newly emergent global thinking and path planning abilities of the GPT.", }
Embodied agents equipped with GPT as their brain have exhibited extraordinary decision-making and generalization abilities across various tasks. However, existing zero-shot agents for vision-and-language navigation (VLN) only prompt the GPT-4 to select potential locations within localized environments, without constructing an effective {``}global-view{''} for the agent to understand the overall environment. In this work, we present a novel **map**-guided **GPT**-based agent, dubbed **MapGPT**, which introduces an online linguistic-formed map to encourage the global exploration. Specifically, we build an online map and incorporate it into the prompts that include node information and topological relationships, to help GPT understand the spatial environment. Benefiting from this design, we further propose an adaptive planning mechanism to assist the agent in performing multi-step path planning based on a map, systematically exploring multiple candidate nodes or sub-goals step by step. Extensive experiments demonstrate that our MapGPT is applicable to both GPT-4 and GPT-4V, achieving state-of-the-art zero-shot performance on the R2R and REVERIE simultaneously ({\textasciitilde}10{\%} and {\textasciitilde}12{\%} improvements in SR), and showcasing the newly emergent global thinking and path planning abilities of the GPT.
[ "Chen, Jiaqi", "Lin, Bingqian", "Xu, Ran", "Chai, Zhenhua", "Liang, Xiaodan", "Wong, Kwan-Yee" ]
MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation
acl-long.529
Poster
2401.07314
[ "" ]
https://huggingface.co/papers/2401.07314
0
0
0
6
https://aclanthology.org/2024.acl-long.529/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.530.bib
@inproceedings{wang-etal-2024-badagent, title = "{B}ad{A}gent: Inserting and Activating Backdoor Attacks in {LLM} Agents", author = "Wang, Yifei and Xue, Dizhan and Zhang, Shengjie and Qian, Shengsheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.530", pages = "9811--9827", abstract = "With the prosperity of large language models (LLMs), powerful LLM-based intelligent agents have been developed to provide customized services with a set of user-defined tools. State-of-the-art methods for constructing LLM agents adopt trained LLMs and further fine-tune them on data for the agent task. However, we show that such methods are vulnerable to our proposed backdoor attacks named BadAgent on various agent tasks, where a backdoor can be embedded by fine-tuning on the backdoor data. At test time, the attacker can manipulate the deployed LLM agents to execute harmful operations by showing the trigger in the agent input or environment. To our surprise, our proposed attack methods are extremely robust even after fine-tuning on trustworthy data. Though backdoor attacks have been studied extensively in natural language processing, to the best of our knowledge, we could be the first to study them on LLM agents that are more dangerous due to the permission to use external tools. Our work demonstrates the clear risk of constructing LLM agents based on untrusted LLMs or data. Our code is public at https://github.com/DPamK/BadAgent", }
With the prosperity of large language models (LLMs), powerful LLM-based intelligent agents have been developed to provide customized services with a set of user-defined tools. State-of-the-art methods for constructing LLM agents adopt trained LLMs and further fine-tune them on data for the agent task. However, we show that such methods are vulnerable to our proposed backdoor attacks named BadAgent on various agent tasks, where a backdoor can be embedded by fine-tuning on the backdoor data. At test time, the attacker can manipulate the deployed LLM agents to execute harmful operations by showing the trigger in the agent input or environment. To our surprise, our proposed attack methods are extremely robust even after fine-tuning on trustworthy data. Though backdoor attacks have been studied extensively in natural language processing, to the best of our knowledge, we could be the first to study them on LLM agents that are more dangerous due to the permission to use external tools. Our work demonstrates the clear risk of constructing LLM agents based on untrusted LLMs or data. Our code is public at https://github.com/DPamK/BadAgent
[ "Wang, Yifei", "Xue, Dizhan", "Zhang, Shengjie", "Qian, Shengsheng" ]
BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents
acl-long.530
Poster
2406.03007
[ "https://github.com/dpamk/badagent" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.530/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.531.bib
@inproceedings{sun-etal-2024-determlr, title = "{D}eterm{LR}: Augmenting {LLM}-based Logical Reasoning from Indeterminacy to Determinacy", author = "Sun, Hongda and Xu, Weikai and Liu, Wei and Luan, Jian and Wang, Bin and Shang, Shuo and Wen, Ji-Rong and Yan, Rui", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.531", pages = "9828--9862", abstract = "Recent advances in large language models (LLMs) have revolutionized the landscape of reasoning tasks. To enhance the capabilities of LLMs to emulate human reasoning, prior studies have focused on modeling reasoning steps using various thought structures like chains, trees, or graphs. However, LLM-based reasoning still encounters the following challenges: (1) Limited adaptability of preset structures to diverse tasks; (2) Insufficient precision in exploiting known conditions to derive new ones; and (3) Inadequate consideration of historical reasoning experiences for subsequent reasoning steps. To this end, we propose DetermLR, a novel perspective that rethinks the reasoning process as an evolution from indeterminacy to determinacy. First, we categorize known conditions into two types: determinate and indeterminate premises, facilitating the transformation process. Subsequently, we leverage quantitative measurements to prioritize more relevant premises to explore new insights. Furthermore, we automate the storage and extraction of available premises and reasoning paths with reasoning memory, preserving historical reasoning details for subsequent reasoning steps. Comprehensive experimental results demonstrate that DetermLR surpasses all baselines on various logical reasoning benchmarks: LogiQA, ProofWriter, FOLIO, PrOntoQA, and LogicalDeduction. Compared to previous multi-step reasoning methods, DetermLR achieves higher accuracy with fewer reasoning steps, highlighting its superior efficiency and effectiveness in solving logical reasoning tasks.", }
Recent advances in large language models (LLMs) have revolutionized the landscape of reasoning tasks. To enhance the capabilities of LLMs to emulate human reasoning, prior studies have focused on modeling reasoning steps using various thought structures like chains, trees, or graphs. However, LLM-based reasoning still encounters the following challenges: (1) Limited adaptability of preset structures to diverse tasks; (2) Insufficient precision in exploiting known conditions to derive new ones; and (3) Inadequate consideration of historical reasoning experiences for subsequent reasoning steps. To this end, we propose DetermLR, a novel perspective that rethinks the reasoning process as an evolution from indeterminacy to determinacy. First, we categorize known conditions into two types: determinate and indeterminate premises, facilitating the transformation process. Subsequently, we leverage quantitative measurements to prioritize more relevant premises to explore new insights. Furthermore, we automate the storage and extraction of available premises and reasoning paths with reasoning memory, preserving historical reasoning details for subsequent reasoning steps. Comprehensive experimental results demonstrate that DetermLR surpasses all baselines on various logical reasoning benchmarks: LogiQA, ProofWriter, FOLIO, PrOntoQA, and LogicalDeduction. Compared to previous multi-step reasoning methods, DetermLR achieves higher accuracy with fewer reasoning steps, highlighting its superior efficiency and effectiveness in solving logical reasoning tasks.
[ "Sun, Hongda", "Xu, Weikai", "Liu, Wei", "Luan, Jian", "Wang, Bin", "Shang, Shuo", "Wen, Ji-Rong", "Yan, Rui" ]
DetermLR: Augmenting LLM-based Logical Reasoning from Indeterminacy to Determinacy
acl-long.531
Poster
2310.18659
[ "https://github.com/xiaomi/determlr" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.531/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.532.bib
@inproceedings{mahari-etal-2024-lepard, title = "{L}e{P}a{RD}: A Large-Scale Dataset of Judicial Citations to Precedent", author = "Mahari, Robert and Stammbach, Dominik and Ash, Elliott and Pentland, Alex", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.532", pages = "9863--9877", abstract = "We present the Legal Passage Retrieval Dataset, LePaRD. LePaRD contains millions of examples of U.S. federal judges citing precedent in context. The dataset aims to facilitate work on legal passage retrieval, a challenging practice-oriented legal retrieval and reasoning task. Legal passage retrieval seeks to predict relevant passages from precedential court decisions given the context of a legal argument. We extensively evaluate various approaches on LePaRD, and find that classification-based retrieval appears to work best. Our best models only achieve a recall of 59{\%} when trained on data corresponding to the 10,000 most-cited passages, underscoring the difficulty of legal passage retrieval. By publishing LePaRD, we provide a large-scale and high quality resource to foster further research on legal passage retrieval. We hope that research on this practice-oriented NLP task will help expand access to justice by reducing the burden associated with legal research via computational assistance. Warning: Extracts from judicial opinions may contain offensive language.", }
We present the Legal Passage Retrieval Dataset, LePaRD. LePaRD contains millions of examples of U.S. federal judges citing precedent in context. The dataset aims to facilitate work on legal passage retrieval, a challenging practice-oriented legal retrieval and reasoning task. Legal passage retrieval seeks to predict relevant passages from precedential court decisions given the context of a legal argument. We extensively evaluate various approaches on LePaRD, and find that classification-based retrieval appears to work best. Our best models only achieve a recall of 59{\%} when trained on data corresponding to the 10,000 most-cited passages, underscoring the difficulty of legal passage retrieval. By publishing LePaRD, we provide a large-scale and high quality resource to foster further research on legal passage retrieval. We hope that research on this practice-oriented NLP task will help expand access to justice by reducing the burden associated with legal research via computational assistance. Warning: Extracts from judicial opinions may contain offensive language.
[ "Mahari, Robert", "Stammbach, Dominik", "Ash, Elliott", "Pentl", ", Alex" ]
LePaRD: A Large-Scale Dataset of Judicial Citations to Precedent
acl-long.532
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.532/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.533.bib
@inproceedings{frisoni-etal-2024-generate, title = "To Generate or to Retrieve? On the Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering", author = "Frisoni, Giacomo and Cocchieri, Alessio and Presepi, Alex and Moro, Gianluca and Meng, Zaiqiao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.533", pages = "9878--9919", abstract = "Medical open-domain question answering demands substantial access to specialized knowledge. Recent efforts have sought to decouple knowledge from model parameters, counteracting architectural scaling and allowing for training on common low-resource hardware. The retrieve-then-read paradigm has become ubiquitous, with model predictions grounded on relevant knowledge pieces from external repositories such as PubMed, textbooks, and UMLS. An alternative path, still under-explored but made possible by the advent of domain-specific large language models, entails constructing artificial contexts through prompting. As a result, {``}to generate or to retrieve{''} is the modern equivalent of Hamlet{'}s dilemma. This paper presents MedGENIE, the first generate-then-read framework for multiple-choice question answering in medicine. We conduct extensive experiments on MedQA-USMLE, MedMCQA, and MMLU, incorporating a practical perspective by assuming a maximum of 24GB VRAM. MedGENIE sets a new state-of-the-art in the open-book setting of each testbed, allowing a small-scale reader to outcompete zero-shot closed-book 175B baselines while using up to 706x fewer parameters. Our findings reveal that generated passages are more effective than retrieved ones in attaining higher accuracy.", }
Medical open-domain question answering demands substantial access to specialized knowledge. Recent efforts have sought to decouple knowledge from model parameters, counteracting architectural scaling and allowing for training on common low-resource hardware. The retrieve-then-read paradigm has become ubiquitous, with model predictions grounded on relevant knowledge pieces from external repositories such as PubMed, textbooks, and UMLS. An alternative path, still under-explored but made possible by the advent of domain-specific large language models, entails constructing artificial contexts through prompting. As a result, {``}to generate or to retrieve{''} is the modern equivalent of Hamlet{'}s dilemma. This paper presents MedGENIE, the first generate-then-read framework for multiple-choice question answering in medicine. We conduct extensive experiments on MedQA-USMLE, MedMCQA, and MMLU, incorporating a practical perspective by assuming a maximum of 24GB VRAM. MedGENIE sets a new state-of-the-art in the open-book setting of each testbed, allowing a small-scale reader to outcompete zero-shot closed-book 175B baselines while using up to 706x fewer parameters. Our findings reveal that generated passages are more effective than retrieved ones in attaining higher accuracy.
[ "Frisoni, Giacomo", "Cocchieri, Alessio", "Presepi, Alex", "Moro, Gianluca", "Meng, Zaiqiao" ]
To Generate or to Retrieve? On the Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering
acl-long.533
Poster
2403.01924
[ "https://github.com/disi-unibo-nlp/medgenie" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.533/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.534.bib
@inproceedings{fenogenova-etal-2024-mera, title = "{MERA}: A Comprehensive {LLM} Evaluation in {R}ussian", author = "Fenogenova, Alena and Chervyakov, Artem and Martynov, Nikita and Kozlova, Anastasia and Tikhonova, Maria and Akhmetgareeva, Albina and Emelyanov, Anton and Shevelev, Denis and Lebedev, Pavel and Sinev, Leonid and Isaeva, Ulyana and Kolomeytseva, Katerina and Moskovskiy, Daniil and Goncharova, Elizaveta and Savushkin, Nikita and Mikhailova, Polina and Minaeva, Anastasia and Dimitrov, Denis and Panchenko, Alexander and Markov, Sergey", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.534", pages = "9920--9948", abstract = "Over the past few years, one of the most notable advancements in AI research has been in foundation models (FMs), headlined by the rise of language models (LMs). However, despite researchers{'} attention and the rapid growth in LM application, the capabilities, limitations, and associated risks still need to be better understood. To address these issues, we introduce a new instruction benchmark, MERA, oriented towards the FMs{'} performance on the Russian language. The benchmark encompasses 21 evaluation tasks for generative models covering 10 skills and is supplied with private answer scoring to prevent data leakage. The paper introduces a methodology to evaluate FMs and LMs in fixed zero- and few-shot instruction settings that can be extended to other modalities. We propose an evaluation methodology, an open-source code base for the MERA assessment, and a leaderboard with a submission system. We evaluate open LMs as baselines and find they are still far behind the human level. We publicly release MERA to guide forthcoming research, anticipate groundbreaking model features, standardize the evaluation procedure, and address potential ethical concerns and drawbacks.", }
Over the past few years, one of the most notable advancements in AI research has been in foundation models (FMs), headlined by the rise of language models (LMs). However, despite researchers{'} attention and the rapid growth in LM application, the capabilities, limitations, and associated risks still need to be better understood. To address these issues, we introduce a new instruction benchmark, MERA, oriented towards the FMs{'} performance on the Russian language. The benchmark encompasses 21 evaluation tasks for generative models covering 10 skills and is supplied with private answer scoring to prevent data leakage. The paper introduces a methodology to evaluate FMs and LMs in fixed zero- and few-shot instruction settings that can be extended to other modalities. We propose an evaluation methodology, an open-source code base for the MERA assessment, and a leaderboard with a submission system. We evaluate open LMs as baselines and find they are still far behind the human level. We publicly release MERA to guide forthcoming research, anticipate groundbreaking model features, standardize the evaluation procedure, and address potential ethical concerns and drawbacks.
[ "Fenogenova, Alena", "Chervyakov, Artem", "Martynov, Nikita", "Kozlova, Anastasia", "Tikhonova, Maria", "Akhmetgareeva, Albina", "Emelyanov, Anton", "Shevelev, Denis", "Lebedev, Pavel", "Sinev, Leonid", "Isaeva, Ulyana", "Kolomeytseva, Katerina", "Moskovskiy, Daniil", "Goncharova, Elizaveta", "Savushkin, Nikita", "Mikhailova, Polina", "Minaeva, Anastasia", "Dimitrov, Denis", "Panchenko, Alex", "er", "Markov, Sergey" ]
MERA: A Comprehensive LLM Evaluation in Russian
acl-long.534
Poster
2401.04531
[ "https://github.com/ai-forever/mera" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.534/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.535.bib
@inproceedings{zhao-etal-2024-sc2, title = "{SC}2: Towards Enhancing Content Preservation and Style Consistency in Long Text Style Transfer", author = "Zhao, Jie and Guan, Ziyu and Xu, Cai and Zhao, Wei and Jiang, Yue", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.535", pages = "9949--9960", abstract = "Text style transfer (TST) aims to vary the style polarity of text while preserving the semantic content. Although recent advancements have demonstrated remarkable progress in short TST, it remains a relatively straightforward task with limited practical applications. The more comprehensive long TST task presents two challenges: (1) existing methods encounter difficulties in accurately evaluating content attributes in multiple words, leading to content degradation; (2) the conventional vanilla style classifier loss encounters obstacles in maintaining consistent style across multiple generated sentences.In this paper, we propose a novel method SC2, where a multilayer Joint Style-Content Weighed (JSCW) module and a Style Consistency loss are designed to address the two issues. The JSCW simultaneously assesses the amounts of style and content attributes within a token, aiming to acquire a lossless content representation and thereby enhancing content preservation. The multiple JSCW layers further progressively refine content representations. We design a style consistency loss to ensure the generated multiple sentences consistently reflect the target style polarity. Moreover, we incorporate a denoising non-autoregressive decoder to accelerate the training. We conduct plentiful experiments and the results show significant improvements of SC2 over competitive baselines. Our code: https://github.com/jiezhao6/SC2.", }
Text style transfer (TST) aims to vary the style polarity of text while preserving the semantic content. Although recent advancements have demonstrated remarkable progress in short TST, it remains a relatively straightforward task with limited practical applications. The more comprehensive long TST task presents two challenges: (1) existing methods encounter difficulties in accurately evaluating content attributes in multiple words, leading to content degradation; (2) the conventional vanilla style classifier loss encounters obstacles in maintaining consistent style across multiple generated sentences. In this paper, we propose a novel method SC2, where a multilayer Joint Style-Content Weighed (JSCW) module and a Style Consistency loss are designed to address the two issues. The JSCW simultaneously assesses the amounts of style and content attributes within a token, aiming to acquire a lossless content representation and thereby enhancing content preservation. The multiple JSCW layers further progressively refine content representations. We design a style consistency loss to ensure the multiple generated sentences consistently reflect the target style polarity. Moreover, we incorporate a denoising non-autoregressive decoder to accelerate the training. We conduct extensive experiments, and the results show significant improvements of SC2 over competitive baselines. Our code: https://github.com/jiezhao6/SC2.
[ "Zhao, Jie", "Guan, Ziyu", "Xu, Cai", "Zhao, Wei", "Jiang, Yue" ]
SC2: Towards Enhancing Content Preservation and Style Consistency in Long Text Style Transfer
acl-long.535
Poster
2406.04578
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.535/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.536.bib
@inproceedings{qin-etal-2024-dodo, title = "Dodo: Dynamic Contextual Compression for Decoder-only {LM}s", author = "Qin, Guanghui and Rosset, Corby and Chau, Ethan and Rao, Nikhil and Van Durme, Benjamin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.536", pages = "9961--9975", abstract = "Transformer-based language models (LMs) are inefficient in long contexts. We propose Dodo, a solution for context compression. Instead of one vector per token in a standard transformer model, Dodo represents text with a dynamic number of hidden states at each layer, reducing the cost of self-attention to a fraction of typical time and space. Moreover, off-the-shelf models such as LLaMA can be adapted to Dodo by efficient parameter tuning methods such as LoRA. In use, Dodo can act as either an autoregressive LM or a context compressor for downstream tasks. We demonstrate through experiments in language modeling, question answering, and summarization that Dodo retains capabilities in these tasks, while drastically reducing the overhead during decoding. For example, in the autoencoding task, Dodo shrinks context at a 20x compression ratio with a BLEU score of 98{\%} for reconstruction, achieving nearly lossless encoding.", }
Transformer-based language models (LMs) are inefficient in long contexts. We propose Dodo, a solution for context compression. Instead of one vector per token in a standard transformer model, Dodo represents text with a dynamic number of hidden states at each layer, reducing the cost of self-attention to a fraction of typical time and space. Moreover, off-the-shelf models such as LLaMA can be adapted to Dodo by efficient parameter tuning methods such as LoRA. In use, Dodo can act as either an autoregressive LM or a context compressor for downstream tasks. We demonstrate through experiments in language modeling, question answering, and summarization that Dodo retains capabilities in these tasks, while drastically reducing the overhead during decoding. For example, in the autoencoding task, Dodo shrinks context at a 20x compression ratio with a BLEU score of 98{\%} for reconstruction, achieving nearly lossless encoding.
[ "Qin, Guanghui", "Rosset, Corby", "Chau, Ethan", "Rao, Nikhil", "Van Durme, Benjamin" ]
Dodo: Dynamic Contextual Compression for Decoder-only LMs
acl-long.536
Oral
2310.02409
[ "" ]
https://huggingface.co/papers/2310.02409
1
1
0
5
https://aclanthology.org/2024.acl-long.536/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.537.bib
@inproceedings{pan-etal-2024-pomp, title = "{POMP}: Probability-driven Meta-graph Prompter for {LLM}s in Low-resource Unsupervised Neural Machine Translation", author = "Pan, Shilong and Tian, Zhiliang and Ding, Liang and Zheng, Haoqi and Huang, Zhen and Wen, Zhihua and Li, Dongsheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.537", pages = "9976--9992", abstract = "Low-resource languages (LRLs) face challenges in supervised neural machine translation (NMT) due to limited parallel data, prompting research in unsupervised NMT.Unsupervised NMT (UNMT), without requiring ground truth, provides solutions for LRL translations using synthetic pseudo-parallel data and parallel data from auxiliary language pairs. However, they usually encounter translation errors, including errors from synthetic data and from auxiliary language pairs with linguistic biases.We argue that large language models (LLMs) mitigate UNMT{'}s translation errors by dynamically organizing auxiliary languages in prompts to improve LRL translations. In this paper, we propose $\textbf{P}$r$\textbf{O}$bability-driven $\textbf{M}$eta-graph $\textbf{P}$rompter (POMP), an approach employing a dynamic graph to organize multiple auxiliary languages, to prompt LLMs in LRL translations. POMP proposes a language-specific meta-graph that dynamically samples multiple translation paths to organize auxiliary languages in constructing prompts. Following the path, POMP prompts LLMs to translate with a mixture of auxiliary languages. We achieve the meta-graph{'}s evolution by back-propagating evaluation scores to update probabilities on the graph.Our experimental improvements show POMP{'}s effectiveness on LRLs{'} translation.", }
Low-resource languages (LRLs) face challenges in supervised neural machine translation (NMT) due to limited parallel data, prompting research in unsupervised NMT. Unsupervised NMT (UNMT), without requiring ground truth, provides solutions for LRL translations using synthetic pseudo-parallel data and parallel data from auxiliary language pairs. However, these methods usually encounter translation errors, including errors from synthetic data and from auxiliary language pairs with linguistic biases. We argue that large language models (LLMs) mitigate UNMT{'}s translation errors by dynamically organizing auxiliary languages in prompts to improve LRL translations. In this paper, we propose $\textbf{P}$r$\textbf{O}$bability-driven $\textbf{M}$eta-graph $\textbf{P}$rompter (POMP), an approach employing a dynamic graph to organize multiple auxiliary languages, to prompt LLMs in LRL translations. POMP proposes a language-specific meta-graph that dynamically samples multiple translation paths to organize auxiliary languages in constructing prompts. Following the path, POMP prompts LLMs to translate with a mixture of auxiliary languages. We achieve the meta-graph{'}s evolution by back-propagating evaluation scores to update probabilities on the graph. Our experiments show POMP{'}s effectiveness on LRL translation.
[ "Pan, Shilong", "Tian, Zhiliang", "Ding, Liang", "Zheng, Haoqi", "Huang, Zhen", "Wen, Zhihua", "Li, Dongsheng" ]
POMP: Probability-driven Meta-graph Prompter for LLMs in Low-resource Unsupervised Neural Machine Translation
acl-long.537
Poster
2401.05596
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.537/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.538.bib
@inproceedings{li-etal-2024-newsbench, title = "{N}ews{B}ench: A Systematic Evaluation Framework for Assessing Editorial Capabilities of Large Language Models in {C}hinese Journalism", author = "Li, Miao and Chen, Ming-Bin and Tang, Bo and ShengbinHou, ShengbinHou and Wang, Pengyu and Deng, Haiying and Li, Zhiyu and Xiong, Feiyu and Mao, Keming and Peng, Cheng and Luo, Yi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.538", pages = "9993--10014", abstract = "We present NewsBench, a novel evaluation framework to systematically assess the capabilities of Large Language Models (LLMs) for editorial capabilities in Chinese journalism. Our constructed benchmark dataset is focused on four facets of writing proficiency and six facets of safety adherence, and it comprises manually and carefully designed 1,267 test samples in the types of multiple choice questions and short answer questions for five editorial tasks in 24 news domains. To measure performances, we propose different GPT-4 based automatic evaluation protocols to assess LLM generations for short answer questions in terms of writing proficiency and safety adherence, and both are validated by the high correlations with human evaluations. Based on the systematic evaluation framework, we conduct a comprehensive analysis of eleven popular LLMs which can handle Chinese. The experimental results highlight GPT-4 and ERNIE Bot as top performers, yet reveal a relative deficiency in journalistic safety adherence in creative writing tasks. Our findings also underscore the need for enhanced ethical guidance in machine-generated journalistic content, marking a step forward in aligning LLMs with journalistic standards and safety considerations. The evaluation framework and experimental results are expected to provide an in-depth understanding of the editorial capabilities of LLMs and speed up the development of LLMs in journalism.", }
We present NewsBench, a novel evaluation framework to systematically assess the editorial capabilities of Large Language Models (LLMs) in Chinese journalism. Our benchmark dataset focuses on four facets of writing proficiency and six facets of safety adherence, and it comprises 1,267 manually and carefully designed test samples, in the form of multiple-choice and short-answer questions, covering five editorial tasks in 24 news domains. To measure performance, we propose different GPT-4 based automatic evaluation protocols to assess LLM generations for short-answer questions in terms of writing proficiency and safety adherence, both of which are validated by high correlations with human evaluations. Based on this systematic evaluation framework, we conduct a comprehensive analysis of eleven popular LLMs that can handle Chinese. The experimental results highlight GPT-4 and ERNIE Bot as top performers, yet reveal a relative deficiency in journalistic safety adherence in creative writing tasks. Our findings also underscore the need for enhanced ethical guidance in machine-generated journalistic content, marking a step forward in aligning LLMs with journalistic standards and safety considerations. The evaluation framework and experimental results are expected to provide an in-depth understanding of the editorial capabilities of LLMs and to speed up the development of LLMs in journalism.
[ "Li, Miao", "Chen, Ming-Bin", "Tang, Bo", "ShengbinHou, ShengbinHou", "Wang, Pengyu", "Deng, Haiying", "Li, Zhiyu", "Xiong, Feiyu", "Mao, Keming", "Peng, Cheng", "Luo, Yi" ]
NewsBench: A Systematic Evaluation Framework for Assessing Editorial Capabilities of Large Language Models in Chinese Journalism
acl-long.538
Poster
2403.00862
[ "https://github.com/iaar-shanghai/newsbench" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.538/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.539.bib
@inproceedings{she-etal-2024-mapo, title = "{MAPO}: Advancing Multilingual Reasoning through Multilingual-Alignment-as-Preference Optimization", author = "She, Shuaijie and Zou, Wei and Huang, Shujian and Zhu, Wenhao and Liu, Xiang and Geng, Xiang and Chen, Jiajun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.539", pages = "10015--10027", abstract = "Intuitively, reasoning abilities are considered language-agnostic. However, existing LLMs exhibit inconsistent reasoning abilities across different languages, e.g., reasoning in the dominant language like English is superior to other languages due to the imbalance of multilingual training data. To enhance reasoning abilities in non-dominant languages, we propose a Multilingual-Alignment-as-Preference Optimization framework (MAPO) to align the reasoning processes in other languages with the dominant language. Specifically, we harness an off-the-shelf translation model for the consistency between answers in non-dominant and dominant languages, which we adopt as the preference for optimization, e.g., Direct Preference Optimization(DPO) or Proximal Policy Optimization (PPO). Experiments show that MAPO stably achieves significant improvements in the multilingual reasoning of various models on all three benchmarks (MSVAMP +16.2{\%}, MGSM +6.1{\%}, and MNumGLUESub +13.3{\%}), with improved reasoning consistency across languages. The project is available at https://github.com/NJUNLP/MAPO.", }
Intuitively, reasoning abilities are considered language-agnostic. However, existing LLMs exhibit inconsistent reasoning abilities across different languages, e.g., reasoning in the dominant language like English is superior to other languages due to the imbalance of multilingual training data. To enhance reasoning abilities in non-dominant languages, we propose a Multilingual-Alignment-as-Preference Optimization framework (MAPO) to align the reasoning processes in other languages with the dominant language. Specifically, we harness an off-the-shelf translation model for the consistency between answers in non-dominant and dominant languages, which we adopt as the preference for optimization, e.g., Direct Preference Optimization(DPO) or Proximal Policy Optimization (PPO). Experiments show that MAPO stably achieves significant improvements in the multilingual reasoning of various models on all three benchmarks (MSVAMP +16.2{\%}, MGSM +6.1{\%}, and MNumGLUESub +13.3{\%}), with improved reasoning consistency across languages. The project is available at https://github.com/NJUNLP/MAPO.
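(Illustrative aside on the abstract above.) A rough sketch of the preference-construction step, not the released MAPO pipeline; the scorer checkpoint, language codes, and the exact consistency measure are assumptions. An off-the-shelf translation model scores how consistent a non-dominant-language answer is with the dominant-language (English) answer, and the scores turn sampled answers into chosen/rejected pairs for DPO-style training.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MT_NAME = "facebook/nllb-200-distilled-600M"  # assumed off-the-shelf translation scorer
mt_tok = AutoTokenizer.from_pretrained(MT_NAME, src_lang="zho_Hans", tgt_lang="eng_Latn")
mt = AutoModelForSeq2SeqLM.from_pretrained(MT_NAME).eval()

@torch.no_grad()
def consistency(non_en_answer: str, en_answer: str) -> float:
    # Negative NLL of the English answer given the non-English answer;
    # higher (less negative) means the two answers agree more closely.
    batch = mt_tok(non_en_answer, text_target=en_answer, return_tensors="pt")
    return -mt(**batch).loss.item()

def to_preference_pair(question: str, sampled_answers: list[str], en_answer: str) -> dict:
    # The most consistent sampled answer becomes "chosen", the least consistent "rejected".
    ranked = sorted(sampled_answers, key=lambda a: consistency(a, en_answer))
    return {"prompt": question, "chosen": ranked[-1], "rejected": ranked[0]}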
[ "She, Shuaijie", "Zou, Wei", "Huang, Shujian", "Zhu, Wenhao", "Liu, Xiang", "Geng, Xiang", "Chen, Jiajun" ]
MAPO: Advancing Multilingual Reasoning through Multilingual-Alignment-as-Preference Optimization
acl-long.539
Poster
[ "https://github.com/njunlp/mapo" ]
https://huggingface.co/papers/2401.06838
2
0
0
7
https://aclanthology.org/2024.acl-long.539/
[ "kevinpro/MetaMathOctopus-7B", "kevinpro/MathOctopus-MAPO-DPO-7B", "kevinpro/MetaMathOctopus-MAPO-DPO-7B", "kevinpro/MetaMathOctopus-MAPO-DPO-13B", "kevinpro/MetaMathOctopus-13B", "kevinpro/MathOctopus-MAPO-DPO-13B", "kevinpro/MistralMathOctopus-7B", "kevinpro/MistralMathOctopus-MAPO-DPO-7B" ]
[]
[ "kevinpro/Open-Multilingual-Reasoning-Leaderboard" ]
1
https://aclanthology.org/2024.acl-long.540.bib
@inproceedings{fang-etal-2024-enhancing, title = "Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training", author = "Fang, Feiteng and Bai, Yuelin and Ni, Shiwen and Yang, Min and Chen, Xiaojun and Xu, Ruifeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.540", pages = "10028--10039", abstract = "Large Language Models (LLMs) exhibit substantial capabilities yet encounter challenges including hallucination, outdated knowledge, and untraceable reasoning processes. Retrieval-augmented generation (RAG) has emerged as a promising solution, integrating knowledge from external databases to mitigate these challenges. However, inappropriate retrieved passages can potentially hinder the LLMs{'} capacity to generate comprehensive and high-quality responses. Prior RAG studies on the robustness of retrieval noises often confine themselves to a limited set of noise types, deviating from real-world retrieval environments and limiting practical applicability. In this study, we initially investigate retrieval noises and categorize them into three distinct types, reflecting real-world environments. We analyze the impact of these various retrieval noises on the robustness of LLMs. Subsequently, we propose a novel RAG approach known as Retrieval-augmented Adaptive Adversarial Training (RAAT). RAAT leverages adaptive adversarial training to dynamically adjust the model{'}s training process in response to retrieval noises. Concurrently, it employs multi-task learning to ensure the model{'}s capacity to internally recognize noisy contexts. Extensive experiments demonstrate that the LLaMA-2 7B model trained using RAAT exhibits significant improvements in F1 and EM scores under diverse noise conditions. For reproducibility, we will release our code and data upon acceptance.", }
Large Language Models (LLMs) exhibit substantial capabilities yet encounter challenges including hallucination, outdated knowledge, and untraceable reasoning processes. Retrieval-augmented generation (RAG) has emerged as a promising solution, integrating knowledge from external databases to mitigate these challenges. However, inappropriate retrieved passages can potentially hinder the LLMs{'} capacity to generate comprehensive and high-quality responses. Prior RAG studies on the robustness of retrieval noises often confine themselves to a limited set of noise types, deviating from real-world retrieval environments and limiting practical applicability. In this study, we initially investigate retrieval noises and categorize them into three distinct types, reflecting real-world environments. We analyze the impact of these various retrieval noises on the robustness of LLMs. Subsequently, we propose a novel RAG approach known as Retrieval-augmented Adaptive Adversarial Training (RAAT). RAAT leverages adaptive adversarial training to dynamically adjust the model{'}s training process in response to retrieval noises. Concurrently, it employs multi-task learning to ensure the model{'}s capacity to internally recognize noisy contexts. Extensive experiments demonstrate that the LLaMA-2 7B model trained using RAAT exhibits significant improvements in F1 and EM scores under diverse noise conditions. For reproducibility, we will release our code and data upon acceptance.
[ "Fang, Feiteng", "Bai, Yuelin", "Ni, Shiwen", "Yang, Min", "Chen, Xiaojun", "Xu, Ruifeng" ]
Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training
acl-long.540
Poster
2405.20978
[ "https://github.com/calubkk/raat" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.540/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.541.bib
@inproceedings{yan-etal-2024-predicting, title = "Predicting Text Preference Via Structured Comparative Reasoning", author = "Yan, Jing Nathan and Liu, Tianqi and Chiu, Justin and Shen, Jiaming and Qin, Zhen and Yu, Yue and Lakshmanan, Charumathi and Kurzion, Yair and Rush, Alexander and Liu, Jialu and Bendersky, Michael", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.541", pages = "10040--10060", abstract = "Comparative reasoning plays a crucial role in predicting text preferences; however, large language models (LLMs) often demonstrate inconsistencies in their reasoning, leading to incorrect preference predictions. While approaches like Chain-of-Thought improve accuracy in many settings, they struggle to consistently distinguish the similarities and differences of complex texts. We introduce $SC^2$, a model that prompts LLMs to predict text preferences by generating structured intermediate comparisons. $SC^2$ begins by proposing aspects for comparison, followed by generating textual comparisons under each aspect. We select consistent comparisons with a pairwise comparator that ensures each comparison of a given aspect clearly distinguishes differences between texts, significantly reducing hallucination and improving consistency. Our empirical studies across various NLP tasks, including summarization, retrieval, and automatic rating, demonstrate that $SC^2${`}s enhanced performance in text preference prediction is significant.", }
Comparative reasoning plays a crucial role in predicting text preferences; however, large language models (LLMs) often demonstrate inconsistencies in their reasoning, leading to incorrect preference predictions. While approaches like Chain-of-Thought improve accuracy in many settings, they struggle to consistently distinguish the similarities and differences of complex texts. We introduce $SC^2$, a model that prompts LLMs to predict text preferences by generating structured intermediate comparisons. $SC^2$ begins by proposing aspects for comparison, followed by generating textual comparisons under each aspect. We select consistent comparisons with a pairwise comparator that ensures each comparison of a given aspect clearly distinguishes differences between texts, significantly reducing hallucination and improving consistency. Our empirical studies across various NLP tasks, including summarization, retrieval, and automatic rating, demonstrate that $SC^2${`}s enhanced performance in text preference prediction is significant.
[ "Yan, Jing Nathan", "Liu, Tianqi", "Chiu, Justin", "Shen, Jiaming", "Qin, Zhen", "Yu, Yue", "Lakshmanan, Charumathi", "Kurzion, Yair", "Rush, Alex", "er", "Liu, Jialu", "Bendersky, Michael" ]
Predicting Text Preference Via Structured Comparative Reasoning
acl-long.541
Poster
2311.08390
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.541/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.542.bib
@inproceedings{xu-etal-2024-coelm, title = "{C}o{ELM}: Construction-Enhanced Language Modeling", author = "Xu, Lvxiaowei and Gong, Zhilin and Dai, Jianhua and Wang, Tianxiang and Cai, Ming and Peng, Jiawei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.542", pages = "10061--10081", abstract = "Recent studies have shown that integrating constructional information can improve the performance of pre-trained language models (PLMs) in natural language understanding. However, exploration into leveraging constructional information to enhance generative language models for natural language generation has been limited. Additionally, probing studies indicate that PLMs primarily grasp the syntactic structure of constructions but struggle to capture their semantics. In this work, we encode constructions as inductive biases to explicitly embed constructional semantics and guide the generation process. We begin by presenting a construction grammar induction framework designed to automatically identify constructions from corpora. Subsequently, we propose the Construction-Enhanced Language Model (CoELM). It introduces a construction-guided language modeling approach that employs a dynamic sequence reassembly strategy during pre-training. Extensive experiments have demonstrated the superiority of CoELM across various benchmarks.", }
Recent studies have shown that integrating constructional information can improve the performance of pre-trained language models (PLMs) in natural language understanding. However, exploration into leveraging constructional information to enhance generative language models for natural language generation has been limited. Additionally, probing studies indicate that PLMs primarily grasp the syntactic structure of constructions but struggle to capture their semantics. In this work, we encode constructions as inductive biases to explicitly embed constructional semantics and guide the generation process. We begin by presenting a construction grammar induction framework designed to automatically identify constructions from corpora. Subsequently, we propose the Construction-Enhanced Language Model (CoELM). It introduces a construction-guided language modeling approach that employs a dynamic sequence reassembly strategy during pre-training. Extensive experiments have demonstrated the superiority of CoELM across various benchmarks.
[ "Xu, Lvxiaowei", "Gong, Zhilin", "Dai, Jianhua", "Wang, Tianxiang", "Cai, Ming", "Peng, Jiawei" ]
CoELM: Construction-Enhanced Language Modeling
acl-long.542
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.542/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.543.bib
@inproceedings{lei-etal-2024-uni, title = "Uni-Dubbing: Zero-Shot Speech Synthesis from Visual Articulation", author = "Lei, Songju and Cheng, Xize and Lyu, Mengjiao and Hu, Jianqiao and Tan, Jintao and Liu, Runlin and Xiong, Lingyu and Jin, Tao and Li, Xiandong and Zhao, Zhou", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.543", pages = "10082--10099", abstract = "In the field of speech synthesis, there is a growing emphasis on employing multimodal speech to enhance robustness. A key challenge in this area is the scarcity of datasets that pair audio with corresponding video. We employ a methodology that incorporates modality alignment during the pre-training phase on multimodal datasets, uniquely facilitating zero-shot generalization through the process of freezing the video modality feature extraction component and the encoder module within the pretrained weights, thereby enabling effective cross-modal and cross-lingual transfer. We have named this method {`}Uni-Dubbing{'}. Our method finely tunes with both multimodal and single-modality audio data. In multimodal scenarios, it achieves a reduced word error rate (WER) of 31.73{\%}, surpassing the previous best of 33.9{\%}. It also excels in metrics like tone quality and synchronization. With single-modality audio, it achieves a WER of 36.08{\%}, demonstrating adaptability to limited data. Its domain generalization capabilities are proven across various language tasks in video translation and audio generation. Trained on 433 hours of audio data, it surpasses techniques using 200 hours of audiovisual data. The code and demo are available at https://diracer.github.io/unidubbing.", }
In the field of speech synthesis, there is a growing emphasis on employing multimodal speech to enhance robustness. A key challenge in this area is the scarcity of datasets that pair audio with corresponding video. We employ a methodology that incorporates modality alignment during the pre-training phase on multimodal datasets, uniquely facilitating zero-shot generalization through the process of freezing the video modality feature extraction component and the encoder module within the pretrained weights, thereby enabling effective cross-modal and cross-lingual transfer. We have named this method {`}Uni-Dubbing{'}. Our method is fine-tuned with both multimodal and single-modality audio data. In multimodal scenarios, it achieves a reduced word error rate (WER) of 31.73{\%}, surpassing the previous best of 33.9{\%}. It also excels in metrics like tone quality and synchronization. With single-modality audio, it achieves a WER of 36.08{\%}, demonstrating adaptability to limited data. Its domain generalization capabilities are demonstrated across various language tasks in video translation and audio generation. Trained on 433 hours of audio data, it surpasses techniques using 200 hours of audiovisual data. The code and demo are available at https://diracer.github.io/unidubbing.
[ "Lei, Songju", "Cheng, Xize", "Lyu, Mengjiao", "Hu, Jianqiao", "Tan, Jintao", "Liu, Runlin", "Xiong, Lingyu", "Jin, Tao", "Li, Xi", "ong", "Zhao, Zhou" ]
Uni-Dubbing: Zero-Shot Speech Synthesis from Visual Articulation
acl-long.543
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.543/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.544.bib
@inproceedings{williams-aletras-2024-impact, title = "On the Impact of Calibration Data in Post-training Quantization and Pruning", author = "Williams, Miles and Aletras, Nikolaos", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.544", pages = "10100--10118", abstract = "Quantization and pruning form the foundation of compression for neural networks, enabling efficient inference for large language models (LLMs). Recently, various quantization and pruning techniques have demonstrated remarkable performance in a post-training setting. They rely upon calibration data, a small set of unlabeled examples that are used to generate layer activations. However, no prior work has systematically investigated how the calibration data impacts the effectiveness of model compression methods. In this paper, we present the first extensive empirical study on the effect of calibration data upon LLM performance. We trial a variety of quantization and pruning methods, datasets, tasks, and models. Surprisingly, we find substantial variations in downstream task performance, contrasting existing work that suggests a greater level of robustness to the calibration data. Finally, we make a series of recommendations for the effective use of calibration data in LLM quantization and pruning.", }
Quantization and pruning form the foundation of compression for neural networks, enabling efficient inference for large language models (LLMs). Recently, various quantization and pruning techniques have demonstrated remarkable performance in a post-training setting. They rely upon calibration data, a small set of unlabeled examples that are used to generate layer activations. However, no prior work has systematically investigated how the calibration data impacts the effectiveness of model compression methods. In this paper, we present the first extensive empirical study on the effect of calibration data upon LLM performance. We trial a variety of quantization and pruning methods, datasets, tasks, and models. Surprisingly, we find substantial variations in downstream task performance, contrasting existing work that suggests a greater level of robustness to the calibration data. Finally, we make a series of recommendations for the effective use of calibration data in LLM quantization and pruning.
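(Illustrative aside on the abstract above.) A generic sketch of the calibration step described here, not tied to any particular quantization or pruning library; the model name and sample count are placeholders. A small set of unlabeled texts is run through the model while forward hooks record the inputs seen by each Linear layer, which a post-training quantizer or pruner would then consume.

import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

NAME = "facebook/opt-125m"  # small stand-in model, not one of the models studied
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForCausalLM.from_pretrained(NAME).eval()

corpus = ["Example unlabeled sentence one.", "Example unlabeled sentence two."]  # any text source
calib = random.sample(corpus, k=min(128, len(corpus)))  # calibration sets are often ~128 samples

captured = {}  # layer name -> list of recorded input activations

def make_hook(layer_name):
    def hook(module, inputs, output):
        captured.setdefault(layer_name, []).append(inputs[0].detach())
    return hook

handles = [m.register_forward_hook(make_hook(n))
           for n, m in model.named_modules() if isinstance(m, torch.nn.Linear)]

with torch.no_grad():
    for text in calib:
        model(**tok(text, return_tensors="pt"))

for h in handles:
    h.remove()

The choice of `corpus` is exactly the variable the paper studies: swapping the source of these few unlabeled examples can change downstream accuracy of the compressed model.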
[ "Williams, Miles", "Aletras, Nikolaos" ]
On the Impact of Calibration Data in Post-training Quantization and Pruning
acl-long.544
Poster
2311.09755
[ "" ]
https://huggingface.co/papers/2311.09755
0
0
0
2
https://aclanthology.org/2024.acl-long.544/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.545.bib
@inproceedings{agarwal-etal-2024-symkgqa, title = "{S}ym{KGQA}: Few-Shot Knowledge Graph Question Answering via Symbolic Program Generation and Execution", author = "Agarwal, Prerna and Kumar, Nishant and Bedathur, Srikanta", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.545", pages = "10119--10140", abstract = "Semantic Parsing of natural language questions into their executable logical form (LF) has shown state-of-the-art (SOTA) performance for Knowledge Graph Question Answering (KGQA). However, these methods are not applicable for real-world applications, due to lack of KG-specific training data. Recent advances in the capabilities of Large Language Models (LLMs) has led towards generating low-level LFs such as SPARQL and S-Expression in a few-shot setting. Unfortunately, these methods: (1) are limited to the knowledge of underlying LLM about the LF, (2) performs inferior for the harder complex benchmarks such as KQA Pro, (3) suffers while grounding the generated LF to a specific Knowledge Graph. Recently, a new LF called KoPL has been introduced that explicitly models complex reasoning process step-by-step in a symbolic manner and has shown SOTA on KQA Pro in fully-supervised setting. Inspired by this, we propose SymKGQA framework that generates step-by-step Symbolic LF i.e., KoPL in a few-shot in-context learning setting using LLM. Our framework is not dependent on pre-trained information of LLM about KoPL. We further build a Retrieval-Augmented Generation based Question-Aware Contextual KoPL (QUACK) resolver to ground the generated LF. Our experiments with different LLMs and few-shot settings demonstrate that SymKGQA outperforms all other few-shot and even many of the fully-supervised KGQA approaches.", }
Semantic parsing of natural language questions into their executable logical form (LF) has shown state-of-the-art (SOTA) performance for Knowledge Graph Question Answering (KGQA). However, these methods are not applicable in real-world settings due to the lack of KG-specific training data. Recent advances in the capabilities of Large Language Models (LLMs) have enabled the generation of low-level LFs such as SPARQL and S-Expression in a few-shot setting. Unfortunately, these methods (1) are limited by the underlying LLM{'}s knowledge of the LF, (2) perform poorly on harder, complex benchmarks such as KQA Pro, and (3) struggle to ground the generated LF to a specific Knowledge Graph. Recently, a new LF called KoPL has been introduced that explicitly models the complex reasoning process step by step in a symbolic manner and has shown SOTA performance on KQA Pro in the fully-supervised setting. Inspired by this, we propose the SymKGQA framework, which generates a step-by-step symbolic LF, i.e., KoPL, in a few-shot in-context learning setting using an LLM. Our framework does not depend on the LLM{'}s pre-trained knowledge of KoPL. We further build a Retrieval-Augmented Generation based Question-Aware Contextual KoPL (QUACK) resolver to ground the generated LF. Our experiments with different LLMs and few-shot settings demonstrate that SymKGQA outperforms all other few-shot and even many fully-supervised KGQA approaches.
[ "Agarwal, Prerna", "Kumar, Nishant", "Bedathur, Srikanta" ]
SymKGQA: Few-Shot Knowledge Graph Question Answering via Symbolic Program Generation and Execution
acl-long.545
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.545/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.546.bib
@inproceedings{lei-etal-2024-meta, title = "Meta-Task Prompting Elicits Embeddings from Large Language Models", author = "Lei, Yibin and Wu, Di and Zhou, Tianyi and Shen, Tao and Cao, Yu and Tao, Chongyang and Yates, Andrew", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.546", pages = "10141--10157", abstract = "We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without the need for model fine-tuning. Leveraging meta-task prompting, MetaEOL guides LLMs to produce embeddings through a series of carefully designed prompts that address multiple representational aspects. Our comprehensive experiments demonstrate that embeddings averaged from various meta-tasks are versatile embeddings that yield competitive performance on Semantic Textual Similarity (STS) benchmarks and excel in downstream tasks, surpassing contrastive-trained models. Our findings suggest a new scaling law, offering a versatile and resource-efficient approach for embedding generation across diverse scenarios.", }
We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without the need for model fine-tuning. Leveraging meta-task prompting, MetaEOL guides LLMs to produce embeddings through a series of carefully designed prompts that address multiple representational aspects. Our comprehensive experiments demonstrate that embeddings averaged from various meta-tasks are versatile embeddings that yield competitive performance on Semantic Textual Similarity (STS) benchmarks and excel in downstream tasks, surpassing contrastive-trained models. Our findings suggest a new scaling law, offering a versatile and resource-efficient approach for embedding generation across diverse scenarios.
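(Illustrative aside on the abstract above.) A minimal sketch of meta-task prompting for embeddings; the backbone and the prompt templates below are placeholders, not the paper's exact ones. Each meta-task wraps the sentence in a prompt ending in a one-word slot, the last-token hidden state is taken per prompt, and the per-task views are averaged into a single embedding.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BACKBONE = "meta-llama/Llama-2-7b-hf"  # assumed backbone; any causal LM works
tok = AutoTokenizer.from_pretrained(BACKBONE)
lm = AutoModelForCausalLM.from_pretrained(BACKBONE, output_hidden_states=True).eval()

META_TASKS = [  # hypothetical templates targeting different representational aspects
    'This sentence: "{s}" means in one word: "',
    'The sentiment of this sentence: "{s}" in one word is: "',
    'The topic of this sentence: "{s}" in one word is: "',
]

@torch.no_grad()
def embed(sentence: str) -> torch.Tensor:
    views = []
    for template in META_TASKS:
        inputs = tok(template.format(s=sentence), return_tensors="pt")
        out = lm(**inputs)
        views.append(out.hidden_states[-1][0, -1])  # last layer, last token
    return torch.stack(views).mean(dim=0)  # average across meta-tasks

Averaging keeps the method training-free: broader aspect coverage comes from adding templates rather than from fine-tuning the backbone.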
[ "Lei, Yibin", "Wu, Di", "Zhou, Tianyi", "Shen, Tao", "Cao, Yu", "Tao, Chongyang", "Yates, Andrew" ]
Meta-Task Prompting Elicits Embeddings from Large Language Models
acl-long.546
Poster
2402.18458
[ "https://github.com/yibin-lei/metaeol" ]
https://huggingface.co/papers/2402.18458
1
0
0
7
https://aclanthology.org/2024.acl-long.546/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.547.bib
@inproceedings{li-etal-2024-sentiment, title = "A Sentiment Consolidation Framework for Meta-Review Generation", author = "Li, Miao and Lau, Jey Han and Hovy, Eduard", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.547", pages = "10158--10177", abstract = "Modern natural language generation systems with Large Language Models (LLMs) exhibit the capability to generate a plausible summary of multiple documents; however, it is uncertain if they truly possess the capability of information consolidation to generate summaries, especially on documents with opinionated information. We focus on meta-review generation, a form of sentiment summarisation for the scientific domain. To make scientific sentiment summarization more grounded, we hypothesize that human meta-reviewers follow a three-layer framework of sentiment consolidation to write meta-reviews. Based on the framework, we propose novel prompting methods for LLMs to generate meta-reviews and evaluation metrics to assess the quality of generated meta-reviews. Our framework is validated empirically as we find that prompting LLMs based on the framework {---} compared with prompting them with simple instructions {---} generates better meta-reviews.", }
Modern natural language generation systems with Large Language Models (LLMs) exhibit the capability to generate a plausible summary of multiple documents; however, it is uncertain if they truly possess the capability of information consolidation to generate summaries, especially on documents with opinionated information. We focus on meta-review generation, a form of sentiment summarisation for the scientific domain. To make scientific sentiment summarization more grounded, we hypothesize that human meta-reviewers follow a three-layer framework of sentiment consolidation to write meta-reviews. Based on the framework, we propose novel prompting methods for LLMs to generate meta-reviews and evaluation metrics to assess the quality of generated meta-reviews. Our framework is validated empirically as we find that prompting LLMs based on the framework {---} compared with prompting them with simple instructions {---} generates better meta-reviews.
[ "Li, Miao", "Lau, Jey Han", "Hovy, Eduard" ]
A Sentiment Consolidation Framework for Meta-Review Generation
acl-long.547
Poster
2402.18005
[ "https://github.com/oaimli/metareviewinglogic" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.547/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.548.bib
@inproceedings{zhou-etal-2024-revisiting-structured, title = "Revisiting Structured Sentiment Analysis as Latent Dependency Graph Parsing", author = "Zhou, Chengjie and Li, Bobo and Fei, Hao and Li, Fei and Teng, Chong and Ji, Donghong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.548", pages = "10178--10191", abstract = "Structured Sentiment Analysis (SSA) was cast as a problem of bi-lexical dependency graph parsing by prior studies.Multiple formulations have been proposed to construct the graph, which share several intrinsic drawbacks:(1) The internal structures of spans are neglected, thus only the boundary tokens of spans are used for relation prediction and span recognition, thus hindering the model{'}s expressiveness;(2) Long spans occupy a significant proportion in the SSA datasets, which further exacerbates the problem of internal structure neglect.In this paper, we treat the SSA task as a dependency parsing task on partially-observed dependency trees, regarding flat spans without determined tree annotations as latent subtrees to consider internal structures of spans.We propose a two-stage parsing method and leverage TreeCRFs with a novel constrained inside algorithm to model latent structures explicitly, which also takes advantages of joint scoring graph arcs and headed spans for global optimization and inference. Results of extensive experiments on five benchmark datasets reveal that our method performs significantly better than all previous bi-lexical methods, achieving new state-of-the-art.", }
Structured Sentiment Analysis (SSA) was cast as a problem of bi-lexical dependency graph parsing by prior studies. Multiple formulations have been proposed to construct the graph, which share several intrinsic drawbacks: (1) the internal structures of spans are neglected, so only the boundary tokens of spans are used for relation prediction and span recognition, hindering the model{'}s expressiveness; (2) long spans account for a significant proportion of the SSA datasets, which further exacerbates the problem of neglecting internal structure. In this paper, we treat the SSA task as a dependency parsing task on partially-observed dependency trees, regarding flat spans without determined tree annotations as latent subtrees in order to account for the internal structures of spans. We propose a two-stage parsing method and leverage TreeCRFs with a novel constrained inside algorithm to model latent structures explicitly, which also takes advantage of jointly scoring graph arcs and headed spans for global optimization and inference. Results of extensive experiments on five benchmark datasets reveal that our method performs significantly better than all previous bi-lexical methods, achieving a new state of the art.
[ "Zhou, Chengjie", "Li, Bobo", "Fei, Hao", "Li, Fei", "Teng, Chong", "Ji, Donghong" ]
Revisiting Structured Sentiment Analysis as Latent Dependency Graph Parsing
acl-long.548
Poster
2407.04801
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.548/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.549.bib
@inproceedings{peng-etal-2024-owsm, title = "{OWSM}-{CTC}: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification", author = "Peng, Yifan and Sudo, Yui and Shakeel, Muhammad and Watanabe, Shinji", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.549", pages = "10192--10209", abstract = "There has been an increasing interest in large speech models that can perform multiple tasks in a single model. Such models usually adopt an encoder-decoder or decoder-only architecture due to their popularity and good performance in many domains. However, autoregressive models can be slower during inference compared to non-autoregressive models and also have potential risks of hallucination. Though prior studies observed promising results of non-autoregressive models for certain tasks at small scales, it remains unclear if they can be scaled to speech-to-text generation in diverse languages and tasks. Inspired by the Open Whisper-style Speech Model (OWSM) project, we propose OWSM-CTC, a novel encoder-only speech foundation model based on Connectionist Temporal Classification (CTC). It is trained on 180k hours of public audio data for multilingual automatic speech recognition (ASR), speech translation (ST), and language identification (LID). Compared to encoder-decoder OWSM, our OWSM-CTC achieves competitive results on ASR and up to 24{\%} relative improvement on ST, while it is more robust and 3 to 4 times faster for inference. OWSM-CTC also improves the long-form ASR result with 20x speed-up.We will publicly release our code, pre-trained model, and training logs to promote open science in speech foundation models.", }
There has been an increasing interest in large speech models that can perform multiple tasks in a single model. Such models usually adopt an encoder-decoder or decoder-only architecture due to their popularity and good performance in many domains. However, autoregressive models can be slower during inference compared to non-autoregressive models and also have potential risks of hallucination. Though prior studies observed promising results of non-autoregressive models for certain tasks at small scales, it remains unclear if they can be scaled to speech-to-text generation in diverse languages and tasks. Inspired by the Open Whisper-style Speech Model (OWSM) project, we propose OWSM-CTC, a novel encoder-only speech foundation model based on Connectionist Temporal Classification (CTC). It is trained on 180k hours of public audio data for multilingual automatic speech recognition (ASR), speech translation (ST), and language identification (LID). Compared to encoder-decoder OWSM, our OWSM-CTC achieves competitive results on ASR and up to 24{\%} relative improvement on ST, while it is more robust and 3 to 4 times faster for inference. OWSM-CTC also improves the long-form ASR result with 20x speed-up. We will publicly release our code, pre-trained model, and training logs to promote open science in speech foundation models.
[ "Peng, Yifan", "Sudo, Yui", "Shakeel, Muhammad", "Watanabe, Shinji" ]
OWSM-CTC: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification
acl-long.549
Poster
2402.12654
[ "" ]
https://huggingface.co/papers/2402.12654
2
1
0
4
https://aclanthology.org/2024.acl-long.549/
[ "pyf98/owsm_ctc_v3.1_1B" ]
[]
[]
1
https://aclanthology.org/2024.acl-long.550.bib
@inproceedings{yang-etal-2024-large-language-models, title = "Do Large Language Models Latently Perform Multi-Hop Reasoning?", author = "Yang, Sohee and Gribovskaya, Elena and Kassner, Nora and Geva, Mor and Riedel, Sebastian", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.550", pages = "10210--10229", abstract = "We study whether Large Language Models (LLMs) latently perform multi-hop reasoning with complex prompts such as {``}The mother of the singer of {`}Superstition{'} is{''}. We look for evidence of a latent reasoning pathway where an LLM (1) latently identifies {``}the singer of {`}Superstition{'}{''} as Stevie Wonder, the bridge entity, and (2) uses its knowledge of Stevie Wonder{'}s mother to complete the prompt. We analyze these two hops individually and consider their co-occurrence as indicative of latent multi-hop reasoning. For the first hop, we test if changing the prompt to indirectly mention the bridge entity instead of any other entity increases the LLM{'}s internal recall of the bridge entity. For the second hop, we test if increasing this recall causes the LLM to better utilize what it knows about the bridge entity. We find strong evidence of latent multi-hop reasoning for the prompts of certain relation types, with the reasoning pathway used in more than 80{\%} of the prompts. However, the utilization is highly contextual, varying across different types of prompts. Also, on average, the evidence for the second hop and the full multi-hop traversal is rather moderate and only substantial for the first hop. Moreover, we find a clear scaling trend with increasing model size for the first hop of reasoning but not for the second hop. Our experimental findings suggest potential challenges and opportunities for future development and applications of LLMs.", }
We study whether Large Language Models (LLMs) latently perform multi-hop reasoning with complex prompts such as {``}The mother of the singer of {`}Superstition{'} is{''}. We look for evidence of a latent reasoning pathway where an LLM (1) latently identifies {``}the singer of {`}Superstition{'}{''} as Stevie Wonder, the bridge entity, and (2) uses its knowledge of Stevie Wonder{'}s mother to complete the prompt. We analyze these two hops individually and consider their co-occurrence as indicative of latent multi-hop reasoning. For the first hop, we test if changing the prompt to indirectly mention the bridge entity instead of any other entity increases the LLM{'}s internal recall of the bridge entity. For the second hop, we test if increasing this recall causes the LLM to better utilize what it knows about the bridge entity. We find strong evidence of latent multi-hop reasoning for the prompts of certain relation types, with the reasoning pathway used in more than 80{\%} of the prompts. However, the utilization is highly contextual, varying across different types of prompts. Also, on average, the evidence for the second hop and the full multi-hop traversal is rather moderate and only substantial for the first hop. Moreover, we find a clear scaling trend with increasing model size for the first hop of reasoning but not for the second hop. Our experimental findings suggest potential challenges and opportunities for future development and applications of LLMs.
[ "Yang, Sohee", "Gribovskaya, Elena", "Kassner, Nora", "Geva, Mor", "Riedel, Sebastian" ]
Do Large Language Models Latently Perform Multi-Hop Reasoning?
acl-long.550
Poster
2402.16837
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.550/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.551.bib
@inproceedings{li-etal-2024-mugglemath, title = "{M}uggle{M}ath: Assessing the Impact of Query and Response Augmentation on Math Reasoning", author = "Li, Chengpeng and Yuan, Zheng and Yuan, Hongyi and Dong, Guanting and Lu, Keming and Wu, Jiancan and Tan, Chuanqi and Wang, Xiang and Zhou, Chang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.551", pages = "10230--10258", abstract = "In math reasoning with large language models (LLMs), fine-tuning data augmentation by query evolution and diverse reasoning paths is empirically verified effective, profoundly narrowing the gap between open-sourced LLMs and cutting-edge proprietary LLMs. In this paper, we conduct an investigation for such data augmentation in math reasoning and are intended to answer: (1) What strategies of data augmentation are more effective; (2) What is the scaling relationship between the amount of augmented data and model performance; and (3) Can data augmentation incentivize generalization to out-of-domain mathematical reasoning tasks?To this end, we create two new dataset AugGSM8K and AugMATH, by complicating and diversifying the queries and sampling multiple reasoning paths from GSM8K and MATH.We obtained a series of LLMs called MuggleMath by fine-tuning LLaMA models on AugGSM8K and AugMATH. MuggleMath substantially achieves new state-of-the-art on GSM8K and MATH.A log-linear relationship and a segmented log-linear are presented between MuggleMath{'}s performance and the amount of augmented data on GSM8K and MATH, respectively.We also find that it is weak in out-of-domain math reasoning generalization from AugGSM8K to MATH and from AugMATH to GSM8K, which suggests that augmenting queries that cover a broader range of subjects is more beneficial for generalization.", }
In math reasoning with large language models (LLMs), fine-tuning data augmentation by query evolution and diverse reasoning paths is empirically verified effective, profoundly narrowing the gap between open-sourced LLMs and cutting-edge proprietary LLMs. In this paper, we investigate such data augmentation in math reasoning and aim to answer: (1) What strategies of data augmentation are more effective; (2) What is the scaling relationship between the amount of augmented data and model performance; and (3) Can data augmentation incentivize generalization to out-of-domain mathematical reasoning tasks? To this end, we create two new datasets, AugGSM8K and AugMATH, by complicating and diversifying the queries and sampling multiple reasoning paths from GSM8K and MATH. We obtained a series of LLMs called MuggleMath by fine-tuning LLaMA models on AugGSM8K and AugMATH. MuggleMath substantially achieves new state-of-the-art results on GSM8K and MATH. A log-linear relationship and a segmented log-linear relationship are presented between MuggleMath{'}s performance and the amount of augmented data on GSM8K and MATH, respectively. We also find that it is weak in out-of-domain math reasoning generalization from AugGSM8K to MATH and from AugMATH to GSM8K, which suggests that augmenting queries that cover a broader range of subjects is more beneficial for generalization.
[ "Li, Chengpeng", "Yuan, Zheng", "Yuan, Hongyi", "Dong, Guanting", "Lu, Keming", "Wu, Jiancan", "Tan, Chuanqi", "Wang, Xiang", "Zhou, Chang" ]
MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning
acl-long.551
Poster
2310.05506
[ "https://github.com/ofa-sys/gsm8k-screl" ]
https://huggingface.co/papers/2310.05506
2
1
0
8
https://aclanthology.org/2024.acl-long.551/
[ "OFA-Sys/MuggleMath_13B", "OFA-Sys/MuggleMath_7B" ]
[]
[]
1
https://aclanthology.org/2024.acl-long.552.bib
@inproceedings{gupta-etal-2024-harnessing, title = "Harnessing Toulmin{'}s theory for zero-shot argument explication", author = "Gupta, Ankita and Zuckerman, Ethan and O{'}Connor, Brendan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.552", pages = "10259--10276", abstract = "To better analyze informal arguments on public forums, we propose the task of argument explication, which makes explicit a text{'}s argumentative structure and implicit reasoning by outputting triples of propositions ⟨claim, reason warrant⟩. The three slots, or argument components, are derived from the widely known Toulmin (1958) model of argumentation. While prior research applies Toulmin or related theories to annotate datasets and train supervised models, we develop an effective method to prompt generative large language models (LMs) to output explicitly named argument components proposed by Toulmin by prompting with the theory name (e.g., {`}According to Toulmin model{'}). We evaluate the outputs{'} coverage and validity through a human study and automatic evaluation based on prior argumentation datasets and perform robustness checks over alternative LMs, prompts, and argumentation theories. Finally, we conduct a proof-of-concept case study to extract an interpretable argumentation (hyper)graph from a large corpus of critical public comments on whether to allow the COVID-19 vaccine for children, suggesting future directions for corpus analysis and argument visualization.", }
To better analyze informal arguments on public forums, we propose the task of argument explication, which makes explicit a text{'}s argumentative structure and implicit reasoning by outputting triples of propositions ⟨claim, reason, warrant⟩. The three slots, or argument components, are derived from the widely known Toulmin (1958) model of argumentation. While prior research applies Toulmin or related theories to annotate datasets and train supervised models, we develop an effective method to prompt generative large language models (LMs) to output explicitly named argument components proposed by Toulmin by prompting with the theory name (e.g., {`}According to Toulmin model{'}). We evaluate the outputs{'} coverage and validity through a human study and automatic evaluation based on prior argumentation datasets and perform robustness checks over alternative LMs, prompts, and argumentation theories. Finally, we conduct a proof-of-concept case study to extract an interpretable argumentation (hyper)graph from a large corpus of critical public comments on whether to allow the COVID-19 vaccine for children, suggesting future directions for corpus analysis and argument visualization.
[ "Gupta, Ankita", "Zuckerman, Ethan", "O{'}Connor, Brendan" ]
Harnessing Toulmin's theory for zero-shot argument explication
acl-long.552
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.552/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.553.bib
@inproceedings{latouche-etal-2024-binaryalign, title = "{B}inary{A}lign: Word Alignment as Binary Sequence Labeling", author = "Latouche, Gaetan and Carbonneau, Marc-Andr{\'e} and Swanson, Benjamin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.553", pages = "10277--10288", abstract = "Real world deployments of word alignment are almost certain to cover both high and low resource languages. However, the state-of-the-art for this task recommends a different model class depending on the availability of gold alignment training data for a particular language pair. We propose BinaryAlign, a novel word alignment technique based on binary sequence labeling that outperforms existing approaches in both scenarios, offering a unifying approach to the task. Additionally, we vary the specific choice of multilingual foundation model, perform stratified error analysis over alignment error type, and explore the performance of BinaryAlign on non-English language pairs. We make our source code publicly available.", }
Real world deployments of word alignment are almost certain to cover both high and low resource languages. However, the state-of-the-art for this task recommends a different model class depending on the availability of gold alignment training data for a particular language pair. We propose BinaryAlign, a novel word alignment technique based on binary sequence labeling that outperforms existing approaches in both scenarios, offering a unifying approach to the task. Additionally, we vary the specific choice of multilingual foundation model, perform stratified error analysis over alignment error type, and explore the performance of BinaryAlign on non-English language pairs. We make our source code publicly available.
[ "Latouche, Gaetan", "Carbonneau, Marc-Andr{\\'e}", "Swanson, Benjamin" ]
BinaryAlign: Word Alignment as Binary Sequence Labeling
acl-long.553
Poster
2407.12881
[ "" ]
https://huggingface.co/papers/2407.12881
0
0
0
3
https://aclanthology.org/2024.acl-long.553/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.554.bib
@inproceedings{hu-collier-2024-quantifying, title = "Quantifying the Persona Effect in {LLM} Simulations", author = "Hu, Tiancheng and Collier, Nigel", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.554", pages = "10289--10307", abstract = "Large language models (LLMs) have shown remarkable promise in simulating human language and behavior. This study investigates how integrating persona variables{---}demographic, social, and behavioral factors{---}impacts LLMs{'} ability to simulate diverse perspectives. We find that persona variables account for {\textless}10{\%} variance in annotations in existing subjective NLP datasets. Nonetheless, incorporating persona variables via prompting in LLMs provides modest but statistically significant improvements. Persona prompting is most effective in samples where many annotators disagree, but their disagreements are relatively minor. Notably, we find a linear relationship in our setting: the stronger the correlation between persona variables and human annotations, the more accurate the LLM predictions are using persona prompting. In a zero-shot setting, a powerful 70b model with persona prompting captures 81{\%} of the annotation variance achievable by linear regression trained on ground truth annotations. However, for most subjective NLP datasets, where persona variables have limited explanatory power, the benefits of persona prompting are limited.", }
Large language models (LLMs) have shown remarkable promise in simulating human language and behavior. This study investigates how integrating persona variables{---}demographic, social, and behavioral factors{---}impacts LLMs{'} ability to simulate diverse perspectives. We find that persona variables account for {\textless}10{\%} variance in annotations in existing subjective NLP datasets. Nonetheless, incorporating persona variables via prompting in LLMs provides modest but statistically significant improvements. Persona prompting is most effective in samples where many annotators disagree, but their disagreements are relatively minor. Notably, we find a linear relationship in our setting: the stronger the correlation between persona variables and human annotations, the more accurate the LLM predictions are using persona prompting. In a zero-shot setting, a powerful 70b model with persona prompting captures 81{\%} of the annotation variance achievable by linear regression trained on ground truth annotations. However, for most subjective NLP datasets, where persona variables have limited explanatory power, the benefits of persona prompting are limited.
[ "Hu, Tiancheng", "Collier, Nigel" ]
Quantifying the Persona Effect in LLM Simulations
acl-long.554
Poster
2402.10811
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.554/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.555.bib
@inproceedings{balepur-etal-2024-artifacts, title = "Artifacts or Abduction: How Do {LLM}s Answer Multiple-Choice Questions Without the Question?", author = "Balepur, Nishant and Ravichander, Abhilasha and Rudinger, Rachel", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.555", pages = "10308--10330", abstract = "Multiple-choice question answering (MCQA) is often used to evaluate large language models (LLMs). To see if MCQA assesses LLMs as intended, we probe if LLMs can perform MCQA with choices-only prompts, where models must select the correct answer only from the choices. In three MCQA datasets and four LLMs, this prompt bests a majority baseline in 11/12 cases, with up to 0.33 accuracy gain. To help explain this behavior, we conduct an in-depth, black-box analysis on memorization, choice dynamics, and question inference. Our key findings are threefold. First, we find no evidence that the choices-only accuracy stems from memorization alone. Second, priors over individual choices do not fully explain choices-only accuracy, hinting that LLMs use the group dynamics of choices. Third, LLMs have some ability to infer a relevant question from choices, and surprisingly can sometimes even match the original question. We hope to motivate the use of stronger baselines in MCQA benchmarks, the design of robust MCQA datasets, and further efforts to explain LLM decision-making.", }
Multiple-choice question answering (MCQA) is often used to evaluate large language models (LLMs). To see if MCQA assesses LLMs as intended, we probe if LLMs can perform MCQA with choices-only prompts, where models must select the correct answer only from the choices. In three MCQA datasets and four LLMs, this prompt bests a majority baseline in 11/12 cases, with up to 0.33 accuracy gain. To help explain this behavior, we conduct an in-depth, black-box analysis on memorization, choice dynamics, and question inference. Our key findings are threefold. First, we find no evidence that the choices-only accuracy stems from memorization alone. Second, priors over individual choices do not fully explain choices-only accuracy, hinting that LLMs use the group dynamics of choices. Third, LLMs have some ability to infer a relevant question from choices, and surprisingly can sometimes even match the original question. We hope to motivate the use of stronger baselines in MCQA benchmarks, the design of robust MCQA datasets, and further efforts to explain LLM decision-making.
[ "Balepur, Nishant", "Ravich", "er, Abhilasha", "Rudinger, Rachel" ]
Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question?
acl-long.555
Poster
2402.12483
[ "https://github.com/nbalepur/mcqa-artifacts" ]
https://huggingface.co/papers/2402.12483
1
0
0
3
https://aclanthology.org/2024.acl-long.555/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.556.bib
@inproceedings{yue-etal-2024-retrieval, title = "Retrieval Augmented Fact Verification by Synthesizing Contrastive Arguments", author = "Yue, Zhenrui and Zeng, Huimin and Shang, Lanyu and Liu, Yifan and Zhang, Yang and Wang, Dong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.556", pages = "10331--10343", abstract = "The rapid propagation of misinformation poses substantial risks to public interest. To combat misinformation, large language models (LLMs) are adapted to automatically verify claim credibility. Nevertheless, existing methods heavily rely on the embedded knowledge within LLMs and / or black-box APIs for evidence collection, leading to subpar performance with smaller LLMs or upon unreliable context. In this paper, we propose retrieval augmented fact verification through the synthesis of contrasting arguments (RAFTS). Upon input claims, RAFTS starts with evidence retrieval, where we design a retrieval pipeline to collect and re-rank relevant documents from verifiable sources. Then, RAFTS forms contrastive arguments (i.e., supporting or refuting) conditioned on the retrieved evidence. In addition, RAFTS leverages an embedding model to identify informative demonstrations, followed by in-context prompting to generate the prediction and explanation. Our method effectively retrieves relevant documents as evidence and evaluates arguments from varying perspectives, incorporating nuanced information for fine-grained decision-making. Combined with informative in-context examples as prior, RAFTS achieves significant improvements to supervised and LLM baselines without complex prompts. We demonstrate the effectiveness of our method through extensive experiments, where RAFTS can outperform GPT-based methods with a significantly smaller 7B LLM.", }
The rapid propagation of misinformation poses substantial risks to public interest. To combat misinformation, large language models (LLMs) are adapted to automatically verify claim credibility. Nevertheless, existing methods heavily rely on the embedded knowledge within LLMs and / or black-box APIs for evidence collection, leading to subpar performance with smaller LLMs or upon unreliable context. In this paper, we propose retrieval augmented fact verification through the synthesis of contrasting arguments (RAFTS). Upon input claims, RAFTS starts with evidence retrieval, where we design a retrieval pipeline to collect and re-rank relevant documents from verifiable sources. Then, RAFTS forms contrastive arguments (i.e., supporting or refuting) conditioned on the retrieved evidence. In addition, RAFTS leverages an embedding model to identify informative demonstrations, followed by in-context prompting to generate the prediction and explanation. Our method effectively retrieves relevant documents as evidence and evaluates arguments from varying perspectives, incorporating nuanced information for fine-grained decision-making. Combined with informative in-context examples as prior, RAFTS achieves significant improvements to supervised and LLM baselines without complex prompts. We demonstrate the effectiveness of our method through extensive experiments, where RAFTS can outperform GPT-based methods with a significantly smaller 7B LLM.
[ "Yue, Zhenrui", "Zeng, Huimin", "Shang, Lanyu", "Liu, Yifan", "Zhang, Yang", "Wang, Dong" ]
Retrieval Augmented Fact Verification by Synthesizing Contrastive Arguments
acl-long.556
Poster
2406.09815
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.556/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.557.bib
@inproceedings{fernandez-etal-2024-syllabusqa, title = "{S}yllabus{QA}: A Course Logistics Question Answering Dataset", author = "Fernandez, Nigel and Scarlatos, Alexander and Lan, Andrew", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.557", pages = "10344--10369", abstract = "Automated teaching assistants and chatbots have significant potential to reduce the workload of human instructors, especially for logistics-related question answering, which is important to students yet repetitive for instructors. However, due to privacy concerns, there is a lack of publicly available datasets. We introduce SyllabusQA, an open-source dataset with 63 real course syllabi covering 36 majors, containing 5,078 open-ended course logistics-related question-answer pairs that are diverse in both question types and answer formats. Since many logistics-related questions contain critical information like the date of an exam, it is important to evaluate the factuality of answers. We benchmark several strong baselines on this task, from large language model prompting to retrieval-augmented generation. We introduce Fact-QA, an LLM-based (GPT-4) evaluation metric to evaluate the factuality of predicted answers. We find that despite performing close to humans on traditional metrics of textual similarity, there remains a significant gap between automated approaches and humans in terms of fact precision.", }
Automated teaching assistants and chatbots have significant potential to reduce the workload of human instructors, especially for logistics-related question answering, which is important to students yet repetitive for instructors. However, due to privacy concerns, there is a lack of publicly available datasets. We introduce SyllabusQA, an open-source dataset with 63 real course syllabi covering 36 majors, containing 5,078 open-ended course logistics-related question-answer pairs that are diverse in both question types and answer formats. Since many logistics-related questions contain critical information like the date of an exam, it is important to evaluate the factuality of answers. We benchmark several strong baselines on this task, from large language model prompting to retrieval-augmented generation. We introduce Fact-QA, an LLM-based (GPT-4) evaluation metric to evaluate the factuality of predicted answers. We find that despite performing close to humans on traditional metrics of textual similarity, there remains a significant gap between automated approaches and humans in terms of fact precision.
[ "Fern", "ez, Nigel", "Scarlatos, Alex", "er", "Lan, Andrew" ]
SyllabusQA: A Course Logistics Question Answering Dataset
acl-long.557
Poster
2403.14666
[ "https://github.com/umass-ml4ed/syllabusqa" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.557/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.558.bib
@inproceedings{wen-etal-2024-mindmap, title = "{M}ind{M}ap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models", author = "Wen, Yilin and Wang, Zifeng and Sun, Jimeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.558", pages = "10370--10388", abstract = "Large language models (LLMs) have achieved remarkable performance in natural language understanding and generation tasks. However, they often suffer from limitations such as difficulty in incorporating new knowledge, generating hallucinations, and explaining their reasoning process. To address these challenges, we propose a novel prompting pipeline, named MindMap, that leverages knowledge graphs (KGs) to enhance LLMs{'} inference and transparency. Our method enables LLMs to comprehend KG inputs and infer with a combination of implicit and external knowledge. Moreover, our method elicits the mind map of LLMs, which reveals their reasoning pathways based on the ontology of knowledge. We evaluate our method on diverse question {\&} answering tasks, especially in medical domains, and show significant improvements over baselines. We also introduce a new hallucination evaluation benchmark and analyze the effects of different components of our method. Our results demonstrate the effectiveness and robustness of our method in merging knowledge from LLMs and KGs for combined inference.", }
Large language models (LLMs) have achieved remarkable performance in natural language understanding and generation tasks. However, they often suffer from limitations such as difficulty in incorporating new knowledge, generating hallucinations, and explaining their reasoning process. To address these challenges, we propose a novel prompting pipeline, named MindMap, that leverages knowledge graphs (KGs) to enhance LLMs{'} inference and transparency. Our method enables LLMs to comprehend KG inputs and infer with a combination of implicit and external knowledge. Moreover, our method elicits the mind map of LLMs, which reveals their reasoning pathways based on the ontology of knowledge. We evaluate our method on diverse question {\&} answering tasks, especially in medical domains, and show significant improvements over baselines. We also introduce a new hallucination evaluation benchmark and analyze the effects of different components of our method. Our results demonstrate the effectiveness and robustness of our method in merging knowledge from LLMs and KGs for combined inference.
[ "Wen, Yilin", "Wang, Zifeng", "Sun, Jimeng" ]
MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models
acl-long.558
Poster
2308.09729
[ "https://github.com/wyl-willing/MindMap" ]
https://huggingface.co/papers/2308.09729
2
4
1
3
https://aclanthology.org/2024.acl-long.558/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.559.bib
@inproceedings{braun-matthes-2024-agb, title = "{AGB}-{DE}: A Corpus for the Automated Legal Assessment of Clauses in {G}erman Consumer Contracts", author = "Braun, Daniel and Matthes, Florian", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.559", pages = "10389--10405", abstract = "Legal tasks and datasets are often used as benchmarks for the capabilities of language models. However, openly available annotated datasets are rare. In this paper, we introduce AGB-DE, a corpus of 3,764 clauses from German consumer contracts that have been annotated and legally assessed by legal experts. Together with the data, we present a first baseline for the task of detecting potentially void clauses, comparing the performance of an SVM baseline with three fine-tuned open language models and the performance of GPT-3.5. Our results show the challenging nature of the task, with no approach exceeding an F1-score of 0.54. While the fine-tuned models often performed better with regard to precision, GPT-3.5 outperformed the other approaches with regard to recall. An analysis of the errors indicates that one of the main challenges could be the correct interpretation of complex clauses, rather than the decision boundaries of what is permissible and what is not.", }
Legal tasks and datasets are often used as benchmarks for the capabilities of language models. However, openly available annotated datasets are rare. In this paper, we introduce AGB-DE, a corpus of 3,764 clauses from German consumer contracts that have been annotated and legally assessed by legal experts. Together with the data, we present a first baseline for the task of detecting potentially void clauses, comparing the performance of an SVM baseline with three fine-tuned open language models and the performance of GPT-3.5. Our results show the challenging nature of the task, with no approach exceeding an F1-score of 0.54. While the fine-tuned models often performed better with regard to precision, GPT-3.5 outperformed the other approaches with regard to recall. An analysis of the errors indicates that one of the main challenges could be the correct interpretation of complex clauses, rather than the decision boundaries of what is permissible and what is not.
[ "Braun, Daniel", "Matthes, Florian" ]
AGB-DE: A Corpus for the Automated Legal Assessment of Clauses in German Consumer Contracts
acl-long.559
Poster
2406.06809
[ "https://github.com/DaBr01/AGB-DE" ]
https://huggingface.co/papers/2406.06809
0
1
0
2
https://aclanthology.org/2024.acl-long.559/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.560.bib
@inproceedings{siska-etal-2024-examining, title = "Examining the robustness of {LLM} evaluation to the distributional assumptions of benchmarks", author = "Siska, Charlotte and Marazopoulou, Katerina and Ailem, Melissa and Bono, James", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.560", pages = "10406--10421", abstract = "Benchmarks have emerged as the central approach for evaluating Large Language Models (LLMs). The research community often relies on a model{'}s average performance across the test prompts of a benchmark to evaluate the model{'}s performance. This is consistent with the assumption that the test prompts within a benchmark represent a random sample from some real-world distribution of interest. We note that this is generally not the case; instead, we hold that the distribution of interest varies according to the specific use case. Hence, we analyze the robustness of LLM benchmarks to their underlying distributional assumptions. We find that (1) the correlation in model performance across test prompts is non-random, (2) accounting for correlations across test prompts can change model rankings on major benchmarks, (3) explanatory factors for these correlations include semantic similarity and common LLM failure points.", }
Benchmarks have emerged as the central approach for evaluating Large Language Models (LLMs). The research community often relies on a model{'}s average performance across the test prompts of a benchmark to evaluate the model{'}s performance. This is consistent with the assumption that the test prompts within a benchmark represent a random sample from some real-world distribution of interest. We note that this is generally not the case; instead, we hold that the distribution of interest varies according to the specific use case. Hence, we analyze the robustness of LLM benchmarks to their underlying distributional assumptions. We find that (1) the correlation in model performance across test prompts is non-random, (2) accounting for correlations across test prompts can change model rankings on major benchmarks, (3) explanatory factors for these correlations include semantic similarity and common LLM failure points.
[ "Siska, Charlotte", "Marazopoulou, Katerina", "Ailem, Melissa", "Bono, James" ]
Examining the robustness of LLM evaluation to the distributional assumptions of benchmarks
acl-long.560
Poster
2404.16966
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.560/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.561.bib
@inproceedings{pasewark-etal-2024-tuning, title = "Re-Tuning: Overcoming the Compositionality Limits of Large Language Models with Recursive Tuning", author = "Pasewark, Eric and Montgomery, Kyle and Duan, Kefei and Song, Dawn and Wang, Chenguang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.561", pages = "10422--10437", abstract = "We present a new method for large language models to solve compositional tasks. Although they have shown strong performance on traditional language understanding tasks, large language models struggle to solve compositional tasks, where the solution depends on solving smaller instances of the same problem. We propose a natural approach to solve compositional tasks recursively. Our method, Re-Tuning, tunes models to break down a problem into subproblems, solve those subproblems, and combine the results. We show that our method significantly improves model performance on three representative compositional tasks: integer addition, dynamic programming, and parity. Compared to state-of-the-art methods that keep intermediate steps towards solving the problems, Re-Tuning achieves significantly higher accuracy and is more GPU memory efficient.", }
We present a new method for large language models to solve compositional tasks. Although they have shown strong performance on traditional language understanding tasks, large language models struggle to solve compositional tasks, where the solution depends on solving smaller instances of the same problem. We propose a natural approach to solve compositional tasks recursively. Our method, Re-Tuning, tunes models to break down a problem into subproblems, solve those subproblems, and combine the results. We show that our method significantly improves model performance on three representative compositional tasks: integer addition, dynamic programming, and parity. Compared to state-of-the-art methods that keep intermediate steps towards solving the problems, Re-Tuning achieves significantly higher accuracy and is more GPU memory efficient.
[ "Pasewark, Eric", "Montgomery, Kyle", "Duan, Kefei", "Song, Dawn", "Wang, Chenguang" ]
Re-Tuning: Overcoming the Compositionality Limits of Large Language Models with Recursive Tuning
acl-long.561
Poster
2407.04787
[ "https://github.com/Pasewark/ReTuning" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.561/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.562.bib
@inproceedings{ke-etal-2024-bridging, title = "Bridging the Preference Gap between Retrievers and {LLM}s", author = "Ke, Zixuan and Kong, Weize and Li, Cheng and Zhang, Mingyang and Mei, Qiaozhu and Bendersky, Michael", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.562", pages = "10438--10451", abstract = "Large Language Models (LLMs) have demonstrated superior results across a wide range of tasks, and Retrieval-augmented Generation (RAG) is an effective way to enhance the performance by locating relevant information and placing it into the context window of the LLM. However, the relationship between retrievers and LLMs in a RAG is still under-investigated. Most existing work treats the retriever and the LLM as independent components and leaves a gap between retrieving human-{''}friendly{''} information and assembling a LLM-{''}friendly{''} context. In this work, we examine a novel bridge mechanism. We validate the ranking and selection assumptions of retrievers in the context of RAG and propose a framework that chains together supervised and reinforcement learning to train a bridge model that optimizes the connection between the retriever and the LLM. Empirical results demonstrate the effectiveness of our method in both question-answering and personalized generation tasks.", }
Large Language Models (LLMs) have demonstrated superior results across a wide range of tasks, and Retrieval-augmented Generation (RAG) is an effective way to enhance the performance by locating relevant information and placing it into the context window of the LLM. However, the relationship between retrievers and LLMs in a RAG is still under-investigated. Most existing work treats the retriever and the LLM as independent components and leaves a gap between retrieving human-{''}friendly{''} information and assembling a LLM-{''}friendly{''} context. In this work, we examine a novel bridge mechanism. We validate the ranking and selection assumptions of retrievers in the context of RAG and propose a framework that chains together supervised and reinforcement learning to train a bridge model that optimizes the connection between the retriever and the LLM. Empirical results demonstrate the effectiveness of our method in both question-answering and personalized generation tasks.
[ "Ke, Zixuan", "Kong, Weize", "Li, Cheng", "Zhang, Mingyang", "Mei, Qiaozhu", "Bendersky, Michael" ]
Bridging the Preference Gap between Retrievers and LLMs
acl-long.562
Poster
2401.06954
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.562/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.563.bib
@inproceedings{xiong-etal-2024-large, title = "Large Language Models Can Learn Temporal Reasoning", author = "Xiong, Siheng and Payani, Ali and Kompella, Ramana and Fekri, Faramarz", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.563", pages = "10452--10470", abstract = "While large language models (LLMs) have demonstrated remarkable reasoning capabilities, they are not without their flaws and inaccuracies. Recent studies have introduced various methods to mitigate these limitations. Temporal reasoning (TR), in particular, presents a significant challenge for LLMs due to its reliance on diverse temporal concepts and intricate temporal logic. In this paper, we propose TG-LLM, a novel framework towards language-based TR. Instead of reasoning over the original context, we adopt a latent representation, temporal graph (TG) that enhances the learning of TR. A synthetic dataset (TGQA), which is fully controllable and requires minimal supervision, is constructed for fine-tuning LLMs on this text-to-TG translation task. We confirmed in experiments that the capability of TG translation learned on our dataset can be transferred to other TR tasks and benchmarks. On top of that, we teach LLM to perform deliberate reasoning over the TGs via Chain-of-Thought (CoT) bootstrapping and graph data augmentation. We observed that those strategies, which maintain a balance between usefulness and diversity, bring more reliable CoTs and final results than the vanilla CoT distillation.", }
While large language models (LLMs) have demonstrated remarkable reasoning capabilities, they are not without their flaws and inaccuracies. Recent studies have introduced various methods to mitigate these limitations. Temporal reasoning (TR), in particular, presents a significant challenge for LLMs due to its reliance on diverse temporal concepts and intricate temporal logic. In this paper, we propose TG-LLM, a novel framework towards language-based TR. Instead of reasoning over the original context, we adopt a latent representation, the temporal graph (TG), that enhances the learning of TR. A synthetic dataset (TGQA), which is fully controllable and requires minimal supervision, is constructed for fine-tuning LLMs on this text-to-TG translation task. We confirmed in experiments that the capability of TG translation learned on our dataset can be transferred to other TR tasks and benchmarks. On top of that, we teach the LLM to perform deliberate reasoning over the TGs via Chain-of-Thought (CoT) bootstrapping and graph data augmentation. We observed that those strategies, which maintain a balance between usefulness and diversity, bring more reliable CoTs and final results than the vanilla CoT distillation.
[ "Xiong, Siheng", "Payani, Ali", "Kompella, Ramana", "Fekri, Faramarz" ]
Large Language Models Can Learn Temporal Reasoning
acl-long.563
Poster
2401.06853
[ "https://github.com/xiongsiheng/tg-llm" ]
https://huggingface.co/papers/2401.06853
1
0
0
4
https://aclanthology.org/2024.acl-long.563/
[]
[ "sxiong/TGQA" ]
[]
1
https://aclanthology.org/2024.acl-long.564.bib
@inproceedings{mouravieff-etal-2024-learning, title = "Learning Relational Decomposition of Queries for Question Answering from Tables", author = {Mouravieff, Rapha{\"e}l and Piwowarski, Benjamin and Lamprier, Sylvain}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.564", pages = "10471--10485", abstract = "Table Question-Answering involves both understanding the natural language query and grounding it in the context of the input table to extract relevant information. In this context, many methods have highlighted the benefits of intermediate pre-training using SQL queries. However, while most approaches aim at generating final answers directly from inputs, we claim that there is better to do with SQL queries during training.By learning to imitate a restricted subset of SQL-like algebraic operations, we demonstrate that their execution flow provides intermediate supervision steps that allow for increased generalization and structural reasoning compared to classical approaches. Our method, bridges the gap between semantic parsing and direct answering methods, offering valuable insights into which types of operations should be predicted by a generative architecture and which should be executed by an external algorithm. Our code can be found at https://github.com/RaphaelMouravieff/Partial-Exec.", }
Table Question-Answering involves both understanding the natural language query and grounding it in the context of the input table to extract relevant information. In this context, many methods have highlighted the benefits of intermediate pre-training using SQL queries. However, while most approaches aim at generating final answers directly from inputs, we claim that more can be done with SQL queries during training. By learning to imitate a restricted subset of SQL-like algebraic operations, we demonstrate that their execution flow provides intermediate supervision steps that allow for increased generalization and structural reasoning compared to classical approaches. Our method bridges the gap between semantic parsing and direct answering methods, offering valuable insights into which types of operations should be predicted by a generative architecture and which should be executed by an external algorithm. Our code can be found at https://github.com/RaphaelMouravieff/Partial-Exec.
[ "Mouravieff, Rapha{\\\"e}l", "Piwowarski, Benjamin", "Lamprier, Sylvain" ]
Learning Relational Decomposition of Queries for Question Answering from Tables
acl-long.564
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.564/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.565.bib
@inproceedings{huang-etal-2024-characterizing, title = "Characterizing Similarities and Divergences in Conversational Tones in Humans and {LLM}s by Sampling with People", author = "Huang, Dun-Ming and Van Rijn, Pol and Sucholutsky, Ilia and Marjieh, Raja and Jacoby, Nori", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.565", pages = "10486--10512", abstract = "Conversational tones {---} the manners and attitudes in which speakers communicate {---} are essential to effective communication. As Large Language Models (LLMs) become increasingly popular, it is necessary to characterize the divergences in their conversational tones relative to humans. Prior research relied on pre-existing taxonomies or text corpora, which suffer from experimenter bias and may not be representative of real-world distributions. Inspired by methods from cognitive science, we propose an iterative method for simultaneously eliciting conversational tones and sentences, where participants alternate between two tasks: (1) one participant identifies the tone of a given sentence and (2) a different participant generates a sentence based on that tone. We run 50 iterations of this process with both human participants and GPT-4 and obtain a dataset of sentences and frequent conversational tones. In an additional experiment, humans and GPT-4 annotated all sentences with all tones. With data from 1,339 participants, 33,370 human judgments, and 29,900 GPT-4 queries, we show how our approach can be used to create an interpretable geometric representation of relations between tones in humans and GPT-4. This work showcases how combining ideas from machine learning and cognitive science can address challenges in human-computer interactions.", }
Conversational tones {---} the manners and attitudes in which speakers communicate {---} are essential to effective communication. As Large Language Models (LLMs) become increasingly popular, it is necessary to characterize the divergences in their conversational tones relative to humans. Prior research relied on pre-existing taxonomies or text corpora, which suffer from experimenter bias and may not be representative of real-world distributions. Inspired by methods from cognitive science, we propose an iterative method for simultaneously eliciting conversational tones and sentences, where participants alternate between two tasks: (1) one participant identifies the tone of a given sentence and (2) a different participant generates a sentence based on that tone. We run 50 iterations of this process with both human participants and GPT-4 and obtain a dataset of sentences and frequent conversational tones. In an additional experiment, humans and GPT-4 annotated all sentences with all tones. With data from 1,339 participants, 33,370 human judgments, and 29,900 GPT-4 queries, we show how our approach can be used to create an interpretable geometric representation of relations between tones in humans and GPT-4. This work showcases how combining ideas from machine learning and cognitive science can address challenges in human-computer interactions.
[ "Huang, Dun-Ming", "Van Rijn, Pol", "Sucholutsky, Ilia", "Marjieh, Raja", "Jacoby, Nori" ]
Characterizing Similarities and Divergences in Conversational Tones in Humans and LLMs by Sampling with People
acl-long.565
Poster
2406.04278
[ "https://github.com/jacobyn/SamplingTonesACL" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.565/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.566.bib
@inproceedings{zhao-etal-2024-pareto, title = "{P}areto Optimal Learning for Estimating Large Language Model Errors", author = "Zhao, Theodore and Wei, Mu and Preston, J. and Poon, Hoifung", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.566", pages = "10513--10529", abstract = "Large Language Models (LLMs) have shown impressive abilities in many applications. When a concrete and precise answer is desired, it is important to have a quantitative estimation of the potential error rate. However, this can be challenging due to the text-in-text-out nature of the generative models. We present a method based on Pareto optimization that generates a risk score to estimate the probability of error in an LLM response by integrating multiple sources of information. We prove theoretically that the error estimator optimized in our framework aligns with the LLM and the information sources in an Pareto optimal manner. Experimental results show that the risk scores estimated by our method are well correlated with the true LLM error rate, thus facilitating error correction. By dynamically combining with prompting strategies such as self-verification and information retrieval, we demonstrate the proposed method can be utilized to increase the performance of an LLM, surpassing state-of-the-art task specific model.", }
Large Language Models (LLMs) have shown impressive abilities in many applications. When a concrete and precise answer is desired, it is important to have a quantitative estimation of the potential error rate. However, this can be challenging due to the text-in-text-out nature of the generative models. We present a method based on Pareto optimization that generates a risk score to estimate the probability of error in an LLM response by integrating multiple sources of information. We prove theoretically that the error estimator optimized in our framework aligns with the LLM and the information sources in a Pareto-optimal manner. Experimental results show that the risk scores estimated by our method are well correlated with the true LLM error rate, thus facilitating error correction. By dynamically combining with prompting strategies such as self-verification and information retrieval, we demonstrate the proposed method can be utilized to increase the performance of an LLM, surpassing state-of-the-art task-specific models.
[ "Zhao, Theodore", "Wei, Mu", "Preston, J.", "Poon, Hoifung" ]
Pareto Optimal Learning for Estimating Large Language Model Errors
acl-long.566
Poster
2306.16564
[ "" ]
https://huggingface.co/papers/2306.16564
0
3
1
4
https://aclanthology.org/2024.acl-long.566/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.567.bib
@inproceedings{agostinelli-etal-2024-simul, title = "Simul-{LLM}: A Framework for Exploring High-Quality Simultaneous Translation with Large Language Models", author = "Agostinelli, Victor and Wild, Max and Raffel, Matthew and Fuad, Kazi and Chen, Lizhong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.567", pages = "10530--10541", abstract = "Large language models (LLMs) with billions of parameters and pretrained on massive amounts of data are now capable of near or better than state-of-the-art performance in a variety of downstream natural language processing tasks. Neural machine translation (NMT) is one such task that LLMs have been applied to with great success. However, little research has focused on applying LLMs to the more difficult subset of NMT called simultaneous translation (SimulMT), where translation begins before the entire source context is available to the model. In this paper, we address key challenges facing LLMs fine-tuned for SimulMT, validate classical SimulMT concepts and practices in the context of LLMs, explore adapting LLMs that are fine-tuned for NMT to the task of SimulMT, and introduce Simul-LLM, the first open-source fine-tuning and evaluation pipeline development framework for LLMs focused on SimulMT.", }
Large language models (LLMs) with billions of parameters and pretrained on massive amounts of data are now capable of near state-of-the-art or better performance in a variety of downstream natural language processing tasks. Neural machine translation (NMT) is one such task that LLMs have been applied to with great success. However, little research has focused on applying LLMs to the more difficult subset of NMT called simultaneous translation (SimulMT), where translation begins before the entire source context is available to the model. In this paper, we address key challenges facing LLMs fine-tuned for SimulMT, validate classical SimulMT concepts and practices in the context of LLMs, explore adapting LLMs that are fine-tuned for NMT to the task of SimulMT, and introduce Simul-LLM, the first open-source fine-tuning and evaluation pipeline development framework for LLMs focused on SimulMT.
[ "Agostinelli, Victor", "Wild, Max", "Raffel, Matthew", "Fuad, Kazi", "Chen, Lizhong" ]
Simul-LLM: A Framework for Exploring High-Quality Simultaneous Translation with Large Language Models
acl-long.567
Poster
2312.04691
[ "https://github.com/osu-starlab/simul-llm" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.567/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.568.bib
@inproceedings{cao-etal-2024-defending, title = "Defending Against Alignment-Breaking Attacks via Robustly Aligned {LLM}", author = "Cao, Bochuan and Cao, Yuanpu and Lin, Lu and Chen, Jinghui", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.568", pages = "10542--10560", abstract = "Recently, Large Language Models (LLMs) have made significant advancements and are now widely used across various domains. Unfortunately, there has been a rising concern that LLMs can be misused to generate harmful or malicious content. Though a line of research has focused on aligning LLMs with human values and preventing them from producing inappropriate content, such alignments are usually vulnerable and can be bypassed by alignment-breaking attacks via adversarially optimized or handcrafted jailbreaking prompts. In this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks. RA-LLM can be directly constructed upon an existing aligned LLM with a robust alignment checking function, without requiring any expensive retraining or fine-tuning process of the original LLM. Furthermore, we also provide a theoretical analysis for RA-LLM to verify its effectiveness in defending against alignment-breaking attacks. Through real-world experiments on open-source large language models, we demonstrate that RA-LLM can successfully defend against both state-of-the-art adversarial prompts and popular handcrafted jailbreaking prompts by reducing their attack success rates from nearly 100{\%} to around 10{\%} or less.", }
Recently, Large Language Models (LLMs) have made significant advancements and are now widely used across various domains. Unfortunately, there has been a rising concern that LLMs can be misused to generate harmful or malicious content. Though a line of research has focused on aligning LLMs with human values and preventing them from producing inappropriate content, such alignments are usually vulnerable and can be bypassed by alignment-breaking attacks via adversarially optimized or handcrafted jailbreaking prompts. In this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks. RA-LLM can be directly constructed upon an existing aligned LLM with a robust alignment checking function, without requiring any expensive retraining or fine-tuning process of the original LLM. Furthermore, we also provide a theoretical analysis for RA-LLM to verify its effectiveness in defending against alignment-breaking attacks. Through real-world experiments on open-source large language models, we demonstrate that RA-LLM can successfully defend against both state-of-the-art adversarial prompts and popular handcrafted jailbreaking prompts by reducing their attack success rates from nearly 100{\%} to around 10{\%} or less.
[ "Cao, Bochuan", "Cao, Yuanpu", "Lin, Lu", "Chen, Jinghui" ]
Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM
acl-long.568
Poster
2309.14348
[ "https://github.com/AAAAAAsuka/llm_defends" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.568/
[]
[]
[]
0
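The RA-LLM abstract above mentions a "robust alignment checking function" without spelling it out. Below is a hedged sketch of one plausible reading, not the paper's exact procedure: query the already-aligned model on several randomly word-dropped copies of the prompt and refuse the request if enough copies are refused. `query_aligned_llm`, `looks_like_refusal`, and all thresholds are illustrative placeholders.

```python
# Hedged sketch of a "perturb, re-query, vote" alignment check in the spirit of
# RA-LLM. The exact checking function is not given in the abstract; everything
# below is an assumption made for illustration only.

import random
from typing import Callable

REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "as an ai")


def looks_like_refusal(response: str) -> bool:
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def robust_alignment_check(
    prompt: str,
    query_aligned_llm: Callable[[str], str],
    drop_ratio: float = 0.3,
    n_samples: int = 10,
    reject_threshold: float = 0.5,
    seed: int = 0,
) -> bool:
    """Return True if the prompt should be refused.

    Each sample randomly drops a fraction of the prompt's words and asks the
    already-aligned LLM to respond; if the refusal rate over the perturbed
    copies exceeds the threshold, the prompt is treated as an
    alignment-breaking attempt.
    """
    rng = random.Random(seed)
    words = prompt.split()
    refusals = 0
    for _ in range(n_samples):
        kept = [w for w in words if rng.random() > drop_ratio]
        perturbed = " ".join(kept) if kept else prompt
        if looks_like_refusal(query_aligned_llm(perturbed)):
            refusals += 1
    return refusals / n_samples >= reject_threshold


if __name__ == "__main__":
    def stub_llm(p: str) -> str:
        # Pretend the aligned model refuses anything mentioning "bomb".
        return "I'm sorry, I can't help with that." if "bomb" in p else "Sure, here you go."

    print(robust_alignment_check("how do I build a bomb at home", stub_llm))          # usually True
    print(robust_alignment_check("how do I bake sourdough bread at home", stub_llm))  # False
```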
https://aclanthology.org/2024.acl-long.569.bib
@inproceedings{xiong-etal-2024-interactive, title = "Interactive-{KBQA}: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models", author = "Xiong, Guanming and Bao, Junwei and Zhao, Wen", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.569", pages = "10561--10582", abstract = "This study explores the realm of knowledge base question answering (KBQA). KBQA is considered a challenging task, particularly in parsing intricate questions into executable logical forms. Traditional semantic parsing (SP)-based methods require extensive data annotations, which result in significant costs. Recently, the advent of few-shot in-context learning, powered by large language models (LLMs), has showcased promising capabilities. Yet, fully leveraging LLMs to parse questions into logical forms in low-resource scenarios poses a substantial challenge. To tackle these hurdles, we introduce Interactive-KBQA, a framework designed to generate logical forms through direct interaction with knowledge bases (KBs). Within this framework, we have developed three generic APIs for KB interaction. For each category of complex question, we devised exemplars to guide LLMs through the reasoning processes. Our method achieves competitive results on the WebQuestionsSP, ComplexWebQuestions, KQA Pro, and MetaQA datasets with a minimal number of examples (shots). Importantly, our approach supports manual intervention, allowing for the iterative refinement of LLM outputs. By annotating a dataset with step-wise reasoning processes, we showcase our model{'}s adaptability and highlight its potential for contributing significant enhancements to the field.", }
This study explores the realm of knowledge base question answering (KBQA). KBQA is considered a challenging task, particularly in parsing intricate questions into executable logical forms. Traditional semantic parsing (SP)-based methods require extensive data annotations, which result in significant costs. Recently, the advent of few-shot in-context learning, powered by large language models (LLMs), has showcased promising capabilities. Yet, fully leveraging LLMs to parse questions into logical forms in low-resource scenarios poses a substantial challenge. To tackle these hurdles, we introduce Interactive-KBQA, a framework designed to generate logical forms through direct interaction with knowledge bases (KBs). Within this framework, we have developed three generic APIs for KB interaction. For each category of complex question, we devised exemplars to guide LLMs through the reasoning processes. Our method achieves competitive results on the WebQuestionsSP, ComplexWebQuestions, KQA Pro, and MetaQA datasets with a minimal number of examples (shots). Importantly, our approach supports manual intervention, allowing for the iterative refinement of LLM outputs. By annotating a dataset with step-wise reasoning processes, we showcase our model{'}s adaptability and highlight its potential for contributing significant enhancements to the field.
[ "Xiong, Guanming", "Bao, Junwei", "Zhao, Wen" ]
Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models
acl-long.569
Poster
2402.15131
[ "https://github.com/jimxionggm/interactive-kbqa" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.569/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.570.bib
@inproceedings{wang-etal-2024-llms-imaginarium, title = "{LLM}s in the Imaginarium: Tool Learning through Simulated Trial and Error", author = "Wang, Boshi and Fang, Hao and Eisner, Jason and Van Durme, Benjamin and Su, Yu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.570", pages = "10583--10604", abstract = "Tools are essential for large language models (LLMs) to acquire up-to-date information and take consequential actions in external environments. Existing work on tool-augmented LLMs primarily focuses on the broad coverage of tools and the flexibility of adding new tools. However, a critical aspect that has surprisingly been understudied is simply how accurately an LLM uses tools for which it has been trained. We find that existing LLMs, including GPT-4 and open-source LLMs specifically fine-tuned for tool use, only reach a correctness rate in the range of 30{\%} to 60{\%}, far from reliable use in practice. We propose a biologically inspired method for tool-augmented LLMs, simulated trial and error (STE), that orchestrates three key mechanisms for successful tool use behaviors in the biological system: trial and error, imagination, and memory. Specifically, STE leverages an LLM{'}s {`}imagination{'} to simulate plausible scenarios for using a tool, after which the LLM interacts with the tool to learn from its execution feedback. Both short-term and long-term memory are employed to improve the depth and breadth of the exploration, respectively. Comprehensive experiments on ToolBench show that STE substantially improves tool learning for LLMs under both in-context learning and fine-tuning settings, bringing a boost of 46.7{\%} to Mistral-Instruct-7B and enabling it to outperform GPT-4. We also show effective continual learning of tools via a simple experience replay strategy.", }
Tools are essential for large language models (LLMs) to acquire up-to-date information and take consequential actions in external environments. Existing work on tool-augmented LLMs primarily focuses on the broad coverage of tools and the flexibility of adding new tools. However, a critical aspect that has surprisingly been understudied is simply how accurately an LLM uses tools for which it has been trained. We find that existing LLMs, including GPT-4 and open-source LLMs specifically fine-tuned for tool use, only reach a correctness rate in the range of 30{\%} to 60{\%}, far from reliable use in practice. We propose a biologically inspired method for tool-augmented LLMs, simulated trial and error (STE), that orchestrates three key mechanisms for successful tool use behaviors in the biological system: trial and error, imagination, and memory. Specifically, STE leverages an LLM{'}s {`}imagination{'} to simulate plausible scenarios for using a tool, after which the LLM interacts with the tool to learn from its execution feedback. Both short-term and long-term memory are employed to improve the depth and breadth of the exploration, respectively. Comprehensive experiments on ToolBench show that STE substantially improves tool learning for LLMs under both in-context learning and fine-tuning settings, bringing a boost of 46.7{\%} to Mistral-Instruct-7B and enabling it to outperform GPT-4. We also show effective continual learning of tools via a simple experience replay strategy.
[ "Wang, Boshi", "Fang, Hao", "Eisner, Jason", "Van Durme, Benjamin", "Su, Yu" ]
LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error
acl-long.570
Poster
2403.04746
[ "https://github.com/microsoft/simulated-trial-and-error" ]
https://huggingface.co/papers/2403.04746
3
22
1
5
https://aclanthology.org/2024.acl-long.570/
[]
[]
[]
1
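A toy rendering of the trial-and-error loop sketched in the STE abstract above: imagine a query, execute the tool, keep the execution feedback in short-term memory, and distill successful trials into long-term memory. The real method drives imagination and judgment with an LLM; every name and heuristic here is a placeholder meant only to show the control flow.

```python
# Toy sketch of a simulated trial-and-error loop: imagination -> tool call ->
# memory. Imagination and feedback judgment are trivial placeholders, not the
# LLM-driven components used in the paper.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Trial:
    query: str
    tool_args: dict
    result: str
    success: bool


@dataclass
class Memory:
    short_term: List[Trial] = field(default_factory=list)   # within one exploration episode
    long_term: List[Trial] = field(default_factory=list)    # distilled across episodes

    def distill(self) -> None:
        # Keep only successful trials as long-term "experience" for later fine-tuning.
        self.long_term.extend(t for t in self.short_term if t.success)
        self.short_term.clear()


def explore_tool(tool: Callable[[dict], str], candidate_args: List[dict], memory: Memory) -> None:
    for args in candidate_args:
        query = f"imagined user request needing {args}"      # placeholder for LLM imagination
        try:
            result = tool(args)
            success = True
        except Exception as exc:                             # execution feedback
            result, success = f"error: {exc}", False
        memory.short_term.append(Trial(query, args, result, success))
    memory.distill()


if __name__ == "__main__":
    def weather_tool(args: dict) -> str:
        if "city" not in args:
            raise ValueError("missing required argument 'city'")
        return f"sunny in {args['city']}"

    mem = Memory()
    explore_tool(weather_tool, [{"city": "Bangkok"}, {"country": "TH"}], mem)
    print(len(mem.long_term), "successful trial(s) kept as fine-tuning data")
```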
https://aclanthology.org/2024.acl-long.571.bib
@inproceedings{zhao-etal-2024-hypermoe, title = "{H}yper{M}o{E}: Towards Better Mixture of Experts via Transferring Among Experts", author = "Zhao, Hao and Qiu, Zihan and Wu, Huijia and Wang, Zili and He, Zhaofeng and Fu, Jie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.571", pages = "10605--10618", abstract = "The Mixture of Experts (MoE) for language models has been proven effective in augmenting the capacity of models by dynamically routing each input token to a specific subset of experts for processing. Despite the success, most existing methods face a challenge for balance between sparsity and the availability of expert knowledge: enhancing performance through increased use of expert knowledge often results in diminishing sparsity during expert selection. To mitigate this contradiction, we propose HyperMoE, a novel MoE framework built upon Hypernetworks. This framework integrates the computational processes of MoE with the concept of knowledge transferring in multi-task learning. Specific modules generated based on the information of unselected experts serve as supplementary information, which allows the knowledge of experts not selected to be used while maintaining selection sparsity. Our comprehensive empirical evaluations across multiple datasets and backbones establish that HyperMoE significantly outperforms existing MoE methods under identical conditions concerning the number of experts. Our code is publicly available at https://github.com/Bumble666/Hyper{\_}MoE", }
The Mixture of Experts (MoE) for language models has been proven effective in augmenting the capacity of models by dynamically routing each input token to a specific subset of experts for processing. Despite the success, most existing methods face a challenge for balance between sparsity and the availability of expert knowledge: enhancing performance through increased use of expert knowledge often results in diminishing sparsity during expert selection. To mitigate this contradiction, we propose HyperMoE, a novel MoE framework built upon Hypernetworks. This framework integrates the computational processes of MoE with the concept of knowledge transferring in multi-task learning. Specific modules generated based on the information of unselected experts serve as supplementary information, which allows the knowledge of experts not selected to be used while maintaining selection sparsity. Our comprehensive empirical evaluations across multiple datasets and backbones establish that HyperMoE significantly outperforms existing MoE methods under identical conditions concerning the number of experts. Our code is publicly available at https://github.com/Bumble666/Hyper{\_}MoE
[ "Zhao, Hao", "Qiu, Zihan", "Wu, Huijia", "Wang, Zili", "He, Zhaofeng", "Fu, Jie" ]
HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts
acl-long.571
Poster
2402.12656
[ "https://github.com/bumble666/hypermoe_early_version" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.571/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.572.bib
@inproceedings{liu-etal-2024-aligning, title = "Aligning Large Language Models with Human Preferences through Representation Engineering", author = "Liu, Wenhao and Wang, Xiaohua and Wu, Muling and Li, Tianlong and Lv, Changze and Ling, Zixuan and JianHao, Zhu and Zhang, Cenyuan and Zheng, Xiaoqing and Huang, Xuanjing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.572", pages = "10619--10638", abstract = "Aligning large language models (LLMs) with human preferences is crucial for enhancing their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness. Existing methods for achieving this alignment often involve employing reinforcement learning from human feedback (RLHF) to fine-tune LLMs based on human labels assessing the relative quality of model responses. Nevertheless, RLHF is susceptible to instability during fine-tuning and presents challenges in implementation. Drawing inspiration from the emerging field of representation engineering (RepE), this study aims to identify relevant representations for high-level human preferences embedded in patterns of activity within an LLM and achieve precise control of model behavior by transforming its representations. This novel approach, denoted as Representation Alignment from Human Feedback (RAHF), proves to be effective, computationally efficient, and easy to implement. Extensive experiments demonstrate the efficacy of RAHF in not only capturing but also manipulating representations to align with a broad spectrum of human preferences or values, rather than being confined to a singular concept or function (e.g. honesty or bias). RAHF{'}s versatility in accommodating diverse human preferences shows its potential for advancing LLM performance.", }
Aligning large language models (LLMs) with human preferences is crucial for enhancing their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness. Existing methods for achieving this alignment often involve employing reinforcement learning from human feedback (RLHF) to fine-tune LLMs based on human labels assessing the relative quality of model responses. Nevertheless, RLHF is susceptible to instability during fine-tuning and presents challenges in implementation. Drawing inspiration from the emerging field of representation engineering (RepE), this study aims to identify relevant representations for high-level human preferences embedded in patterns of activity within an LLM and achieve precise control of model behavior by transforming its representations. This novel approach, denoted as Representation Alignment from Human Feedback (RAHF), proves to be effective, computationally efficient, and easy to implement. Extensive experiments demonstrate the efficacy of RAHF in not only capturing but also manipulating representations to align with a broad spectrum of human preferences or values, rather than being confined to a singular concept or function (e.g. honesty or bias). RAHF{'}s versatility in accommodating diverse human preferences shows its potential for advancing LLM performance.
[ "Liu, Wenhao", "Wang, Xiaohua", "Wu, Muling", "Li, Tianlong", "Lv, Changze", "Ling, Zixuan", "JianHao, Zhu", "Zhang, Cenyuan", "Zheng, Xiaoqing", "Huang, Xuanjing" ]
Aligning Large Language Models with Human Preferences through Representation Engineering
acl-long.572
Poster
2312.15997
[ "https://github.com/liuamber/rahf" ]
https://huggingface.co/papers/2312.15997
0
1
2
10
https://aclanthology.org/2024.acl-long.572/
[ "Liuwenhao2022/Mistral-7B-LoRA-RAHF-DUAL" ]
[]
[]
1
https://aclanthology.org/2024.acl-long.573.bib
@inproceedings{luo-etal-2024-codis, title = "{CODIS}: Benchmarking Context-dependent Visual Comprehension for Multimodal Large Language Models", author = "Luo, Fuwen and Chen, Chi and Wan, Zihao and Kang, Zhaolu and Yan, Qidong and Li, Yingjie and Wang, Xiaolong and Wang, Siyu and Wang, Ziyue and Mi, Xiaoyue and Li, Peng and Ma, Ning and Sun, Maosong and Liu, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.573", pages = "10639--10659", abstract = "Multimodal large language models (MLLMs) have demonstrated promising results in a variety of tasks that combine vision and language. As these models become more integral to research and applications, conducting comprehensive evaluations of their capabilities has grown increasingly important. However, most existing benchmarks fail to consider that, in certain situations, images need to be interpreted within a broader context. In this work, we introduce a new benchmark, named as CODIS, designed to assess the ability of models to use context provided in free-form text to enhance visual comprehension. Our findings indicate that MLLMs consistently fall short of human performance on this benchmark. Further analysis confirms that these models struggle to effectively extract and utilize contextual information to improve their understanding of images. This underscores the pressing need to enhance the ability of MLLMs to comprehend visuals in a context-dependent manner.", }
Multimodal large language models (MLLMs) have demonstrated promising results in a variety of tasks that combine vision and language. As these models become more integral to research and applications, conducting comprehensive evaluations of their capabilities has grown increasingly important. However, most existing benchmarks fail to consider that, in certain situations, images need to be interpreted within a broader context. In this work, we introduce a new benchmark, named as CODIS, designed to assess the ability of models to use context provided in free-form text to enhance visual comprehension. Our findings indicate that MLLMs consistently fall short of human performance on this benchmark. Further analysis confirms that these models struggle to effectively extract and utilize contextual information to improve their understanding of images. This underscores the pressing need to enhance the ability of MLLMs to comprehend visuals in a context-dependent manner.
[ "Luo, Fuwen", "Chen, Chi", "Wan, Zihao", "Kang, Zhaolu", "Yan, Qidong", "Li, Yingjie", "Wang, Xiaolong", "Wang, Siyu", "Wang, Ziyue", "Mi, Xiaoyue", "Li, Peng", "Ma, Ning", "Sun, Maosong", "Liu, Yang" ]
CODIS: Benchmarking Context-dependent Visual Comprehension for Multimodal Large Language Models
acl-long.573
Poster
2402.13607
[ "" ]
https://huggingface.co/papers/2402.13607
1
0
0
14
https://aclanthology.org/2024.acl-long.573/
[]
[ "CODIS/CODIS" ]
[]
1
https://aclanthology.org/2024.acl-long.574.bib
@inproceedings{huang-etal-2024-araida, title = "{ARAIDA}: Analogical Reasoning-Augmented Interactive Data Annotation", author = "Huang, Chen and Jin, Yiping and Ilievski, Ilija and Lei, Wenqiang and Lv, Jiancheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.574", pages = "10660--10675", abstract = "Human annotation is a time-consuming task that requires a significant amount of effort. To address this issue, interactive data annotation utilizes an annotation model to provide suggestions for humans to approve or correct. However, annotation models trained with limited labeled data are prone to generating incorrect suggestions, leading to extra human correction effort. To tackle this challenge, we propose Araida, an analogical reasoning-based approach that enhances automatic annotation accuracy in the interactive data annotation setting and reduces the need for human corrections. Araida involves an error-aware integration strategy that dynamically coordinates an annotation model and a k-nearest neighbors (KNN) model, giving more importance to KNN{'}s predictions when predictions from the annotation model are deemed inaccurate. Empirical studies demonstrate that Araida is adaptable to different annotation tasks and models. On average, it reduces human correction labor by 11.02{\%} compared to vanilla interactive data annotation methods.", }
Human annotation is a time-consuming task that requires a significant amount of effort. To address this issue, interactive data annotation utilizes an annotation model to provide suggestions for humans to approve or correct. However, annotation models trained with limited labeled data are prone to generating incorrect suggestions, leading to extra human correction effort. To tackle this challenge, we propose Araida, an analogical reasoning-based approach that enhances automatic annotation accuracy in the interactive data annotation setting and reduces the need for human corrections. Araida involves an error-aware integration strategy that dynamically coordinates an annotation model and a k-nearest neighbors (KNN) model, giving more importance to KNN{'}s predictions when predictions from the annotation model are deemed inaccurate. Empirical studies demonstrate that Araida is adaptable to different annotation tasks and models. On average, it reduces human correction labor by 11.02{\%} compared to vanilla interactive data annotation methods.
[ "Huang, Chen", "Jin, Yiping", "Ilievski, Ilija", "Lei, Wenqiang", "Lv, Jiancheng" ]
ARAIDA: Analogical Reasoning-Augmented Interactive Data Annotation
acl-long.574
Poster
2405.11912
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.574/
[]
[]
[]
0
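The Araida abstract above describes an error-aware integration that leans on a KNN model when the annotation model's suggestion is deemed inaccurate. The sketch below uses the annotation model's own softmax confidence as a crude stand-in for that error signal; the paper's gating is learned, so treat the weighting rule and `confidence_floor` as assumptions for illustration.

```python
# Minimal sketch of an error-aware combination of an annotation model and a
# KNN model over labeled examples. The error-awareness here is a hand-written
# confidence heuristic, not the learned mechanism from the paper.

import numpy as np


def knn_label_distribution(x, support_x, support_y, n_classes, k=5):
    """Label distribution over the k nearest labeled examples."""
    dists = np.linalg.norm(support_x - x, axis=1)
    nearest = np.argsort(dists)[:k]
    counts = np.bincount(support_y[nearest], minlength=n_classes).astype(float)
    return counts / counts.sum()


def combine(model_probs, knn_probs, confidence_floor=0.7):
    """Give more weight to the KNN memory when the annotation model looks unsure."""
    model_conf = model_probs.max()
    # alpha -> 1 keeps the model's suggestion; alpha -> 0 trusts the KNN memory.
    alpha = min(1.0, model_conf / confidence_floor)
    return alpha * model_probs + (1.0 - alpha) * knn_probs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    support_x = rng.normal(size=(20, 4))
    support_y = rng.integers(0, 3, size=20)
    x = support_x[0] + 0.01                       # query close to a labeled point
    model_probs = np.array([0.4, 0.35, 0.25])     # an unsure annotation model
    knn_probs = knn_label_distribution(x, support_x, support_y, n_classes=3)
    print("suggested label:", combine(model_probs, knn_probs).argmax())
```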
https://aclanthology.org/2024.acl-long.575.bib
@inproceedings{yang-etal-2024-polclip, title = "{P}ol{CLIP}: A Unified Image-Text Word Sense Disambiguation Model via Generating Multimodal Complementary Representations", author = "Yang, Qihao and Li, Yong and Wang, Xuelin and Wang, Fu Lee and Hao, Tianyong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.575", pages = "10676--10690", abstract = "Word sense disambiguation (WSD) can be viewed as two subtasks: textual word sense disambiguation (Textual-WSD) and visual word sense disambiguation (Visual-WSD). They aim to identify the most semantically relevant senses or images to a given context containing ambiguous target words. However, existing WSD models seldom address these two subtasks jointly due to lack of images in Textual-WSD datasets or lack of senses in Visual-WSD datasets. To bridge this gap, we propose PolCLIP, a unified image-text WSD model. By employing an image-text complementarity strategy, it not only simulates stable diffusion models to generate implicit visual representations for word senses but also simulates image captioning models to provide implicit textual representations for images. Additionally, a disambiguation-oriented image-sense dataset is constructed for the training objective of learning multimodal polysemy representations. To the best of our knowledge, PolCLIP is the first model that can cope with both Textual-WSD and Visual-WSD. Extensive experimental results on benchmarks demonstrate the effectiveness of our method, achieving a 2.53{\%} F1-score increase over the state-of-the-art models on Textual-WSD and a 2.22{\%} HR@1 improvement on Visual-WSD.", }
Word sense disambiguation (WSD) can be viewed as two subtasks: textual word sense disambiguation (Textual-WSD) and visual word sense disambiguation (Visual-WSD). They aim to identify the most semantically relevant senses or images to a given context containing ambiguous target words. However, existing WSD models seldom address these two subtasks jointly due to lack of images in Textual-WSD datasets or lack of senses in Visual-WSD datasets. To bridge this gap, we propose PolCLIP, a unified image-text WSD model. By employing an image-text complementarity strategy, it not only simulates stable diffusion models to generate implicit visual representations for word senses but also simulates image captioning models to provide implicit textual representations for images. Additionally, a disambiguation-oriented image-sense dataset is constructed for the training objective of learning multimodal polysemy representations. To the best of our knowledge, PolCLIP is the first model that can cope with both Textual-WSD and Visual-WSD. Extensive experimental results on benchmarks demonstrate the effectiveness of our method, achieving a 2.53{\%} F1-score increase over the state-of-the-art models on Textual-WSD and a 2.22{\%} HR@1 improvement on Visual-WSD.
[ "Yang, Qihao", "Li, Yong", "Wang, Xuelin", "Wang, Fu Lee", "Hao, Tianyong" ]
PolCLIP: A Unified Image-Text Word Sense Disambiguation Model via Generating Multimodal Complementary Representations
acl-long.575
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.575/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.576.bib
@inproceedings{tang-etal-2024-prompted, title = "Prompted Aspect Key Point Analysis for Quantitative Review Summarization", author = "Tang, An and Zhang, Xiuzhen and Dinh, Minh and Cambria, Erik", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.576", pages = "10691--10708", abstract = "Key Point Analysis (KPA) aims for quantitative summarization that provides key points (KPs) as succinct textual summaries and quantities measuring their prevalence. KPA studies for arguments and reviews have been reported in the literature. A majority of KPA studies for reviews adopt supervised learning to extract short sentences as KPs before matching KPs to review comments for quantification of KP prevalence. Recent abstractive approaches still generate KPs based on sentences, often leading to KPs with overlapping and hallucinated opinions, and inaccurate quantification. In this paper, we propose Prompted Aspect Key Point Analysis (PAKPA) for quantitative review summarization. PAKPA employs aspect sentiment analysis and prompted in-context learning with Large Language Models (LLMs) to generate and quantify KPs grounded in aspects for business entities, which achieves faithful KPs with accurate quantification, and removes the need for large amounts of annotated data for supervised training. Experiments on the popular review dataset Yelp and the aspect-oriented review summarization dataset SPACE show that our framework achieves state-of-the-art performance. Source code and data are available at: https://github.com/antangrocket1312/PAKPA", }
Key Point Analysis (KPA) aims for quantitative summarization that provides key points (KPs) as succinct textual summaries and quantities measuring their prevalence. KPA studies for arguments and reviews have been reported in the literature. A majority of KPA studies for reviews adopt supervised learning to extract short sentences as KPs before matching KPs to review comments for quantification of KP prevalence. Recent abstractive approaches still generate KPs based on sentences, often leading to KPs with overlapping and hallucinated opinions, and inaccurate quantification. In this paper, we propose Prompted Aspect Key Point Analysis (PAKPA) for quantitative review summarization. PAKPA employs aspect sentiment analysis and prompted in-context learning with Large Language Models (LLMs) to generate and quantify KPs grounded in aspects for business entities, which achieves faithful KPs with accurate quantification, and removes the need for large amounts of annotated data for supervised training. Experiments on the popular review dataset Yelp and the aspect-oriented review summarization dataset SPACE show that our framework achieves state-of-the-art performance. Source code and data are available at: https://github.com/antangrocket1312/PAKPA
[ "Tang, An", "Zhang, Xiuzhen", "Dinh, Minh", "Cambria, Erik" ]
Prompted Aspect Key Point Analysis for Quantitative Review Summarization
acl-long.576
Poster
2407.14049
[ "https://github.com/antangrocket1312/pakpa" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.576/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.577.bib
@inproceedings{xie-etal-2024-ask, title = "Ask Again, Then Fail: Large Language Models{'} Vacillations in Judgment", author = "Xie, Qiming and Wang, Zengzhi and Feng, Yi and Xia, Rui", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.577", pages = "10709--10745", abstract = "We observe that current large language models often waver in their judgments when faced with follow-up questions, even if the original judgment was correct. This wavering presents a significant challenge for generating reliable responses and building user trust. To comprehensively assess this issue, we introduce a Follow-up Questioning Mechanism along with two metrics to quantify this inconsistency, confirming its widespread presence in current large language models. Furthermore, to mitigate this issue, we explore various prompting strategies for closed-source models, and develop a training-based framework Unwavering-FQ that teaches large language models to maintain their originally correct judgments through synthesized high-quality preference data. Our experimental results confirm the effectiveness of our framework and its ability to enhance the general capabilities of large language models.", }
We observe that current large language models often waver in their judgments when faced with follow-up questions, even if the original judgment was correct. This wavering presents a significant challenge for generating reliable responses and building user trust. To comprehensively assess this issue, we introduce a Follow-up Questioning Mechanism along with two metrics to quantify this inconsistency, confirming its widespread presence in current large language models. Furthermore, to mitigate this issue, we explore various prompting strategies for closed-source models, and develop a training-based framework Unwavering-FQ that teaches large language models to maintain their originally correct judgments through synthesized high-quality preference data. Our experimental results confirm the effectiveness of our framework and its ability to enhance the general capabilities of large language models.
[ "Xie, Qiming", "Wang, Zengzhi", "Feng, Yi", "Xia, Rui" ]
Ask Again, Then Fail: Large Language Models' Vacillations in Judgment
acl-long.577
Poster
2310.02174
[ "https://github.com/nustm/llms-waver-in-judgements" ]
https://huggingface.co/papers/2310.02174
1
3
0
4
https://aclanthology.org/2024.acl-long.577/
[]
[]
[]
1
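The abstract above introduces a Follow-up Questioning Mechanism with two metrics for judgment inconsistency but does not define them here. Below is a sketch of one natural metric, a modification rate: the fraction of initially correct answers that change after a challenging follow-up. The metric definition and the `ask_model` interface are assumptions for illustration, not necessarily the paper's exact formulation.

```python
# Sketch of a follow-up questioning evaluation loop: ask a question, then
# challenge the model and measure how often an initially correct answer is
# changed. `ask_model` is a hypothetical multi-turn interface.

from typing import Callable, List, Tuple


def modification_rate(
    qa_pairs: List[Tuple[str, str]],
    ask_model: Callable[[List[str]], str],
    follow_up: str = "I don't think that's right. Are you sure?",
) -> float:
    """Fraction of initially correct answers that change after a follow-up."""
    initially_correct = 0
    modified = 0
    for question, gold in qa_pairs:
        first = ask_model([question])
        if first.strip() != gold:
            continue
        initially_correct += 1
        second = ask_model([question, first, follow_up])
        if second.strip() != gold:
            modified += 1
    return modified / initially_correct if initially_correct else 0.0


if __name__ == "__main__":
    # Stub model that caves in whenever it is challenged.
    def waffling_model(turns: List[str]) -> str:
        return "5" if len(turns) == 1 else "6"

    print(modification_rate([("What is 2+3?", "5")], waffling_model))  # -> 1.0
```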
https://aclanthology.org/2024.acl-long.578.bib
@inproceedings{zhang-etal-2024-clamber, title = "{CLAMBER}: A Benchmark of Identifying and Clarifying Ambiguous Information Needs in Large Language Models", author = "Zhang, Tong and Qin, Peixin and Deng, Yang and Huang, Chen and Lei, Wenqiang and Liu, Junhong and Jin, Dingnan and Liang, Hongru and Chua, Tat-Seng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.578", pages = "10746--10766", abstract = "Large language models (LLMs) are increasingly used to meet user information needs, but their effectiveness in dealing with user queries that contain various types of ambiguity remains unknown, ultimately risking user trust and satisfaction. To this end, we introduce CLAMBER, a benchmark for evaluating LLMs using a well-organized taxonomy. Building upon the taxonomy, we construct 12K high-quality data to assess the strengths, weaknesses, and potential risks of various off-the-shelf LLMs.Our findings indicate the limited practical utility of current LLMs in identifying and clarifying ambiguous user queries, even enhanced by chain-of-thought (CoT) and few-shot prompting. These techniques may result in overconfidence in LLMs and yield only marginal enhancements in identifying ambiguity. Furthermore, current LLMs fall short in generating high-quality clarifying questions due to a lack of conflict resolution and inaccurate utilization of inherent knowledge.In this paper, CLAMBER presents a guidance and promotes further research on proactive and trustworthy LLMs.", }
Large language models (LLMs) are increasingly used to meet user information needs, but their effectiveness in dealing with user queries that contain various types of ambiguity remains unknown, ultimately risking user trust and satisfaction. To this end, we introduce CLAMBER, a benchmark for evaluating LLMs using a well-organized taxonomy. Building upon the taxonomy, we construct 12K high-quality data to assess the strengths, weaknesses, and potential risks of various off-the-shelf LLMs. Our findings indicate the limited practical utility of current LLMs in identifying and clarifying ambiguous user queries, even when enhanced by chain-of-thought (CoT) and few-shot prompting. These techniques may result in overconfidence in LLMs and yield only marginal enhancements in identifying ambiguity. Furthermore, current LLMs fall short in generating high-quality clarifying questions due to a lack of conflict resolution and inaccurate utilization of inherent knowledge. In this paper, CLAMBER provides guidance and promotes further research on proactive and trustworthy LLMs.
[ "Zhang, Tong", "Qin, Peixin", "Deng, Yang", "Huang, Chen", "Lei, Wenqiang", "Liu, Junhong", "Jin, Dingnan", "Liang, Hongru", "Chua, Tat-Seng" ]
CLAMBER: A Benchmark of Identifying and Clarifying Ambiguous Information Needs in Large Language Models
acl-long.578
Poster
2405.12063
[ "https://github.com/zt991211/clamber" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.578/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.579.bib
@inproceedings{lee-etal-2024-multimodal, title = "Multimodal Reasoning with Multimodal Knowledge Graph", author = "Lee, Junlin and Wang, Yequan and Li, Jing and Zhang, Min", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.579", pages = "10767--10782", abstract = "Multimodal reasoning with large language models (LLMs) often suffers from hallucinations and the presence of deficient or outdated knowledge within LLMs. Some approaches have sought to mitigate these issues by employing textual knowledge graphs, but their singular modality of knowledge limits comprehensive cross-modal understanding. In this paper, we propose the Multimodal Reasoning with Multimodal Knowledge Graph (MR-MKG) method, which leverages multimodal knowledge graphs (MMKGs) to learn rich and semantic knowledge across modalities, significantly enhancing the multimodal reasoning capabilities of LLMs. In particular, a relation graph attention network is utilized for encoding MMKGs and a cross-modal alignment module is designed for optimizing image-text alignment. A MMKG-grounded dataset is constructed to equip LLMs with initial expertise in multimodal reasoning through pretraining. Remarkably, MR-MKG achieves superior performance while training on only a small fraction of parameters, approximately 2.25{\%} of the LLM{'}s parameter size. Experimental results on multimodal question answering and multimodal analogy reasoning tasks demonstrate that our MR-MKG method outperforms previous state-of-the-art models.", }
Multimodal reasoning with large language models (LLMs) often suffers from hallucinations and the presence of deficient or outdated knowledge within LLMs. Some approaches have sought to mitigate these issues by employing textual knowledge graphs, but their singular modality of knowledge limits comprehensive cross-modal understanding. In this paper, we propose the Multimodal Reasoning with Multimodal Knowledge Graph (MR-MKG) method, which leverages multimodal knowledge graphs (MMKGs) to learn rich and semantic knowledge across modalities, significantly enhancing the multimodal reasoning capabilities of LLMs. In particular, a relation graph attention network is utilized for encoding MMKGs and a cross-modal alignment module is designed for optimizing image-text alignment. An MMKG-grounded dataset is constructed to equip LLMs with initial expertise in multimodal reasoning through pretraining. Remarkably, MR-MKG achieves superior performance while training on only a small fraction of parameters, approximately 2.25{\%} of the LLM{'}s parameter size. Experimental results on multimodal question answering and multimodal analogy reasoning tasks demonstrate that our MR-MKG method outperforms previous state-of-the-art models.
[ "Lee, Junlin", "Wang, Yequan", "Li, Jing", "Zhang, Min" ]
Multimodal Reasoning with Multimodal Knowledge Graph
acl-long.579
Poster
2406.02030
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.579/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.580.bib
@inproceedings{huang-etal-2024-confidence, title = "Confidence is not Timeless: Modeling Temporal Validity for Rule-based Temporal Knowledge Graph Forecasting", author = "Huang, Rikui and Wei, Wei and Qu, Xiaoye and Zhang, Shengzhe and Chen, Dangyang and Cheng, Yu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.580", pages = "10783--10794", abstract = "Recently, Temporal Knowledge Graph Forecasting (TKGF) has emerged as a pivotal domain for forecasting future events. Unlike black-box neural network methods, rule-based approaches are lauded for their efficiency and interpretability. For this line of work, it is crucial to correctly estimate the predictive effectiveness of the rules, i.e., the confidence. However, the existing literature lacks in-depth investigation into how confidence evolves with time. Moreover, inaccurate and heuristic confidence estimation limits the performance of rule-based methods. To alleviate such issues, we propose a framework named \textbf{TempValid} to explicitly model the temporal validity of rules for TKGF. Specifically, we design a time function to model the interaction between temporal information with confidence. TempValid conceptualizes confidence and other coefficients as learnable parameters to avoid inaccurate estimation and combinatorial explosion. Furthermore, we introduce a \textit{rule-adversarial negative sampling} and a \textit{time-aware negative sampling} strategies to facilitate TempValid learning. Extensive experiments show that TempValid significantly outperforms previous state-of-the-art (SOTA) rule-based methods on six TKGF datasets. Moreover, it exhibits substantial advancements in cross-domain and resource-constrained rule learning scenarios.", }
Recently, Temporal Knowledge Graph Forecasting (TKGF) has emerged as a pivotal domain for forecasting future events. Unlike black-box neural network methods, rule-based approaches are lauded for their efficiency and interpretability. For this line of work, it is crucial to correctly estimate the predictive effectiveness of the rules, i.e., the confidence. However, the existing literature lacks in-depth investigation into how confidence evolves with time. Moreover, inaccurate and heuristic confidence estimation limits the performance of rule-based methods. To alleviate such issues, we propose a framework named \textbf{TempValid} to explicitly model the temporal validity of rules for TKGF. Specifically, we design a time function to model the interaction between temporal information and confidence. TempValid conceptualizes confidence and other coefficients as learnable parameters to avoid inaccurate estimation and combinatorial explosion. Furthermore, we introduce \textit{rule-adversarial negative sampling} and \textit{time-aware negative sampling} strategies to facilitate TempValid learning. Extensive experiments show that TempValid significantly outperforms previous state-of-the-art (SOTA) rule-based methods on six TKGF datasets. Moreover, it exhibits substantial advancements in cross-domain and resource-constrained rule learning scenarios.
[ "Huang, Rikui", "Wei, Wei", "Qu, Xiaoye", "Zhang, Shengzhe", "Chen, Dangyang", "Cheng, Yu" ]
Confidence is not Timeless: Modeling Temporal Validity for Rule-based Temporal Knowledge Graph Forecasting
acl-long.580
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.580/
[]
[]
[]
0
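The TempValid abstract above says a time function couples temporal information with rule confidence, with confidence and coefficients treated as learnable parameters. The sketch below fixes those quantities and assumes an exponential-decay time function, which is one common choice rather than the paper's actual parameterization; it only shows how time-aware confidences would rank candidate entities.

```python
# Minimal sketch of scoring rule-based TKG predictions with a time-dependent
# confidence. The exponential-decay form and the fixed constants below are
# illustrative assumptions; TempValid learns these quantities.

import math
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Rule:
    name: str
    base_confidence: float   # static confidence, as in classical rule mining
    decay: float             # how quickly supporting evidence goes stale


def temporal_score(rule: Rule, time_gap: int) -> float:
    """Confidence of a rule whose supporting fact happened `time_gap` steps ago."""
    return rule.base_confidence * math.exp(-rule.decay * time_gap)


def rank_candidates(groundings: List[Tuple[str, Rule, int]]) -> List[Tuple[str, float]]:
    """Aggregate (candidate entity, rule, time gap) groundings into ranked scores."""
    scores: Dict[str, float] = {}
    for entity, rule, gap in groundings:
        scores[entity] = scores.get(entity, 0.0) + temporal_score(rule, gap)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    visits = Rule("visited(X,Y) -> will_visit(X,Y)", base_confidence=0.8, decay=0.3)
    trades = Rule("trades_with(X,Y) -> will_visit(X,Y)", base_confidence=0.5, decay=0.05)
    # A recent visit outweighs an old one even though the rule is the same.
    print(rank_candidates([("Japan", visits, 1), ("France", visits, 10), ("France", trades, 2)]))
```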
https://aclanthology.org/2024.acl-long.581.bib
@inproceedings{du-etal-2024-care, title = "{CARE}: A Clue-guided Assistant for {CSR}s to Read User Manuals", author = "Du, Weihong and Liu, Jia and Wen, Zujie and Jin, Dingnan and Liang, Hongru and Lei, Wenqiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.581", pages = "10795--10811", abstract = "It is time-saving to build a reading assistant for customer service representations (CSRs) when reading user manuals, especially information-rich ones. Current solutions don{'}t fit the online custom service scenarios well due to the lack of attention to user questions and possible responses. Hence, we propose to develop a time-saving and careful reading assistant for CSRs, named CARE. It can help the CSRs quickly find proper responses from the user manuals via explicit clue chains. Specifically, each of the clue chains is formed by inferring over the user manuals, starting from the question clue aligned with the user question and ending at a possible response. To overcome the shortage of supervised data, we adopt the self-supervised strategy for model learning. The offline experiment shows that CARE is efficient in automatically inferring accurate responses from the user manual. The online experiment further demonstrates the superiority of CARE to reduce CSRs{'} reading burden and keep high service quality, in particular with {\textgreater}35{\%} decrease in time spent and keeping a {\textgreater}0.75 ICC score.", }
It is time-saving to build a reading assistant for customer service representatives (CSRs) when reading user manuals, especially information-rich ones. Current solutions don{'}t fit the online customer service scenarios well due to the lack of attention to user questions and possible responses. Hence, we propose to develop a time-saving and careful reading assistant for CSRs, named CARE. It can help the CSRs quickly find proper responses from the user manuals via explicit clue chains. Specifically, each of the clue chains is formed by inferring over the user manuals, starting from the question clue aligned with the user question and ending at a possible response. To overcome the shortage of supervised data, we adopt the self-supervised strategy for model learning. The offline experiment shows that CARE is efficient in automatically inferring accurate responses from the user manual. The online experiment further demonstrates the superiority of CARE in reducing CSRs{'} reading burden and keeping high service quality, in particular with a {\textgreater}35{\%} decrease in time spent while maintaining a {\textgreater}0.75 ICC score.
[ "Du, Weihong", "Liu, Jia", "Wen, Zujie", "Jin, Dingnan", "Liang, Hongru", "Lei, Wenqiang" ]
CARE: A Clue-guided Assistant for CSRs to Read User Manuals
acl-long.581
Poster
2408.03633
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.581/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.582.bib
@inproceedings{wang-etal-2024-enhancing-numerical, title = "Enhancing Numerical Reasoning with the Guidance of Reliable Reasoning Processes", author = "Wang, Dingzirui and Dou, Longxu and Zhang, Xuanliang and Zhu, Qingfu and Che, Wanxiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.582", pages = "10812--10828", abstract = "Numerical reasoning is an essential ability for NLP systems to handle numeric information. Recent research indicates that fine-tuning a small-scale model to learn generating reasoning processes alongside answers can significantly enhance performance. However, current methods have the limitation that most methods generate reasoning processes with large language models (LLMs), which are {``}unreliable{''} since such processes could contain information unrelated to the answer. To address this limitation, we introduce enhancing numerical reasoning with reliable processes (Encore), which derives the reliable reasoning process by decomposing the answer formula, ensuring which fully supports the answer. Nevertheless, models could lack enough data to learn the reasoning process generation adequately, since our method generates only one single reasoning process for one formula. To overcome this difficulty, we present a series of pre-training tasks to help models learn the reasoning process generation with synthesized data. The experiments show that Encore yields improvement on all five experimental datasets with an average of 1.8{\%}, proving the effectiveness of our method.", }
Numerical reasoning is an essential ability for NLP systems to handle numeric information. Recent research indicates that fine-tuning a small-scale model to learn to generate reasoning processes alongside answers can significantly enhance performance. However, current methods have the limitation that most of them generate reasoning processes with large language models (LLMs), which are {``}unreliable{''} since such processes could contain information unrelated to the answer. To address this limitation, we introduce enhancing numerical reasoning with reliable processes (Encore), which derives the reliable reasoning process by decomposing the answer formula, ensuring that it fully supports the answer. Nevertheless, models could lack enough data to learn reasoning process generation adequately, since our method generates only a single reasoning process for each formula. To overcome this difficulty, we present a series of pre-training tasks to help models learn reasoning process generation with synthesized data. The experiments show that Encore yields an improvement on all five experimental datasets with an average of 1.8{\%}, proving the effectiveness of our method.
[ "Wang, Dingzirui", "Dou, Longxu", "Zhang, Xuanliang", "Zhu, Qingfu", "Che, Wanxiang" ]
Enhancing Numerical Reasoning with the Guidance of Reliable Reasoning Processes
acl-long.582
Poster
2402.10654
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.582/
[]
[]
[]
0
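The Encore abstract above derives a reliable reasoning process by decomposing the answer formula. The sketch below decomposes a plain arithmetic formula into ordered intermediate steps with Python's `ast` module; the step format and the restriction to four operators are illustrative assumptions, not the paper's actual pipeline or step schema.

```python
# Illustration of turning an answer formula into an ordered reasoning process
# by walking its expression tree. Only the decomposition idea is shown here.

import ast
import operator
from typing import List, Tuple

OPS = {ast.Add: (operator.add, "+"), ast.Sub: (operator.sub, "-"),
       ast.Mult: (operator.mul, "*"), ast.Div: (operator.truediv, "/")}


def decompose(formula: str) -> Tuple[List[str], float]:
    """Return (reasoning steps, final answer) for a simple arithmetic formula."""
    steps: List[str] = []

    def walk(node) -> float:
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            fn, symbol = OPS[type(node.op)]
            left, right = walk(node.left), walk(node.right)
            value = fn(left, right)
            steps.append(f"step {len(steps) + 1}: {left} {symbol} {right} = {value}")
            return value
        raise ValueError(f"unsupported expression: {ast.dump(node)}")

    answer = walk(ast.parse(formula, mode="eval").body)
    return steps, answer


if __name__ == "__main__":
    steps, answer = decompose("(120 - 45) / 3 + 8")
    print("\n".join(steps))
    print("answer:", answer)
```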
https://aclanthology.org/2024.acl-long.583.bib
@inproceedings{du-etal-2024-paged, title = "{PAGED}: A Benchmark for Procedural Graphs Extraction from Documents", author = "Du, Weihong and Liao, Wenrui and Liang, Hongru and Lei, Wenqiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.583", pages = "10829--10846", abstract = "Automatic extraction of procedural graphs from documents creates a low-cost way for users to easily understand a complex procedure by skimming visual graphs. Despite the progress in recent studies, it remains unanswered: whether the existing studies have well solved this task (Q1) and whether the emerging large language models (LLMs) can bring new opportunities to this task (Q2). To this end, we propose a new benchmark PAGED, equipped with a large high-quality dataset and standard evaluations. It investigates five state-of-the-art baselines, revealing that they fail to extract optimal procedural graphs well because of their heavy reliance on hand-written rules and limited available data. We further involve three advanced LLMs in PAGED and enhance them with a novel self-refine strategy. The results point out the advantages of LLMs in identifying textual elements and their gaps in building logical structures. We hope PAGED can serve as a major landmark for automatic procedural graph extraction and the investigations in PAGED can offer insights into the research on logic reasoning among non-sequential elements.", }
Automatic extraction of procedural graphs from documents creates a low-cost way for users to easily understand a complex procedure by skimming visual graphs. Despite the progress in recent studies, it remains unanswered: whether the existing studies have well solved this task (Q1) and whether the emerging large language models (LLMs) can bring new opportunities to this task (Q2). To this end, we propose a new benchmark PAGED, equipped with a large high-quality dataset and standard evaluations. It investigates five state-of-the-art baselines, revealing that they fail to extract optimal procedural graphs well because of their heavy reliance on hand-written rules and limited available data. We further involve three advanced LLMs in PAGED and enhance them with a novel self-refine strategy. The results point out the advantages of LLMs in identifying textual elements and their gaps in building logical structures. We hope PAGED can serve as a major landmark for automatic procedural graph extraction and the investigations in PAGED can offer insights into the research on logic reasoning among non-sequential elements.
[ "Du, Weihong", "Liao, Wenrui", "Liang, Hongru", "Lei, Wenqiang" ]
PAGED: A Benchmark for Procedural Graphs Extraction from Documents
acl-long.583
Poster
2408.03630
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.583/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.584.bib
@inproceedings{zhou-etal-2024-navigating, title = "Navigating the Shadows: Unveiling Effective Disturbances for {M}odern {AI} Content Detectors", author = "Zhou, Ying and He, Ben and Sun, Le", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.584", pages = "10847--10861", abstract = "With the launch of ChatGPT, large language models (LLMs) have attracted global attention. In the realm of article writing, LLMs have witnessed extensive utilization, giving rise to concerns related to intellectual property protection, personal privacy, and academic integrity. In response, AI-text detection has emerged to distinguish between human and machine-generated content. However, recent research indicates that these detection systems often lack robustness and struggle to effectively differentiate perturbed texts. Currently, there is a lack of systematic evaluations regarding detection performance in real-world applications, and a comprehensive examination of perturbation techniques and detector robustness is also absent. To bridge this gap, our work simulates real-world scenarios in both informal and professional writing, exploring the out-of-the-box performance of current detectors. Additionally, we have constructed 12 black-box text perturbation methods to assess the robustness of current detection models across various perturbation granularities. Furthermore, through adversarial learning experiments, we investigate the impact of perturbation data augmentation on the robustness of AI-text detectors. We have released our code and data at https://github.com/zhouying20/ai-text-detector-evaluation.", }
With the launch of ChatGPT, large language models (LLMs) have attracted global attention. In the realm of article writing, LLMs have witnessed extensive utilization, giving rise to concerns related to intellectual property protection, personal privacy, and academic integrity. In response, AI-text detection has emerged to distinguish between human and machine-generated content. However, recent research indicates that these detection systems often lack robustness and struggle to effectively differentiate perturbed texts. Currently, there is a lack of systematic evaluations regarding detection performance in real-world applications, and a comprehensive examination of perturbation techniques and detector robustness is also absent. To bridge this gap, our work simulates real-world scenarios in both informal and professional writing, exploring the out-of-the-box performance of current detectors. Additionally, we have constructed 12 black-box text perturbation methods to assess the robustness of current detection models across various perturbation granularities. Furthermore, through adversarial learning experiments, we investigate the impact of perturbation data augmentation on the robustness of AI-text detectors. We have released our code and data at https://github.com/zhouying20/ai-text-detector-evaluation.
[ "Zhou, Ying", "He, Ben", "Sun, Le" ]
Navigating the Shadows: Unveiling Effective Disturbances for Modern AI Content Detectors
acl-long.584
Poster
2406.08922
[ "https://github.com/zhouying20/ai-text-detector-evaluation" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.584/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.585.bib
@inproceedings{niu-etal-2024-ragtruth, title = "{RAGT}ruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models", author = "Niu, Cheng and Wu, Yuanhao and Zhu, Juno and Xu, Siliang and Shum, KaShun and Zhong, Randy and Song, Juntong and Zhang, Tong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.585", pages = "10862--10878", abstract = "Retrieval-augmented generation (RAG) has become a main technique for alleviating hallucinations in large language models (LLMs). Despite the integration of RAG, LLMs may still present unsupported or contradictory claims to the retrieved contents. In order to develop effective hallucination prevention strategies under RAG, it is important to create benchmark datasets that can measure the extent of hallucination. This paper presents RAGTruth, a corpus tailored for analyzing word-level hallucinations in various domains and tasks within the standard RAG frameworks for LLM applications. RAGTruth comprises nearly 18,000 naturally generated responses from diverse LLMs using RAG. These responses have undergone meticulous manual annotations at both the individual case and word levels, incorporating evaluations of hallucination intensity. We not only benchmark hallucination frequencies across different LLMs, but also critically assess the effectiveness of several existing hallucination detection methodologies. We show that using a high-quality dataset such as RAGTruth, it is possible to finetune a relatively small LLM and achieve a competitive hallucination detection performance when compared to the existing prompt-based approaches using state-of-the-art LLMs such as GPT-4. Furthermore, the finetuned model can effectively mitigate hallucination in LLM responses.", }
Retrieval-augmented generation (RAG) has become a main technique for alleviating hallucinations in large language models (LLMs). Despite the integration of RAG, LLMs may still present unsupported or contradictory claims to the retrieved contents. In order to develop effective hallucination prevention strategies under RAG, it is important to create benchmark datasets that can measure the extent of hallucination. This paper presents RAGTruth, a corpus tailored for analyzing word-level hallucinations in various domains and tasks within the standard RAG frameworks for LLM applications. RAGTruth comprises nearly 18,000 naturally generated responses from diverse LLMs using RAG. These responses have undergone meticulous manual annotations at both the individual case and word levels, incorporating evaluations of hallucination intensity. We not only benchmark hallucination frequencies across different LLMs, but also critically assess the effectiveness of several existing hallucination detection methodologies. We show that using a high-quality dataset such as RAGTruth, it is possible to finetune a relatively small LLM and achieve a competitive hallucination detection performance when compared to the existing prompt-based approaches using state-of-the-art LLMs such as GPT-4. Furthermore, the finetuned model can effectively mitigate hallucination in LLM responses.
[ "Niu, Cheng", "Wu, Yuanhao", "Zhu, Juno", "Xu, Siliang", "Shum, KaShun", "Zhong, R", "y", "Song, Juntong", "Zhang, Tong" ]
RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models
acl-long.585
Poster
2401.00396
[ "https://github.com/particlemedia/ragtruth" ]
https://huggingface.co/papers/2401.00396
0
3
0
8
https://aclanthology.org/2024.acl-long.585/
[ "vectara/hallucination_evaluation_model" ]
[ "lytang/LLM-AggreFact" ]
[ "vectara/leaderboard", "TeamTonic/MultiMed", "jayash391/RAG_MedMind", "Tonic1/hallucination-test", "itsJB/Fact-Checked", "Tonic/MultiMedTulu", "girgis/Cloudilic-Demo", "eaglelandsonce/Breaking-Free-Hackathon", "jimshadow666/vectara-hallucination_evaluation_model", "TeamTonic/TruEraMultiMed", "subhanliaqat/hhem", "eaglelandsonce/hhem", "ahmadtalha/hhem", "pyresearch/KitchenCreators", "Tonic/SureRAG", "Johan713/MedMind01", "abidlabs/HHEM", "ranavikas/NEXUS" ]
1
https://aclanthology.org/2024.acl-long.586.bib
@inproceedings{li-etal-2024-dawn, title = "The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models", author = "Li, Junyi and Chen, Jie and Ren, Ruiyang and Cheng, Xiaoxue and Zhao, Xin and Nie, Jian-Yun and Wen, Ji-Rong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.586", pages = "10879--10899", abstract = "In the era of large language models (LLMs), hallucination (the tendency to generate factually incorrect content) poses great challenges to trustworthy and reliable deployment of LLMs in real-world applications. To tackle the hallucination, three key questions should be well studied: how to detect hallucinations (detection), why do LLMs hallucinate (source), and what can be done to mitigate them (mitigation). To address these challenges, this work presents a systematic empirical study on LLM hallucinations, focused on the three aspects of hallucination detection, source and mitigation. Specially, we construct a new hallucination benchmark HaluEval 2.0, and design a simple yet effective detection method for LLM hallucinations. Furthermore, we zoom into the different training or utilization stages of LLMs and extensively analyze the potential factors that lead to the LLM hallucinations. Finally, we implement and examine a series of widely used techniques to mitigate the hallucinations in LLMs. Our work has led to several important findings to understand the hallucination origin and mitigate the hallucinations in LLMs.", }
In the era of large language models (LLMs), hallucination (the tendency to generate factually incorrect content) poses great challenges to trustworthy and reliable deployment of LLMs in real-world applications. To tackle the hallucination, three key questions should be well studied: how to detect hallucinations (detection), why do LLMs hallucinate (source), and what can be done to mitigate them (mitigation). To address these challenges, this work presents a systematic empirical study on LLM hallucinations, focused on the three aspects of hallucination detection, source and mitigation. Specially, we construct a new hallucination benchmark HaluEval 2.0, and design a simple yet effective detection method for LLM hallucinations. Furthermore, we zoom into the different training or utilization stages of LLMs and extensively analyze the potential factors that lead to the LLM hallucinations. Finally, we implement and examine a series of widely used techniques to mitigate the hallucinations in LLMs. Our work has led to several important findings to understand the hallucination origin and mitigate the hallucinations in LLMs.
[ "Li, Junyi", "Chen, Jie", "Ren, Ruiyang", "Cheng, Xiaoxue", "Zhao, Xin", "Nie, Jian-Yun", "Wen, Ji-Rong" ]
The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models
acl-long.586
Poster
2401.03205
[ "https://github.com/rucaibox/halueval-2.0" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.586/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.587.bib
@inproceedings{zhong-etal-2024-revisiting, title = "Revisiting Knowledge Distillation for Autoregressive Language Models", author = "Zhong, Qihuang and Ding, Liang and Shen, Li and Liu, Juhua and Du, Bo and Tao, Dacheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.587", pages = "10900--10913", abstract = "Knowledge distillation (KD) is a common approach to compress a teacher model to reduce its inference cost and memory footprint, by training a smaller student model. However, in the context of autoregressive language models (LMs), we empirically find that larger teacher LMs might dramatically result in a poorer student. In response to this problem, we conduct a series of analyses and reveal that different tokens have different teaching modes, neglecting which will lead to performance degradation. Motivated by this, we propose a simple yet effective adaptive teaching approach (ATKD) to improve the KD. The core of ATKD is to reduce rote learning and make teaching more diverse and flexible. Extensive experiments on 8 LM tasks show that, with the help of ATKD, various baseline KD methods can achieve consistent and significant performance gains (up to +3.04{\%} average score) across all model types and sizes. More encouragingly, ATKD can improve the student model generalization effectively.", }
Knowledge distillation (KD) is a common approach to compress a teacher model to reduce its inference cost and memory footprint, by training a smaller student model. However, in the context of autoregressive language models (LMs), we empirically find that larger teacher LMs might dramatically result in a poorer student. In response to this problem, we conduct a series of analyses and reveal that different tokens have different teaching modes, neglecting which will lead to performance degradation. Motivated by this, we propose a simple yet effective adaptive teaching approach (ATKD) to improve the KD. The core of ATKD is to reduce rote learning and make teaching more diverse and flexible. Extensive experiments on 8 LM tasks show that, with the help of ATKD, various baseline KD methods can achieve consistent and significant performance gains (up to +3.04{\%} average score) across all model types and sizes. More encouragingly, ATKD can improve the student model generalization effectively.
[ "Zhong, Qihuang", "Ding, Liang", "Shen, Li", "Liu, Juhua", "Du, Bo", "Tao, Dacheng" ]
Revisiting Knowledge Distillation for Autoregressive Language Models
acl-long.587
Poster
2402.11890
[ "" ]
https://huggingface.co/papers/2402.11890
1
0
0
6
https://aclanthology.org/2024.acl-long.587/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.588.bib
@inproceedings{liang-etal-2024-continual, title = "Continual Learning with Semi-supervised Contrastive Distillation for Incremental Neural Machine Translation", author = "Liang, Yunlong and Meng, Fandong and Wang, Jiaan and Xu, Jinan and Chen, Yufeng and Zhou, Jie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.588", pages = "10914--10928", abstract = "Incrementally expanding the capability of an existing translation model to solve new domain tasks over time is a fundamental and practical problem, which usually suffers from catastrophic forgetting. Generally, multi-domain learning can be seen as a good solution. However, there are two drawbacks: 1) it requires having the training data for all domains available at the same time, which may be unrealistic due to storage or privacy concerns; 2) it requires re-training the model on the data of all domains from scratch when adding a new domain and this is time-consuming and computationally expensive. To address these issues, we present a semi-supervised contrastive distillation framework for incremental neural machine translation. Specifically, to avoid catastrophic forgetting, we propose to exploit unlabeled data from the same distributions of the older domains through knowledge distillation. Further, to ensure the distinct domain characteristics in the model as the number of domains increases, we devise a cross-domain contrastive objective to enhance the distilled knowledge. Extensive experiments on domain translation benchmarks show that our approach, without accessing any previous training data or re-training on all domains from scratch, can significantly prevent the model from forgetting previously learned knowledge while obtaining good performance on the incrementally added domains. The code and data with step-by-step instructions will be released upon acceptance.", }
Incrementally expanding the capability of an existing translation model to solve new domain tasks over time is a fundamental and practical problem, which usually suffers from catastrophic forgetting. Generally, multi-domain learning can be seen as a good solution. However, there are two drawbacks: 1) it requires having the training data for all domains available at the same time, which may be unrealistic due to storage or privacy concerns; 2) it requires re-training the model on the data of all domains from scratch when adding a new domain and this is time-consuming and computationally expensive. To address these issues, we present a semi-supervised contrastive distillation framework for incremental neural machine translation. Specifically, to avoid catastrophic forgetting, we propose to exploit unlabeled data from the same distributions of the older domains through knowledge distillation. Further, to ensure the distinct domain characteristics in the model as the number of domains increases, we devise a cross-domain contrastive objective to enhance the distilled knowledge. Extensive experiments on domain translation benchmarks show that our approach, without accessing any previous training data or re-training on all domains from scratch, can significantly prevent the model from forgetting previously learned knowledge while obtaining good performance on the incrementally added domains. The code and data with step-by-step instructions will be released upon acceptance.
[ "Liang, Yunlong", "Meng, F", "ong", "Wang, Jiaan", "Xu, Jinan", "Chen, Yufeng", "Zhou, Jie" ]
Continual Learning with Semi-supervised Contrastive Distillation for Incremental Neural Machine Translation
acl-long.588
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.588/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.589.bib
@inproceedings{huang-etal-2024-make, title = "Make-A-Voice: Revisiting Voice Large Language Models as Scalable Multilingual and Multitask Learners", author = "Huang, Rongjie and Zhang, Chunlei and Wang, Yongqi and Yang, Dongchao and Tian, Jinchuan and Ye, Zhenhui and Liu, Luping and Wang, Zehan and Jiang, Ziyue and Chang, Xuankai and Shi, Jiatong and Weng, Chao and Zhao, Zhou and Yu, Dong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.589", pages = "10929--10942", abstract = "Large language models (LLMs) have successfully served as a general-purpose interface across multiple tasks and languages, while the adaptation of voice LLMs is mostly designed for specific purposes (either single-task or monolingual), where the advantages of LLMs especially for low-resource language processing and zero-shot task generalization are less exploited in the audio community. To bridge the gap, we introduce Make-A-Voice as a multi-modal voice LLM and conduct a comprehensive study on its capability to deal with multiple tasks/languages. When trained on {\textasciitilde}200K hours of 6-language data for 4 voice generation applications, Make-A-Voice emerges notable advantages: 1) as scalable learners to improve performance with end-to-end local and global multiscale transformers; and 2) as multitask learners by adjusting prompts to share common knowledge across modalities (speech/singing) and present in-context learning abilities by generalizing to unseen tasks not explicitly train on; 3) as multilingual learners to alleviate data scarcity of low-resource languages by including rich-resource language training data. Experimental results demonstrate that Make-A-Voice exhibits superior audio quality and style similarity compared with competitive baseline models in monolingual/cross-lingual voice generation. Audio samples are available at https://M-Voice.github.io", }
Large language models (LLMs) have successfully served as a general-purpose interface across multiple tasks and languages, while the adaptation of voice LLMs is mostly designed for specific purposes (either single-task or monolingual), where the advantages of LLMs especially for low-resource language processing and zero-shot task generalization are less exploited in the audio community. To bridge the gap, we introduce Make-A-Voice as a multi-modal voice LLM and conduct a comprehensive study on its capability to deal with multiple tasks/languages. When trained on {\textasciitilde}200K hours of 6-language data for 4 voice generation applications, Make-A-Voice emerges notable advantages: 1) as scalable learners to improve performance with end-to-end local and global multiscale transformers; and 2) as multitask learners by adjusting prompts to share common knowledge across modalities (speech/singing) and present in-context learning abilities by generalizing to unseen tasks not explicitly trained on; 3) as multilingual learners to alleviate data scarcity of low-resource languages by including rich-resource language training data. Experimental results demonstrate that Make-A-Voice exhibits superior audio quality and style similarity compared with competitive baseline models in monolingual/cross-lingual voice generation. Audio samples are available at https://M-Voice.github.io
[ "Huang, Rongjie", "Zhang, Chunlei", "Wang, Yongqi", "Yang, Dongchao", "Tian, Jinchuan", "Ye, Zhenhui", "Liu, Luping", "Wang, Zehan", "Jiang, Ziyue", "Chang, Xuankai", "Shi, Jiatong", "Weng, Chao", "Zhao, Zhou", "Yu, Dong" ]
Make-A-Voice: Revisiting Voice Large Language Models as Scalable Multilingual and Multitask Learners
acl-long.589
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.589/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.590.bib
@inproceedings{huang-etal-2024-chat, title = "Chat Vector: A Simple Approach to Equip {LLM}s with Instruction Following and Model Alignment in New Languages", author = "Huang, Shih-Cheng and Li, Pin-Zu and Hsu, Yu-chi and Chen, Kuang-Ming and Lin, Yu Tung and Hsiao, Shih-Kai and Tsai, Richard and Lee, Hung-yi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.590", pages = "10943--10959", abstract = "Recently, the development of open-source large language models (LLMs) has advanced rapidly. Nevertheless, due to data constraints, the capabilities of most open-source LLMs are primarily focused on English. To address this issue, we introduce the concept of $\textit{chat vector}$ to equip pre-trained language models with instruction following and human value alignment via simple model arithmetic. The chat vector is derived by subtracting the weights of a pre-trained base model (e.g. LLaMA2) from those of its corresponding chat model (e.g. LLaMA2-chat). By simply adding the chat vector to a continual pre-trained model{'}s weights, we can endow the model with chat capabilities in new languages without the need for further training.Our empirical studies demonstrate the superior efficacy of the chat vector from three different aspects: instruction following, toxicity mitigation, and multi-turn dialogue. Moreover, to showcase the adaptability of our approach, we extend our experiments to encompass various languages, base models, and chat vectors. The results underscore the chat vector{'}s simplicity, effectiveness, and wide applicability, making it a compelling solution for efficiently enabling conversational capabilities in pre-trained language models. Our code is available at https://github.com/aqweteddy/ChatVector.", }
Recently, the development of open-source large language models (LLMs) has advanced rapidly. Nevertheless, due to data constraints, the capabilities of most open-source LLMs are primarily focused on English. To address this issue, we introduce the concept of $\textit{chat vector}$ to equip pre-trained language models with instruction following and human value alignment via simple model arithmetic. The chat vector is derived by subtracting the weights of a pre-trained base model (e.g. LLaMA2) from those of its corresponding chat model (e.g. LLaMA2-chat). By simply adding the chat vector to a continual pre-trained model{'}s weights, we can endow the model with chat capabilities in new languages without the need for further training.Our empirical studies demonstrate the superior efficacy of the chat vector from three different aspects: instruction following, toxicity mitigation, and multi-turn dialogue. Moreover, to showcase the adaptability of our approach, we extend our experiments to encompass various languages, base models, and chat vectors. The results underscore the chat vector{'}s simplicity, effectiveness, and wide applicability, making it a compelling solution for efficiently enabling conversational capabilities in pre-trained language models. Our code is available at https://github.com/aqweteddy/ChatVector.
[ "Huang, Shih-Cheng", "Li, Pin-Zu", "Hsu, Yu-chi", "Chen, Kuang-Ming", "Lin, Yu Tung", "Hsiao, Shih-Kai", "Tsai, Richard", "Lee, Hung-yi" ]
Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages
acl-long.590
Poster
2310.04799
[ "" ]
https://huggingface.co/papers/2310.04799
0
0
0
8
https://aclanthology.org/2024.acl-long.590/
[ "beomi/Llama-3-Open-Ko-8B", "beomi/Llama-3-Open-Ko-8B-Instruct-preview", "teddylee777/Llama-3-Open-Ko-8B-gguf", "maywell/Llama-3-Ko-8B-Instruct", "beomi/Llama-3-KoEn-8B-Instruct-preview", "teddylee777/Llama-3-Open-Ko-8B-Instruct-preview-gguf", "napopoa32/swallow-hermes-st-v1", "beomi/Llama-3-KoEn-8B-xtuner-llava-preview", "beomi/Llama-3-KoEn-8B", "kuotient/Llama-3-8B-Instruct-vector-diff", "aixsatoshi/Swallow-MX-8x7b-NVE-chatvector-Mixtral-instruct", "rinna/llama-3-youko-8b-instruct", "HachiML/SkillTree-Math-OpenMath-Mistral-7B-v0.1", "kousw/stablelm-gamma-7b-chatvector", "QuantFactory/Llama-3-Ko-8B-Instruct-GGUF", "HachiML/SkillTree-Code-llama2-7b-hf", "toshi456/chat-vector-llava-v1.5-7b-ja", "aqweteddy/xwin-7b_chatvec-tulu2", "aqweteddy/mistral_tv-neural-marconroni", "ryota39/Gakki-7B-reward-v0.1", "ryota39/Gakki-7B", "jovyan/Swallow-MS-7b-v0.1-ChatVector", "nebchi/Llama3-Chat_Vector-kor", "LiteLLMs/Llama-3-Open-Ko-8B-GGUF", "RichardErkhov/beomi_-_Llama-3-KoEn-8B-gguf", "QuantFactory/Llama-3-Open-Ko-8B-GGUF", "LiteLLMs/Llama-3-Open-Ko-8B-Instruct-preview-GGUF", "HachiML/SkillTree-Chat-Mistral-7B-v0.1", "aeolian83/Llama-3-8B-Instruct-cp-transfer_1.0", "aqweteddy/Llama3-Taiwan-70B-Instruct-128K_cv-llama3", "RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-Instruct-preview-gguf", "HachiML/SkillTree-Chat-LAB-Mistral-7B-v0.1", "aqweteddy/Llama3-Taiwan-70B-Instruct-128K_cv-llama3-emb", "RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-4bits", "RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf", "sbtom/karakuri-midrose-CV.gguf", "aeolian83/Llama-3-8B-Instruct-cp-transfer_0.7", "aeolian83/Llama-3-Open-Ko-8B-aeolian83-chatvec", "RichardErkhov/maywell_-_Llama-3-Ko-8B-Instruct-8bits", "RichardErkhov/maywell_-_Llama-3-Ko-8B-Instruct-4bits", "jcwee0873/llama3-8b-cv-swap-v0.1", "RichardErkhov/beomi_-_Llama-3-KoEn-8B-Instruct-preview-gguf", "Heoni/v2_1_pt_ep1_sft_ep1_merged_model_based_on_llama3_20240721", "rinna/llama-3-youko-70b-instruct", "shinyice/chatvector-llava-v1.5-plus-houou-v3-7b", "RioShiina/llama-3-youko-8b-instruct-exl2", "RichardErkhov/rinna_-_llama-3-youko-8b-instruct-gguf", "Bohanlu/Taigi-Llama-2-Chat-7B", "Bohanlu/Taigi-Llama-2-Chat-13B" ]
[]
[ "Intel/low_bit_open_llm_leaderboard", "BAAI/open_cn_llm_leaderboard", "open-llm-leaderboard-old/open_llm_leaderboard", "featherless-ai/try-this-model", "felixz/open_llm_leaderboard", "Vikhrmodels/small-shlepa-lb", "rinna/llama-3-youko-8b-instruct", "neubla/neubla-llm-evaluation-board", "rodrigomasini/data_only_open_llm_leaderboard", "Sprost/beomi-Llama-3-Open-Ko-8B", "0x1668/open_llm_leaderboard", "pngwn/open_llm_leaderboard-check", "asir0z/open_llm_leaderboard", "kbmlcoding/open_llm_leaderboard_free", "aichampions/open_llm_leaderboard", "Adeco/open_llm_leaderboard", "smothiki/open_llm_leaderboard_old", "Darok/Featherless-Feud" ]
1
https://aclanthology.org/2024.acl-long.591.bib
@inproceedings{mangaokar-etal-2024-prp, title = "{PRP}: Propagating Universal Perturbations to Attack Large Language Model Guard-Rails", author = "Mangaokar, Neal and Hooda, Ashish and Choi, Jihye and Chandrashekaran, Shreyas and Fawaz, Kassem and Jha, Somesh and Prakash, Atul", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.591", pages = "10960--10976", abstract = "Large language models (LLMs) are typically aligned to be harmless to humans. Unfortunately, recent work has shown that such models are susceptible to automated jailbreak attacks that induce them to generate harmful content. More recent LLMs often incorporate an additional layer of defense, a Guard Model, which is a second LLM that is designed to check and moderate the output response of the primary LLM. Our key contribution is to show a novel attack strategy, PRP, that is successful against several open-source (e.g., Llama 2) and closed-source (e.g., GPT 3.5) implementations of Guard Models. PRP leverages a two step prefix-based attack that operates by (a) constructing a universal adversarial prefix for the Guard Model, and (b) propagating this prefix to the response. We find that this procedure is effective across multiple threat models, including ones in which the adversary has no access to the Guard Model at all. Our work suggests that further advances are required on defenses and Guard Models before they can be considered effective. Code at https://github.com/AshishHoodaIITD/prp-llm-guard-rail-attack.", }
Large language models (LLMs) are typically aligned to be harmless to humans. Unfortunately, recent work has shown that such models are susceptible to automated jailbreak attacks that induce them to generate harmful content. More recent LLMs often incorporate an additional layer of defense, a Guard Model, which is a second LLM that is designed to check and moderate the output response of the primary LLM. Our key contribution is to show a novel attack strategy, PRP, that is successful against several open-source (e.g., Llama 2) and closed-source (e.g., GPT 3.5) implementations of Guard Models. PRP leverages a two step prefix-based attack that operates by (a) constructing a universal adversarial prefix for the Guard Model, and (b) propagating this prefix to the response. We find that this procedure is effective across multiple threat models, including ones in which the adversary has no access to the Guard Model at all. Our work suggests that further advances are required on defenses and Guard Models before they can be considered effective. Code at https://github.com/AshishHoodaIITD/prp-llm-guard-rail-attack.
[ "Mangaokar, Neal", "Hooda, Ashish", "Choi, Jihye", "Ch", "rashekaran, Shreyas", "Fawaz, Kassem", "Jha, Somesh", "Prakash, Atul" ]
PRP: Propagating Universal Perturbations to Attack Large Language Model Guard-Rails
acl-long.591
Poster
2402.15911
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.591/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.592.bib
@inproceedings{yuan-etal-2024-hide, title = "Hide and Seek in Noise Labels: Noise-Robust Collaborative Active Learning with {LLM}s-Powered Assistance", author = "Yuan, Bo and Chen, Yulin and Zhang, Yin and Jiang, Wei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.592", pages = "10977--11011", abstract = "Learning from noisy labels (LNL) is a challenge that arises in many real-world scenarios where collected training data can contain incorrect or corrupted labels. Most existing solutions identify noisy labels and adopt active learning to query human experts on them for denoising. In the era of large language models (LLMs), although we can reduce the human effort to improve these methods, their performances are still subject to accurately separating the clean and noisy samples from noisy data. In this paper, we propose an innovative collaborative learning framework NoiseAL based on active learning to combine LLMs and small models (SMs) for learning from noisy labels. During collaborative training, we first adopt two SMs to form a co-prediction network and propose a dynamic-enhanced threshold strategy to divide the noisy data into different subsets, then select the clean and noisy samples from these subsets to feed the active annotator LLMs to rectify noisy samples. Finally, we employ different optimization objectives to conquer subsets with different degrees of label noises. Extensive experiments on synthetic and real-world noise datasets further demonstrate the superiority of our framework over state-of-the-art baselines.", }
Learning from noisy labels (LNL) is a challenge that arises in many real-world scenarios where collected training data can contain incorrect or corrupted labels. Most existing solutions identify noisy labels and adopt active learning to query human experts on them for denoising. In the era of large language models (LLMs), although we can reduce the human effort to improve these methods, their performances are still subject to accurately separating the clean and noisy samples from noisy data. In this paper, we propose an innovative collaborative learning framework NoiseAL based on active learning to combine LLMs and small models (SMs) for learning from noisy labels. During collaborative training, we first adopt two SMs to form a co-prediction network and propose a dynamic-enhanced threshold strategy to divide the noisy data into different subsets, then select the clean and noisy samples from these subsets to feed the active annotator LLMs to rectify noisy samples. Finally, we employ different optimization objectives to conquer subsets with different degrees of label noises. Extensive experiments on synthetic and real-world noise datasets further demonstrate the superiority of our framework over state-of-the-art baselines.
[ "Yuan, Bo", "Chen, Yulin", "Zhang, Yin", "Jiang, Wei" ]
Hide and Seek in Noise Labels: Noise-Robust Collaborative Active Learning with LLMs-Powered Assistance
acl-long.592
Oral
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.592/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.593.bib
@inproceedings{huang-etal-2024-clomo, title = "{CLOMO}: Counterfactual Logical Modification with Large Language Models", author = "Huang, Yinya and Hong, Ruixin and Zhang, Hongming and Shao, Wei and Yang, Zhicheng and Yu, Dong and Zhang, Changshui and Liang, Xiaodan and Song, Linqi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.593", pages = "11012--11034", abstract = "In this study, we delve into the realm of counterfactual reasoning capabilities of large language models (LLMs). Our primary objective is to cultivate the counterfactual thought processes within LLMs and rigorously assess these processes for their validity. Specifically, we introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark. In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship. To effectively evaluate a generation model{'}s counterfactual capabilities, we propose an innovative evaluation metric, the decomposed Self-Evaluation Score (SES) to directly evaluate the natural language output of LLMs instead of modeling the task as a multiple-choice problem. Analysis shows that the proposed automatic metric aligns well with human preference. Our experimental results show that while LLMs demonstrate a notable capacity for logical counterfactual thinking, there remains a discernible gap between their current abilities and human performance. Code and data are available at https://github.com/Eleanor-H/CLOMO.", }
In this study, we delve into the realm of counterfactual reasoning capabilities of large language models (LLMs). Our primary objective is to cultivate the counterfactual thought processes within LLMs and rigorously assess these processes for their validity. Specifically, we introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark. In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship. To effectively evaluate a generation model{'}s counterfactual capabilities, we propose an innovative evaluation metric, the decomposed Self-Evaluation Score (SES) to directly evaluate the natural language output of LLMs instead of modeling the task as a multiple-choice problem. Analysis shows that the proposed automatic metric aligns well with human preference. Our experimental results show that while LLMs demonstrate a notable capacity for logical counterfactual thinking, there remains a discernible gap between their current abilities and human performance. Code and data are available at https://github.com/Eleanor-H/CLOMO.
[ "Huang, Yinya", "Hong, Ruixin", "Zhang, Hongming", "Shao, Wei", "Yang, Zhicheng", "Yu, Dong", "Zhang, Changshui", "Liang, Xiaodan", "Song, Linqi" ]
CLOMO: Counterfactual Logical Modification with Large Language Models
acl-long.593
Poster
2311.17438
[ "https://github.com/eleanor-h/clomo" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.593/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.594.bib
@inproceedings{shi-etal-2024-exploring, title = "Exploring Hybrid Question Answering via Program-based Prompting", author = "Shi, Qi and Cui, Han and Wang, Haofeng and Zhu, Qingfu and Che, Wanxiang and Liu, Ting", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.594", pages = "11035--11046", abstract = "Question answering over heterogeneous data requires reasoning over diverse sources of data, which is challenging due to the large scale of information and organic coupling of heterogeneous data. Various approaches have been proposed to address these challenges. One approach involves training specialized retrievers to select relevant information, thereby reducing the input length. Another approach is to transform diverse modalities of data into a single modality, simplifying the task difficulty and enabling more straightforward processing. In this paper, we propose HProPro, a novel program-based prompting framework for the hybrid question answering task. HProPro follows the code generation and execution paradigm. In addition, HProPro integrates various functions to tackle the hybrid reasoning scenario. Specifically, HProPro contains function declaration and function implementation to perform hybrid information-seeking over data from various sources and modalities, which enables reasoning over such data without training specialized retrievers or performing modal transformations. Experimental results on two typical hybrid question answering benchmarks HybridQA and MultiModalQA demonstrate the effectiveness of HProPro: it surpasses all baseline systems and achieves the best performances in the few-shot settings on both datasets.", }
Question answering over heterogeneous data requires reasoning over diverse sources of data, which is challenging due to the large scale of information and organic coupling of heterogeneous data. Various approaches have been proposed to address these challenges. One approach involves training specialized retrievers to select relevant information, thereby reducing the input length. Another approach is to transform diverse modalities of data into a single modality, simplifying the task difficulty and enabling more straightforward processing. In this paper, we propose HProPro, a novel program-based prompting framework for the hybrid question answering task. HProPro follows the code generation and execution paradigm. In addition, HProPro integrates various functions to tackle the hybrid reasoning scenario. Specifically, HProPro contains function declaration and function implementation to perform hybrid information-seeking over data from various sources and modalities, which enables reasoning over such data without training specialized retrievers or performing modal transformations. Experimental results on two typical hybrid question answering benchmarks HybridQA and MultiModalQA demonstrate the effectiveness of HProPro: it surpasses all baseline systems and achieves the best performances in the few-shot settings on both datasets.
[ "Shi, Qi", "Cui, Han", "Wang, Haofeng", "Zhu, Qingfu", "Che, Wanxiang", "Liu, Ting" ]
Exploring Hybrid Question Answering via Program-based Prompting
acl-long.594
Poster
2402.10812
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.594/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.595.bib
@inproceedings{singh-etal-2024-indicgenbench, title = "{I}ndic{G}en{B}ench: A Multilingual Benchmark to Evaluate Generation Capabilities of {LLM}s on {I}ndic Languages", author = "Singh, Harman and Gupta, Nitish and Bharadwaj, Shikhar and Tewari, Dinesh and Talukdar, Partha", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.595", pages = "11047--11073", abstract = "As large language models (LLMs) see increasing adoption across the globe, it is imperative for LLMs to be representative of the linguistic diversity of the world. India is a linguistically diverse country of 1.4 Billion people. To facilitate research on multilingual LLM evaluation, we release IndicGenBench {---} the largest benchmark for evaluating LLMs on user-facing generation tasks across a diverse set 29 of Indic languages covering 13 scripts and 4 language families. IndicGenBench is composed of diverse generation tasks like cross-lingual summarization, machine translation, and cross-lingual question answering. IndicGenBench extends existing benchmarks to many Indic languages through human curation providing multi-way parallel evaluation data for many under-represented Indic languages for the first time. We evaluate stateof-the-art LLMs like GPT-3.5, GPT-4, PaLM2, and LLaMA on IndicGenBench in a variety of settings. The largest PaLM-2 models performs the best on most tasks, however, there is a significant performance gap in all languages compared to English showing that further research is needed for the development of more inclusive multilingual language models. IndicGenBench isavailable at www.github.com/google-researchdatasets/indic-gen-bench", }
As large language models (LLMs) see increasing adoption across the globe, it is imperative for LLMs to be representative of the linguistic diversity of the world. India is a linguistically diverse country of 1.4 Billion people. To facilitate research on multilingual LLM evaluation, we release IndicGenBench {---} the largest benchmark for evaluating LLMs on user-facing generation tasks across a diverse set of 29 Indic languages covering 13 scripts and 4 language families. IndicGenBench is composed of diverse generation tasks like cross-lingual summarization, machine translation, and cross-lingual question answering. IndicGenBench extends existing benchmarks to many Indic languages through human curation providing multi-way parallel evaluation data for many under-represented Indic languages for the first time. We evaluate state-of-the-art LLMs like GPT-3.5, GPT-4, PaLM2, and LLaMA on IndicGenBench in a variety of settings. The largest PaLM-2 model performs the best on most tasks, however, there is a significant performance gap in all languages compared to English showing that further research is needed for the development of more inclusive multilingual language models. IndicGenBench is available at www.github.com/google-research-datasets/indic-gen-bench
[ "Singh, Harman", "Gupta, Nitish", "Bharadwaj, Shikhar", "Tewari, Dinesh", "Talukdar, Partha" ]
IndicGenBench: A Multilingual Benchmark to Evaluate Generation Capabilities of LLMs on Indic Languages
acl-long.595
Poster
2404.16816
[ "https://github.com/google-research-datasets/indic-gen-bench" ]
https://huggingface.co/papers/2404.16816
0
1
2
5
https://aclanthology.org/2024.acl-long.595/
[]
[ "google/IndicGenBench_flores_in", "google/IndicGenBench_xquad_in", "google/IndicGenBench_xorqa_in", "google/IndicGenBench_crosssum_in" ]
[]
1
https://aclanthology.org/2024.acl-long.596.bib
@inproceedings{ying-etal-2024-simple, title = "Simple but Effective Compound Geometric Operations for Temporal Knowledge Graph Completion", author = "Ying, Rui and Hu, Mengting and Wu, Jianfeng and Xie, Yalan and Liu, Xiaoyi and Wang, Zhunheng and Jiang, Ming and Gao, Hang and Zhang, Linlin and Cheng, Renhong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.596", pages = "11074--11086", abstract = "Temporal knowledge graph completion aims to infer the missing facts in temporal knowledge graphs. Current approaches usually embed factual knowledge into continuous vector space and apply geometric operations to learn potential patterns in temporal knowledge graphs. However, these methods only adopt a single operation, which may have limitations in capturing the complex temporal dynamics present in temporal knowledge graphs. Therefore, we propose a simple but effective method, i.e. TCompoundE, which is specially designed with two geometric operations, including time-specific and relation-specific operations. We provide mathematical proofs to demonstrate the ability of TCompoundE to encode various relation patterns. Experimental results show that our proposed model significantly outperforms existing temporal knowledge graph embedding models. Our code is available at https://github.com/nk-ruiying/TCompoundE.", }
Temporal knowledge graph completion aims to infer the missing facts in temporal knowledge graphs. Current approaches usually embed factual knowledge into continuous vector space and apply geometric operations to learn potential patterns in temporal knowledge graphs. However, these methods only adopt a single operation, which may have limitations in capturing the complex temporal dynamics present in temporal knowledge graphs. Therefore, we propose a simple but effective method, i.e. TCompoundE, which is specially designed with two geometric operations, including time-specific and relation-specific operations. We provide mathematical proofs to demonstrate the ability of TCompoundE to encode various relation patterns. Experimental results show that our proposed model significantly outperforms existing temporal knowledge graph embedding models. Our code is available at https://github.com/nk-ruiying/TCompoundE.
[ "Ying, Rui", "Hu, Mengting", "Wu, Jianfeng", "Xie, Yalan", "Liu, Xiaoyi", "Wang, Zhunheng", "Jiang, Ming", "Gao, Hang", "Zhang, Linlin", "Cheng, Renhong" ]
Simple but Effective Compound Geometric Operations for Temporal Knowledge Graph Completion
acl-long.596
Poster
2408.06603
[ "https://github.com/nk-ruiying/tcompounde" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.596/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.597.bib
@inproceedings{wang-etal-2024-uncertainty, title = "Uncertainty Aware Learning for Language Model Alignment", author = "Wang, Yikun and Zheng, Rui and Ding, Liang and Zhang, Qi and Lin, Dahua and Tao, Dacheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.597", pages = "11087--11099", abstract = "As instruction-tuned large language models (LLMs) evolve, aligning pretrained foundation models presents increasing challenges. Existing alignment strategies, which typically leverage diverse and high-quality data sources, often overlook the intrinsic uncertainty of tasks, learning all data samples equally. This may lead to suboptimal data efficiency and model performance. In response, we propose uncertainty-aware learning (UAL) to improve the model alignment of different task scenarios, by introducing the sample uncertainty (elicited from more capable LLMs). We implement UAL by a simple fashion {--} adaptively setting the label smoothing value of training according to the uncertainty of individual samples. Analysis shows that our UAL indeed facilitates better token clustering in the feature space, validating our hypothesis. Extensive experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning. Notably, LLMs aligned in a mixed scenario have achieved an average improvement of 10.62{\%} on high-entropy tasks (i.e., AlpacaEval leaderboard), and 1.81{\%} on complex low-entropy tasks (i.e., MetaMath and GSM8K).", }
As instruction-tuned large language models (LLMs) evolve, aligning pretrained foundation models presents increasing challenges. Existing alignment strategies, which typically leverage diverse and high-quality data sources, often overlook the intrinsic uncertainty of tasks, learning all data samples equally. This may lead to suboptimal data efficiency and model performance. In response, we propose uncertainty-aware learning (UAL) to improve the model alignment of different task scenarios, by introducing the sample uncertainty (elicited from more capable LLMs). We implement UAL by a simple fashion {--} adaptively setting the label smoothing value of training according to the uncertainty of individual samples. Analysis shows that our UAL indeed facilitates better token clustering in the feature space, validating our hypothesis. Extensive experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning. Notably, LLMs aligned in a mixed scenario have achieved an average improvement of 10.62{\%} on high-entropy tasks (i.e., AlpacaEval leaderboard), and 1.81{\%} on complex low-entropy tasks (i.e., MetaMath and GSM8K).
[ "Wang, Yikun", "Zheng, Rui", "Ding, Liang", "Zhang, Qi", "Lin, Dahua", "Tao, Dacheng" ]
Uncertainty Aware Learning for Language Model Alignment
acl-long.597
Poster
2406.04854
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.597/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.598.bib
@inproceedings{lin-etal-2024-interpretable, title = "Interpretable User Satisfaction Estimation for Conversational Systems with Large Language Models", author = "Lin, Ying-Chun and Neville, Jennifer and Stokes, Jack and Yang, Longqi and Safavi, Tara and Wan, Mengting and Counts, Scott and Suri, Siddharth and Andersen, Reid and Xu, Xiaofeng and Gupta, Deepak and Jauhar, Sujay Kumar and Song, Xia and Buscher, Georg and Tiwary, Saurabh and Hecht, Brent and Teevan, Jaime", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.598", pages = "11100--11115", abstract = "Accurate and interpretable user satisfaction estimation (USE) is critical for understanding, evaluating, and continuously improving conversational systems. Users express their satisfaction or dissatisfaction with diverse conversational patterns in both general-purpose (ChatGPT and Bing Copilot) and task-oriented (customer service chatbot) conversational systems. Existing approaches based on featurized ML models or text embeddings fall short in extracting generalizable patterns and are hard to interpret. In this work, we show that LLMs can extract interpretable signals of user satisfaction from their natural language utterances more effectively than embedding-based approaches. Moreover, an LLM can be tailored for USE via an iterative prompting framework using supervision from labeled examples. Our proposed method, Supervised Prompting for User satisfaction Rubrics (SPUR), not only has higher accuracy but is more interpretable as it scores user satisfaction via learned rubrics with a detailed breakdown.", }
Accurate and interpretable user satisfaction estimation (USE) is critical for understanding, evaluating, and continuously improving conversational systems. Users express their satisfaction or dissatisfaction with diverse conversational patterns in both general-purpose (ChatGPT and Bing Copilot) and task-oriented (customer service chatbot) conversational systems. Existing approaches based on featurized ML models or text embeddings fall short in extracting generalizable patterns and are hard to interpret. In this work, we show that LLMs can extract interpretable signals of user satisfaction from their natural language utterances more effectively than embedding-based approaches. Moreover, an LLM can be tailored for USE via an iterative prompting framework using supervision from labeled examples. Our proposed method, Supervised Prompting for User satisfaction Rubrics (SPUR), not only has higher accuracy but is more interpretable as it scores user satisfaction via learned rubrics with a detailed breakdown.
[ "Lin, Ying-Chun", "Neville, Jennifer", "Stokes, Jack", "Yang, Longqi", "Safavi, Tara", "Wan, Mengting", "Counts, Scott", "Suri, Siddharth", "Andersen, Reid", "Xu, Xiaofeng", "Gupta, Deepak", "Jauhar, Sujay Kumar", "Song, Xia", "Buscher, Georg", "Tiwary, Saurabh", "Hecht, Brent", "Teevan, Jaime" ]
Interpretable User Satisfaction Estimation for Conversational Systems with Large Language Models
acl-long.598
Poster
2403.12388
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.598/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.599.bib
@inproceedings{li-etal-2024-fundamental, title = "Fundamental Capabilities of Large Language Models and their Applications in Domain Scenarios: A Survey", author = "Li, Jiawei and Yang, Yizhe and Bai, Yu and Zhou, Xiaofeng and Li, Yinghao and Sun, Huashan and Liu, Yuhang and Si, Xingpeng and Ye, Yuhao and Wu, Yixiao and 林一冠, 林一冠 and Xu, Bin and Bowen, Ren and Feng, Chong and Gao, Yang and Huang, Heyan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.599", pages = "11116--11141", abstract = "Large Language Models (LLMs) demonstrate significant value in domain-specific applications, benefiting from their fundamental capabilities. Nevertheless, it is still unclear which fundamental capabilities contribute to success in specific domains. Moreover, the existing benchmark-based evaluation cannot effectively reflect the performance of real-world applications. In this survey, we review recent advances of LLMs in domain applications, aiming to summarize the fundamental capabilities and their collaboration. Furthermore, we establish connections between fundamental capabilities and specific domains, evaluating the varying importance of different capabilities. Based on our findings, we propose a reliable strategy for domains to choose more robust backbone LLMs for real-world applications.", }
Large Language Models (LLMs) demonstrate significant value in domain-specific applications, benefiting from their fundamental capabilities. Nevertheless, it is still unclear which fundamental capabilities contribute to success in specific domains. Moreover, the existing benchmark-based evaluation cannot effectively reflect the performance of real-world applications. In this survey, we review recent advances of LLMs in domain applications, aiming to summarize the fundamental capabilities and their collaboration. Furthermore, we establish connections between fundamental capabilities and specific domains, evaluating the varying importance of different capabilities. Based on our findings, we propose a reliable strategy for domains to choose more robust backbone LLMs for real-world applications.
[ "Li, Jiawei", "Yang, Yizhe", "Bai, Yu", "Zhou, Xiaofeng", "Li, Yinghao", "Sun, Huashan", "Liu, Yuhang", "Si, Xingpeng", "Ye, Yuhao", "Wu, Yixiao", "林一冠, 林一å†", "Xu, Bin", "Bowen, Ren", "Feng, Chong", "Gao, Yang", "Huang, Heyan" ]
Fundamental Capabilities of Large Language Models and their Applications in Domain Scenarios: A Survey
acl-long.599
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.599/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.600.bib
@inproceedings{bang-etal-2024-measuring, title = "Measuring Political Bias in Large Language Models: What Is Said and How It Is Said", author = "Bang, Yejin and Chen, Delong and Lee, Nayeon and Fung, Pascale", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.600", pages = "11142--11159", abstract = "We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues. Existing benchmarks and measures focus on gender and racial biases. However, political bias exists in LLMs and can lead to polarization and other harms in downstream applications. In order to provide transparency to users, we advocate that there should be fine-grained and explainable measures of political biases generated by LLMs. Our proposed measure looks at different political issues such as reproductive rights and climate change, at both the content (the substance of the generation) and the style (the lexical polarity) of such bias. We measured the political bias in eleven open-sourced LLMs and showed that our proposed framework is easily scalable to other topics and is explainable.", }
We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues. Existing benchmarks and measures focus on gender and racial biases. However, political bias exists in LLMs and can lead to polarization and other harms in downstream applications. In order to provide transparency to users, we advocate that there should be fine-grained and explainable measures of political biases generated by LLMs. Our proposed measure looks at different political issues such as reproductive rights and climate change, at both the content (the substance of the generation) and the style (the lexical polarity) of such bias. We measured the political bias in eleven open-sourced LLMs and showed that our proposed framework is easily scalable to other topics and is explainable.
[ "Bang, Yejin", "Chen, Delong", "Lee, Nayeon", "Fung, Pascale" ]
Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
acl-long.600
Poster
2403.18932
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.600/
[]
[]
[]
0