Datasets:

Schema (column name, dtype, and min/max string or sequence length, or integer value range):

| Column | Dtype | Min | Max |
| --- | --- | --- | --- |
| bibtex_url | string (length) | 41 | 50 |
| bibtext | string (length) | 693 | 2.88k |
| abstract | string (length) | 0 | 2k |
| authors | sequence (length) | 1 | 45 |
| title | string (length) | 21 | 199 |
| id | string (length) | 7 | 16 |
| type | string (2 classes) | | |
| arxiv_id | string (length) | 0 | 10 |
| GitHub | sequence (length) | 1 | 1 |
| paper_page | string (length) | 0 | 40 |
| n_linked_authors | int64 | -1 | 28 |
| upvotes | int64 | -1 | 255 |
| num_comments | int64 | -1 | 23 |
| n_authors | int64 | -1 | 35 |
| proceedings | string (length) | 38 | 47 |
| Models | sequence (length) | 0 | 57 |
| Datasets | sequence (length) | 0 | 19 |
| Spaces | sequence (length) | 0 | 100 |
| paper_page_exists_pre_conf | int64 | 0 | 1 |
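A minimal sketch of loading and filtering rows with this schema via the `datasets` library; the repository id below is a hypothetical placeholder for wherever this dump is hosted, not the dataset's actual name:

```python
from datasets import load_dataset

# Hypothetical repository id (substitute the dataset's actual Hub name).
ds = load_dataset("your-org/acl-2024-papers", split="train")

# Keep papers that have a Hugging Face paper page and at least one upvote;
# in this dump a value of -1 in the integer fields means no paper page was found.
with_pages = ds.filter(lambda row: row["paper_page"] != "" and row["upvotes"] > 0)

for row in with_pages.select(range(min(3, len(with_pages)))):
    print(row["title"], "-", row["upvotes"], "upvotes")
```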
https://aclanthology.org/2024.acl-long.301.bib
@inproceedings{yan-etal-2024-codescope, title = "{C}ode{S}cope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating {LLM}s on Code Understanding and Generation", author = "Yan, Weixiang and Liu, Haitian and Wang, Yunkun and Li, Yunzhe and Chen, Qian and Wang, Wen and Lin, Tingyu and Zhao, Weishan and Zhu, Li and Sundaram, Hari and Deng, Shuiguang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.301", pages = "5511--5558", abstract = "Large Language Models (LLMs) have demonstrated remarkable performance in assisting humans with programming and facilitating programming automation. However, existing benchmarks for evaluating the code understanding and generation capacities of LLMs suffer from severe limitations. First, most benchmarks are insufficient as they focus on a narrow range of popular programming languages and specific tasks, whereas real-world software development scenarios show a critical need to implement systems with multilingual and multitask programming environments to satisfy diverse requirements. Second, most benchmarks fail to consider the actual executability and the consistency of execution results of the generated code. To bridge these gaps between existing benchmarks and expectations from practical applications, we introduce **CodeScope**, an execution-based, multilingual, multitask, multidimensional evaluation benchmark for comprehensively measuring LLM capabilities on coding tasks. CodeScope covers **43 programming languages** and **eight coding tasks**. It evaluates the coding performance of LLMs from three dimensions (perspectives): **length**, **difficulty**, and **efficiency**. To facilitate execution-based evaluations of code generation, we develop **MultiCodeEngine**, an automated code execution engine that supports 14 programming languages. Finally, we systematically evaluate and analyze eight mainstream LLMs and demonstrate the superior breadth and challenges of CodeScope for evaluating LLMs on code understanding and generation tasks compared to other benchmarks. The CodeScope benchmark and code are publicly available at https://github.com/WeixiangYAN/CodeScope.", }
Large Language Models (LLMs) have demonstrated remarkable performance in assisting humans with programming and facilitating programming automation. However, existing benchmarks for evaluating the code understanding and generation capacities of LLMs suffer from severe limitations. First, most benchmarks are insufficient as they focus on a narrow range of popular programming languages and specific tasks, whereas real-world software development scenarios show a critical need to implement systems with multilingual and multitask programming environments to satisfy diverse requirements. Second, most benchmarks fail to consider the actual executability and the consistency of execution results of the generated code. To bridge these gaps between existing benchmarks and expectations from practical applications, we introduce **CodeScope**, an execution-based, multilingual, multitask, multidimensional evaluation benchmark for comprehensively measuring LLM capabilities on coding tasks. CodeScope covers **43 programming languages** and **eight coding tasks**. It evaluates the coding performance of LLMs from three dimensions (perspectives): **length**, **difficulty**, and **efficiency**. To facilitate execution-based evaluations of code generation, we develop **MultiCodeEngine**, an automated code execution engine that supports 14 programming languages. Finally, we systematically evaluate and analyze eight mainstream LLMs and demonstrate the superior breadth and challenges of CodeScope for evaluating LLMs on code understanding and generation tasks compared to other benchmarks. The CodeScope benchmark and code are publicly available at https://github.com/WeixiangYAN/CodeScope.
[ "Yan, Weixiang", "Liu, Haitian", "Wang, Yunkun", "Li, Yunzhe", "Chen, Qian", "Wang, Wen", "Lin, Tingyu", "Zhao, Weishan", "Zhu, Li", "Sundaram, Hari", "Deng, Shuiguang" ]
CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation
acl-long.301
Poster
2311.08588
[ "https://github.com/weixiangyan/codescope" ]
https://huggingface.co/papers/2311.08588
0
0
0
11
https://aclanthology.org/2024.acl-long.301/
[]
[]
[]
1
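Since CodeScope scores code by actually executing it, a toy illustration of execution-based checking may help; this is a generic subprocess sketch, not MultiCodeEngine's API:

```python
import subprocess
import sys

def passes_test(code: str, stdin: str, expected_stdout: str,
                timeout: float = 5.0) -> bool:
    # Run candidate Python code in a fresh interpreter and compare its
    # stdout to the reference answer. A toy stand-in for an execution
    # engine like MultiCodeEngine, which sandboxes code and supports
    # 14 languages.
    try:
        result = subprocess.run([sys.executable, "-c", code],
                                input=stdin, capture_output=True,
                                text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    return (result.returncode == 0
            and result.stdout.strip() == expected_stdout.strip())

print(passes_test("print(int(input()) * 2)", "21", "42"))  # True
```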
https://aclanthology.org/2024.acl-long.302.bib
@inproceedings{gu-etal-2024-digital, title = "Digital Socrates: Evaluating {LLM}s through Explanation Critiques", author = "Gu, Yuling and Tafjord, Oyvind and Clark, Peter", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.302", pages = "5559--5586", abstract = "While LLMs can provide reasoned explanations along with their answers, the nature and quality of those explanations are still poorly understood. In response, our goal is to define a detailed way of characterizing the explanation capabilities of modern models and to create a nuanced, interpretable explanation evaluation tool that can generate such characterizations automatically, without relying on expensive API calls or human annotations. Our approach is to (a) define the new task of explanation critiquing - identifying and categorizing any main flaw in an explanation and providing suggestions to address the flaw, (b) create a sizeable, human-verified dataset for this task, and (c) train an open-source, automatic critique model (called Digital Socrates) using this data. Through quantitative and qualitative analysis, we demonstrate how Digital Socrates is useful for revealing insights about student models by examining their reasoning chains, and how it can provide high-quality, nuanced, automatic evaluation of those model explanations for the first time. Digital Socrates thus fills an important gap in evaluation tools for understanding and improving the explanation behavior of models.", }
While LLMs can provide reasoned explanations along with their answers, the nature and quality of those explanations are still poorly understood. In response, our goal is to define a detailed way of characterizing the explanation capabilities of modern models and to create a nuanced, interpretable explanation evaluation tool that can generate such characterizations automatically, without relying on expensive API calls or human annotations. Our approach is to (a) define the new task of explanation critiquing - identifying and categorizing any main flaw in an explanation and providing suggestions to address the flaw, (b) create a sizeable, human-verified dataset for this task, and (c) train an open-source, automatic critique model (called Digital Socrates) using this data. Through quantitative and qualitative analysis, we demonstrate how Digital Socrates is useful for revealing insights about student models by examining their reasoning chains, and how it can provide high-quality, nuanced, automatic evaluation of those model explanations for the first time. Digital Socrates thus fills an important gap in evaluation tools for understanding and improving the explanation behavior of models.
[ "Gu, Yuling", "Tafjord, Oyvind", "Clark, Peter" ]
Digital Socrates: Evaluating LLMs through Explanation Critiques
acl-long.302
Poster
2311.09613
[ "" ]
https://huggingface.co/papers/2311.09613
0
1
1
3
https://aclanthology.org/2024.acl-long.302/
[ "allenai/digital-socrates-13b", "allenai/digital-socrates-7b", "TheBloke/digital-socrates-13B-GGUF", "TheBloke/digital-socrates-7B-GGUF", "TheBloke/digital-socrates-13B-AWQ", "TheBloke/digital-socrates-13B-GPTQ", "TheBloke/digital-socrates-7B-GPTQ", "TheBloke/digital-socrates-7B-AWQ" ]
[ "allenai/DS_Critique_Bank" ]
[]
1
https://aclanthology.org/2024.acl-long.303.bib
@inproceedings{xu-etal-2024-safedecoding, title = "{S}afe{D}ecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding", author = "Xu, Zhangchen and Jiang, Fengqing and Niu, Luyao and Jia, Jinyuan and Lin, Bill Yuchen and Poovendran, Radha", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.303", pages = "5587--5605", abstract = "As large language models (LLMs) become increasingly integrated into real-world applications such as code generation and chatbot assistance, extensive efforts have been made to align LLM behavior with human values, including safety. Jailbreak attacks, which aim to provoke unintended and unsafe behaviors from LLMs, remain a significant LLM safety threat. We analyze tokens, which are the smallest units of text that can be processed by LLMs, and make the following observations: (1) probabilities of tokens representing harmful responses are higher than those of harmless responses, and (2) responses containing safety disclaimers appear among the top tokens when token probabilities are sorted in descending order. In this paper, we leverage (1) and (2) to develop SafeDecoding, a safety-aware decoding strategy for LLMs, to defend against jailbreak attacks. We perform extensive experiments to evaluate SafeDecoding against six SOTA jailbreak attacks (GCG, AutoDAN, PAIR, DeepInception, SAP30, and template-based attack) on five LLMs (Vicuna, Llama2, Guanaco, falcon, and Dolphin) using four benchmark datasets (AdvBench, HEx-PHI, MT-Bench, and Just-Eval). Our results show that SafeDecoding significantly reduces the attack success rate and harmfulness of jailbreak attacks without compromising the helpfulness of responses to benign user queries, while outperforming six defense methods (Perplexity, Paraphrase, Retokenization, Self-Reminder, ICD, and Self-Examination).", }
As large language models (LLMs) become increasingly integrated into real-world applications such as code generation and chatbot assistance, extensive efforts have been made to align LLM behavior with human values, including safety. Jailbreak attacks, which aim to provoke unintended and unsafe behaviors from LLMs, remain a significant LLM safety threat. We analyze tokens, which are the smallest units of text that can be processed by LLMs, and make the following observations: (1) probabilities of tokens representing harmful responses are higher than those of harmless responses, and (2) responses containing safety disclaimers appear among the top tokens when token probabilities are sorted in descending order. In this paper, we leverage (1) and (2) to develop SafeDecoding, a safety-aware decoding strategy for LLMs, to defend against jailbreak attacks. We perform extensive experiments to evaluate SafeDecoding against six SOTA jailbreak attacks (GCG, AutoDAN, PAIR, DeepInception, SAP30, and template-based attack) on five LLMs (Vicuna, Llama2, Guanaco, falcon, and Dolphin) using four benchmark datasets (AdvBench, HEx-PHI, MT-Bench, and Just-Eval). Our results show that SafeDecoding significantly reduces the attack success rate and harmfulness of jailbreak attacks without compromising the helpfulness of responses to benign user queries, while outperforming six defense methods (Perplexity, Paraphrase, Retokenization, Self-Reminder, ICD, and Self-Examination).
[ "Xu, Zhangchen", "Jiang, Fengqing", "Niu, Luyao", "Jia, Jinyuan", "Lin, Bill Yuchen", "Poovendran, Radha" ]
SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding
acl-long.303
Oral
2402.08983
[ "https://github.com/uw-nsl/safedecoding" ]
https://huggingface.co/papers/2402.08983
4
2
0
6
https://aclanthology.org/2024.acl-long.303/
[]
[ "flydust/SafeDecoding-Attackers" ]
[]
1
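For the SafeDecoding entry above: the two observations suggest reweighting next-token distributions at decode time. A rough contrastive sketch follows; the safety-tuned expert model supplying `expert_logits` is an assumption here, and the paper's actual SafeDecoding construction is more involved than this:

```python
import torch

def safety_aware_logits(base_logits: torch.Tensor,
                        expert_logits: torch.Tensor,
                        alpha: float = 0.7) -> torch.Tensor:
    # Shift the base model's next-token log-probabilities toward a
    # (hypothetical) safety-tuned expert. Generic contrastive reweighting
    # in the spirit of safety-aware decoding, not the exact algorithm.
    base = torch.log_softmax(base_logits, dim=-1)
    expert = torch.log_softmax(expert_logits, dim=-1)
    return base + alpha * (expert - base)

# Toy 5-token vocabulary; the expert strongly downweights token 0.
base = torch.tensor([2.0, 1.0, 0.5, 0.2, 0.1])
expert = torch.tensor([-3.0, 1.5, 0.8, 0.2, 0.1])
print(torch.softmax(safety_aware_logits(base, expert), dim=-1))
```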
https://aclanthology.org/2024.acl-long.304.bib
@inproceedings{son-etal-2024-multi-task, title = "Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?", author = "Son, Guijin and Baek, SangWon and Nam, Sangdae and Jeong, Ilgyun and Kim, Seungone", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.304", pages = "5606--5627", abstract = "Large language models (LLMs) are typically prompted to follow a single instruction per inference call. In this work, we analyze whether LLMs also hold the capability to handle \textit{multiple} instructions simultaneously, denoted as Multi-Task Inference. For this purpose, we introduce the MTI Bench (Multi-Task Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000 instances across 25 tasks. Each task in the MTI Bench involves 2 to 3 sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces the total inference time by $1.46\times$ on average since it does not require multiple inference calls. Interestingly, contrary to the expectation that LLMs would perform better when tasks are divided, we find that state-of-the-art LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3{\%} and 12.4{\%} improved performance with Multi-Task Inference compared to Single-Task Inference on the MTI Bench. We release the MTI Bench dataset and our code at this [link](https://anonymous.4open.science/r/MTI-Bench-6F01).", }
Large language models (LLMs) are typically prompted to follow a single instruction per inference call. In this work, we analyze whether LLMs also hold the capability to handle *multiple* instructions simultaneously, denoted as Multi-Task Inference. For this purpose, we introduce the MTI Bench (Multi-Task Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000 instances across 25 tasks. Each task in the MTI Bench involves 2 to 3 sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces the total inference time by 1.46× on average since it does not require multiple inference calls. Interestingly, contrary to the expectation that LLMs would perform better when tasks are divided, we find that state-of-the-art LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3% and 12.4% improved performance with Multi-Task Inference compared to Single-Task Inference on the MTI Bench. We release the MTI Bench dataset and our code at this [link](https://anonymous.4open.science/r/MTI-Bench-6F01).
[ "Son, Guijin", "Baek, SangWon", "Nam, Sangdae", "Jeong, Ilgyun", "Kim, Seungone" ]
Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?
acl-long.304
Poster
2402.11597
[ "https://github.com/guijinson/mti-bench" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.304/
[]
[]
[]
0
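Operationally, Multi-Task Inference amounts to concatenating several instructions into one call. A trivial illustration; the numbering and answer-format conventions here are my own, not the MTI Bench prompt template:

```python
def build_multi_task_prompt(instructions: list[str]) -> str:
    # Pack several sub-tasks into a single inference call;
    # one call replaces len(instructions) separate ones.
    numbered = "\n".join(f"Task {i + 1}: {t}" for i, t in enumerate(instructions))
    return ("Answer every task below, prefixing each answer with its task number.\n\n"
            + numbered)

print(build_multi_task_prompt([
    "Translate 'bonjour' to English.",
    "What is 12 * 9?",
]))
```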
https://aclanthology.org/2024.acl-long.305.bib
@inproceedings{qian-etal-2024-experiential, title = "Experiential Co-Learning of Software-Developing Agents", author = "Qian, Chen and Dang, Yufan and Li, Jiahao and Liu, Wei and Xie, Zihao and Wang, YiFei and Chen, Weize and Yang, Cheng and Cong, Xin and Che, Xiaoyin and Liu, Zhiyuan and Sun, Maosong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.305", pages = "5628--5640", abstract = "Recent advancements in large language models (LLMs) have brought significant changes to various domains, especially through LLM-driven autonomous agents. A representative scenario is in software development, where LLM agents demonstrate efficient collaboration, task division, and assurance of software quality, markedly reducing the need for manual involvement. However, these agents frequently perform a variety of tasks independently, without benefiting from past experiences, which leads to repeated mistakes and inefficient attempts in multi-step task execution. To this end, we introduce Experiential Co-Learning, a novel LLM-agent learning framework in which instructor and assistant agents gather shortcut-oriented experiences from their historical trajectories and use these past experiences for future task execution. The extensive experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively. We anticipate that our insights will guide LLM agents towards enhanced autonomy and contribute to their evolutionary growth in cooperative learning. The code and data are available at https://github.com/OpenBMB/ChatDev.", }
Recent advancements in large language models (LLMs) have brought significant changes to various domains, especially through LLM-driven autonomous agents. A representative scenario is in software development, where LLM agents demonstrate efficient collaboration, task division, and assurance of software quality, markedly reducing the need for manual involvement. However, these agents frequently perform a variety of tasks independently, without benefiting from past experiences, which leads to repeated mistakes and inefficient attempts in multi-step task execution. To this end, we introduce Experiential Co-Learning, a novel LLM-agent learning framework in which instructor and assistant agents gather shortcut-oriented experiences from their historical trajectories and use these past experiences for future task execution. The extensive experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively. We anticipate that our insights will guide LLM agents towards enhanced autonomy and contribute to their evolutionary growth in cooperative learning. The code and data are available at https://github.com/OpenBMB/ChatDev.
[ "Qian, Chen", "Dang, Yufan", "Li, Jiahao", "Liu, Wei", "Xie, Zihao", "Wang, YiFei", "Chen, Weize", "Yang, Cheng", "Cong, Xin", "Che, Xiaoyin", "Liu, Zhiyuan", "Sun, Maosong" ]
Experiential Co-Learning of Software-Developing Agents
acl-long.305
Poster
2312.17025
[ "https://github.com/openbmb/chatdev" ]
https://huggingface.co/papers/2312.17025
3
0
0
8
https://aclanthology.org/2024.acl-long.305/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.306.bib
@inproceedings{tang-etal-2024-learning, title = "Learning Geometry-Aware Representations for New Intent Discovery", author = "Tang, Kai and Zhao, Junbo and Ding, Xiao and Wu, Runze and Feng, Lei and Chen, Gang and Wang, Haobo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.306", pages = "5641--5654", abstract = "New intent discovery (NID) is an important problem for deploying practical dialogue systems, which trains intent classifiers on a semi-supervised corpus where unlabeled user utterances contain both known and novel intents. Most existing NID algorithms rely on sample similarity to cluster the unlabeled corpus into known or new samples. Lacking supervision on new intents, we experimentally find that the intent classifier fails to fully distinguish new intents since they tend to assemble into intertwined centers. To address this problem, we propose a novel GeoID framework that learns geometry-aware representations to maximally separate all intents. Specifically, we are motivated by the recent findings on Neural Collapse (NC) in classification tasks to derive an optimal intent center structure. Meanwhile, we devise a dual pseudo-labeling strategy based on optimal transport assignments and semi-supervised clustering, ensuring a proper utterances-to-center arrangement. Extensive results show that our GeoID method establishes new state-of-the-art performance, achieving a +3.49{\%} average accuracy improvement on three standardized benchmarking datasets. We also verify its usefulness in assisting large language models for improved in-context performance.", }
New intent discovery (NID) is an important problem for deploying practical dialogue systems, which trains intent classifiers on a semi-supervised corpus where unlabeled user utterances contain both known and novel intents. Most existing NID algorithms rely on sample similarity to cluster the unlabeled corpus into known or new samples. Lacking supervision on new intents, we experimentally find that the intent classifier fails to fully distinguish new intents since they tend to assemble into intertwined centers. To address this problem, we propose a novel GeoID framework that learns geometry-aware representations to maximally separate all intents. Specifically, we are motivated by the recent findings on Neural Collapse (NC) in classification tasks to derive an optimal intent center structure. Meanwhile, we devise a dual pseudo-labeling strategy based on optimal transport assignments and semi-supervised clustering, ensuring a proper utterances-to-center arrangement. Extensive results show that our GeoID method establishes new state-of-the-art performance, achieving a +3.49% average accuracy improvement on three standardized benchmarking datasets. We also verify its usefulness in assisting large language models for improved in-context performance.
[ "Tang, Kai", "Zhao, Junbo", "Ding, Xiao", "Wu, Runze", "Feng, Lei", "Chen, Gang", "Wang, Haobo" ]
Learning Geometry-Aware Representations for New Intent Discovery
acl-long.306
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.306/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.307.bib
@inproceedings{yang-etal-2024-speaker, title = "Speaker Verification in Agent-generated Conversations", author = "Yang, Yizhe and Achananuparp, Palakorn and Huang, Heyan and Jiang, Jing and Lim, Ee-Peng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.307", pages = "5655--5676", abstract = "The recent success of large language models (LLMs) has attracted widespread interest in developing role-playing conversational agents personalized to the characteristics and styles of different speakers to enhance their abilities to perform both general and special purpose dialogue tasks. However, the ability to personalize the generated utterances to speakers, whether conducted by humans or LLMs, has not been well studied. To bridge this gap, our study introduces a novel evaluation challenge: speaker verification in agent-generated conversations, which aims to verify whether two sets of utterances originate from the same speaker. To this end, we assemble a large dataset collection encompassing thousands of speakers and their utterances. We also develop and evaluate speaker verification models under our experimental setups. We further utilize the speaker verification models to evaluate the personalization abilities of LLM-based role-playing models. Comprehensive experiments suggest that current role-playing models fail to accurately mimic speakers, primarily due to their inherent linguistic characteristics.", }
The recent success of large language models (LLMs) has attracted widespread interest in developing role-playing conversational agents personalized to the characteristics and styles of different speakers to enhance their abilities to perform both general and special purpose dialogue tasks. However, the ability to personalize the generated utterances to speakers, whether conducted by humans or LLMs, has not been well studied. To bridge this gap, our study introduces a novel evaluation challenge: speaker verification in agent-generated conversations, which aims to verify whether two sets of utterances originate from the same speaker. To this end, we assemble a large dataset collection encompassing thousands of speakers and their utterances. We also develop and evaluate speaker verification models under our experimental setups. We further utilize the speaker verification models to evaluate the personalization abilities of LLM-based role-playing models. Comprehensive experiments suggest that current role-playing models fail to accurately mimic speakers, primarily due to their inherent linguistic characteristics.
[ "Yang, Yizhe", "Achananuparp, Palakorn", "Huang, Heyan", "Jiang, Jing", "Lim, Ee-Peng" ]
Speaker Verification in Agent-generated Conversations
acl-long.307
Poster
2405.10150
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.307/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.308.bib
@inproceedings{zhang-etal-2024-benchmarking-data, title = "Benchmarking Data Science Agents", author = "Zhang, Yuge and Jiang, Qiyang and XingyuHan, XingyuHan and Chen, Nan and Yang, Yuqing and Ren, Kan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.308", pages = "5677--5700", abstract = "In the era of data-driven decision-making, the complexity of data analysis necessitates advanced expertise and tools of data science, presenting significant challenges even for specialists. Large Language Models (LLMs) have emerged as promising aids as data science agents, assisting humans in data analysis and processing. Yet their practical efficacy remains constrained by the varied demands of real-world applications and complicated analytical processes. In this paper, we introduce DSEval {--} a novel evaluation paradigm, as well as a series of innovative benchmarks tailored for assessing the performance of these agents throughout the entire data science lifecycle. Incorporating a novel bootstrapped annotation method, we streamline dataset preparation, improve the evaluation coverage, and expand benchmarking comprehensiveness. Our findings uncover prevalent obstacles and provide critical insights to inform future advancements in the field.", }
In the era of data-driven decision-making, the complexity of data analysis necessitates advanced expertise and tools of data science, presenting significant challenges even for specialists. Large Language Models (LLMs) have emerged as promising aids as data science agents, assisting humans in data analysis and processing. Yet their practical efficacy remains constrained by the varied demands of real-world applications and complicated analytical processes. In this paper, we introduce DSEval, a novel evaluation paradigm, as well as a series of innovative benchmarks tailored for assessing the performance of these agents throughout the entire data science lifecycle. Incorporating a novel bootstrapped annotation method, we streamline dataset preparation, improve the evaluation coverage, and expand benchmarking comprehensiveness. Our findings uncover prevalent obstacles and provide critical insights to inform future advancements in the field.
[ "Zhang, Yuge", "Jiang, Qiyang", "XingyuHan, XingyuHan", "Chen, Nan", "Yang, Yuqing", "Ren, Kan" ]
Benchmarking Data Science Agents
acl-long.308
Poster
2402.17168
[ "https://github.com/metacopilot/dseval" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.308/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.309.bib
@inproceedings{tang-etal-2024-language, title = "Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models", author = "Tang, Tianyi and Luo, Wenyang and Huang, Haoyang and Zhang, Dongdong and Wang, Xiaolei and Zhao, Xin and Wei, Furu and Wen, Ji-Rong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.309", pages = "5701--5715", abstract = "Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora. It remains a challenging problem to explain the underlying mechanisms by which LLMs process multilingual texts. In this paper, we delve into the composition of Transformer architectures in LLMs to pinpoint language-specific regions. Specifically, we propose a novel detection method, language activation probability entropy (LAPE), to identify language-specific neurons within LLMs. Based on LAPE, we conduct comprehensive experiments on several representative LLMs, such as LLaMA-2, BLOOM, and Mistral. Our findings indicate that LLMs{'} proficiency in processing a particular language is predominantly due to a small subset of neurons, primarily situated in the models{'} top and bottom layers. Furthermore, we showcase the feasibility of {``}steering{''} the output language of LLMs by selectively activating or deactivating language-specific neurons. Our research provides important evidence for the understanding and exploration of the multilingual capabilities of LLMs.", }
Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora. It remains a challenging problem to explain the underlying mechanisms by which LLMs process multilingual texts. In this paper, we delve into the composition of Transformer architectures in LLMs to pinpoint language-specific regions. Specifically, we propose a novel detection method, language activation probability entropy (LAPE), to identify language-specific neurons within LLMs. Based on LAPE, we conduct comprehensive experiments on several representative LLMs, such as LLaMA-2, BLOOM, and Mistral. Our findings indicate that LLMs' proficiency in processing a particular language is predominantly due to a small subset of neurons, primarily situated in the models' top and bottom layers. Furthermore, we showcase the feasibility of "steering" the output language of LLMs by selectively activating or deactivating language-specific neurons. Our research provides important evidence for the understanding and exploration of the multilingual capabilities of LLMs.
[ "Tang, Tianyi", "Luo, Wenyang", "Huang, Haoyang", "Zhang, Dongdong", "Wang, Xiaolei", "Zhao, Xin", "Wei, Furu", "Wen, Ji-Rong" ]
Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models
acl-long.309
Poster
2402.16438
[ "" ]
https://huggingface.co/papers/2402.16438
0
0
0
8
https://aclanthology.org/2024.acl-long.309/
[]
[]
[]
1
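Reading the LAPE entry above as "score each neuron by the entropy of its activation probabilities across languages", a small sketch follows; the normalization and the paper's additional selection thresholds are assumptions on my part:

```python
import numpy as np

def lape(activation_probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # activation_probs[l, n]: probability that neuron n fires on language-l
    # text. Normalize over languages, then take the entropy per neuron;
    # low entropy marks a candidate language-specific neuron.
    p = activation_probs / (activation_probs.sum(axis=0, keepdims=True) + eps)
    return -(p * np.log(p + eps)).sum(axis=0)

probs = np.array([[0.90, 0.30],   # language A
                  [0.05, 0.30],   # language B
                  [0.05, 0.40]])  # language C
print(lape(probs))  # neuron 0 has much lower entropy -> language-specific
```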
https://aclanthology.org/2024.acl-long.310.bib
@inproceedings{ni-etal-2024-forgetting, title = "Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models", author = "Ni, Shiwen and Chen, Dingwei and Li, Chengming and Hu, Xiping and Xu, Ruifeng and Yang, Min", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.310", pages = "5716--5731", abstract = "Recent advancements in Large Language Models (LLMs) have showcased their remarkable capabilities in text understanding and generation. However, even stronger LLMs are susceptible to acquiring erroneous or obsolete information from the training corpus. Direct secondary fine-tuning with data containing new knowledge may be ineffective in updating knowledge due to the conflict between old and new knowledge. In this paper, we propose a new paradigm for fine-tuning called F-Learning (Forgetting before Learning), which employs parametric arithmetic to facilitate the forgetting of old knowledge and learning of new knowledge. Experimental results on two publicly available datasets demonstrate that our proposed F-Learning can markedly improve the knowledge updating performance of both full fine-tuning and LoRA fine-tuning, simultaneously outperforming the existing baselines in most cases. Moreover, we have also discovered that forgetting old knowledge by subtracting the parameters of LoRA can yield a similar effect to subtracting the parameters of full fine-tuning, and occasionally even surpass it significantly.", }
Recent advancements in Large Language Models (LLMs) have showcased their remarkable capabilities in text understanding and generation. However, even stronger LLMs are susceptible to acquiring erroneous or obsolete information from the training corpus. Direct secondary fine-tuning with data containing new knowledge may be ineffective in updating knowledge due to the conflict between old and new knowledge. In this paper, we propose a new paradigm for fine-tuning called F-Learning (Forgetting before Learning), which employs parametric arithmetic to facilitate the forgetting of old knowledge and learning of new knowledge. Experimental results on two publicly available datasets demonstrate that our proposed F-Learning can markedly improve the knowledge updating performance of both full fine-tuning and LoRA fine-tuning, simultaneously outperforming the existing baselines in most cases. Moreover, we have also discovered that forgetting old knowledge by subtracting the parameters of LoRA can yield a similar effect to subtracting the parameters of full fine-tuning, and occasionally even surpass it significantly.
[ "Ni, Shiwen", "Chen, Dingwei", "Li, Chengming", "Hu, Xiping", "Xu, Ruifeng", "Yang, Min" ]
Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models
acl-long.310
Poster
2311.08011
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.310/
[]
[]
[]
0
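The parametric arithmetic in F-Learning can be pictured as subtracting the parameter delta obtained by fine-tuning on the old knowledge before fine-tuning on the new. A minimal sketch of the forgetting step, with the scale `lam` and plain state-dict arithmetic as simplifying assumptions:

```python
import torch

def forget_old_knowledge(base: dict, old_ft: dict, lam: float = 1.0) -> dict:
    # Forgetting step, read as parametric arithmetic:
    # theta' = theta - lam * (theta_old_ft - theta).
    # Learning the new knowledge is then ordinary fine-tuning from theta'.
    return {name: base[name] - lam * (old_ft[name] - base[name]) for name in base}

base = {"w": torch.tensor([1.0, 2.0])}
old_ft = {"w": torch.tensor([1.5, 1.0])}   # after fine-tuning on old knowledge
print(forget_old_knowledge(base, old_ft))  # {'w': tensor([0.5000, 3.0000])}
```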
https://aclanthology.org/2024.acl-long.311.bib
@inproceedings{thakkar-etal-2024-deep, title = "A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques", author = "Thakkar, Megh and Fournier, Quentin and Riemer, Matthew and Chen, Pin-Yu and Zouaq, Amal and Das, Payel and Chandar, Sarath", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.311", pages = "5732--5745", abstract = "Large language models are first pre-trained on trillions of tokens and then instruction-tuned or aligned to specific preferences. While pre-training remains out of reach for most researchers due to the compute required, fine-tuning has become affordable thanks to parameter-efficient methods such as LoRA and QLoRA. Alignment is known to be sensitive to the many factors involved, including the quantity and quality of data, the alignment method, and the adapter rank. However, there has not yet been an extensive study of their effect on downstream performance. To address this gap, we conduct an in-depth investigation of the impact of popular choices for three crucial axes: (i) the alignment dataset (HH-RLHF and BeaverTails), (ii) the alignment technique (SFT and DPO), and (iii) the model (LLaMA-1, Vicuna-v1.3, Mistral-7b, and Mistral-7b-Instruct). Our extensive setup spanning over 300 experiments reveals consistent trends and unexpected findings. We observe how more informative data helps with preference alignment, cases where supervised fine-tuning outperforms preference optimization, and how aligning to a distinct preference boosts performance on downstream tasks. Through our in-depth analyses, we put forward key guidelines to help researchers perform more effective parameter-efficient LLM alignment.", }
Large language models are first pre-trained on trillions of tokens and then instruction-tuned or aligned to specific preferences. While pre-training remains out of reach for most researchers due to the compute required, fine-tuning has become affordable thanks to parameter-efficient methods such as LoRA and QLoRA. Alignment is known to be sensitive to the many factors involved, including the quantity and quality of data, the alignment method, and the adapter rank. However, there has not yet been an extensive study of their effect on downstream performance. To address this gap, we conduct an in-depth investigation of the impact of popular choices for three crucial axes: (i) the alignment dataset (HH-RLHF and BeaverTails), (ii) the alignment technique (SFT and DPO), and (iii) the model (LLaMA-1, Vicuna-v1.3, Mistral-7b, and Mistral-7b-Instruct). Our extensive setup spanning over 300 experiments reveals consistent trends and unexpected findings. We observe how more informative data helps with preference alignment, cases where supervised fine-tuning outperforms preference optimization, and how aligning to a distinct preference boosts performance on downstream tasks. Through our in-depth analyses, we put forward key guidelines to help researchers perform more effective parameter-efficient LLM alignment.
[ "Thakkar, Megh", "Fournier, Quentin", "Riemer, Matthew", "Chen, Pin-Yu", "Zouaq, Amal", "Das, Payel", "Ch", "ar, Sarath" ]
A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques
acl-long.311
Poster
2406.04879
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.311/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.312.bib
@inproceedings{luo-etal-2024-zero, title = "Zero-Shot Cross-Domain Dialogue State Tracking via Dual Low-Rank Adaptation", author = "Luo, Xiang and Tang, Zhiwen and Wang, Jin and Zhang, Xuejie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.312", pages = "5746--5765", abstract = "Zero-shot dialogue state tracking (DST) seeks to enable dialogue systems to transition to unfamiliar domains without manual annotation or extensive retraining. Prior research has approached this objective by embedding prompts into language models (LMs). Common methodologies include integrating prompts at the input layer or introducing learnable variables at each transformer layer. Nonetheless, each strategy exhibits inherent limitations. Prompts integrated at the input layer risk underutilization, with their impact potentially diminishing across successive transformer layers. Conversely, the addition of learnable variables to each layer can complicate the training process and increase inference latency. To tackle the issues mentioned above, this paper proposes Dual Low-Rank Adaptation (DualLoRA), a plug-and-play architecture designed for zero-shot DST. DualLoRA incorporates two distinct Low-Rank Adaptation (LoRA) components, targeting both dialogue context processing and prompt optimization, to ensure the comprehensive influence of prompts throughout the transformer model layers. This is achieved without incurring additional inference latency, showcasing an efficient integration into existing architectures. Through rigorous evaluation on the MultiWOZ and SGD datasets, DualLoRA demonstrates notable improvements across multiple domains, outperforming traditional baseline methods in zero-shot settings.", }
Zero-shot dialogue state tracking (DST) seeks to enable dialogue systems to transition to unfamiliar domains without manual annotation or extensive retraining. Prior research has approached this objective by embedding prompts into language models (LMs). Common methodologies include integrating prompts at the input layer or introducing learnable variables at each transformer layer. Nonetheless, each strategy exhibits inherent limitations. Prompts integrated at the input layer risk underutilization, with their impact potentially diminishing across successive transformer layers. Conversely, the addition of learnable variables to each layer can complicate the training process and increase inference latency. To tackle the issues mentioned above, this paper proposes Dual Low-Rank Adaptation (DualLoRA), a plug-and-play architecture designed for zero-shot DST. DualLoRA incorporates two distinct Low-Rank Adaptation (LoRA) components, targeting both dialogue context processing and prompt optimization, to ensure the comprehensive influence of prompts throughout the transformer model layers. This is achieved without incurring additional inference latency, showcasing an efficient integration into existing architectures. Through rigorous evaluation on the MultiWOZ and SGD datasets, DualLoRA demonstrates notable improvements across multiple domains, outperforming traditional baseline methods in zero-shot settings.
[ "Luo, Xiang", "Tang, Zhiwen", "Wang, Jin", "Zhang, Xuejie" ]
Zero-Shot Cross-Domain Dialogue State Tracking via Dual Low-Rank Adaptation
acl-long.312
Poster
2407.21633
[ "https://github.com/suntea233/duallora" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.312/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.313.bib
@inproceedings{luo-etal-2024-prp, title = "{PRP}-Graph: Pairwise Ranking Prompting to {LLM}s with Graph Aggregation for Effective Text Re-ranking", author = "Luo, Jian and Chen, Xuanang and He, Ben and Sun, Le", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.313", pages = "5766--5776", abstract = "Pairwise Ranking Prompting (PRP) demonstrates impressive effectiveness in zero-shot document re-ranking tasks with large language models (LLMs). However, in the existing methods, PRP only outputs the same label for the comparison results of different confidence intervals without considering the uncertainty of pairwise comparison, which implies an underutilization of the generation probability information of LLMs. To bridge this gap, we propose PRP-Graph, a novel pairwise re-ranking approach, based on a refined scoring PRP unit that exploits the output probabilities of target labels to capture the degree of certainty of the comparison results. Specifically, the PRP-Graph consists of two stages, namely ranking graph construction and ranking graph aggregation. Extensive experiments conducted on the BEIR benchmark demonstrate the superiority of our approach over existing PRP-based methods. Comprehensive analysis reveals that the PRP-Graph displays strong robustness towards the initial ranking order and delivers exceptional re-ranking results with acceptable efficiency. Our code and data are available at https://github.com/Memelank/PRP-Graph.", }
Pairwise Ranking Prompting (PRP) demonstrates impressive effectiveness in zero-shot document re-ranking tasks with large language models (LLMs). However, in the existing methods, PRP only outputs the same label for the comparison results of different confidence intervals without considering the uncertainty of pairwise comparison, which implies an underutilization of the generation probability information of LLMs. To bridge this gap, we propose PRP-Graph, a novel pairwise re-ranking approach, based on a refined scoring PRP unit that exploits the output probabilities of target labels to capture the degree of certainty of the comparison results. Specifically, the PRP-Graph consists of two stages, namely ranking graph construction and ranking graph aggregation. Extensive experiments conducted on the BEIR benchmark demonstrate the superiority of our approach over existing PRP-based methods. Comprehensive analysis reveals that the PRP-Graph displays strong robustness towards the initial ranking order and delivers exceptional re-ranking results with acceptable efficiency. Our code and data are available at https://github.com/Memelank/PRP-Graph.
[ "Luo, Jian", "Chen, Xuanang", "He, Ben", "Sun, Le" ]
PRP-Graph: Pairwise Ranking Prompting to LLMs with Graph Aggregation for Effective Text Re-ranking
acl-long.313
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.313/
[]
[]
[]
0
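A toy reading of the PRP-Graph idea, scoring documents from soft pairwise comparisons: `win_prob[i, j]` stands for the generation probability the LLM assigns to "document i ranks above document j". The single-step aggregation below is far simpler than the paper's two-stage ranking graph construction and aggregation:

```python
import numpy as np

def aggregate_prp_scores(win_prob: np.ndarray) -> np.ndarray:
    # Sum each document's soft pairwise wins, weighted by the label
    # probabilities rather than hard win/loss outcomes.
    w = win_prob.copy()
    np.fill_diagonal(w, 0.0)
    return w.sum(axis=1)

win = np.array([[0.0, 0.9, 0.7],
                [0.1, 0.0, 0.6],
                [0.3, 0.4, 0.0]])
print(aggregate_prp_scores(win))  # doc 0 scores highest
```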
https://aclanthology.org/2024.acl-long.314.bib
@inproceedings{huang-etal-2024-repcodec, title = "{R}ep{C}odec: A Speech Representation Codec for Speech Tokenization", author = "Huang, Zhichao and Meng, Chutong and Ko, Tom", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.314", pages = "5777--5790", abstract = "With the recent rapid growth of large language models (LLMs), discrete speech tokenization has played an important role in injecting speech into LLMs. However, this discretization gives rise to a loss of information, consequently impairing overall performance. To improve the performance of these discrete speech tokens, we present RepCodec, a novel speech representation codec for semantic speech tokenization. In contrast to audio codecs which reconstruct the raw audio, RepCodec learns a vector quantization codebook through reconstructing speech representations from speech encoders like HuBERT or data2vec. Together, the speech encoder, the codec encoder and the vector quantization codebook form a pipeline for converting speech waveforms into semantic tokens. The extensive experiments illustrate that RepCodec, by virtue of its enhanced information retention capacity, significantly outperforms the widely used k-means clustering approach in both speech understanding and generation. Furthermore, this superiority extends across various speech encoders and languages, affirming the robustness of RepCodec. We believe our method can facilitate large language modeling research on speech processing.", }
With the recent rapid growth of large language models (LLMs), discrete speech tokenization has played an important role in injecting speech into LLMs. However, this discretization gives rise to a loss of information, consequently impairing overall performance. To improve the performance of these discrete speech tokens, we present RepCodec, a novel speech representation codec for semantic speech tokenization. In contrast to audio codecs which reconstruct the raw audio, RepCodec learns a vector quantization codebook through reconstructing speech representations from speech encoders like HuBERT or data2vec. Together, the speech encoder, the codec encoder and the vector quantization codebook form a pipeline for converting speech waveforms into semantic tokens. The extensive experiments illustrate that RepCodec, by virtue of its enhanced information retention capacity, significantly outperforms the widely used k-means clustering approach in both speech understanding and generation. Furthermore, this superiority extends across various speech encoders and languages, affirming the robustness of RepCodec. We believe our method can facilitate large language modeling research on speech processing.
[ "Huang, Zhichao", "Meng, Chutong", "Ko, Tom" ]
RepCodec: A Speech Representation Codec for Speech Tokenization
acl-long.314
Oral
2309.00169
[ "https://github.com/mct10/repcodec" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.314/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.315.bib
@inproceedings{fu-etal-2024-gumbelsoft, title = "{G}umbel{S}oft: Diversified Language Model Watermarking via the {G}umbel{M}ax-trick", author = "Fu, Jiayi and Zhao, Xuandong and Yang, Ruihan and Zhang, Yuansen and Chen, Jiangjie and Xiao, Yanghua", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.315", pages = "5791--5808", abstract = "Large language models (LLMs) excellently generate human-like text, but also raise concerns about misuse in fake news and academic dishonesty. Decoding-based watermarking, particularly the watermark based on the GumbelMax trick (GM watermark), is a standout solution for safeguarding machine-generated texts due to its notable detectability. However, the GM watermark encounters a major challenge with generation diversity, always yielding identical outputs for the same prompt, negatively impacting generation diversity and user experience. To overcome this limitation, we introduce a new type of GM watermark, the Logits-Addition watermark, as well as three variants that aim to enhance diversity, particularly the GumbelSoft watermark (i.e., the softmax variant of the Logits-Addition watermark). When assessed for detectability in high diversity settings, our GumbelSoft demonstrates superior performance, with its AUROC score exceeding those of the two alternative variants by a margin of 0.1 to 0.3 and outperforming other decoding-based watermarking methods by a minimum of 0.1.", }
Large language models (LLMs) excellently generate human-like text, but also raise concerns about misuse in fake news and academic dishonesty. Decoding-based watermarking, particularly the watermark based on the GumbelMax trick (GM watermark), is a standout solution for safeguarding machine-generated texts due to its notable detectability. However, the GM watermark encounters a major challenge with generation diversity, always yielding identical outputs for the same prompt, negatively impacting generation diversity and user experience. To overcome this limitation, we introduce a new type of GM watermark, the Logits-Addition watermark, as well as three variants that aim to enhance diversity, particularly the GumbelSoft watermark (i.e., the softmax variant of the Logits-Addition watermark). When assessed for detectability in high diversity settings, our GumbelSoft demonstrates superior performance, with its AUROC score exceeding those of the two alternative variants by a margin of 0.1 to 0.3 and outperforming other decoding-based watermarking methods by a minimum of 0.1.
[ "Fu, Jiayi", "Zhao, Xu", "ong", "Yang, Ruihan", "Zhang, Yuansen", "Chen, Jiangjie", "Xiao, Yanghua" ]
GumbelSoft: Diversified Language Model Watermarking via the GumbelMax-trick
acl-long.315
Poster
2402.12948
[ "https://github.com/poruna-byte/gumbelsoft" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.315/
[]
[]
[]
0
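The GumbelMax trick picks argmax(logits + Gumbel noise), with the noise derived pseudo-randomly from a secret key and recent context so a detector holding the key can recompute it; softening the argmax restores diversity. A rough sketch under that reading; the context hashing window, the detection test, and the exact GumbelSoft formulation are assumptions here:

```python
import hashlib
import numpy as np

def watermarked_next_token(logits: np.ndarray, context: list[int],
                           key: bytes, tau: float = 1.0) -> int:
    # Derive pseudo-random Gumbel noise from a secret key and the recent
    # context so that a detector holding the key can recompute it.
    digest = hashlib.sha256(key + str(context[-4:]).encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "little"))
    gumbel = -np.log(-np.log(rng.random(logits.shape[0])))
    # tau -> 0 approaches the deterministic GumbelMax pick
    # argmax(logits + gumbel); tau > 0 softens it for diversity.
    scores = (logits + gumbel) / tau
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

print(watermarked_next_token(np.array([1.0, 0.5, 0.1]), [3, 1, 4, 1], b"secret"))
```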
https://aclanthology.org/2024.acl-long.316.bib
@inproceedings{ma-etal-2024-event, title = "Event-Radar: Event-driven Multi-View Learning for Multimodal Fake News Detection", author = "Ma, Zihan and Luo, Minnan and Guo, Hao and Zeng, Zhi and Hao, Yiran and Zhao, Xiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.316", pages = "5809--5821", abstract = "The swift detection of multimedia fake news has emerged as a crucial task in combating malicious propaganda and safeguarding the security of the online environment. While existing methods have achieved commendable results in modeling entity-level inconsistency, addressing event-level inconsistency following the inherent subject-predicate logic of news and robustly learning news representations from poor-quality news samples remain two challenges. In this paper, we propose an Event-driven fake news detection framework (Event-Radar) based on multi-view learning, which integrates visual manipulation, textual emotion and multimodal inconsistency at the event level for fake news detection. Specifically, leveraging the capability of graph structures to capture interactions between events and parameters, Event-Radar captures event-level multimodal inconsistency by constructing an event graph that includes multimodal entity subject-predicate logic. Additionally, to mitigate the interference of poor-quality news, Event-Radar introduces a multi-view fusion mechanism, learning comprehensive and robust representations by computing the credibility of each view as a clue, thereby detecting fake news. Extensive experiments demonstrate that Event-Radar achieves outstanding performance on three large-scale fake news detection benchmarks. Our studies also confirm that Event-Radar exhibits strong robustness, providing a paradigm for detecting fake news from noisy news samples.", }
The swift detection of multimedia fake news has emerged as a crucial task in combating malicious propaganda and safeguarding the security of the online environment. While existing methods have achieved commendable results in modeling entity-level inconsistency, addressing event-level inconsistency following the inherent subject-predicate logic of news and robustly learning news representations from poor-quality news samples remain two challenges. In this paper, we propose an Event-driven fake news detection framework (Event-Radar) based on multi-view learning, which integrates visual manipulation, textual emotion and multimodal inconsistency at the event level for fake news detection. Specifically, leveraging the capability of graph structures to capture interactions between events and parameters, Event-Radar captures event-level multimodal inconsistency by constructing an event graph that includes multimodal entity subject-predicate logic. Additionally, to mitigate the interference of poor-quality news, Event-Radar introduces a multi-view fusion mechanism, learning comprehensive and robust representations by computing the credibility of each view as a clue, thereby detecting fake news. Extensive experiments demonstrate that Event-Radar achieves outstanding performance on three large-scale fake news detection benchmarks. Our studies also confirm that Event-Radar exhibits strong robustness, providing a paradigm for detecting fake news from noisy news samples.
[ "Ma, Zihan", "Luo, Minnan", "Guo, Hao", "Zeng, Zhi", "Hao, Yiran", "Zhao, Xiang" ]
Event-Radar: Event-driven Multi-View Learning for Multimodal Fake News Detection
acl-long.316
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.316/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.317.bib
@inproceedings{xu-etal-2024-fine, title = "Fine-Grained Modeling of Narrative Context: A Coherence Perspective via Retrospective Questions", author = "Xu, Liyan and Li, Jiangnan and Yu, Mo and Zhou, Jie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.317", pages = "5822--5838", abstract = "This work introduces an original and practical paradigm for narrative comprehension, stemming from the characteristic that individual passages within narratives tend to be more cohesively related than isolated ones. Complementary to the common end-to-end paradigm, we propose a fine-grained modeling of narrative context, by formulating a graph dubbed NarCo, which explicitly depicts task-agnostic coherence dependencies that are ready to be consumed by various downstream tasks. In particular, edges in NarCo encompass free-form retrospective questions between context snippets, inspired by human cognitive perception that constantly reinstates relevant events from prior context. Importantly, our graph formalism is practically instantiated by LLMs without human annotations, through our designed two-stage prompting scheme. To examine the graph properties and its utility, we conduct three studies in narratives, each from a unique angle: edge relation efficacy, local context enrichment, and broader application in QA. All tasks could benefit from the explicit coherence captured by NarCo.", }
This work introduces an original and practical paradigm for narrative comprehension, stemming from the characteristic that individual passages within narratives tend to be more cohesively related than isolated ones. Complementary to the common end-to-end paradigm, we propose a fine-grained modeling of narrative context, by formulating a graph dubbed NarCo, which explicitly depicts task-agnostic coherence dependencies that are ready to be consumed by various downstream tasks. In particular, edges in NarCo encompass free-form retrospective questions between context snippets, inspired by human cognitive perception that constantly reinstates relevant events from prior context. Importantly, our graph formalism is practically instantiated by LLMs without human annotations, through our designed two-stage prompting scheme. To examine the graph properties and its utility, we conduct three studies in narratives, each from a unique angle: edge relation efficacy, local context enrichment, and broader application in QA. All tasks could benefit from the explicit coherence captured by NarCo.
[ "Xu, Liyan", "Li, Jiangnan", "Yu, Mo", "Zhou, Jie" ]
Fine-Grained Modeling of Narrative Context: A Coherence Perspective via Retrospective Questions
acl-long.317
Poster
2402.13551
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.317/
[]
[]
[]
0
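The NarCo abstract above describes a two-stage prompting scheme for building retrospective-question edges between context snippets. The sketch below illustrates that flow under assumed prompt wording; `ask_llm` is a placeholder for any chat-completion client, not the paper's actual prompts.

```python
def narco_edge(ask_llm, prior_snippet: str, current_snippet: str):
    # Stage 1 (assumed): generate a retrospective question that the current
    # snippet raises about earlier context.
    question = ask_llm(
        "Given this passage:\n" + current_snippet +
        "\nWrite one question whose answer lies in the earlier context."
    )
    # Stage 2 (assumed): answer it against the prior snippet, validating
    # the coherence edge between the two snippets.
    answer = ask_llm(
        "Context:\n" + prior_snippet +
        "\nQuestion: " + question + "\nAnswer briefly."
    )
    return question, answer  # one free-form edge in the NarCo-style graph
```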
https://aclanthology.org/2024.acl-long.318.bib
@inproceedings{zhang-etal-2024-stealthy, title = "Stealthy Attack on Large Language Model based Recommendation", author = "Zhang, Jinghao and Liu, Yuting and Liu, Qiang and Wu, Shu and Guo, Guibing and Wang, Liang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.318", pages = "5839--5857", abstract = "Recently, the powerful large language models (LLMs) have been instrumental in propelling the progress of recommender systems (RS). However, while these systems have flourished, their susceptibility to security threats has been largely overlooked. In this work, we reveal that the introduction of LLMs into recommendation models presents new security vulnerabilities due to their emphasis on the textual content of items. We demonstrate that attackers can significantly boost an item{'}s exposure by merely altering its textual content during the testing phase, without requiring direct interference with the model{'}s training process. Additionally, the attack is notably stealthy, as it does not affect the overall recommendation performance and the modifications to the text are subtle, making it difficult for users and platforms to detect. Our comprehensive experiments across four mainstream LLM-based recommendation models demonstrate the superior efficacy and stealthiness of our approach. Our work unveils a significant security gap in LLM-based recommendation systems and paves the way for future research on protecting these systems.", }
Recently, the powerful large language models (LLMs) have been instrumental in propelling the progress of recommender systems (RS). However, while these systems have flourished, their susceptibility to security threats has been largely overlooked. In this work, we reveal that the introduction of LLMs into recommendation models presents new security vulnerabilities due to their emphasis on the textual content of items. We demonstrate that attackers can significantly boost an item{'}s exposure by merely altering its textual content during the testing phase, without requiring direct interference with the model{'}s training process. Additionally, the attack is notably stealthy, as it does not affect the overall recommendation performance and the modifications to the text are subtle, making it difficult for users and platforms to detect. Our comprehensive experiments across four mainstream LLM-based recommendation models demonstrate the superior efficacy and stealthiness of our approach. Our work unveils a significant security gap in LLM-based recommendation systems and paves the way for future research on protecting these systems.
[ "Zhang, Jinghao", "Liu, Yuting", "Liu, Qiang", "Wu, Shu", "Guo, Guibing", "Wang, Liang" ]
Stealthy Attack on Large Language Model based Recommendation
acl-long.318
Poster
2402.14836
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.318/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.319.bib
@inproceedings{ryu-etal-2024-multi, title = "Multi-Dimensional Optimization for Text Summarization via Reinforcement Learning", author = "Ryu, Sangwon and Do, Heejin and Kim, Yunsu and Lee, Gary and Ok, Jungseul", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.319", pages = "5858--5871", abstract = "The evaluation of summary quality encompasses diverse dimensions such as consistency, coherence, relevance, and fluency. However, existing summarization methods often target a specific dimension, facing challenges in generating well-balanced summaries across multiple dimensions. In this paper, we propose multi-objective reinforcement learning tailored to generate balanced summaries across all four dimensions. We introduce two multi-dimensional optimization (MDO) strategies for adaptive learning: 1) MDO{\_}min, which rewards the currently lowest dimension score, and 2) MDO{\_}pro, which optimizes multiple dimensions, similar to multi-task learning, and resolves conflicting gradients across dimensions through gradient projection. Unlike prior ROUGE-based rewards relying on reference summaries, we use a QA-based reward model that aligns with human preferences. Further, we discover the capability to regulate the length of summaries by adjusting the discount factor, seeking the generation of concise yet informative summaries that encapsulate crucial points. Our approach achieved substantial performance gains compared to baseline models on representative summarization datasets, particularly in the overlooked dimensions.", }
The evaluation of summary quality encompasses diverse dimensions such as consistency, coherence, relevance, and fluency. However, existing summarization methods often target a specific dimension, facing challenges in generating well-balanced summaries across multiple dimensions. In this paper, we propose multi-objective reinforcement learning tailored to generate balanced summaries across all four dimensions. We introduce two multi-dimensional optimization (MDO) strategies for adaptive learning: 1) MDO{\_}min, which rewards the currently lowest dimension score, and 2) MDO{\_}pro, which optimizes multiple dimensions, similar to multi-task learning, and resolves conflicting gradients across dimensions through gradient projection. Unlike prior ROUGE-based rewards relying on reference summaries, we use a QA-based reward model that aligns with human preferences. Further, we discover the capability to regulate the length of summaries by adjusting the discount factor, seeking the generation of concise yet informative summaries that encapsulate crucial points. Our approach achieved substantial performance gains compared to baseline models on representative summarization datasets, particularly in the overlooked dimensions.
[ "Ryu, Sangwon", "Do, Heejin", "Kim, Yunsu", "Lee, Gary", "Ok, Jungseul" ]
Multi-Dimensional Optimization for Text Summarization via Reinforcement Learning
acl-long.319
Poster
2406.00303
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.319/
[]
[]
[]
0
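A minimal sketch of the two MDO strategies named in the abstract above: MDO_min rewards the currently weakest dimension, and MDO_pro-style gradient surgery projects away conflicting gradient components (in the spirit of PCGrad). The per-dimension scores are assumed to come from an external QA-based reward model; everything here is illustrative, not the authors' implementation.

```python
import numpy as np

def mdo_min_reward(scores: dict) -> float:
    # Reward the currently lowest dimension score so training lifts the floor.
    return min(scores.values())

def project_conflicting(g_i: np.ndarray, g_j: np.ndarray) -> np.ndarray:
    # If two dimension gradients conflict (negative dot product), remove
    # from g_i its component along g_j before the optimizer step.
    dot = g_i @ g_j
    if dot < 0:
        g_i = g_i - dot / (g_j @ g_j + 1e-12) * g_j
    return g_i

scores = {"consistency": 0.71, "coherence": 0.64, "relevance": 0.80, "fluency": 0.77}
print(mdo_min_reward(scores))  # 0.64 -> coherence is optimized first
```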
https://aclanthology.org/2024.acl-long.320.bib
@inproceedings{chen-etal-2024-masked, title = "Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models", author = "Chen, Changyu and Wang, Xiting and Lin, Ting-En and Lv, Ang and Wu, Yuchuan and Gao, Xin and Wen, Ji-Rong and Yan, Rui and Li, Yongbin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.320", pages = "5872--5900", abstract = "In reasoning tasks, even a minor error can cascade into inaccurate results, leading to suboptimal performance of large language models in such domains. Earlier fine-tuning approaches sought to mitigate this by leveraging more precise supervisory signals from human labeling, larger models, or self-sampling, although at a high cost. Conversely, we develop a method that avoids external resources, relying instead on introducing perturbations to the input. Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks. When applied to fine-tuning with GSM8K on Llama-2-7B, this method achieved a 5{\%} improvement in GSM8K accuracy and a 10{\%} improvement in GSM-IC accuracy over standard supervised fine-tuning with only a few lines of code modified. Furthermore, it is complementary to existing methods. When integrated with related explicit data augmentation methods, it leads to improvements across five datasets of various augmentation methods, as well as two different base models. We further investigate the mechanisms behind this improvement through case studies and quantitative analysis, suggesting that our approach may provide superior support for the model in capturing long-distance dependencies, especially those related to questions. This enhancement could deepen understanding of the premises in questions and prior steps.", }
In reasoning tasks, even a minor error can cascade into inaccurate results, leading to suboptimal performance of large language models in such domains. Earlier fine-tuning approaches sought to mitigate this by leveraging more precise supervisory signals from human labeling, larger models, or self-sampling, although at a high cost. Conversely, we develop a method that avoids external resources, relying instead on introducing perturbations to the input. Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks. When applied to fine-tuning with GSM8K on Llama-2-7B, this method achieved a 5{\%} improvement in GSM8K accuracy and a 10{\%} improvement in GSM-IC accuracy over standard supervised fine-tuning with only a few lines of code modified. Furthermore, it is complementary to existing methods. When integrated with related explicit data augmentation methods, it leads to improvements across five datasets of various augmentation methods, as well as two different base models. We further investigate the mechanisms behind this improvement through case studies and quantitative analysis, suggesting that our approach may provide superior support for the model in capturing long-distance dependencies, especially those related to questions. This enhancement could deepen understanding of the premises in questions and prior steps.
[ "Chen, Changyu", "Wang, Xiting", "Lin, Ting-En", "Lv, Ang", "Wu, Yuchuan", "Gao, Xin", "Wen, Ji-Rong", "Yan, Rui", "Li, Yongbin" ]
Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models
acl-long.320
Poster
2403.02178
[ "https://github.com/changyuchen347/maskedthought" ]
https://huggingface.co/papers/2403.02178
1
1
0
9
https://aclanthology.org/2024.acl-long.320/
[ "euclaise/ReMask-3B", "adalaw/MetaMath-Mistral-7B-MFT", "adalaw/Llama2-7B-GSM8K-MFT", "adalaw/MAmmoTH-7B-Mistral-MFT" ]
[]
[]
1
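The Masked Thought record above describes randomly masking chain-of-thought tokens during supervised fine-tuning. A minimal sketch of that input perturbation is below; the masking ratio and mask token are illustrative assumptions, and the exact settings are in the authors' repository linked in the record.

```python
import random

def mask_thought(cot_token_ids: list, mask_id: int, p: float = 0.2) -> list:
    # Randomly replace a fraction p of chain-of-thought tokens on the
    # *input* side; the target labels stay unchanged, so the loss remains
    # ordinary next-token cross-entropy.
    return [mask_id if random.random() < p else t for t in cot_token_ids]

random.seed(0)
print(mask_thought([101, 17, 42, 99, 7], mask_id=0))  # some tokens become 0
```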
https://aclanthology.org/2024.acl-long.321.bib
@inproceedings{chen-etal-2024-seer, title = "{SEER}: Facilitating Structured Reasoning and Explanation via Reinforcement Learning", author = "Chen, Guoxin and Tang, Kexin and Yang, Chao and Ye, Fuying and Qiao, Yu and Qian, Yiming", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.321", pages = "5901--5921", abstract = "Elucidating the reasoning process with structured explanations from question to answer is crucial, as it significantly enhances the interpretability, traceability, and trustworthiness of question-answering (QA) systems. However, structured explanations demand models to perform intricately structured reasoning, which poses great challenges. Most existing methods focus on single-step reasoning through supervised learning, ignoring logical dependencies between steps. Moreover, existing reinforcement learning (RL) based methods overlook the structured relationships, underutilizing the potential of RL in structured reasoning. In this paper, we propose SEER, a novel method that maximizes a structure-based return to facilitate structured reasoning and explanation. Our proposed structure-based return precisely describes the hierarchical and branching structure inherent in structured reasoning, effectively capturing the intricate relationships between different reasoning steps. In addition, we introduce a fine-grained reward function to meticulously delineate diverse reasoning steps. Extensive experiments show that SEER significantly outperforms state-of-the-art methods, achieving an absolute improvement of 6.9{\%} over RL-based methods on EntailmentBank, a 4.4{\%} average improvement on STREET benchmark, and exhibiting outstanding efficiency and cross-dataset generalization performance.", }
Elucidating the reasoning process with structured explanations from question to answer is crucial, as it significantly enhances the interpretability, traceability, and trustworthiness of question-answering (QA) systems. However, structured explanations demand models to perform intricately structured reasoning, which poses great challenges. Most existing methods focus on single-step reasoning through supervised learning, ignoring logical dependencies between steps. Moreover, existing reinforcement learning (RL) based methods overlook the structured relationships, underutilizing the potential of RL in structured reasoning. In this paper, we propose SEER, a novel method that maximizes a structure-based return to facilitate structured reasoning and explanation. Our proposed structure-based return precisely describes the hierarchical and branching structure inherent in structured reasoning, effectively capturing the intricate relationships between different reasoning steps. In addition, we introduce a fine-grained reward function to meticulously delineate diverse reasoning steps. Extensive experiments show that SEER significantly outperforms state-of-the-art methods, achieving an absolute improvement of 6.9{\%} over RL-based methods on EntailmentBank, a 4.4{\%} average improvement on STREET benchmark, and exhibiting outstanding efficiency and cross-dataset generalization performance.
[ "Chen, Guoxin", "Tang, Kexin", "Yang, Chao", "Ye, Fuying", "Qiao, Yu", "Qian, Yiming" ]
SEER: Facilitating Structured Reasoning and Explanation via Reinforcement Learning
acl-long.321
Oral
2401.13246
[ "https://github.com/chen-gx/seer" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.321/
[]
[]
[]
0
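In the spirit of the SEER abstract above, a structure-based return lets a reasoning step's credit depend on the steps that build on it, following the tree's hierarchy and branching. The sketch below is a hedged illustration only: the sum aggregation and discount are assumptions, not the paper's exact definition.

```python
def structured_return(rewards: dict, children: dict, node: str, gamma: float = 0.9) -> float:
    # A node's return = its own fine-grained reward plus the discounted
    # returns of the reasoning steps that depend on it (its children).
    return rewards[node] + gamma * sum(
        structured_return(rewards, children, c, gamma)
        for c in children.get(node, [])
    )

rewards = {"root": 1.0, "s1": 0.5, "s2": -0.2}
children = {"root": ["s1", "s2"]}
print(structured_return(rewards, children, "root"))  # 1.0 + 0.9 * 0.3 = 1.27
```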
https://aclanthology.org/2024.acl-long.322.bib
@inproceedings{kim-etal-2024-towards-robust, title = "Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning", author = "Kim, Yeachan and Kim, Junho and Lee, SangKeun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.322", pages = "5922--5936", abstract = "Parameter-efficient fine-tuning (PEFT) has enabled the efficient optimization of cumbersome language models in real-world settings. However, as datasets in such environments often contain noisy labels that adversely affect performance, PEFT methods are inevitably exposed to noisy labels. Despite this challenge, the adaptability of PEFT to noisy environments remains underexplored. To bridge this gap, we investigate various PEFT methods under noisy labels. Interestingly, our findings reveal that PEFT has difficulty in memorizing noisy labels due to its inherently limited capacity, resulting in robustness. However, we also find that such limited capacity simultaneously makes PEFT more vulnerable to the interference of noisy labels, impeding the learning of clean samples. To address this issue, we propose Clean Routing (CleaR), a novel routing-based PEFT approach that adaptively activates PEFT modules. In CleaR, PEFT modules are preferentially exposed to clean data while bypassing the noisy ones, thereby minimizing the noisy influence. To verify the efficacy of CleaR, we perform extensive experiments on diverse configurations of noisy labels. The results convincingly demonstrate that CleaR leads to substantially improved performance in noisy environments.", }
Parameter-efficient fine-tuning (PEFT) has enabled the efficient optimization of cumbersome language models in real-world settings. However, as datasets in such environments often contain noisy labels that adversely affect performance, PEFT methods are inevitably exposed to noisy labels. Despite this challenge, the adaptability of PEFT to noisy environments remains underexplored. To bridge this gap, we investigate various PEFT methods under noisy labels. Interestingly, our findings reveal that PEFT has difficulty in memorizing noisy labels due to its inherently limited capacity, resulting in robustness. However, we also find that such limited capacity simultaneously makes PEFT more vulnerable to the interference of noisy labels, impeding the learning of clean samples. To address this issue, we propose Clean Routing (CleaR), a novel routing-based PEFT approach that adaptively activates PEFT modules. In CleaR, PEFT modules are preferentially exposed to clean data while bypassing the noisy ones, thereby minimizing the noisy influence. To verify the efficacy of CleaR, we perform extensive experiments on diverse configurations of noisy labels. The results convincingly demonstrate that CleaR leads to substantially improved performance in noisy environments.
[ "Kim, Yeachan", "Kim, Junho", "Lee, SangKeun" ]
Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning
acl-long.322
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.322/
[]
[]
[]
0
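A minimal sketch of clean routing in the spirit of the CleaR abstract above: PEFT modules are activated preferentially for samples that look clean. Using a small-loss criterion with a fixed keep ratio to decide "clean" is an assumption for illustration, not the paper's exact routing rule.

```python
import torch

def route_mask(per_sample_loss: torch.Tensor, keep_ratio: float = 0.7) -> torch.Tensor:
    # Samples with the smallest losses are treated as clean and routed
    # through the PEFT modules; the rest bypass them.
    k = max(1, int(keep_ratio * per_sample_loss.numel()))
    thresh = per_sample_loss.sort().values[k - 1]
    return per_sample_loss <= thresh  # True -> activate PEFT module

loss = torch.tensor([0.2, 1.9, 0.4, 2.5])
print(route_mask(loss))  # tensor([ True, False,  True, False])
```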
https://aclanthology.org/2024.acl-long.323.bib
@inproceedings{kim-lee-2024-sparseflow, title = "{S}parse{F}low: Accelerating Transformers by Sparsifying Information Flows", author = "Kim, Yeachan and Lee, SangKeun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.323", pages = "5937--5948", abstract = "Transformers have become the de-facto standard for natural language processing. However, dense information flows within transformers pose significant challenges for real-time and resource-constrained devices, as computational complexity grows quadratically with sequence length. To counteract such dense information flows, we propose SparseFlow, a novel efficient method designed to sparsify the dense pathways of token representations across all transformer blocks. To this end, SparseFlow parameterizes the information flows linking token representations to transformer blocks. These parameterized information flows are optimized to be sparse, allowing only the salient information to pass through into the blocks. To validate the efficacy of SparseFlow, we conduct comprehensive experiments across diverse benchmarks (understanding and generation), scales (ranging from millions to billions), architectures (including encoders, decoders, and seq-to-seq models), and modalities (such as language-only and vision-language). The results convincingly demonstrate that sparsifying the dense information flows leads to substantial speedup gains without compromising task accuracy. For instance, SparseFlow reduces computational costs by half on average, without a significant loss in accuracy.", }
Transformers have become the de-facto standard for natural language processing. However, dense information flows within transformers pose significant challenges for real-time and resource-constrained devices, as computational complexity grows quadratically with sequence length. To counteract such dense information flows, we propose SparseFlow, a novel efficient method designed to sparsify the dense pathways of token representations across all transformer blocks. To this end, SparseFlow parameterizes the information flows linking token representations to transformer blocks. These parameterized information flows are optimized to be sparse, allowing only the salient information to pass through into the blocks. To validate the efficacy of SparseFlow, we conduct comprehensive experiments across diverse benchmarks (understanding and generation), scales (ranging from millions to billions), architectures (including encoders, decoders, and seq-to-seq models), and modalities (such as language-only and vision-language). The results convincingly demonstrate that sparsifying the dense information flows leads to substantial speedup gains without compromising task accuracy. For instance, SparseFlow reduces computational costs by half on average, without a significant loss in accuracy.
[ "Kim, Yeachan", "Lee, SangKeun" ]
SparseFlow: Accelerating Transformers by Sparsifying Information Flows
acl-long.323
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.323/
[]
[]
[]
0
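The SparseFlow abstract above describes parameterizing and sparsifying the information flows that link token representations to transformer blocks. A hedged sketch of that idea follows: a learnable per-token gate on a block's input plus an L1 penalty pushing the gates toward sparsity. The sigmoid-gate parameterization is an assumption for illustration.

```python
import torch
import torch.nn as nn

class GatedBlockInput(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 1)  # learnable flow parameter per token

    def forward(self, x: torch.Tensor):
        # x: (batch, seq, dim); g in (0, 1) decides how much of each token
        # representation flows into the block.
        g = torch.sigmoid(self.gate(x))
        return x * g, g.abs().mean()  # gated input + sparsity penalty term

block_in = GatedBlockInput(64)
y, penalty = block_in(torch.randn(2, 10, 64))
print(y.shape, float(penalty))  # gated tokens; add penalty to the loss
```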
https://aclanthology.org/2024.acl-long.324.bib
@inproceedings{liu-etal-2024-prott3, title = "{P}rot{T}3: Protein-to-Text Generation for Text-based Protein Understanding", author = "Liu, Zhiyuan and Zhang, An and Fei, Hao and Zhang, Enzhi and Wang, Xiang and Kawaguchi, Kenji and Chua, Tat-Seng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.324", pages = "5949--5966", abstract = "Language Models (LMs) excel in understanding textual descriptions of proteins, as evident in biomedical question-answering tasks. However, their capability falters with raw protein data, such as amino acid sequences, due to a deficit in pretraining on such data. Conversely, Protein Language Models (PLMs) can understand and convert protein data into high-quality representations, but struggle to process texts. To address their limitations, we introduce ProtT3, a framework for Protein-to-Text Generation for Text-based Protein Understanding. ProtT3 empowers an LM to understand protein sequences of amino acids by incorporating a PLM as its protein understanding module, enabling effective protein-to-text generation. This collaboration between PLM and LM is facilitated by a cross-modal projector (i.e., Q-Former) that bridges the modality gap between the PLM{'}s representation space and the LM{'}s input space. Unlike previous studies focusing on protein property prediction and protein-text retrieval, we delve into the largely unexplored field of protein-to-text generation. To facilitate comprehensive benchmarks and promote future research, we establish quantitative evaluations for protein-text modeling tasks, including protein captioning, protein question-answering, and protein-text retrieval. Our experiments show that ProtT3 substantially surpasses current baselines, with ablation studies further highlighting the efficacy of its core components. Our code is available at https://github.com/acharkq/ProtT3.", }
Language Models (LMs) excel in understanding textual descriptions of proteins, as evident in biomedical question-answering tasks. However, their capability falters with raw protein data, such as amino acid sequences, due to a deficit in pretraining on such data. Conversely, Protein Language Models (PLMs) can understand and convert protein data into high-quality representations, but struggle to process texts. To address their limitations, we introduce ProtT3, a framework for Protein-to-Text Generation for Text-based Protein Understanding. ProtT3 empowers an LM to understand protein sequences of amino acids by incorporating a PLM as its protein understanding module, enabling effective protein-to-text generation. This collaboration between PLM and LM is facilitated by a cross-modal projector (i.e., Q-Former) that bridges the modality gap between the PLM{'}s representation space and the LM{'}s input space. Unlike previous studies focusing on protein property prediction and protein-text retrieval, we delve into the largely unexplored field of protein-to-text generation. To facilitate comprehensive benchmarks and promote future research, we establish quantitative evaluations for protein-text modeling tasks, including protein captioning, protein question-answering, and protein-text retrieval. Our experiments show that ProtT3 substantially surpasses current baselines, with ablation studies further highlighting the efficacy of its core components. Our code is available at https://github.com/acharkq/ProtT3.
[ "Liu, Zhiyuan", "Zhang, An", "Fei, Hao", "Zhang, Enzhi", "Wang, Xiang", "Kawaguchi, Kenji", "Chua, Tat-Seng" ]
ProtT3: Protein-to-Text Generation for Text-based Protein Understanding
acl-long.324
Poster
2405.12564
[ "https://github.com/acharkq/prott3" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.324/
[]
[]
[]
0
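The ProtT3 record above centers on a cross-modal projector (a Q-Former) bridging a protein language model's representation space and the LM's input space. The simplified sketch below shows the core mechanism: learnable queries cross-attend to protein states and are mapped into the LM's embedding space. The single-layer design and all dimensions are illustrative assumptions; the real Q-Former is a multi-layer transformer.

```python
import torch
import torch.nn as nn

class MiniProjector(nn.Module):
    def __init__(self, plm_dim: int = 1280, lm_dim: int = 4096, n_query: int = 32):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_query, plm_dim))
        self.attn = nn.MultiheadAttention(plm_dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(plm_dim, lm_dim)

    def forward(self, protein_states: torch.Tensor) -> torch.Tensor:
        # protein_states: (batch, seq, plm_dim) from a frozen PLM encoder.
        q = self.queries.unsqueeze(0).expand(protein_states.size(0), -1, -1)
        out, _ = self.attn(q, protein_states, protein_states)
        return self.proj(out)  # (batch, n_query, lm_dim): soft tokens for the LM

proj = MiniProjector()
soft_tokens = proj(torch.randn(2, 200, 1280))
print(soft_tokens.shape)  # torch.Size([2, 32, 4096])
```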
https://aclanthology.org/2024.acl-long.325.bib
@inproceedings{yu-etal-2024-kieval, title = "{KIE}val: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models", author = "Yu, Zhuohao and Gao, Chang and Yao, Wenjin and Wang, Yidong and Ye, Wei and Wang, Jindong and Xie, Xing and Zhang, Yue and Zhang, Shikun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.325", pages = "5967--5985", abstract = "Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness. Existing strategies, which aim to detect contaminated texts, focus on quantifying contamination status instead of accurately gauging model performance. In this paper, we introduce KIEval, a Knowledge-grounded Interactive Evaluation framework, which incorporates an LLM-powered {``}interactor{''} role for the first time to accomplish a dynamic contamination-resilient evaluation. Starting with a question in a conventional LLM benchmark involving domain-specific knowledge, KIEval utilizes dynamically generated, multi-round, and knowledge-focused dialogues to determine whether a model{'}s response is merely a recall of benchmark answers or demonstrates deep comprehension by applying knowledge in more complex conversations. Extensive experiments on seven leading LLMs across five datasets validate KIEval{'}s effectiveness and generalization. We also reveal that data contamination contributes nothing to, and can even negatively affect, models{'} real-world applicability and understanding, and that existing contamination detection methods for LLMs can only identify contamination in pre-training but not during supervised fine-tuning.", }
Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness. Existing strategies, which aim to detect contaminated texts, focus on quantifying contamination status instead of accurately gauging model performance. In this paper, we introduce KIEval, a Knowledge-grounded Interactive Evaluation framework, which incorporates an LLM-powered {``}interactor{''} role for the first time to accomplish a dynamic contamination-resilient evaluation. Starting with a question in a conventional LLM benchmark involving domain-specific knowledge, KIEval utilizes dynamically generated, multi-round, and knowledge-focused dialogues to determine whether a model{'}s response is merely a recall of benchmark answers or demonstrates deep comprehension by applying knowledge in more complex conversations. Extensive experiments on seven leading LLMs across five datasets validate KIEval{'}s effectiveness and generalization. We also reveal that data contamination contributes nothing to, and can even negatively affect, models{'} real-world applicability and understanding, and that existing contamination detection methods for LLMs can only identify contamination in pre-training but not during supervised fine-tuning.
[ "Yu, Zhuohao", "Gao, Chang", "Yao, Wenjin", "Wang, Yidong", "Ye, Wei", "Wang, Jindong", "Xie, Xing", "Zhang, Yue", "Zhang, Shikun" ]
KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
acl-long.325
Poster
2402.15043
[ "https://github.com/zhuohaoyu/kieval" ]
https://huggingface.co/papers/2402.15043
1
1
0
9
https://aclanthology.org/2024.acl-long.325/
[ "sunatte/txt2sql" ]
[]
[ "Justinrune/LLaMA-Factory", "smarttang/blingsec" ]
1
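A hedged sketch of the interactor loop described in the KIEval abstract above: an "interactor" LLM asks knowledge-focused follow-ups to a seed benchmark question, and a judge scores the candidate's replies over the whole dialogue. All prompts, the turn count, and the judging interface are illustrative assumptions.

```python
def kieval_round(candidate, interactor, judge, seed_question: str, turns: int = 3):
    # candidate, interactor, judge are placeholder callables over the
    # running dialogue history (list of (role, text) pairs).
    history = [("interactor", seed_question)]
    for _ in range(turns):
        reply = candidate(history)           # candidate answers the latest question
        history.append(("candidate", reply))
        follow_up = interactor(history)      # probe deeper into the same knowledge
        history.append(("interactor", follow_up))
    return judge(history)                    # scalar score for the whole dialogue
```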
https://aclanthology.org/2024.acl-long.326.bib
@inproceedings{sabour-etal-2024-emobench, title = "{E}mo{B}ench: Evaluating the Emotional Intelligence of Large Language Models", author = "Sabour, Sahand and Liu, Siyang and Zhang, Zheyuan and Liu, June and Zhou, Jinfeng and Sunaryo, Alvionna and Lee, Tatia and Mihalcea, Rada and Huang, Minlie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.326", pages = "5986--6004", abstract = "Recent advances in Large Language Models (LLMs) have highlighted the need for robust, comprehensive, and challenging benchmarks. Yet, research on evaluating their Emotional Intelligence (EI) is considerably limited. Existing benchmarks have two major shortcomings: first, they mainly focus on emotion recognition, neglecting essential EI capabilities such as emotion management and thought facilitation through emotion understanding; second, they are primarily constructed from existing datasets, which include frequent patterns, explicit information, and annotation errors, leading to unreliable evaluation. We propose EmoBench, a benchmark that draws upon established psychological theories and proposes a comprehensive definition for machine EI, including Emotional Understanding and Emotional Application. EmoBench includes a set of 400 hand-crafted questions in English and Chinese, which are meticulously designed to require thorough reasoning and understanding. Our findings reveal a considerable gap between the EI of existing LLMs and the average human, highlighting a promising direction for future research. Our code and data are publicly available at https://github.com/Sahandfer/EmoBench.", }
Recent advances in Large Language Models (LLMs) have highlighted the need for robust, comprehensive, and challenging benchmarks. Yet, research on evaluating their Emotional Intelligence (EI) is considerably limited. Existing benchmarks have two major shortcomings: first, they mainly focus on emotion recognition, neglecting essential EI capabilities such as emotion management and thought facilitation through emotion understanding; second, they are primarily constructed from existing datasets, which include frequent patterns, explicit information, and annotation errors, leading to unreliable evaluation. We propose EmoBench, a benchmark that draws upon established psychological theories and proposes a comprehensive definition for machine EI, including Emotional Understanding and Emotional Application. EmoBench includes a set of 400 hand-crafted questions in English and Chinese, which are meticulously designed to require thorough reasoning and understanding. Our findings reveal a considerable gap between the EI of existing LLMs and the average human, highlighting a promising direction for future research. Our code and data are publicly available at https://github.com/Sahandfer/EmoBench.
[ "Sabour, Sah", "", "Liu, Siyang", "Zhang, Zheyuan", "Liu, June", "Zhou, Jinfeng", "Sunaryo, Alvionna", "Lee, Tatia", "Mihalcea, Rada", "Huang, Minlie" ]
EmoBench: Evaluating the Emotional Intelligence of Large Language Models
acl-long.326
Poster
2402.12071
[ "https://github.com/sahandfer/emobench" ]
https://huggingface.co/papers/2402.12071
0
0
0
10
https://aclanthology.org/2024.acl-long.326/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.327.bib
@inproceedings{huang-etal-2024-ai, title = "Are {AI}-Generated Text Detectors Robust to Adversarial Perturbations?", author = "Huang, Guanhua and Zhang, Yuchen and Li, Zhe and You, Yongjian and Wang, Mingze and Yang, Zhouwang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.327", pages = "6005--6024", abstract = "The widespread use of large language models (LLMs) has sparked concerns about the potential misuse of AI-generated text, as these models can produce content that closely resembles human-generated text. Current detectors for AI-generated text (AIGT) lack robustness against adversarial perturbations, with even minor changes in characters or words causing a reversal in distinguishing between human-created and AI-generated text. This paper investigates the robustness of existing AIGT detection methods and introduces a novel detector, the Siamese Calibrated Reconstruction Network (SCRN). The SCRN employs a reconstruction network to add and remove noise from text, extracting a semantic representation that is robust to local perturbations. We also propose a siamese calibration technique to train the model to make equally confident predictions under different noise, which improves the model{'}s robustness against adversarial perturbations. Experiments on four publicly available datasets show that the SCRN outperforms all baseline methods, achieving 6.5{\%}-18.25{\%} absolute accuracy improvement over the best baseline method under adversarial attacks. Moreover, it exhibits superior generalizability in cross-domain, cross-genre, and mixed-source scenarios. The code is available at \url{https://github.com/CarlanLark/Robust-AIGC-Detector}.", }
The widespread use of large language models (LLMs) has sparked concerns about the potential misuse of AI-generated text, as these models can produce content that closely resembles human-generated text. Current detectors for AI-generated text (AIGT) lack robustness against adversarial perturbations, with even minor changes in characters or words causing a reversal in distinguishing between human-created and AI-generated text. This paper investigates the robustness of existing AIGT detection methods and introduces a novel detector, the Siamese Calibrated Reconstruction Network (SCRN). The SCRN employs a reconstruction network to add and remove noise from text, extracting a semantic representation that is robust to local perturbations. We also propose a siamese calibration technique to train the model to make equally confident predictions under different noise, which improves the model{'}s robustness against adversarial perturbations. Experiments on four publicly available datasets show that the SCRN outperforms all baseline methods, achieving 6.5{\%}-18.25{\%} absolute accuracy improvement over the best baseline method under adversarial attacks. Moreover, it exhibits superior generalizability in cross-domain, cross-genre, and mixed-source scenarios. The code is available at \url{https://github.com/CarlanLark/Robust-AIGC-Detector}.
[ "Huang, Guanhua", "Zhang, Yuchen", "Li, Zhe", "You, Yongjian", "Wang, Mingze", "Yang, Zhouwang" ]
Are AI-Generated Text Detectors Robust to Adversarial Perturbations?
acl-long.327
Poster
2406.01179
[ "https://github.com/carlanlark/robust-aigc-detector" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.327/
[]
[]
[]
0
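The SCRN abstract above describes a siamese calibration technique that trains the detector to be equally confident under different noise. A minimal sketch of such a consistency term follows: a symmetric KL divergence between the predictive distributions of two noisy forward passes. The noise injection itself and the loss weighting are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def siamese_calibration_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    # logits_a / logits_b: outputs of two forward passes under different noise.
    log_p = F.log_softmax(logits_a, dim=-1)
    log_q = F.log_softmax(logits_b, dim=-1)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)  # 0 when both views are equally confident

loss = siamese_calibration_loss(torch.randn(8, 2), torch.randn(8, 2))
```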
https://aclanthology.org/2024.acl-long.328.bib
@inproceedings{chen-etal-2024-fintextqa, title = "{F}in{T}ext{QA}: A Dataset for Long-form Financial Question Answering", author = "Chen, Jian and Zhou, Peilin and Hua, Yining and Xin, Loh and Chen, Kehui and Li, Ziyuan and Zhu, Bing and Liang, Junwei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.328", pages = "6025--6047", abstract = "Accurate evaluation of financial question answering (QA) systems necessitates a comprehensive dataset encompassing diverse question types and contexts. However, current financial QA datasets lack scope diversity and question complexity. This work introduces FinTextQA, a novel dataset for long-form question answering (LFQA) in finance. FinTextQA comprises 1,262 high-quality, source-attributed QA pairs extracted and selected from finance textbooks and government agency websites. Moreover, we developed a Retrieval-Augmented Generation (RAG)-based LFQA system, comprising an embedder, retriever, reranker, and generator. A multi-faceted evaluation approach, including human ranking, automatic metrics, and GPT-4 scoring, was employed to benchmark the performance of different LFQA system configurations under heightened noise conditions. The results indicate that: (1) Among all compared generators, Baichuan2-7B competes closely with GPT-3.5-turbo in accuracy score; (2) The most effective system configuration on our dataset involved setting the embedder, retriever, reranker, and generator as Ada2, Automated Merged Retrieval, Bge-Reranker-Base, and Baichuan2-7B, respectively; (3) Models are less susceptible to noise once the context length reaches a specific threshold. The dataset is publicly available at: https://huggingface.co/datasets/GPS-Lab/FinTextQA.", }
Accurate evaluation of financial question answering (QA) systems necessitates a comprehensive dataset encompassing diverse question types and contexts. However, current financial QA datasets lack scope diversity and question complexity. This work introduces FinTextQA, a novel dataset for long-form question answering (LFQA) in finance. FinTextQA comprises 1,262 high-quality, source-attributed QA pairs extracted and selected from finance textbooks and government agency websites. Moreover, we developed a Retrieval-Augmented Generation (RAG)-based LFQA system, comprising an embedder, retriever, reranker, and generator. A multi-faceted evaluation approach, including human ranking, automatic metrics, and GPT-4 scoring, was employed to benchmark the performance of different LFQA system configurations under heightened noise conditions. The results indicate that: (1) Among all compared generators, Baichuan2-7B competes closely with GPT-3.5-turbo in accuracy score; (2) The most effective system configuration on our dataset involved setting the embedder, retriever, reranker, and generator as Ada2, Automated Merged Retrieval, Bge-Reranker-Base, and Baichuan2-7B, respectively; (3) Models are less susceptible to noise once the context length reaches a specific threshold. The dataset is publicly available at: https://huggingface.co/datasets/GPS-Lab/FinTextQA.
[ "Chen, Jian", "Zhou, Peilin", "Hua, Yining", "Xin, Loh", "Chen, Kehui", "Li, Ziyuan", "Zhu, Bing", "Liang, Junwei" ]
FinTextQA: A Dataset for Long-form Financial Question Answering
acl-long.328
Poster
2405.09980
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.328/
[]
[]
[]
0
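A schematic sketch of the embed -> retrieve -> rerank -> generate pipeline benchmarked in the FinTextQA record above. Every component here is a placeholder callable, not the paper's actual Ada2 / Automated Merged Retrieval / Bge-Reranker-Base / Baichuan2-7B stack; the k and top values are illustrative.

```python
def rag_answer(question, corpus, embed, retrieve, rerank, generate, k=20, top=5):
    q_vec = embed(question)
    candidates = retrieve(q_vec, corpus, k=k)     # coarse recall over the corpus
    context = rerank(question, candidates)[:top]  # precision pass on candidates
    return generate(question, context)            # long-form answer conditioned on context
```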
https://aclanthology.org/2024.acl-long.329.bib
@inproceedings{parcalabescu-frank-2024-measuring, title = "On Measuring Faithfulness or Self-consistency of Natural Language Explanations", author = "Parcalabescu, Letitia and Frank, Anette", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.329", pages = "6048--6089", abstract = "Large language models (LLMs) can explain their predictions through post-hoc or Chain-of-Thought (CoT) explanations. But an LLM could make up reasonable-sounding explanations that are unfaithful to its underlying reasoning. Recent work has designed tests that aim to judge the faithfulness of post-hoc or CoT explanations. In this work we argue that these faithfulness tests do not measure faithfulness to the models{'} inner workings {--} but rather their self-consistency at the output level. Our contributions are three-fold: i) We clarify the status of faithfulness tests in view of model explainability, characterising them as self-consistency tests instead. We underline this assessment by ii) constructing a Comparative Consistency Bank for self-consistency tests that for the first time compares existing tests on a common suite of 11 open LLMs and 5 tasks {--} including iii) our new self-consistency measure CC-SHAP. CC-SHAP is a fine-grained measure (not a test) of LLM self-consistency. It compares how a model{'}s input contributes to the predicted answer and to generating the explanation. Our fine-grained CC-SHAP metric allows us to compare LLM behaviour when making predictions and to analyse the effect of other consistency tests at a deeper level, which takes us one step further towards measuring faithfulness by bringing us closer to the internals of the model than strictly surface output-oriented tests.", }
Large language models (LLMs) can explain their predictions through post-hoc or Chain-of-Thought (CoT) explanations. But an LLM could make up reasonable-sounding explanations that are unfaithful to its underlying reasoning. Recent work has designed tests that aim to judge the faithfulness of post-hoc or CoT explanations. In this work we argue that these faithfulness tests do not measure faithfulness to the models{'} inner workings {--} but rather their self-consistency at the output level. Our contributions are three-fold: i) We clarify the status of faithfulness tests in view of model explainability, characterising them as self-consistency tests instead. We underline this assessment by ii) constructing a Comparative Consistency Bank for self-consistency tests that for the first time compares existing tests on a common suite of 11 open LLMs and 5 tasks {--} including iii) our new self-consistency measure CC-SHAP. CC-SHAP is a fine-grained measure (not a test) of LLM self-consistency. It compares how a model{'}s input contributes to the predicted answer and to generating the explanation. Our fine-grained CC-SHAP metric allows us to compare LLM behaviour when making predictions and to analyse the effect of other consistency tests at a deeper level, which takes us one step further towards measuring faithfulness by bringing us closer to the internals of the model than strictly surface output-oriented tests.
[ "Parcalabescu, Letitia", "Frank, Anette" ]
On Measuring Faithfulness or Self-consistency of Natural Language Explanations
acl-long.329
Poster
2311.07466
[ "https://github.com/heidelberg-nlp/cc-shap" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.329/
[]
[]
[]
0
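The CC-SHAP record above compares how the input contributes to the predicted answer versus to the generated explanation. The sketch below captures only that intuition, reducing it to a cosine similarity between two per-token attribution vectors; this is an illustrative simplification of the paper's measure, and computing the attributions (e.g., via SHAP) is left to an external tool.

```python
import numpy as np

def cc_shap_like(attr_prediction: np.ndarray, attr_explanation: np.ndarray) -> float:
    # attr_*: one contribution score per input token, for the prediction
    # and for the explanation respectively.
    a = attr_prediction / (np.linalg.norm(attr_prediction) + 1e-12)
    b = attr_explanation / (np.linalg.norm(attr_explanation) + 1e-12)
    return float(a @ b)  # 1.0 = the same tokens drive answer and explanation

print(cc_shap_like(np.array([0.6, 0.1, 0.3]), np.array([0.5, 0.2, 0.3])))
```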
https://aclanthology.org/2024.acl-long.330.bib
@inproceedings{ren-etal-2024-learning, title = "Learning or Self-aligning? Rethinking Instruction Fine-tuning", author = "Ren, Mengjie and Cao, Boxi and Lin, Hongyu and Liu, Cao and Han, Xianpei and Zeng, Ke and Guanglu, Wan and Cai, Xunliang and Sun, Le", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.330", pages = "6090--6105", abstract = "Instruction Fine-tuning (IFT) is a crucial phase in building large language models (LLMs). Previous works mainly focus on the IFT{'}s role in the transfer of behavioral norms and the learning of additional world knowledge. However, the understanding of the underlying mechanisms of IFT remains significantly limited. In this paper, we design a knowledge intervention framework to decouple the potential underlying factors of IFT, thereby enabling individual analysis of different factors. Surprisingly, our experiments reveal that attempting to learn additional world knowledge through IFT often struggles to yield positive impacts and can even lead to markedly negative effects. Further, we discover that maintaining internal knowledge consistency before and after IFT is a critical factor for achieving successful IFT. Our findings reveal the underlying mechanisms of IFT and provide robust support for some very recent and potential future works.", }
Instruction Fine-tuning (IFT) is a crucial phase in building large language models (LLMs). Previous works mainly focus on the IFT{'}s role in the transfer of behavioral norms and the learning of additional world knowledge. However, the understanding of the underlying mechanisms of IFT remains significantly limited. In this paper, we design a knowledge intervention framework to decouple the potential underlying factors of IFT, thereby enabling individual analysis of different factors. Surprisingly, our experiments reveal that attempting to learn additional world knowledge through IFT often struggles to yield positive impacts and can even lead to markedly negative effects. Further, we discover that maintaining internal knowledge consistency before and after IFT is a critical factor for achieving successful IFT. Our findings reveal the underlying mechanisms of IFT and provide robust support for some very recent and potential future works.
[ "Ren, Mengjie", "Cao, Boxi", "Lin, Hongyu", "Liu, Cao", "Han, Xianpei", "Zeng, Ke", "Guanglu, Wan", "Cai, Xunliang", "Sun, Le" ]
Learning or Self-aligning? Rethinking Instruction Fine-tuning
acl-long.330
Poster
2402.18243
[ "https://github.com/renmengjie7/self-aligning" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.330/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.331.bib
@inproceedings{wang-etal-2024-rethinking-bounds, title = "Rethinking the Bounds of {LLM} Reasoning: Are Multi-Agent Discussions the Key?", author = "Wang, Qineng and Wang, Zihao and Su, Ying and Tong, Hanghang and Song, Yangqiu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.331", pages = "6106--6131", abstract = "Recent progress in LLM discussion research suggests that multi-agent discussion improves the reasoning abilities of LLMs. In this work, we reevaluate this claim through systematic experiments, where we propose a novel group discussion framework to enrich the set of discussion mechanisms. Interestingly, our results show that a single-agent LLM with strong prompts can achieve almost the same performance as the best existing discussion approach on a wide range of reasoning tasks and backbone LLMs. We observed that multi-agent discussion performs better than a single agent only when there is no demonstration in the prompt. Further study reveals the common interaction mechanisms of LLMs during the discussion. Our code can be found in \url{https://github.com/HKUST-KnowComp/LLM-discussion}.", }
Recent progress in LLM discussion research suggests that multi-agent discussion improves the reasoning abilities of LLMs. In this work, we reevaluate this claim through systematic experiments, where we propose a novel group discussion framework to enrich the set of discussion mechanisms. Interestingly, our results show that a single-agent LLM with strong prompts can achieve almost the same performance as the best existing discussion approach on a wide range of reasoning tasks and backbone LLMs. We observed that multi-agent discussion performs better than a single agent only when there is no demonstration in the prompt. Further study reveals the common interaction mechanisms of LLMs during the discussion. Our code can be found in \url{https://github.com/HKUST-KnowComp/LLM-discussion}.
[ "Wang, Qineng", "Wang, Zihao", "Su, Ying", "Tong, Hanghang", "Song, Yangqiu" ]
Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key?
acl-long.331
Poster
2402.18272
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.331/
[]
[]
[]
0
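For reference, a hedged sketch of a multi-agent discussion round of the kind the record above re-evaluates against single-agent prompting. The round-robin turn order and majority-vote aggregation are assumptions for illustration, not the paper's framework.

```python
from collections import Counter

def discuss(agents, question: str, rounds: int = 2) -> str:
    # agents: list of placeholder callables mapping a prompt string to text.
    transcript = [question]
    for _ in range(rounds):
        for agent in agents:  # each agent sees the full transcript so far
            transcript.append(agent("\n".join(transcript)))
    answers = [agent("\n".join(transcript) + "\nFinal answer:") for agent in agents]
    return Counter(answers).most_common(1)[0][0]  # majority vote
```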
https://aclanthology.org/2024.acl-long.332.bib
@inproceedings{wang-etal-2024-soft-knowledge, title = "Soft Knowledge Prompt: Help External Knowledge Become a Better Teacher to Instruct {LLM} in Knowledge-based {VQA}", author = "Wang, Qunbo and Ji, Ruyi and Peng, Tianhao and Wu, Wenjun and Li, Zechao and Liu, Jing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.332", pages = "6132--6143", abstract = "LLMs have achieved impressive performance on multi-modal tasks, which have received ever-increasing research attention. Recent research focuses on improving prediction performance and reliability (e.g., addressing the hallucination problem). These approaches often prepend relevant external knowledge to the input text as an extra prompt. However, such methods are affected by noise in the knowledge and by the context length limitation of LLMs. In our work, we focus on making better use of external knowledge and propose a method to actively extract valuable information from the knowledge to produce a latent vector as a soft prompt, which is then fused with the image embedding to form a knowledge-enhanced context to instruct the LLM. The experimental results on knowledge-based VQA benchmarks show that the proposed method enjoys better utilization of external knowledge and helps the model achieve better performance.", }
LLMs have achieved impressive performance on multi-modal tasks, which have received ever-increasing research attention. Recent research focuses on improving prediction performance and reliability (e.g., addressing the hallucination problem). These approaches often prepend relevant external knowledge to the input text as an extra prompt. However, such methods are affected by noise in the knowledge and by the context length limitation of LLMs. In our work, we focus on making better use of external knowledge and propose a method to actively extract valuable information from the knowledge to produce a latent vector as a soft prompt, which is then fused with the image embedding to form a knowledge-enhanced context to instruct the LLM. The experimental results on knowledge-based VQA benchmarks show that the proposed method enjoys better utilization of external knowledge and helps the model achieve better performance.
[ "Wang, Qunbo", "Ji, Ruyi", "Peng, Tianhao", "Wu, Wenjun", "Li, Zechao", "Liu, Jing" ]
Soft Knowledge Prompt: Help External Knowledge Become a Better Teacher to Instruct LLM in Knowledge-based VQA
acl-long.332
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.332/
[]
[]
[]
0
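A minimal sketch of the soft knowledge prompt idea in the record above: retrieved knowledge is compressed into a few latent vectors that are concatenated with image embeddings as the LLM's prefix. The dimensions, the single linear compressor, and the latent count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftKnowledgePrompt(nn.Module):
    def __init__(self, know_dim: int = 768, lm_dim: int = 4096, n_latent: int = 8):
        super().__init__()
        self.compress = nn.Linear(know_dim, n_latent * lm_dim)
        self.n_latent, self.lm_dim = n_latent, lm_dim

    def forward(self, knowledge_vec: torch.Tensor, image_embeds: torch.Tensor):
        # knowledge_vec: (batch, know_dim), pooled from retrieved passages;
        # image_embeds: (batch, n_img_tokens, lm_dim) from a vision encoder.
        soft = self.compress(knowledge_vec).view(-1, self.n_latent, self.lm_dim)
        return torch.cat([soft, image_embeds], dim=1)  # knowledge-enhanced context

skp = SoftKnowledgePrompt()
ctx = skp(torch.randn(2, 768), torch.randn(2, 16, 4096))
print(ctx.shape)  # torch.Size([2, 24, 4096])
```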
https://aclanthology.org/2024.acl-long.333.bib
@inproceedings{wang-etal-2024-taste, title = "{T}as{T}e: Teaching Large Language Models to Translate through Self-Reflection", author = "Wang, Yutong and Zeng, Jiali and Liu, Xuebo and Meng, Fandong and Zhou, Jie and Zhang, Min", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.333", pages = "6144--6158", abstract = "Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks. Techniques like instruction tuning have effectively enhanced the proficiency of LLMs in the downstream task of machine translation. However, the existing approaches fail to yield satisfactory translation outputs that match the quality of supervised neural machine translation (NMT) systems. One plausible explanation for this discrepancy is that the straightforward prompts employed in these methodologies are unable to fully exploit the acquired instruction-following capabilities. To this end, we propose the $\textbf{TasTe}$ framework, which stands for translating through self-reflection. The self-reflection process includes two stages of inference. In the first stage, LLMs are instructed to generate preliminary translations and conduct self-assessments on these translations simultaneously. In the second stage, LLMs are tasked to refine these preliminary translations according to the evaluation results. The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods. Our work presents a promising approach to unleash the potential of LLMs and enhance their capabilities in MT. The codes and datasets are open-sourced at https://github.com/YutongWang1216/ReflectionLLMMT.", }
Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks. Techniques like instruction tuning have effectively enhanced the proficiency of LLMs in the downstream task of machine translation. However, the existing approaches fail to yield satisfactory translation outputs that match the quality of supervised neural machine translation (NMT) systems. One plausible explanation for this discrepancy is that the straightforward prompts employed in these methodologies are unable to fully exploit the acquired instruction-following capabilities. To this end, we propose the $\textbf{TasTe}$ framework, which stands for translating through self-reflection. The self-reflection process includes two stages of inference. In the first stage, LLMs are instructed to generate preliminary translations and conduct self-assessments on these translations simultaneously. In the second stage, LLMs are tasked to refine these preliminary translations according to the evaluation results. The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods. Our work presents a promising approach to unleash the potential of LLMs and enhance their capabilities in MT. The codes and datasets are open-sourced at https://github.com/YutongWang1216/ReflectionLLMMT.
[ "Wang, Yutong", "Zeng, Jiali", "Liu, Xuebo", "Meng, F", "ong", "Zhou, Jie", "Zhang, Min" ]
TasTe: Teaching Large Language Models to Translate through Self-Reflection
acl-long.333
Poster
2406.08434
[ "https://github.com/yutongwang1216/reflectionllmmt" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.333/
[]
[]
[]
0
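As a concrete illustration of the two-stage TasTe loop described above, here is a minimal sketch. The `chat` function is a hypothetical placeholder for any LLM client (it is an assumption, not the authors' API), and the prompt wording is illustrative rather than taken from the paper.

```python
# Minimal sketch of TasTe-style two-stage self-reflection translation.
# `chat` is a hypothetical LLM completion function, not the authors' API.

def chat(prompt: str) -> str:
    raise NotImplementedError("plug in your preferred LLM client here")

def taste_translate(source: str, src: str = "German", tgt: str = "English") -> str:
    # Stage 1: draft a translation and self-assess its quality in one pass.
    draft_and_rating = chat(
        f"Translate the following {src} sentence into {tgt}, then rate your "
        f"own translation as Good, Medium, or Bad.\n"
        f"Source: {source}\nTranslation and rating:"
    )
    # Stage 2: refine the draft according to the self-assessment.
    refined = chat(
        f"Below is a draft {tgt} translation of a {src} sentence together "
        f"with a self-assessed quality rating:\n{draft_and_rating}\n"
        f"Refine the translation accordingly and output only the final "
        f"{tgt} sentence.\nSource: {source}\nRefined translation:"
    )
    return refined
```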
https://aclanthology.org/2024.acl-long.334.bib
@inproceedings{lu-etal-2024-experts, title = "Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models", author = "Lu, Xudong and Liu, Qi and Xu, Yuhui and Zhou, Aojun and Huang, Siyuan and Zhang, Bo and Yan, Junchi and Li, Hongsheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.334", pages = "6159--6172", abstract = "A pivotal advancement in the progress of large language models (LLMs) is the emergence of the Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs, MoE LLMs can achieve higher performance with fewer active parameters, but it is still hard to deploy them due to their immense parameter sizes. Different from previous weight pruning methods that rely on specifically designed hardware, this paper mainly aims to enhance the deployment efficiency of MoE LLMs by introducing plug-and-play expert-level sparsification techniques. Specifically, we propose, for the first time to our best knowledge, post-training approaches for task-agnostic and task-specific expert pruning and skipping of MoE LLMs, tailored to improve deployment efficiency while maintaining model performance across a wide range of tasks. Extensive experiments show that our proposed methods can simultaneously reduce model sizes and increase the inference speed, while maintaining satisfactory performance. Code will be made available at https://github.com/Lucky-Lance/Expert{\_}Sparsity.", }
A pivotal advancement in the progress of large language models (LLMs) is the emergence of Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs, MoE LLMs can achieve higher performance with fewer active parameters, but they are still hard to deploy due to their immense parameter sizes. Different from previous weight pruning methods that rely on specifically designed hardware, this paper mainly aims to enhance the deployment efficiency of MoE LLMs by introducing plug-and-play expert-level sparsification techniques. Specifically, we propose, for the first time to the best of our knowledge, post-training approaches for task-agnostic and task-specific expert pruning and skipping of MoE LLMs, tailored to improve deployment efficiency while maintaining model performance across a wide range of tasks. Extensive experiments show that our proposed methods can simultaneously reduce model sizes and increase the inference speed, while maintaining satisfactory performance. Code will be made available at https://github.com/Lucky-Lance/Expert_Sparsity.
[ "Lu, Xudong", "Liu, Qi", "Xu, Yuhui", "Zhou, Aojun", "Huang, Siyuan", "Zhang, Bo", "Yan, Junchi", "Li, Hongsheng" ]
Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models
acl-long.334
Poster
2402.14800
[ "https://github.com/lucky-lance/expert_sparsity" ]
https://huggingface.co/papers/2402.14800
3
3
0
8
https://aclanthology.org/2024.acl-long.334/
[]
[]
[]
1
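To make the expert-level sparsification idea concrete, the sketch below ranks experts in one MoE layer by their aggregate router weight on a calibration set, one plausible post-training proxy for expert importance; the paper's exact criterion is not reproduced here.

```python
import torch

def select_experts_to_keep(gate_logits: torch.Tensor, keep: int) -> list[int]:
    """Rank the experts of one MoE layer by total routing weight observed
    on a calibration set and return the indices of the `keep` most-used
    experts; the rest are candidates for pruning.

    gate_logits: [num_tokens, num_experts] router logits collected offline.
    """
    probs = torch.softmax(gate_logits, dim=-1)   # per-token routing distribution
    usage = probs.sum(dim=0)                     # aggregate weight per expert
    return torch.topk(usage, k=keep).indices.tolist()

# Dynamic expert *skipping* at inference could analogously drop, per token,
# experts whose routing probability falls below a threshold.
```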
https://aclanthology.org/2024.acl-long.335.bib
@inproceedings{li-etal-2024-unimo, title = "{UNIMO}-{G}: Unified Image Generation through Multimodal Conditional Diffusion", author = "Li, Wei and Xu, Xue and Liu, Jiachen and Xiao, Xinyan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.335", pages = "6173--6188", abstract = "Existing text-to-image diffusion models primarily generate images from text prompts. However, the inherent conciseness of textual descriptions poses challenges in faithfully synthesizing images with intricate details, such as specific entities or scenes. This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, which demonstrates a unified ability for both text-driven and subject-driven image generation. UNIMO-G comprises two core components: a Multimodal Large Language Model (MLLM) for encoding multimodal prompts, and a conditional denoising diffusion network for generating images based on the encoded multimodal input. We leverage a two-stage training strategy to effectively train the framework: firstly pre-training on large-scale text-image pairs to develop conditional image generation capabilities, and then instruction tuning with multimodal prompts to achieve unified image generation proficiency. A well-designed data processing pipeline involving language grounding and image segmentation is employed to construct multi-modal prompts. UNIMO-G excels in both text-to-image generation and zero-shot subject-driven synthesis, and is notably effective in generating high-fidelity images from complex multimodal prompts involving multiple image entities.", }
Existing text-to-image diffusion models primarily generate images from text prompts. However, the inherent conciseness of textual descriptions poses challenges in faithfully synthesizing images with intricate details, such as specific entities or scenes. This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, which demonstrates a unified ability for both text-driven and subject-driven image generation. UNIMO-G comprises two core components: a Multimodal Large Language Model (MLLM) for encoding multimodal prompts, and a conditional denoising diffusion network for generating images based on the encoded multimodal input. We leverage a two-stage training strategy to effectively train the framework: firstly pre-training on large-scale text-image pairs to develop conditional image generation capabilities, and then instruction tuning with multimodal prompts to achieve unified image generation proficiency. A well-designed data processing pipeline involving language grounding and image segmentation is employed to construct multi-modal prompts. UNIMO-G excels in both text-to-image generation and zero-shot subject-driven synthesis, and is notably effective in generating high-fidelity images from complex multimodal prompts involving multiple image entities.
[ "Li, Wei", "Xu, Xue", "Liu, Jiachen", "Xiao, Xinyan" ]
UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion
acl-long.335
Oral
2401.13388
[ "" ]
https://huggingface.co/papers/2401.13388
1
10
3
4
https://aclanthology.org/2024.acl-long.335/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.336.bib
@inproceedings{stap-etal-2024-fine, title = "The Fine-Tuning Paradox: Boosting Translation Quality Without Sacrificing {LLM} Abilities", author = "Stap, David and Hasler, Eva and Byrne, Bill and Monz, Christof and Tran, Ke", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.336", pages = "6189--6206", abstract = "Fine-tuning large language models (LLMs) for machine translation has shown improvements in overall translation quality. However, it is unclear what is the impact of fine-tuning on desirable LLM behaviors that are not present in neural machine translation models, such as steerability, inherent document-level translation abilities, and the ability to produce less literal translations. We perform an extensive translation evaluation on the LLaMA and Falcon family of models with model size ranging from 7 billion up to 65 billion parameters.Our results show that while fine-tuning improves the general translation quality of LLMs, several abilities degrade. In particular, we observe a decline in the ability to perform formality steering, to produce technical translations through few-shot examples, and to perform document-level translation. On the other hand, we observe that the model produces less literal translations after fine-tuning on parallel data. We show that by including monolingual data as part of the fine-tuning data we can maintain the abilities while simultaneously enhancing overall translation quality. Our findings emphasize the need for fine-tuning strategies that preserve the benefits of LLMs for machine translation.", }
Fine-tuning large language models (LLMs) for machine translation has shown improvements in overall translation quality. However, it is unclear what impact fine-tuning has on desirable LLM behaviors that are not present in neural machine translation models, such as steerability, inherent document-level translation abilities, and the ability to produce less literal translations. We perform an extensive translation evaluation on the LLaMA and Falcon families of models, with model sizes ranging from 7 billion up to 65 billion parameters. Our results show that while fine-tuning improves the general translation quality of LLMs, several abilities degrade. In particular, we observe a decline in the ability to perform formality steering, to produce technical translations through few-shot examples, and to perform document-level translation. On the other hand, we observe that the model produces less literal translations after fine-tuning on parallel data. We show that by including monolingual data as part of the fine-tuning data we can maintain these abilities while simultaneously enhancing overall translation quality. Our findings emphasize the need for fine-tuning strategies that preserve the benefits of LLMs for machine translation.
[ "Stap, David", "Hasler, Eva", "Byrne, Bill", "Monz, Christof", "Tran, Ke" ]
The Fine-Tuning Paradox: Boosting Translation Quality Without Sacrificing LLM Abilities
acl-long.336
Poster
2405.20089
[ "https://github.com/amazon-science/idioms-incontext-mt" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.336/
[]
[]
[]
0
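The paper's remedy is to mix monolingual data into the fine-tuning set. A toy sketch of such mixing follows; the `mono_ratio` knob and the flat-list data shapes are assumptions for illustration, not the paper's exact setup.

```python
import random

def mix_finetuning_data(parallel: list, monolingual: list,
                        mono_ratio: float = 0.3, seed: int = 0) -> list:
    """Interleave parallel translation pairs with monolingual examples so
    that roughly `mono_ratio` of the resulting stream is monolingual."""
    rng = random.Random(seed)
    # Solve n_mono / (n_parallel + n_mono) == mono_ratio for n_mono.
    n_mono = int(len(parallel) * mono_ratio / (1.0 - mono_ratio))
    mixed = list(parallel) + rng.sample(monolingual, min(n_mono, len(monolingual)))
    rng.shuffle(mixed)
    return mixed
```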
https://aclanthology.org/2024.acl-long.337.bib
@inproceedings{tan-etal-2024-blinded, title = "Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts When Knowledge Conflicts?", author = "Tan, Hexiang and Sun, Fei and Yang, Wanli and Wang, Yuanzhuo and Cao, Qi and Cheng, Xueqi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.337", pages = "6207--6227", abstract = "While auxiliary information has become a key to enhancing Large Language Models (LLMs), relatively little is known about how LLMs merge these contexts, specifically contexts generated by LLMs and those retrieved from external sources.To investigate this, we formulate a systematic framework to identify whether LLMs{'} responses are attributed to either generated or retrieved contexts.To easily trace the origin of the response, we construct datasets with conflicting contexts, i.e., each question is paired with both generated and retrieved contexts, yet only one of them contains the correct answer.Our experiments reveal a significant bias in several LLMs (GPT-4/3.5 and Llama2) to favor generated contexts, even when they provide incorrect information.We further identify two key factors contributing to this bias: i) contexts generated by LLMs typically show greater similarity to the questions, increasing their likelihood of being selected; ii) the segmentation process used in retrieved contexts disrupts their completeness, thereby hindering their full utilization in LLMs.Our analysis enhances the understanding of how LLMs merge diverse contexts, offers valuable insights for advancing current LLM augmentation methods, and highlights the risk of generated misinformation for retrieval-augmented LLMs.", }
While auxiliary information has become a key to enhancing Large Language Models (LLMs), relatively little is known about how LLMs merge these contexts, specifically contexts generated by LLMs and those retrieved from external sources. To investigate this, we formulate a systematic framework to identify whether LLMs' responses are attributed to either generated or retrieved contexts. To easily trace the origin of the response, we construct datasets with conflicting contexts, i.e., each question is paired with both generated and retrieved contexts, yet only one of them contains the correct answer. Our experiments reveal a significant bias in several LLMs (GPT-4/3.5 and Llama2) to favor generated contexts, even when they provide incorrect information. We further identify two key factors contributing to this bias: i) contexts generated by LLMs typically show greater similarity to the questions, increasing their likelihood of being selected; ii) the segmentation process used in retrieved contexts disrupts their completeness, thereby hindering their full utilization in LLMs. Our analysis enhances the understanding of how LLMs merge diverse contexts, offers valuable insights for advancing current LLM augmentation methods, and highlights the risk of generated misinformation for retrieval-augmented LLMs.
[ "Tan, Hexiang", "Sun, Fei", "Yang, Wanli", "Wang, Yuanzhuo", "Cao, Qi", "Cheng, Xueqi" ]
Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts When Knowledge Conflicts?
acl-long.337
Poster
2401.11911
[ "https://github.com/tan-hexiang/retrieveorgenerated" ]
https://huggingface.co/papers/2401.11911
0
1
0
6
https://aclanthology.org/2024.acl-long.337/
[]
[]
[]
1
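The attribution step of such a framework can be illustrated with a deliberately simplified check: since each question is paired with a generated context and a retrieved context supporting different answers, string containment of one answer traces the response's origin. This is a sketch under that simplification, not the paper's full framework.

```python
def attribute_answer(response: str, ans_generated: str, ans_retrieved: str) -> str:
    """Decide which conflicting context a model followed. The generated and
    retrieved contexts support *different* answers, so finding exactly one
    answer string in the response traces its origin.
    (Plain string matching is a simplification of the paper's method.)"""
    hit_gen = ans_generated.lower() in response.lower()
    hit_ret = ans_retrieved.lower() in response.lower()
    if hit_gen and not hit_ret:
        return "generated"
    if hit_ret and not hit_gen:
        return "retrieved"
    return "undecided"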
https://aclanthology.org/2024.acl-long.338.bib
@inproceedings{zhang-etal-2024-unveiling-linguistic, title = "Unveiling Linguistic Regions in Large Language Models", author = "Zhang, Zhihao and Zhao, Jun and Zhang, Qi and Gui, Tao and Huang, Xuanjing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.338", pages = "6228--6247", abstract = "Large Language Models (LLMs) have demonstrated considerable cross-lingual alignment and generalization ability. Current research primarily focuses on improving LLMs{'} cross-lingual generalization capabilities. However, there is still a lack of research on the intrinsic mechanisms of how LLMs achieve cross-lingual alignment. From the perspective of region partitioning, this paper conducts several investigations on the linguistic competence of LLMs. We discover a core region in LLMs that corresponds to linguistic competence, accounting for approximately 1{\%} of the total model parameters. Removing this core region by setting parameters to zero results in a significant performance decrease across 30 different languages. Furthermore, this core region exhibits significant dimensional dependence, perturbations to even a single parameter on specific dimensions leading to a loss of linguistic competence. Moreover, we discover that distinct monolingual regions exist for different languages, and disruption to these specific regions substantially reduces the LLMs{'} proficiency in those corresponding languages. Our research also indicates that freezing the core linguistic region during further pre-training can mitigate the issue of catastrophic forgetting (CF), a common phenomenon observed during further pre-training of LLMs. Overall, exploring the LLMs{'} functional regions provides insights into the foundation of their intelligence.", }
Large Language Models (LLMs) have demonstrated considerable cross-lingual alignment and generalization ability. Current research primarily focuses on improving LLMs' cross-lingual generalization capabilities. However, there is still a lack of research on the intrinsic mechanisms by which LLMs achieve cross-lingual alignment. From the perspective of region partitioning, this paper conducts several investigations on the linguistic competence of LLMs. We discover a core region in LLMs that corresponds to linguistic competence, accounting for approximately 1% of the total model parameters. Removing this core region by setting parameters to zero results in a significant performance decrease across 30 different languages. Furthermore, this core region exhibits significant dimensional dependence, with perturbations to even a single parameter on specific dimensions leading to a loss of linguistic competence. Moreover, we discover that distinct monolingual regions exist for different languages, and disruption to these specific regions substantially reduces the LLMs' proficiency in those corresponding languages. Our research also indicates that freezing the core linguistic region during further pre-training can mitigate the issue of catastrophic forgetting (CF), a common phenomenon observed during further pre-training of LLMs. Overall, exploring the LLMs' functional regions provides insights into the foundation of their intelligence.
[ "Zhang, Zhihao", "Zhao, Jun", "Zhang, Qi", "Gui, Tao", "Huang, Xuanjing" ]
Unveiling Linguistic Regions in Large Language Models
acl-long.338
Oral
2402.14700
[ "https://github.com/zzhang0179/Unveiling-Linguistic-Regions-in-LLMs" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.338/
[]
[]
[]
0
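Freezing an identified parameter region during further pre-training can be implemented by masking gradients. The sketch below assumes the core-region masks are already given (identifying them is the paper's contribution); it is a minimal illustration, not the authors' code.

```python
import torch

def freeze_core_region(model: torch.nn.Module,
                       region_masks: dict[str, torch.Tensor]) -> None:
    """Protect a parameter region during further pre-training by zeroing
    its gradients. `region_masks` maps parameter names to boolean tensors
    (True = entry belongs to the frozen core linguistic region)."""
    for name, param in model.named_parameters():
        if name in region_masks:
            mask = region_masks[name].to(param.device)
            # The hook fires on the gradient during every backward pass,
            # so frozen entries never receive an update.
            param.register_hook(lambda grad, m=mask: grad.masked_fill(m, 0.0))
```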
https://aclanthology.org/2024.acl-long.339.bib
@inproceedings{hong-etal-2024-text, title = "Text-to-Song: Towards Controllable Music Generation Incorporating Vocal and Accompaniment", author = "Hong, Zhiqing and Huang, Rongjie and Cheng, Xize and Wang, Yongqi and Li, Ruiqi and You, Fuming and Zhao, Zhou and Zhang, Zhimeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.339", pages = "6248--6261", abstract = "A song is a combination of singing voice and accompaniment. However, existing works focus on singing voice synthesis and music generation independently. Little attention was paid to exploring song synthesis. In this work, we propose a novel task called Text-to-Song synthesis which incorporates both vocal and accompaniment generation. We develop Melodist, a two-stage text-to-song method that consists of singing voice synthesis (SVS) and vocal-to-accompaniment (V2A) synthesis. Melodist leverages tri-tower contrastive pretraining to learn more effective text representation for controllable V2A synthesis. A Chinese song dataset mined from a music website is built to alleviate data scarcity for our research. The evaluation results on our dataset demonstrate that Melodist can synthesize songs with comparable quality and style consistency. Audio samples can be found in https://text2songMelodist.github.io/Sample/.", }
A song is a combination of singing voice and accompaniment. However, existing works focus on singing voice synthesis and music generation independently; little attention has been paid to song synthesis. In this work, we propose a novel task called Text-to-Song synthesis, which incorporates both vocal and accompaniment generation. We develop Melodist, a two-stage text-to-song method that consists of singing voice synthesis (SVS) and vocal-to-accompaniment (V2A) synthesis. Melodist leverages tri-tower contrastive pretraining to learn a more effective text representation for controllable V2A synthesis. A Chinese song dataset mined from a music website is built to alleviate data scarcity for our research. The evaluation results on our dataset demonstrate that Melodist can synthesize songs with comparable quality and style consistency. Audio samples can be found at https://text2songMelodist.github.io/Sample/.
[ "Hong, Zhiqing", "Huang, Rongjie", "Cheng, Xize", "Wang, Yongqi", "Li, Ruiqi", "You, Fuming", "Zhao, Zhou", "Zhang, Zhimeng" ]
Text-to-Song: Towards Controllable Music Generation Incorporating Vocal and Accompaniment
acl-long.339
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.339/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.340.bib
@inproceedings{huang-etal-2024-fastfid, title = "{F}ast{F}i{D}: Improve Inference Efficiency of Open Domain Question Answering via Sentence Selection", author = "Huang, Yufei and Han, Xu and Sun, Maosong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.340", pages = "6262--6276", abstract = "Open Domain Question Answering (ODQA) has been advancing rapidly in recent times, driven by significant developments in dense passage retrieval and pretrained language models. State-of-the-art models typically incorporate the FiD framework, which is composed by a neural retriever alongside an encoder-decoder neural reader. In the answer generation process, the retriever will retrieve numerous passages (around 100 for instance), each of which is then individually encoded by the encoder. Subsequently, the decoder makes predictions based on these encoded passages. Nevertheless, this framework can be relatively time-consuming, particularly due to the extensive length of the gathered passages. To address this, we introduce FastFiD in this paper, a novel approach that executes sentence selection on the encoded passages. This aids in retaining valuable sentences while reducing the context length required for generating answers. Experiments on three commonly used datasets (Natural Questions, TriviaQA and ASQA) demonstrate that our method can enhance the inference speed by **2.3X-5.7X**, while simultaneously maintaining the model{'}s performance. Moreover, an in-depth analysis of the model{'}s attention reveals that the selected sentences indeed hold a substantial contribution towards the final answer. The codes are publicly available at https://github.com/thunlp/FastFiD.", }
Open Domain Question Answering (ODQA) has been advancing rapidly in recent times, driven by significant developments in dense passage retrieval and pretrained language models. State-of-the-art models typically incorporate the FiD framework, which is composed of a neural retriever alongside an encoder-decoder neural reader. In the answer generation process, the retriever retrieves numerous passages (around 100, for instance), each of which is then individually encoded by the encoder. Subsequently, the decoder makes predictions based on these encoded passages. Nevertheless, this framework can be relatively time-consuming, particularly due to the extensive length of the gathered passages. To address this, we introduce FastFiD in this paper, a novel approach that performs sentence selection on the encoded passages. This aids in retaining valuable sentences while reducing the context length required for generating answers. Experiments on three commonly used datasets (Natural Questions, TriviaQA and ASQA) demonstrate that our method can enhance the inference speed by **2.3X-5.7X**, while simultaneously maintaining the model's performance. Moreover, an in-depth analysis of the model's attention reveals that the selected sentences indeed hold a substantial contribution towards the final answer. The code is publicly available at https://github.com/thunlp/FastFiD.
[ "Huang, Yufei", "Han, Xu", "Sun, Maosong" ]
FastFiD: Improve Inference Efficiency of Open Domain Question Answering via Sentence Selection
acl-long.340
Poster
2408.06333
[ "https://github.com/thunlp/fastfid" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.340/
[]
[]
[]
0
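The core of the speedup is pruning encoder states down to a few selected sentences before decoding. A minimal sketch of that selection step follows, assuming sentence spans and relevance scores are already produced by the model; the interfaces are illustrative, not FastFiD's actual code.

```python
import torch

def select_sentences(hidden: torch.Tensor,
                     spans: list[tuple[int, int]],
                     scores: torch.Tensor,
                     k: int) -> torch.Tensor:
    """Keep only the encoder states of the k highest-scoring sentences, so
    the decoder cross-attends over a much shorter context.

    hidden: [seq_len, d] token states of the concatenated encoded passages
    spans:  (start, end) token offsets of each sentence
    scores: [num_sentences] model-predicted relevance score per sentence
    """
    top = set(torch.topk(scores, k=min(k, len(spans))).indices.tolist())
    kept = [hidden[s:e] for i, (s, e) in enumerate(spans) if i in top]
    return torch.cat(kept, dim=0)
```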
https://aclanthology.org/2024.acl-long.341.bib
@inproceedings{miao-etal-2024-discursive, title = "Discursive Socratic Questioning: Evaluating the Faithfulness of Language Models{'} Understanding of Discourse Relations", author = "Miao, Yisong and Liu, Hongfu and Lei, Wenqiang and Chen, Nancy and Kan, Min-Yen", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.341", pages = "6277--6295", abstract = "While large language models have significantly enhanced the effectiveness of discourse relation classifications, it remains unclear whether their comprehension is faithful and reliable. We provide DiSQ, a new method for evaluating the faithfulness of understanding discourse based on question answering. We first employ in-context learning to annotate the reasoning for discourse comprehension, based on the connections among key events within the discourse. Following this, DiSQ interrogates the model with a sequence of questions to assess its grasp of core event relations, its resilience to counterfactual queries, as well as its consistency to its previous responses. then evaluate language models with different architectural designs using DiSQ, finding: (1) DiSQ presents a significant challenge for all models, with the top-performing GPT model attaining only 41{\%} of the ideal performance in PDTB; (2) DiSQ is robust to domain shifts and paraphrase variations; (3) Open-source models generally lag behind their closed-source GPT counterparts, with notable exceptions being those enhanced with chat and code/math features; (4) Our analysis validates the effectiveness of explicitly signalled discourse connectives, the role of contextual information, and the benefits of using historical QA data.", }
While large language models have significantly enhanced the effectiveness of discourse relation classification, it remains unclear whether their comprehension is faithful and reliable. We provide DiSQ, a new method for evaluating the faithfulness of discourse understanding based on question answering. We first employ in-context learning to annotate the reasoning for discourse comprehension, based on the connections among key events within the discourse. Following this, DiSQ interrogates the model with a sequence of questions to assess its grasp of core event relations, its resilience to counterfactual queries, as well as its consistency with its previous responses. We then evaluate language models with different architectural designs using DiSQ, finding: (1) DiSQ presents a significant challenge for all models, with the top-performing GPT model attaining only 41% of the ideal performance on PDTB; (2) DiSQ is robust to domain shifts and paraphrase variations; (3) open-source models generally lag behind their closed-source GPT counterparts, with notable exceptions being those enhanced with chat and code/math features; (4) our analysis validates the effectiveness of explicitly signalled discourse connectives, the role of contextual information, and the benefits of using historical QA data.
[ "Miao, Yisong", "Liu, Hongfu", "Lei, Wenqiang", "Chen, Nancy", "Kan, Min-Yen" ]
Discursive Socratic Questioning: Evaluating the Faithfulness of Language Models' Understanding of Discourse Relations
acl-long.341
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.341/
[]
[]
[]
0
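A toy aggregate in the spirit of DiSQ's battery of probes is sketched below: a model gets credit for an event relation only when the core question, the counterfactual probe, and the consistency probe all come out right. The field names are illustrative; the paper's scoring is richer.

```python
def disq_score(records: list[dict]) -> float:
    """Toy DiSQ-style aggregate: a probed event relation counts only when
    the model answers the core question correctly AND rejects the flipped
    (counterfactual) premise AND stays consistent with its earlier answer.
    Each record holds three booleans with those meanings (hypothetical
    field names)."""
    good = sum(1 for r in records
               if r["target"] and r["counterfactual"] and r["consistent"])
    return good / len(records)
```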
https://aclanthology.org/2024.acl-long.342.bib
@inproceedings{trokhymovych-etal-2024-open, title = "An Open Multilingual System for Scoring Readability of {W}ikipedia", author = "Trokhymovych, Mykola and Sen, Indira and Gerlach, Martin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.342", pages = "6296--6311", abstract = "With over 60M articles, Wikipedia has become the largest platform for open and freely accessible knowledge. While it has more than 15B monthly visits, its content is believed to be inaccessible to many readers due to the lack of readability of its text. However, previous investigations of the readability of Wikipedia have been restricted to English only, and there are currently no systems supporting the automatic readability assessment of the 300+ languages in Wikipedia. To bridge this gap, we develop a multilingual model to score the readability of Wikipedia articles. To train and evaluate this model, we create a novel multilingual dataset spanning 14 languages, by matching articles from Wikipedia to simplified Wikipedia and online children encyclopedias. We show that our model performs well in a zero-shot scenario, yielding a ranking accuracy of more than 80{\%} across 14 languages and improving upon previous benchmarks. These results demonstrate the applicability of the model at scale for languages in which there is no ground-truth data available for model fine-tuning. Furthermore, we provide the first overview on the state of readability in Wikipedia beyond English.", }
With over 60M articles, Wikipedia has become the largest platform for open and freely accessible knowledge. While it has more than 15B monthly visits, its content is believed to be inaccessible to many readers due to the lack of readability of its text. However, previous investigations of the readability of Wikipedia have been restricted to English only, and there are currently no systems supporting the automatic readability assessment of the 300+ languages in Wikipedia. To bridge this gap, we develop a multilingual model to score the readability of Wikipedia articles. To train and evaluate this model, we create a novel multilingual dataset spanning 14 languages, by matching articles from Wikipedia to simplified Wikipedia and online children's encyclopedias. We show that our model performs well in a zero-shot scenario, yielding a ranking accuracy of more than 80% across 14 languages and improving upon previous benchmarks. These results demonstrate the applicability of the model at scale for languages in which there is no ground-truth data available for model fine-tuning. Furthermore, we provide the first overview of the state of readability in Wikipedia beyond English.
[ "Trokhymovych, Mykola", "Sen, Indira", "Gerlach, Martin" ]
An Open Multilingual System for Scoring Readability of Wikipedia
acl-long.342
Poster
2406.01835
[ "" ]
https://huggingface.co/papers/2406.01835
1
0
0
3
https://aclanthology.org/2024.acl-long.342/
[ "trokhymovych/TRank_readability" ]
[]
[]
1
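The ranking-accuracy evaluation mentioned above can be stated in a few lines: for each matched pair of a simplified and an original article, the model is correct when it scores the simplified version as more readable. A minimal sketch, assuming the model's scores are already computed:

```python
def ranking_accuracy(score_pairs: list[tuple[float, float]]) -> float:
    """Pairwise ranking accuracy: each pair holds the model's readability
    scores for (simplified, original) versions of an article; a pair is
    correct when the simplified version receives the higher readability
    score."""
    correct = sum(1 for simple, original in score_pairs if simple > original)
    return correct / len(score_pairs)
```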
https://aclanthology.org/2024.acl-long.343.bib
@inproceedings{isonuma-titov-2024-unlearning, title = "Unlearning Traces the Influential Training Data of Language Models", author = "Isonuma, Masaru and Titov, Ivan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.343", pages = "6312--6325", abstract = "Identifying the training datasets that influence a language model{'}s outputs is essential for minimizing the generation of harmful content and enhancing its performance. Ideally, we can measure the influence of each dataset by removing it from training; however, it is prohibitively expensive to retrain a model multiple times. This paper presents UnTrac: unlearning traces the influence of a training dataset on the model{'}s performance. UnTrac is extremely simple; each training dataset is unlearned by gradient ascent, and we evaluate how much the model{'}s predictions change after unlearning. Furthermore, we propose a more scalable approach, UnTrac-Inv, which unlearns a test dataset and evaluates the unlearned model on training datasets. UnTrac-Inv resembles UnTrac, while being efficient for massive training datasets. In the experiments, we examine if our methods can assess the influence of pretraining datasets on generating toxic, biased, and untruthful content. Our methods estimate their influence much more accurately than existing methods while requiring neither excessive memory space nor multiple checkpoints.", }
Identifying the training datasets that influence a language model's outputs is essential for minimizing the generation of harmful content and enhancing its performance. Ideally, we can measure the influence of each dataset by removing it from training; however, it is prohibitively expensive to retrain a model multiple times. This paper presents UnTrac: unlearning traces the influence of a training dataset on the model's performance. UnTrac is extremely simple; each training dataset is unlearned by gradient ascent, and we evaluate how much the model's predictions change after unlearning. Furthermore, we propose a more scalable approach, UnTrac-Inv, which unlearns a test dataset and evaluates the unlearned model on training datasets. UnTrac-Inv resembles UnTrac, while being efficient for massive training datasets. In the experiments, we examine if our methods can assess the influence of pretraining datasets on generating toxic, biased, and untruthful content. Our methods estimate their influence much more accurately than existing methods while requiring neither excessive memory space nor multiple checkpoints.
[ "Isonuma, Masaru", "Titov, Ivan" ]
Unlearning Traces the Influential Training Data of Language Models
acl-long.343
Poster
2401.15241
[ "https://github.com/misonuma/untrac" ]
https://huggingface.co/papers/2401.15241
0
0
0
2
https://aclanthology.org/2024.acl-long.343/
[]
[]
[]
1
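Since the abstract states the mechanism plainly (unlearn by gradient ascent, then measure the prediction change), a compact sketch is possible. Batches are assumed to be dicts for a Hugging-Face-style model whose forward pass returns `.loss`, and the learning rate, optimizer, and step count are placeholders, not the paper's settings.

```python
import copy
import itertools
import torch

def untrac_influence(model, train_batches, eval_loss_fn,
                     lr: float = 1e-5, steps: int = 10) -> float:
    """UnTrac-style sketch: unlearn one training dataset by gradient
    *ascent* on its loss, then report the change in evaluation loss
    relative to the original model (larger change = more influence)."""
    base = eval_loss_fn(model)
    unlearned = copy.deepcopy(model)
    opt = torch.optim.SGD(unlearned.parameters(), lr=lr)
    for batch in itertools.islice(train_batches, steps):
        loss = unlearned(**batch).loss
        (-loss).backward()          # ascending the training loss = unlearning
        opt.step()
        opt.zero_grad()
    return eval_loss_fn(unlearned) - base
```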
https://aclanthology.org/2024.acl-long.344.bib
@inproceedings{mousi-etal-2024-exploring, title = "Exploring Alignment in Shared Cross-lingual Spaces", author = "Mousi, Basel and Durrani, Nadir and Dalvi, Fahim and Hawasly, Majd and Abdelali, Ahmed", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.344", pages = "6326--6348", abstract = "Despite their remarkable ability to capture linguistic nuances across diverse languages, questions persist regarding the degree of alignment between languages in multilingual embeddings. Drawing inspiration from research on high-dimensional representations in neural language models, we employ clustering to uncover latent concepts within multilingual models. Our analysis focuses on quantifying the alignment and overlap of these concepts across various languages within the latent space. To this end, we introduce two metrics CALIGN and COLAP aimed at quantifying these aspects, enabling a deeper exploration of multilingual embeddings. Our study encompasses three multilingual models (mT5, mBERT, and XLM-R) and three downstream tasks (Machine Translation, Named Entity Recognition, and Sentiment Analysis). Key findings from our analysis include: i) deeper layers in the network demonstrate increased cross-lingual alignment due to the presence of language-agnostic concepts, ii) fine-tuning of the models enhances alignment within the latent space, and iii) such task-specific calibration helps in explaining the emergence of zero-shot capabilities in the models.", }
Despite their remarkable ability to capture linguistic nuances across diverse languages, questions persist regarding the degree of alignment between languages in multilingual embeddings. Drawing inspiration from research on high-dimensional representations in neural language models, we employ clustering to uncover latent concepts within multilingual models. Our analysis focuses on quantifying the alignment and overlap of these concepts across various languages within the latent space. To this end, we introduce two metrics, CALIGN and COLAP, aimed at quantifying these aspects, enabling a deeper exploration of multilingual embeddings. Our study encompasses three multilingual models (mT5, mBERT, and XLM-R) and three downstream tasks (Machine Translation, Named Entity Recognition, and Sentiment Analysis). Key findings from our analysis include: i) deeper layers in the network demonstrate increased cross-lingual alignment due to the presence of language-agnostic concepts, ii) fine-tuning of the models enhances alignment within the latent space, and iii) such task-specific calibration helps in explaining the emergence of zero-shot capabilities in the models.
[ "Mousi, Basel", "Durrani, Nadir", "Dalvi, Fahim", "Hawasly, Majd", "Abdelali, Ahmed" ]
Exploring Alignment in Shared Cross-lingual Spaces
acl-long.344
Poster
2405.14535
[ "https://github.com/qcri/multilingual-latent-concepts" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.344/
[]
[]
[]
0
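The clustering-and-matching idea behind such alignment metrics can be illustrated with off-the-shelf k-means. Note this only sketches the general approach: CALIGN and COLAP are defined differently in the paper, and all thresholds here are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_overlap(reps_a: np.ndarray, reps_b: np.ndarray,
                    n_clusters: int = 50, threshold: float = 0.9) -> float:
    """Cluster each language's token representations, then call a cluster
    of language A 'aligned' if most of its members land in a single
    cluster of language B (nearest-centroid vote). Illustrative only."""
    km_a = KMeans(n_clusters=n_clusters, n_init=10).fit(reps_a)
    km_b = KMeans(n_clusters=n_clusters, n_init=10).fit(reps_b)
    aligned = 0
    for c in range(n_clusters):
        members = reps_a[km_a.labels_ == c]
        if len(members) == 0:
            continue
        votes = km_b.predict(members)            # nearest B-centroid per member
        aligned += np.bincount(votes).max() / len(votes) >= threshold
    return aligned / n_clusters
```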
https://aclanthology.org/2024.acl-long.345.bib
@inproceedings{wang-etal-2024-countries, title = "Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models", author = "Wang, Wenxuan and Jiao, Wenxiang and Huang, Jingyuan and Dai, Ruyi and Huang, Jen-tse and Tu, Zhaopeng and Lyu, Michael", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.345", pages = "6349--6384", abstract = "This paper identifies a cultural dominance issue within large language models (LLMs) due to the predominant use of English data in model training (e.g., ChatGPT). LLMs often provide inappropriate English-culture-related answers that are not relevant to the expected culture when users ask in non-English languages. To systematically evaluate the cultural dominance issue, we build a benchmark of concrete (e.g., holidays and songs) and abstract (e.g., values and opinions) cultural objects. Empirical results show that the representative GPT models suffer from the culture dominance problem, where GPT-4 is the most affected while text-davinci-003 suffers the least from this problem. Our study emphasizes the need to critically examine cultural dominance and ethical considerations in their development and deployment. We show that two straightforward methods in model development (i.e., pretraining on more diverse data) and deployment (e.g., culture-aware prompting) can significantly mitigate the cultural dominance issue in LLMs.", }
This paper identifies a cultural dominance issue within large language models (LLMs) due to the predominant use of English data in model training (e.g., ChatGPT). LLMs often provide inappropriate English-culture-related answers that are not relevant to the expected culture when users ask in non-English languages. To systematically evaluate the cultural dominance issue, we build a benchmark of concrete (e.g., holidays and songs) and abstract (e.g., values and opinions) cultural objects. Empirical results show that the representative GPT models suffer from the cultural dominance problem, with GPT-4 the most affected and text-davinci-003 the least affected. Our study emphasizes the need to critically examine cultural dominance and ethical considerations in their development and deployment. We show that two straightforward methods in model development (i.e., pretraining on more diverse data) and deployment (e.g., culture-aware prompting) can significantly mitigate the cultural dominance issue in LLMs.
[ "Wang, Wenxuan", "Jiao, Wenxiang", "Huang, Jingyuan", "Dai, Ruyi", "Huang, Jen-tse", "Tu, Zhaopeng", "Lyu, Michael" ]
Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models
acl-long.345
Poster
2310.12481
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.345/
[]
[]
[]
0
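Of the two mitigations, culture-aware prompting is simple enough to sketch directly. The exact wording below is illustrative, not the prompt used in the paper.

```python
def culture_aware_prompt(question: str, locale: str) -> str:
    """Culture-aware prompting in the spirit of the paper's deployment-time
    mitigation: explicitly instruct the model to ground its answer in the
    user's culture instead of defaulting to English-speaking norms."""
    return (
        f"You are answering for a user in {locale}. Ground your answer in "
        f"the culture, holidays, and customs of {locale} rather than those "
        f"of English-speaking countries.\n\nQuestion: {question}"
    )
```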
https://aclanthology.org/2024.acl-long.346.bib
@inproceedings{gao-etal-2024-self-evolving, title = "Self-Evolving {GPT}: A Lifelong Autonomous Experiential Learner", author = "Gao, Jinglong and Ding, Xiao and Cui, Yiming and Zhao, Jianbai and Wang, Hepeng and Liu, Ting and Qin, Bing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.346", pages = "6385--6432", abstract = "To improve the performance of large language models (LLMs), researchers have explored providing LLMs with textual task-solving experience via prompts. However, they rely on manual efforts to acquire and apply such experience for each task, which is not feasible for the growing demand for LLMs and the variety of user questions.To address this issue, we design a lifelong autonomous experiential learning framework based on LLMs to explore whether LLMs can imitate human ability for learning and utilizing experience. It autonomously learns and accumulates experience through experience transfer and induction, categorizing the types of input questions to select which accumulated experience to employ for them.Experimental results on six widely used NLP datasets show that our framework performs reliably in each intermediate step and effectively improves the performance of GPT-3.5 and GPT-4. This validates the feasibility of using LLMs to mimic human experiential learning and application capabilities, offering a new path worth further exploration for the evolution of machine intelligence. Additionally, we provide a detailed analysis of the behavior of our framework at each step.We will open source codes after the acceptance, fostering open research in the NLP community and beyond.", }
To improve the performance of large language models (LLMs), researchers have explored providing LLMs with textual task-solving experience via prompts. However, they rely on manual efforts to acquire and apply such experience for each task, which is not feasible given the growing demand for LLMs and the variety of user questions. To address this issue, we design a lifelong autonomous experiential learning framework based on LLMs to explore whether LLMs can imitate the human ability to learn and utilize experience. It autonomously learns and accumulates experience through experience transfer and induction, categorizing the types of input questions to select which accumulated experience to employ for them. Experimental results on six widely used NLP datasets show that our framework performs reliably at each intermediate step and effectively improves the performance of GPT-3.5 and GPT-4. This validates the feasibility of using LLMs to mimic human experiential learning and application capabilities, offering a new path worth further exploration for the evolution of machine intelligence. Additionally, we provide a detailed analysis of the behavior of our framework at each step. We will open-source the code after acceptance, fostering open research in the NLP community and beyond.
[ "Gao, Jinglong", "Ding, Xiao", "Cui, Yiming", "Zhao, Jianbai", "Wang, Hepeng", "Liu, Ting", "Qin, Bing" ]
Self-Evolving GPT: A Lifelong Autonomous Experiential Learner
acl-long.346
Poster
2407.08937
[ "" ]
https://huggingface.co/papers/2407.08937
0
0
0
7
https://aclanthology.org/2024.acl-long.346/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.347.bib
@inproceedings{tan-etal-2024-wrp, title = "{WRP}: Weight Recover Prune for Structured Sparsity", author = "Tan, Zhendong and Zhang, Xingjun and Wei, Zheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.347", pages = "6433--6443", abstract = "As the scale of Large Language Models (LLMs) increases, it is necessary to compress the models to reduce the substantial demand on computational resources. Network pruning significantly reduces the model size by converting the weight matrix from dense to sparse data format. Current methodologies advocate for one-shot pruning to avoid the expense of retraining, ensuring the maintenance of model performance under conditions of 50{\%}-60{\%} unstructured pruning. Nevertheless, matrices characterized by this level of sparsity could not be treated as sparse matrices, because the indices would incur significant costs. To mitigate this problem, NVIDIA introduced the 2:4 structured sparsity. However, we observe a notable decline in model performance when adopting 2:4 structured sparsity due to group constraints. In this paper, we introduce the Weight Recover Prune (WRP) approach. By recovering a minimal set of critical weights, WRP aims to enhance model performance while maintaining the efficiency of the compression. Our evaluation of the WRP method on the LLAMA2 and OPT models shows that it outperforms other 2:4 pattern one-shot pruning methods. Meanwhile, WRP can guarantee that the size of the pruned model is about 60{\%} of the dense model. Our code is available at: https://github.com/TanZhendong/WRP.", }
As the scale of Large Language Models (LLMs) increases, it is necessary to compress the models to reduce the substantial demand on computational resources. Network pruning significantly reduces the model size by converting the weight matrix from a dense to a sparse data format. Current methodologies advocate for one-shot pruning to avoid the expense of retraining, ensuring the maintenance of model performance under conditions of 50%-60% unstructured pruning. Nevertheless, matrices characterized by this level of sparsity cannot be treated as sparse matrices, because the indices would incur significant costs. To mitigate this problem, NVIDIA introduced the 2:4 structured sparsity. However, we observe a notable decline in model performance when adopting 2:4 structured sparsity due to the group constraints. In this paper, we introduce the Weight Recover Prune (WRP) approach. By recovering a minimal set of critical weights, WRP aims to enhance model performance while maintaining the efficiency of the compression. Our evaluation of the WRP method on the LLAMA2 and OPT models shows that it outperforms other 2:4-pattern one-shot pruning methods. Meanwhile, WRP can guarantee that the size of the pruned model is about 60% of the dense model. Our code is available at: https://github.com/TanZhendong/WRP.
[ "Tan, Zhendong", "Zhang, Xingjun", "Wei, Zheng" ]
WRP: Weight Recover Prune for Structured Sparsity
acl-long.347
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.347/
[]
[]
[]
0
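The 2:4 pattern that WRP builds on is easy to state in code: within every contiguous group of 4 weights along the input dimension, only 2 may be nonzero. The magnitude-based selection below is the standard baseline way to enforce the pattern, not WRP's recovery step.

```python
import torch

def two_four_sparsify(weight: torch.Tensor) -> torch.Tensor:
    """Enforce NVIDIA's 2:4 structured sparsity: within every contiguous
    group of 4 weights along the input dimension, keep the 2 entries with
    the largest magnitude and zero the other 2."""
    out_dim, in_dim = weight.shape
    assert in_dim % 4 == 0, "input dimension must be divisible by 4"
    groups = weight.reshape(out_dim, in_dim // 4, 4)
    survivors = groups.abs().topk(k=2, dim=-1).indices   # 2 keepers per group
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, survivors, True)
    return (groups * mask).reshape(out_dim, in_dim)
```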
https://aclanthology.org/2024.acl-long.348.bib
@inproceedings{michot-etal-2024-error, title = "Error-preserving Automatic Speech Recognition of Young {E}nglish Learners{'} Language", author = {Michot, Janick and H{\"u}rlimann, Manuela and Deriu, Jan and Sauer, Luzia and Mlynchyk, Katsiaryna and Cieliebak, Mark}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.348", pages = "6444--6454", abstract = "One of the central skills that language learners need to practice is speaking the language. Currently, students in school do not get enough speaking opportunities and lack conversational practice. The recent advances in speech technology and natural language processing allow the creation of novel tools to practice their speaking skills. In this work, we tackle the first component of such a pipeline, namely, the automated speech recognition module (ASR). State-of-the-art models are often trained on adult read-aloud data by native speakers and do not transfer well to young language learners{'} speech. Second, most ASR systems contain a powerful language model, which smooths out mistakes made by the speakers. To give corrective feedback, which is a crucial part of language learning, the ASR systems in our setting need to preserve the mistakes made by the language learners. In this work, we build an ASR system that satisfies these requirements: it works on spontaneous speech by young language learners and preserves their mistakes. For this, we collected a corpus containing around 85 hours of English audio spoken by Swiss learners from grades 4 to 6 on different language learning tasks, which we used to train an ASR model. Our experiments show that our model benefits from direct fine-tuning of children{'}s voices and has a much higher error preservation rate.", }
One of the central skills that language learners need to practice is speaking the language. Currently, students in school do not get enough speaking opportunities and lack conversational practice. Recent advances in speech technology and natural language processing allow the creation of novel tools for practicing speaking skills. In this work, we tackle the first component of such a pipeline, namely, the automated speech recognition (ASR) module. State-of-the-art models are often trained on adult read-aloud data by native speakers and do not transfer well to young language learners' speech. Second, most ASR systems contain a powerful language model, which smooths out mistakes made by the speakers. To give corrective feedback, which is a crucial part of language learning, the ASR systems in our setting need to preserve the mistakes made by the language learners. In this work, we build an ASR system that satisfies these requirements: it works on spontaneous speech by young language learners and preserves their mistakes. For this, we collected a corpus containing around 85 hours of English audio spoken by Swiss learners from grades 4 to 6 on different language learning tasks, which we used to train an ASR model. Our experiments show that our model benefits from direct fine-tuning on children's voices and has a much higher error preservation rate.
[ "Michot, Janick", "H{\\\"u}rlimann, Manuela", "Deriu, Jan", "Sauer, Luzia", "Mlynchyk, Katsiaryna", "Cieliebak, Mark" ]
Error-preserving Automatic Speech Recognition of Young English Learners' Language
acl-long.348
Poster
2406.03235
[ "https://github.com/mict-zhaw/chall_e2e_stt" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.348/
[]
[]
[]
0
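One way to read the "error preservation rate" mentioned above is word-level: at positions where the learner's verbatim utterance differs from its corrected form, count how often the ASR hypothesis reproduces the learner's actual (errorful) word. The sketch below assumes all three word sequences are pre-aligned; the paper's metric may be computed differently.

```python
def error_preservation_rate(hyp: list[str], verbatim: list[str],
                            corrected: list[str]) -> float:
    """Illustrative word-level error preservation: over the positions where
    the verbatim learner utterance differs from its corrected form, the
    fraction where the ASR hypothesis keeps the learner's errorful word.
    Assumes hyp/verbatim/corrected are already aligned token-by-token."""
    error_positions = [i for i, (v, c) in enumerate(zip(verbatim, corrected))
                       if v != c]
    if not error_positions:
        return 1.0  # nothing to preserve
    preserved = sum(1 for i in error_positions if hyp[i] == verbatim[i])
    return preserved / len(error_positions)
```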
https://aclanthology.org/2024.acl-long.349.bib
@inproceedings{cai-etal-2024-difinet, title = "{D}i{F}i{N}et: Boundary-Aware Semantic Differentiation and Filtration Network for Nested Named Entity Recognition", author = "Cai, Yuxiang and Liu, Qiao and Gan, Yanglei and Lin, Run and Li, Changlin and Liu, Xueyi and Luo, Da and JiayeYang, JiayeYang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.349", pages = "6455--6471", abstract = "Nested Named Entity Recognition (Nested NER) entails identifying and classifying entity spans within the text, including the detection of named entities that are embedded within external entities. Prior approaches primarily employ span-based techniques, utilizing the power of exhaustive searches to address the challenge of overlapping entities. Nonetheless, these methods often grapple with the absence of explicit guidance for boundary detection, resulting insensitivity in discerning minor variations within nested spans. To this end, we propose a Boundary-aware Semantic $\underline{Di}$fferentiation and $\underline{Fi}$ltration $\underline{Net}$work (DiFiNet) tailored for nested NER. Specifically, DiFiNet leverages a biaffine attention mechanism to generate a span representation matrix. This matrix undergoes further refinement through a self-adaptive semantic differentiation module, specifically engineered to discern semantic variances across spans. Furthermore, DiFiNet integrates a boundary filtration module, designed to mitigate the impact of non-entity noise by leveraging semantic relations among spans. Extensive experiments on three benchmark datasets demonstrate our model yields a new state-of-the-art performance.", }
Nested Named Entity Recognition (Nested NER) entails identifying and classifying entity spans within the text, including the detection of named entities that are embedded within external entities. Prior approaches primarily employ span-based techniques, utilizing the power of exhaustive searches to address the challenge of overlapping entities. Nonetheless, these methods often grapple with the absence of explicit guidance for boundary detection, resulting in insensitivity in discerning minor variations within nested spans. To this end, we propose a Boundary-aware Semantic **Di**fferentiation and **Fi**ltration **Net**work (DiFiNet) tailored for nested NER. Specifically, DiFiNet leverages a biaffine attention mechanism to generate a span representation matrix. This matrix undergoes further refinement through a self-adaptive semantic differentiation module, specifically engineered to discern semantic variances across spans. Furthermore, DiFiNet integrates a boundary filtration module, designed to mitigate the impact of non-entity noise by leveraging semantic relations among spans. Extensive experiments on three benchmark datasets demonstrate that our model yields new state-of-the-art performance.
[ "Cai, Yuxiang", "Liu, Qiao", "Gan, Yanglei", "Lin, Run", "Li, Changlin", "Liu, Xueyi", "Luo, Da", "JiayeYang, JiayeYang" ]
DiFiNet: Boundary-Aware Semantic Differentiation and Filtration Network for Nested Named Entity Recognition
acl-long.349
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.349/
[]
[]
[]
0
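The biaffine span-matrix construction that DiFiNet starts from is a standard building block and can be sketched compactly. Dimensions, initialization, and the module name below are illustrative, not DiFiNet's exact configuration.

```python
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    """Biaffine attention of the kind used to build a start-by-end span
    representation matrix for nested NER."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        # +1 on each side folds the bias terms into the bilinear form.
        self.U = nn.Parameter(torch.randn(d_out, d_in + 1, d_in + 1) * 0.02)

    def forward(self, h_start: torch.Tensor, h_end: torch.Tensor) -> torch.Tensor:
        # h_start, h_end: [batch, seq, d_in]; append a constant 1 as bias.
        ones = h_start.new_ones(*h_start.shape[:-1], 1)
        s = torch.cat([h_start, ones], dim=-1)   # [B, L, d_in+1]
        e = torch.cat([h_end, ones], dim=-1)     # [B, L, d_in+1]
        # Output [B, d_out, L, L]; entry (o, i, j) scores span i..j.
        return torch.einsum("bld,odk,bmk->bolm", s, self.U, e)
```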
https://aclanthology.org/2024.acl-long.350.bib
@inproceedings{feng-etal-2024-legal, title = "Legal Case Retrieval: A Survey of the State of the Art", author = "Feng, Yi and Li, Chuanyi and Ng, Vincent", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.350", pages = "6472--6485", abstract = "Recent years have seen increasing attention on Legal Case Retrieval (LCR), a key task in the area of Legal AI that concerns the retrieval of cases from a large legal database of historical cases that are similar to a given query. This paper presents a survey of the major milestones made in LCR research, targeting researchers who are finding their way into the field and seek a brief account of the relevant datasets and the recent neural models and their performances.", }
Recent years have seen increasing attention on Legal Case Retrieval (LCR), a key task in the area of Legal AI that concerns retrieving, from a large database of historical legal cases, the cases most similar to a given query. This paper presents a survey of the major milestones in LCR research, targeting researchers who are finding their way into the field and who seek a brief account of the relevant datasets and the recent neural models and their performance.
[ "Feng, Yi", "Li, Chuanyi", "Ng, Vincent" ]
Legal Case Retrieval: A Survey of the State of the Art
acl-long.350
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.350/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.351.bib
@inproceedings{zhong-etal-2024-benchmarking, title = "Benchmarking and Improving Compositional Generalization of Multi-aspect Controllable Text Generation", author = "Zhong, Tianqi and Li, Zhaoyi and Wang, Quan and Song, Linqi and Wei, Ying and Lian, Defu and Mao, Zhendong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.351", pages = "6486--6517", abstract = "Compositional generalization, representing the model{'}s ability to generate text with new attribute combinations obtained by recombining single attributes from the training data, is a crucial property for multi-aspect controllable text generation (MCTG) methods. Nonetheless, a comprehensive compositional generalization evaluation benchmark of MCTG is still lacking. We propose CompMCTG, a benchmark encompassing diverse multi-aspect labeled datasets and a crafted three-dimensional evaluation protocol, to holistically evaluate the compositional generalization of MCTG approaches. We observe that existing MCTG works generally confront a noticeable performance drop in compositional testing. To mitigate this issue, we introduce Meta-MCTG, a training framework incorporating meta-learning, where we enable models to learn how to generalize by simulating compositional generalization scenarios in the training phase. We demonstrate the effectiveness of Meta-MCTG by achieving a clear improvement (of up to 3.64{\%}) in compositional testing performance in 94.4{\%} of cases.", }
Compositional generalization, representing the model{'}s ability to generate text with new attribute combinations obtained by recombining single attributes from the training data, is a crucial property for multi-aspect controllable text generation (MCTG) methods. Nonetheless, a comprehensive compositional generalization evaluation benchmark of MCTG is still lacking. We propose CompMCTG, a benchmark encompassing diverse multi-aspect labeled datasets and a crafted three-dimensional evaluation protocol, to holistically evaluate the compositional generalization of MCTG approaches. We observe that existing MCTG works generally confront a noticeable performance drop in compositional testing. To mitigate this issue, we introduce Meta-MCTG, a training framework incorporating meta-learning, where we enable models to learn how to generalize by simulating compositional generalization scenarios in the training phase. We demonstrate the effectiveness of Meta-MCTG by achieving a clear improvement (of up to 3.64{\%}) in compositional testing performance in 94.4{\%} of cases.
[ "Zhong, Tianqi", "Li, Zhaoyi", "Wang, Quan", "Song, Linqi", "Wei, Ying", "Lian, Defu", "Mao, Zhendong" ]
Benchmarking and Improving Compositional Generalization of Multi-aspect Controllable Text Generation
acl-long.351
Poster
2404.04232
[ "https://github.com/tqzhong/cg4mctg" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.351/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.352.bib
@inproceedings{wu-etal-2024-llama, title = "{LL}a{MA} Pro: Progressive {LL}a{MA} with Block Expansion", author = "Wu, Chengyue and Gan, Yukang and Ge, Yixiao and Lu, Zeyu and Wang, Jiahao and Feng, Ye and Shan, Ying and Luo, Ping", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.352", pages = "6518--6537", abstract = "Humans generally acquire new skills without compromising the old; however, the opposite holds for Large Language Models (LLMs), e.g., from LLaMA to CodeLLaMA. To this end, we propose a new post-pretraining method for LLMs with an expansion of Transformer blocks. We tune the expanded blocks using only the new corpus, efficiently and effectively improving the model{'}s knowledge while mitigating forgetting. In this paper, we experiment on a corpus of code and math, yielding LLaMA Pro-8.3B, a versatile foundation model initialized from LLaMA2-7B, excelling in general tasks, programming, and mathematics. LLaMA Pro and its instruction-following counterpart (LLaMA Pro - Instruct) achieve advanced performance across various benchmarks, demonstrating superiority over existing open models in the LLaMA family and the immense potential of reasoning and addressing diverse tasks as an intelligent agent. Our findings provide valuable insights into integrating natural and programming languages, laying a solid foundation for developing advanced language agents that operate effectively in various environments.", }
Humans generally acquire new skills without compromising the old; however, the opposite holds for Large Language Models (LLMs), e.g., from LLaMA to CodeLLaMA. To this end, we propose a new post-pretraining method for LLMs with an expansion of Transformer blocks. We tune the expanded blocks using only the new corpus, efficiently and effectively improving the model{'}s knowledge while mitigating forgetting. In this paper, we experiment on a corpus of code and math, yielding LLaMA Pro-8.3B, a versatile foundation model initialized from LLaMA2-7B, excelling in general tasks, programming, and mathematics. LLaMA Pro and its instruction-following counterpart (LLaMA Pro - Instruct) achieve advanced performance across various benchmarks, demonstrating superiority over existing open models in the LLaMA family and the immense potential of reasoning and addressing diverse tasks as an intelligent agent. Our findings provide valuable insights into integrating natural and programming languages, laying a solid foundation for developing advanced language agents that operate effectively in various environments.
[ "Wu, Chengyue", "Gan, Yukang", "Ge, Yixiao", "Lu, Zeyu", "Wang, Jiahao", "Feng, Ye", "Shan, Ying", "Luo, Ping" ]
LLaMA Pro: Progressive LLaMA with Block Expansion
acl-long.352
Poster
2401.02415
[ "https://github.com/tencentarc/llama-pro" ]
https://huggingface.co/papers/2401.02415
1
53
3
8
https://aclanthology.org/2024.acl-long.352/
[ "DavidAU/Meta-Llama-3.1-Instruct-12.2B-BRAINSTORM-20x-FORM-8-GGUF", "trollek/NinjaMouse-2.4B-32L-danube", "TencentARC/MetaMath-Mistral-Pro", "DavidAU/Psyonic-Cetacean-Ultra-Quality-21.3B-22.5B-BRAINSTORM-5x-10x-GGUF", "DavidAU/L3-SthenoMaidBlackroot-9.56B-V1-BRAINSTORM-8x-GGUF", "DavidAU/L3-Stheno-V3.2-8.9B-BRAINSTORM-5x-FORM-11-GGUF", "DavidAU/Mistral-Nemo-Instruct-2407-13.35B-BRAINSTORM-5x-FORM-11-GGUF", "DavidAU/L3-Grand-HORROR-17.4B-V1.7-STABLE-Kiss-Of-Death-GGUF", "arcee-ai/Mistral-7B-Instruct-v0.2-expanded-sec-1.6B-tokens", "DavidAU/L3-SthenoMaidBlackroot-8.9B-V1-BRAINSTORM-5x-GGUF", "DavidAU/L3-Stheno-v3.2-8.9B-BRAINSTORM-5x-FORM-2-GGUF", "DavidAU/L3-Stheno-V3.2-12.2B-BRAINSTORM-20x-FORM-2-GGUF", "Pretergeek/OpenChat-3.5-0106_8.11B-36Layers-Last", "Pretergeek/OpenChat-3.5-0106_BlockExpansion-40Layers-End", "Pretergeek/OpenChat-3.5-0106_BlockExpansion-44Layers-End", "Pretergeek/OpenChat-3.5-0106_BlockExpansion-48Layers-End", "DavidAU/Mistral-Nemo-Instruct-2407-14.7B-BRAINSTORM-10x-FORM-3-GGUF", "DavidAU/L3-Grand-HORROR-25B-V2-STABLE-Godzillas-Wicked-Sister-GGUF", "DavidAU/L3-SthenoMaidBlackroot-8.68B-V1-BRAINSTORM-4x-Multi-GGUF", "DavidAU/Meta-Llama-3-Instruct-8.9B-BRAINSTORM-5x-FORM-11-GGUF", "DavidAU/Meta-Llama-3-Instruct-12.2B-BRAINSTORM-20x-FORM-8-GGUF", "DavidAU/Meta-Llama-3-Instruct-8.9B-BRAINSTORM-5x-FORM-7-GGUF", "DavidAU/L3-Stheno-V3.2-8.47B-BRAINSTORM-4x-FORM-1-GGUF", "DavidAU/L3-Stheno-v3.2-8.9B-BRAINSTORM-5x-FORM-6-GGUF", "DavidAU/L3-Stheno-v3.2-9.99B-BRAINSTORM-10x-FORM-3-GGUF", "DavidAU/L3-SthenoMaidBlackroot-10.4B-V1-BRAINSTORM-4x-Multi-3x-GGUF", "DavidAU/L3-SthenoMaidBlackroot-10.4B-V1-BRAINSTORM-4x-Multi-3x-1-GGUF", "DavidAU/L3-SthenoMaidBlackroot-10.4B-V1-BRAINSTORM-4x-Multi-3x-2-GGUF", "DavidAU/Meta-Llama-3.1-Instruct-9.99B-BRAINSTORM-10x-FORM-3-GGUF", "DavidAU/Meta-Llama-3.1-Instruct-8.9B-BRAINSTORM-5x-FORM-11-GGUF", "DavidAU/Mistral-Nemo-Instruct-2407-17.5B-BRAINSTORM-20x-FORM-8-GGUF", "DavidAU/WestLake-12.7B-v2-Brainiac-GGUF", "DavidAU/L3-Grand-HORROR-18.5B-V1.8-STABLE-10-Gates-of-Hell-GGUF", "DavidAU/L3-Grand-HORROR-20.7B-V1.9-STABLE-Hathors-Revenge-GGUF", "DavidAU/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF", "cgus/NinjaMouse-2.4B-32L-danube-exl2", "DavidAU/L3-Grand-HORROR-17.4B-V1.7-STABLE-Kiss-Of-Death", "DavidAU/L3-Grand-HORROR-18.5B-V1.8-STABLE-10-Gates-of-Hell" ]
[]
[ "John6666/votepurchase-multiple-model", "John6666/text2tag-llm", "TencentARC/MetaMath-Mistral-Pro", "arcee-ai/arcee-model-testing" ]
1
https://aclanthology.org/2024.acl-long.353.bib
@inproceedings{mu-li-2024-generating, title = "Generating Contrastive Narratives Using the Brownian Bridge Process for Narrative Coherence Learning", author = "Mu, Feiteng and Li, Wenjie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.353", pages = "6538--6555", abstract = "A major challenge for narrative reasoning is to learn narrative coherence. Existing works mainly follow the contrastive learning paradigm. However, the negative samples in their methods can be easily distinguished, which makes their methods unsatisfactory. In this work, we devise two strategies for mining hard negatives, including (1) crisscrossing a narrative and its contrastive variants; and (2) event-level replacement. To obtain contrastive variants, we utilize the Brownian Bridge process to guarantee the quality of generated contrastive narratives. We evaluate our model on several tasks. The results demonstrate the effectiveness of our method and show that it is applicable to a wide range of applications.", }
A major challenge for narrative reasoning is to learn narrative coherence. Existing works mainly follow the contrastive learning paradigm. However, the negative samples in their methods can be easily distinguished, which makes their methods unsatisfactory. In this work, we devise two strategies for mining hard negatives, including (1) crisscrossing a narrative and its contrastive variants; and (2) event-level replacement. To obtain contrastive variants, we utilize the Brownian Bridge process to guarantee the quality of generated contrastive narratives. We evaluate our model on several tasks. The results demonstrate the effectiveness of our method and show that it is applicable to a wide range of applications.
[ "Mu, Feiteng", "Li, Wenjie" ]
Generating Contrastive Narratives Using the Brownian Bridge Process for Narrative Coherence Learning
acl-long.353
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.353/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.354.bib
@inproceedings{mu-li-2024-causal, title = "A Causal Approach for Counterfactual Reasoning in Narratives", author = "Mu, Feiteng and Li, Wenjie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.354", pages = "6556--6569", abstract = "Counterfactual reasoning in narratives requires predicting how alternative conditions, contrary to what actually happened, might have resulted in different outcomes. One major challenge is to maintain the causality between the counterfactual condition and the generated counterfactual outcome. In this paper, we propose a basic VAE module for counterfactual reasoning in narratives. We further introduce a pre-trained classifier and external event commonsense to mitigate the posterior collapse problem in the VAE approach, and improve the causality between the counterfactual condition and the generated counterfactual outcome. We evaluate our method on two public benchmarks. Experiments show that our method is effective.", }
Counterfactual reasoning in narratives requires predicting how alternative conditions, contrary to what actually happened, might have resulted in different outcomes. One major challenge is to maintain the causality between the counterfactual condition and the generated counterfactual outcome. In this paper, we propose a basic VAE module for counterfactual reasoning in narratives. We further introduce a pre-trained classifier and external event commonsense to mitigate the posterior collapse problem in the VAE approach, and improve the causality between the counterfactual condition and the generated counterfactual outcome. We evaluate our method on two public benchmarks. Experiments show that our method is effective.
[ "Mu, Feiteng", "Li, Wenjie" ]
A Causal Approach for Counterfactual Reasoning in Narratives
acl-long.354
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.354/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.355.bib
@inproceedings{lindemann-etal-2024-sip, title = "{SIP}: Injecting a Structural Inductive Bias into a {S}eq2{S}eq Model by Simulation", author = "Lindemann, Matthias and Koller, Alexander and Titov, Ivan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.355", pages = "6570--6587", abstract = "Strong inductive biases enable learning from little data and help generalization outside the training distribution. Popular neural architectures such as Transformers lack strong structural inductive biases for seq2seq NLP tasks on their own. Consequently, they struggle with systematic generalization beyond the training distribution, e.g. with extrapolating to longer inputs, even when pre-trained on large amounts of text. We show how a structural inductive bias can be efficiently injected into a seq2seq model by pre-training it to simulate structural transformations on synthetic data. Specifically, we inject an inductive bias towards Finite State Transducers (FSTs) into a Transformer by pre-training it to simulate FSTs given their descriptions. Our experiments show that our method imparts the desired inductive bias, resulting in improved systematic generalization and better few-shot learning for FST-like tasks. Our analysis shows that fine-tuned models accurately capture the state dynamics of the unseen underlying FSTs, suggesting that the simulation process is internalized by the fine-tuned model.", }
Strong inductive biases enable learning from little data and help generalization outside the training distribution. Popular neural architectures such as Transformers lack strong structural inductive biases for seq2seq NLP tasks on their own. Consequently, they struggle with systematic generalization beyond the training distribution, e.g. with extrapolating to longer inputs, even when pre-trained on large amounts of text. We show how a structural inductive bias can be efficiently injected into a seq2seq model by pre-training it to simulate structural transformations on synthetic data. Specifically, we inject an inductive bias towards Finite State Transducers (FSTs) into a Transformer by pre-training it to simulate FSTs given their descriptions. Our experiments show that our method imparts the desired inductive bias, resulting in improved systematic generalization and better few-shot learning for FST-like tasks. Our analysis shows that fine-tuned models accurately capture the state dynamics of the unseen underlying FSTs, suggesting that the simulation process is internalized by the fine-tuned model.
[ "Lindemann, Matthias", "Koller, Alex", "er", "Titov, Ivan" ]
SIP: Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation
acl-long.355
Oral
2310.00796
[ "https://github.com/namednil/sip" ]
https://huggingface.co/papers/2310.00796
0
0
0
3
https://aclanthology.org/2024.acl-long.355/
[ "namednil/sip-fst-tokenizer" ]
[]
[]
1
https://aclanthology.org/2024.acl-long.356.bib
@inproceedings{alabi-etal-2024-hidden, title = "The Hidden Space of Transformer Language Adapters", author = "Alabi, Jesujoba and Mosbach, Marius and Eyal, Matan and Klakow, Dietrich and Geva, Mor", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.356", pages = "6588--6607", abstract = "We analyze the operation of transformer language adapters, which are small modules trained on top of a frozen language model to adapt its predictions to new target languages. We show that adapted predictions mostly evolve in the source language the model was trained on, while the target language becomes pronounced only in the very last layers of the model. Moreover, the adaptation process is gradual and distributed across layers, where it is possible to skip small groups of adapters without decreasing adaptation performance. Lastly, we show that adapters operate on top of the model{'}s frozen representation space while largely preserving its structure, rather than on an isolated subspace. Our findings provide a deeper view into the adaptation process of language models to new languages, showcasing the constraints imposed on it by the underlying model, and introducing practical implications for enhancing its efficiency.", }
We analyze the operation of transformer language adapters, which are small modules trained on top of a frozen language model to adapt its predictions to new target languages. We show that adapted predictions mostly evolve in the source language the model was trained on, while the target language becomes pronounced only in the very last layers of the model. Moreover, the adaptation process is gradual and distributed across layers, where it is possible to skip small groups of adapters without decreasing adaptation performance. Lastly, we show that adapters operate on top of the model{'}s frozen representation space while largely preserving its structure, rather than on an isolated subspace. Our findings provide a deeper view into the adaptation process of language models to new languages, showcasing the constraints imposed on it by the underlying model, and introducing practical implications for enhancing its efficiency.
[ "Alabi, Jesujoba", "Mosbach, Marius", "Eyal, Matan", "Klakow, Dietrich", "Geva, Mor" ]
The Hidden Space of Transformer Language Adapters
acl-long.356
Poster
2402.13137
[ "https://github.com/uds-lsv/hidden-space-adapters" ]
https://huggingface.co/papers/2402.13137
0
0
0
5
https://aclanthology.org/2024.acl-long.356/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.357.bib
@inproceedings{tripto-etal-2024-ship, title = "A Ship of Theseus: Curious Cases of Paraphrasing in {LLM}-Generated Texts", author = "Tripto, Nafis Irtiza and Venkatraman, Saranya and Macko, Dominik and Moro, Robert and Srba, Ivan and Uchendu, Adaku and Le, Thai and Lee, Dongwon", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.357", pages = "6608--6625", abstract = "In the realm of text manipulation and linguistic transformation, the question of authorship has been a subject of fascination and philosophical inquiry. Much like the Ship of Theseus paradox, which ponders whether a ship remains the same when each of its original planks is replaced, our research delves into an intriguing question: Does a text retain its original authorship when it undergoes numerous paraphrasing iterations? Specifically, since Large Language Models (LLMs) have demonstrated remarkable proficiency in both the generation of original content and the modification of human-authored texts, a pivotal question emerges concerning the determination of authorship in instances where LLMs or similar paraphrasing tools are employed to rephrase the text{--}i.e., whether authorship should be attributed to the original human author or the AI-powered tool. Therefore, we embark on a philosophical voyage through the seas of language and authorship to unravel this intricate puzzle. Using a computational approach, we discover that the diminishing performance in text classification models, with each successive paraphrasing iteration, is closely associated with the extent of deviation from the original author{'}s style, thus provoking a reconsideration of the current notion of authorship.", }
In the realm of text manipulation and linguistic transformation, the question of authorship has been a subject of fascination and philosophical inquiry. Much like the Ship of Theseus paradox, which ponders whether a ship remains the same when each of its original planks is replaced, our research delves into an intriguing question: Does a text retain its original authorship when it undergoes numerous paraphrasing iterations? Specifically, since Large Language Models (LLMs) have demonstrated remarkable proficiency in both the generation of original content and the modification of human-authored texts, a pivotal question emerges concerning the determination of authorship in instances where LLMs or similar paraphrasing tools are employed to rephrase the text{--}i.e., whether authorship should be attributed to the original human author or the AI-powered tool. Therefore, we embark on a philosophical voyage through the seas of language and authorship to unravel this intricate puzzle. Using a computational approach, we discover that the diminishing performance in text classification models, with each successive paraphrasing iteration, is closely associated with the extent of deviation from the original author{'}s style, thus provoking a reconsideration of the current notion of authorship.
[ "Tripto, Nafis Irtiza", "Venkatraman, Saranya", "Macko, Dominik", "Moro, Robert", "Srba, Ivan", "Uchendu, Adaku", "Le, Thai", "Lee, Dongwon" ]
A Ship of Theseus: Curious Cases of Paraphrasing in LLM-Generated Texts
acl-long.357
Poster
2311.08374
[ "https://github.com/tripto03/ship_of_theseus_paraphrased_copus" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.357/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.358.bib
@inproceedings{lin-etal-2024-advancing, title = "Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations", author = "Lin, Guan-Ting and Chiang, Cheng-Han and Lee, Hung-yi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.358", pages = "6626--6642", abstract = "In spoken dialogue, even if two current turns are the same sentence, their responses might still differ when they are spoken in different styles. The spoken styles, containing paralinguistic and prosodic information, mark the most significant difference between the text and speech modalities. When text-only LLMs are used to model spoken dialogue, they cannot give different responses based on the speaking style of the current turn. In this paper, we focus on enabling LLMs to listen to the speaking styles and respond properly. Our goal is to teach the LLM that {``}even if the sentences are identical, if they are spoken in different styles, their corresponding responses might be different{''}. Since there is no suitable dataset for achieving this goal, we collect a speech-to-speech dataset, StyleTalk, with the following desired characteristics: when two current speeches have the same content but are spoken in different styles, their responses will be different. To teach LLMs to understand and respond properly to the speaking styles, we propose the \textbf{Spoken-LLM} framework that can model the linguistic content and the speaking styles. We train Spoken-LLM using the StyleTalk dataset and devise a two-stage training pipeline to help the Spoken-LLM better learn the speaking styles. Based on extensive experiments, we show that Spoken-LLM outperforms text-only baselines and prior speech LLM methods.", }
In spoken dialogue, even if two current turns are the same sentence, their responses might still differ when they are spoken in different styles. The spoken styles, containing paralinguistic and prosodic information, mark the most significant difference between the text and speech modalities. When text-only LLMs are used to model spoken dialogue, they cannot give different responses based on the speaking style of the current turn. In this paper, we focus on enabling LLMs to listen to the speaking styles and respond properly. Our goal is to teach the LLM that {``}even if the sentences are identical, if they are spoken in different styles, their corresponding responses might be different{''}. Since there is no suitable dataset for achieving this goal, we collect a speech-to-speech dataset, StyleTalk, with the following desired characteristics: when two current speeches have the same content but are spoken in different styles, their responses will be different. To teach LLMs to understand and respond properly to the speaking styles, we propose the \textbf{Spoken-LLM} framework that can model the linguistic content and the speaking styles. We train Spoken-LLM using the StyleTalk dataset and devise a two-stage training pipeline to help the Spoken-LLM better learn the speaking styles. Based on extensive experiments, we show that Spoken-LLM outperforms text-only baselines and prior speech LLM methods.
[ "Lin, Guan-Ting", "Chiang, Cheng-Han", "Lee, Hung-yi" ]
Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations
acl-long.358
Poster
2402.12786
[ "https://github.com/daniellin94144/styletalk" ]
https://huggingface.co/papers/2402.12786
0
0
0
3
https://aclanthology.org/2024.acl-long.358/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.359.bib
@inproceedings{faldu-etal-2024-retinaqa, title = "{R}etina{QA}: A Robust Knowledge Base Question Answering Model for both Answerable and Unanswerable Questions", author = "Faldu, Prayushi and Bhattacharya, Indrajit and ., Mausam", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.359", pages = "6643--6656", abstract = "An essential requirement for a real-world Knowledge Base Question Answering (KBQA) system is the ability to detect the answerability of questions when generating logical forms. However, state-of-the-art KBQA models assume all questions to be answerable. Recent research has found that such models, when superficially adapted to detect answerability, struggle to satisfactorily identify the different categories of unanswerable questions, and simultaneously preserve good performance for answerable questions. Towards addressing this issue, we propose RetinaQA, a new KBQA model that unifies two key ideas in a single KBQA architecture: (a) discrimination over candidate logical forms, rather than generating these, for handling schema-related unanswerability, and (b) sketch-filling-based construction of candidate logical forms for handling data-related unanswerability. Our results show that RetinaQA significantly outperforms adaptations of state-of-the-art KBQA models in handling both answerable and unanswerable questions and demonstrates robustness across all categories of unanswerability. Notably, RetinaQA also sets a new state-of-the-art for answerable KBQA, surpassing existing models. We release our code base for further research: https://github.com/dair-iitd/RetinaQA.", }
An essential requirement for a real-world Knowledge Base Question Answering (KBQA) system is the ability to detect the answerability of questions when generating logical forms. However, state-of-the-art KBQA models assume all questions to be answerable. Recent research has found that such models, when superficially adapted to detect answerability, struggle to satisfactorily identify the different categories of unanswerable questions, and simultaneously preserve good performance for answerable questions. Towards addressing this issue, we propose RetinaQA, a new KBQA model that unifies two key ideas in a single KBQA architecture: (a) discrimination over candidate logical forms, rather than generating these, for handling schema-related unanswerability, and (b) sketch-filling-based construction of candidate logical forms for handling data-related unanswerability. Our results show that RetinaQA significantly outperforms adaptations of state-of-the-art KBQA models in handling both answerable and unanswerable questions and demonstrates robustness across all categories of unanswerability. Notably, RetinaQA also sets a new state-of-the-art for answerable KBQA, surpassing existing models. We release our code base for further research: https://github.com/dair-iitd/RetinaQA.
[ "Faldu, Prayushi", "Bhattacharya, Indrajit", "., Mausam" ]
RetinaQA: A Robust Knowledge Base Question Answering Model for both Answerable and Unanswerable Questions
acl-long.359
Poster
2403.10849
[ "https://github.com/dair-iitd/retinaqa" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.359/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.360.bib
@inproceedings{li-etal-2024-groundinggpt, title = "{G}rounding{GPT}: Language Enhanced Multi-modal Grounding Model", author = "Li, Zhaowei and Xu, Qi and Zhang, Dong and Song, Hang and Cai, YiQing and Qi, Qi and Zhou, Ran and Pan, Junting and Li, Zefeng and Tu, Vu and Huang, Zhida and Wang, Tao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.360", pages = "6657--6678", abstract = "Multi-modal large language models (MLLMs) have demonstrated remarkable performance across various tasks. However, these models often prioritize capturing global information and overlook the importance of perceiving local information. This limitation hinders their ability to effectively understand fine-grained details and handle grounding tasks that necessitate nuanced comprehension. Although some recent works have made strides in this, they have primarily focused on single-modality inputs. Therefore, we propose \textbf{GroundingGPT}, an end-to-end language enhanced multi-modal grounding model. It is designed to perform fine-grained grounding tasks for three modalities: image, video and audio. To enhance the model{'}s performance, we adopt a coarse-to-fine training strategy, utilizing a three-stage training approach to progressively enhance the model{'}s semantic awareness and fine-grained understanding capabilities. Additionally, we employ a diversified stage-specific dataset construction pipeline, developing a multi-modal, multi-granularity dataset tailored for training the model in different stages. Extensive experiments conducted on multiple multi-modal benchmarks demonstrate that our model achieves impressive fine-grained understanding of multi-modal inputs on grounding tasks while maintaining or improving its global comprehension capabilities. Our code, model, and dataset are available at https://github.com/lzw-lzw/GroundingGPT.", }
Multi-modal large language models (MLLMs) have demonstrated remarkable performance across various tasks. However, these models often prioritize capturing global information and overlook the importance of perceiving local information. This limitation hinders their ability to effectively understand fine-grained details and handle grounding tasks that necessitate nuanced comprehension. Although some recent works have made strides in this, they have primarily focused on single-modality inputs. Therefore, we propose \textbf{GroundingGPT}, an end-to-end language enhanced multi-modal grounding model. It is designed to perform fine-grained grounding tasks for three modalities: image, video and audio. To enhance the model{'}s performance, we adopt a coarse-to-fine training strategy, utilizing a three-stage training approach to progressively enhance the model{'}s semantic awareness and fine-grained understanding capabilities. Additionally, we employ a diversified stage-specific dataset construction pipeline, developing a multi-modal, multi-granularity dataset tailored for training the model in different stages. Extensive experiments conducted on multiple multi-modal benchmarks demonstrate that our model achieves impressive fine-grained understanding of multi-modal inputs on grounding tasks while maintaining or improving its global comprehension capabilities. Our code, model, and dataset are available at https://github.com/lzw-lzw/GroundingGPT.
[ "Li, Zhaowei", "Xu, Qi", "Zhang, Dong", "Song, Hang", "Cai, YiQing", "Qi, Qi", "Zhou, Ran", "Pan, Junting", "Li, Zefeng", "Tu, Vu", "Huang, Zhida", "Wang, Tao" ]
GroundingGPT: Language Enhanced Multi-modal Grounding Model
acl-long.360
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.360/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.361.bib
@inproceedings{eldifrawi-etal-2024-automated, title = "Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches", author = "Eldifrawi, Islam and Wang, Shengrui and Trabelsi, Amine", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.361", pages = "6679--6692", abstract = "Automated Fact-Checking (AFC) is the automated verification of claim accuracy. AFC is crucial in discerning truth from misinformation, especially given the huge amounts of content generated online daily. Current research focuses on predicting claim veracity through metadata analysis and language scrutiny, with an emphasis on justifying verdicts. This paper surveys recent methodologies, proposing a comprehensive taxonomy and presenting the evolution of research in that landscape. A comparative analysis of methodologies and future directions for improving fact-checking explainability are also discussed.", }
Automated Fact-Checking (AFC) is the automated verification of claim accuracy. AFC is crucial in discerning truth from misinformation, especially given the huge amounts of content generated online daily. Current research focuses on predicting claim veracity through metadata analysis and language scrutiny, with an emphasis on justifying verdicts. This paper surveys recent methodologies, proposing a comprehensive taxonomy and presenting the evolution of research in that landscape. A comparative analysis of methodologies and future directions for improving fact-checking explainability are also discussed.
[ "Eldifrawi, Islam", "Wang, Shengrui", "Trabelsi, Amine" ]
Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches
acl-long.361
Poster
2407.12853
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.361/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.362.bib
@inproceedings{mullov-etal-2024-decoupled, title = "Decoupled Vocabulary Learning Enables Zero-Shot Translation from Unseen Languages", author = "Mullov, Carlos and Pham, Quan and Waibel, Alexander", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.362", pages = "6693--6709", abstract = "Multilingual neural machine translation systems learn to map sentences of different languages into a common representation space. Intuitively, with a growing number of seen languages the encoder sentence representation grows more flexible and easily adaptable to new languages. In this work, we test this hypothesis by zero-shot translating from unseen languages. To deal with unknown vocabularies from unknown languages we propose a setup where we decouple learning of vocabulary and syntax, i.e. for each language we learn word representations in a separate step (using cross-lingual word embeddings), and then train to translate while keeping those word representations frozen. We demonstrate that this setup enables zero-shot translation from entirely unseen languages. Zero-shot translating with a model trained on Germanic and Romance languages we achieve scores of 42.6 BLEU for Portuguese-English and 20.7 BLEU for Russian-English on TED domain. We explore how this zero-shot translation capability develops with varying number of languages seen by the encoder. Lastly, we explore the effectiveness of our decoupled learning strategy for unsupervised machine translation. By exploiting our model{'}s zero-shot translation capability for iterative back-translation we attain near parity with a supervised setting.", }
Multilingual neural machine translation systems learn to map sentences of different languages into a common representation space. Intuitively, with a growing number of seen languages the encoder sentence representation grows more flexible and easily adaptable to new languages. In this work, we test this hypothesis by zero-shot translating from unseen languages. To deal with unknown vocabularies from unknown languages we propose a setup where we decouple learning of vocabulary and syntax, i.e. for each language we learn word representations in a separate step (using cross-lingual word embeddings), and then train to translate while keeping those word representations frozen. We demonstrate that this setup enables zero-shot translation from entirely unseen languages. Zero-shot translating with a model trained on Germanic and Romance languages we achieve scores of 42.6 BLEU for Portuguese-English and 20.7 BLEU for Russian-English on TED domain. We explore how this zero-shot translation capability develops with varying number of languages seen by the encoder. Lastly, we explore the effectiveness of our decoupled learning strategy for unsupervised machine translation. By exploiting our model{'}s zero-shot translation capability for iterative back-translation we attain near parity with a supervised setting.
[ "Mullov, Carlos", "Pham, Quan", "Waibel, Alex", "er" ]
Decoupled Vocabulary Learning Enables Zero-Shot Translation from Unseen Languages
acl-long.362
Poster
2408.02290
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.362/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.363.bib
@inproceedings{kong-etal-2024-swapmoe, title = "{S}wap{M}o{E}: Serving Off-the-shelf {M}o{E}-based Large Language Models with Tunable Memory Budget", author = "Kong, Rui and Li, Yuanchun and Feng, Qingtian and Wang, Weijun and Ye, Xiaozhou and Ouyang, Ye and Kong, Linghe and Liu, Yunxin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.363", pages = "6710--6720", abstract = "Mixture of experts (MoE) is a popular technique to improve the capacity of Large Language Models (LLMs) with conditionally-activated parallel experts. However, serving MoE models on memory-constrained devices is challenging due to the large parameter size. Typical solutions such as memory swapping or expert pruning may lead to significantly higher latency or severe accuracy loss. In this paper, we introduce SwapMoE, a framework for efficient serving of MoE-based large language models with tunable memory budgets. The main idea of SwapMoE is to keep a small dynamic set of important experts, namely Virtual Experts, in the main memory for inference, while seamlessly maintaining how the Virtual Experts map to the actual experts. Experiments have shown that SwapMoE can reduce the memory footprint while maintaining reasonable accuracy. For example, on text summarization tasks with Switch Transformer, SwapMoE can reduce the memory consumption from 14.2 GiB to 4.7 GiB, together with 50{\%} latency reduction and a slight Rouge-2 score drop of 0.041.", }
Mixture of experts (MoE) is a popular technique to improve the capacity of Large Language Models (LLMs) with conditionally-activated parallel experts. However, serving MoE models on memory-constrained devices is challenging due to the large parameter size. Typical solutions such as memory swapping or expert pruning may lead to significantly higher latency or severe accuracy loss. In this paper, we introduce SwapMoE, a framework for efficient serving of MoE-based large language models with tunable memory budgets. The main idea of SwapMoE is to keep a small dynamic set of important experts, namely Virtual Experts, in the main memory for inference, while seamlessly maintaining how the Virtual Experts map to the actual experts. Experiments have shown that SwapMoE can reduce the memory footprint while maintaining reasonable accuracy. For example, on text summarization tasks with Switch Transformer, SwapMoE can reduce the memory consumption from 14.2 GiB to 4.7 GiB, together with 50{\%} latency reduction and a slight Rouge-2 score drop of 0.041.
[ "Kong, Rui", "Li, Yuanchun", "Feng, Qingtian", "Wang, Weijun", "Ye, Xiaozhou", "Ouyang, Ye", "Kong, Linghe", "Liu, Yunxin" ]
SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with Tunable Memory Budget
acl-long.363
Poster
2308.15030
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.363/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.364.bib
@inproceedings{alonso-etal-2024-pixt3, title = "{P}ix{T}3: Pixel-based Table-To-Text Generation", author = "Alonso, I{\~n}igo and Agirre, Eneko and Lapata, Mirella", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.364", pages = "6721--6736", abstract = "Table-to-text generation involves generating appropriate textual descriptions given structured tabular data. It has attracted increasing attention in recent years thanks to the popularity of neural network models and the availability of large-scale datasets. A common feature across existing methods is their treatment of the input as a string, i.e., by employing linearization techniques that do not always preserve information in the table, are verbose, and lack space efficiency. We propose to rethink data-to-text generation as a visual recognition task, removing the need for rendering the input in a string format. We present PixT3, a multimodal table-to-text model that overcomes the challenges of linearization and input size limitations encountered by existing models. PixT3 is trained with a new self-supervised learning objective to reinforce table structure awareness and is applicable to open-ended and controlled generation settings. Experiments on the ToTTo and Logic2Text benchmarks show that PixT3 is competitive and, in some settings, superior to generators that operate solely on text.", }
Table-to-text generation involves generating appropriate textual descriptions given structured tabular data. It has attracted increasing attention in recent years thanks to the popularity of neural network models and the availability of large-scale datasets. A common feature across existing methods is their treatment of the input as a string, i.e., by employing linearization techniques that do not always preserve information in the table, are verbose, and lack space efficiency. We propose to rethink data-to-text generation as a visual recognition task, removing the need for rendering the input in a string format. We present PixT3, a multimodal table-to-text model that overcomes the challenges of linearization and input size limitations encountered by existing models. PixT3 is trained with a new self-supervised learning objective to reinforce table structure awareness and is applicable to open-ended and controlled generation settings. Experiments on the ToTTo and Logic2Text benchmarks show that PixT3 is competitive and, in some settings, superior to generators that operate solely on text.
[ "Alonso, I{\\~n}igo", "Agirre, Eneko", "Lapata, Mirella" ]
PixT3: Pixel-based Table-To-Text Generation
acl-long.364
Poster
2311.09808
[ "https://github.com/alonsoapp/pixt3" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.364/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.365.bib
@inproceedings{yona-etal-2024-narrowing, title = "Narrowing the Knowledge Evaluation Gap: Open-Domain Question Answering with Multi-Granularity Answers", author = "Yona, Gal and Aharoni, Roee and Geva, Mor", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.365", pages = "6737--6751", abstract = "Factual questions typically can be answered correctly at different levels of granularity. For example, both {``}August 4, 1961{''} and {``}1961{''} are correct answers to the question {``}When was Barack Obama born?{''}. Standard question answering (QA) evaluation protocols, however, do not explicitly take this into account and compare a predicted answer against answers of a single granularity level. In this work, we propose GRANOLA QA, a novel evaluation setting where a predicted answer is evaluated in terms of accuracy and informativeness against a set of multi-granularity answers. We present a simple methodology for enriching existing datasets with multi-granularity answers, and create GRANOLA-EQ, a multi-granularity version of the EntityQuestions dataset. We evaluate a range of decoding methods on GRANOLA-EQ, including a new algorithm, called Decoding with Response Aggregation (DRAG), that is geared towards aligning the response granularity with the model{'}s uncertainty. Our experiments show that large language models with standard decoding tend to generate specific answers, which are often incorrect. In contrast, when evaluated on multi-granularity answers, DRAG yields a nearly 20 point increase in accuracy on average, which further increases for rare entities. Overall, this reveals that standard evaluation and decoding schemes may significantly underestimate the knowledge encapsulated in LMs.", }
Factual questions typically can be answered correctly at different levels of granularity. For example, both {``}August 4, 1961{''} and {``}1961{''} are correct answers to the question {``}When was Barack Obama born?{''}. Standard question answering (QA) evaluation protocols, however, do not explicitly take this into account and compare a predicted answer against answers of a single granularity level. In this work, we propose GRANOLA QA, a novel evaluation setting where a predicted answer is evaluated in terms of accuracy and informativeness against a set of multi-granularity answers. We present a simple methodology for enriching existing datasets with multi-granularity answers, and create GRANOLA-EQ, a multi-granularity version of the EntityQuestions dataset. We evaluate a range of decoding methods on GRANOLA-EQ, including a new algorithm, called Decoding with Response Aggregation (DRAG), that is geared towards aligning the response granularity with the model{'}s uncertainty. Our experiments show that large language models with standard decoding tend to generate specific answers, which are often incorrect. In contrast, when evaluated on multi-granularity answers, DRAG yields a nearly 20 point increase in accuracy on average, which further increases for rare entities. Overall, this reveals that standard evaluation and decoding schemes may significantly underestimate the knowledge encapsulated in LMs.
[ "Yona, Gal", "Aharoni, Roee", "Geva, Mor" ]
Narrowing the Knowledge Evaluation Gap: Open-Domain Question Answering with Multi-Granularity Answers
acl-long.365
Poster
2401.04695
[ "" ]
https://huggingface.co/papers/2401.04695
3
9
0
3
https://aclanthology.org/2024.acl-long.365/
[]
[ "google/granola-entity-questions" ]
[]
1
https://aclanthology.org/2024.acl-long.366.bib
@inproceedings{rice-etal-2024-tams, title = "{TAMS}: Translation-Assisted Morphological Segmentation", author = "Rice, Enora and Marashian, Ali and Gessler, Luke and Palmer, Alexis and Wense, Katharina", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.366", pages = "6752--6765", abstract = "Canonical morphological segmentation is the process of analyzing words into the standard (\textit{aka} underlying) forms of their constituent morphemes. This is a core task in endangered language documentation, and NLP systems have the potential to dramatically speed up this process. In typical language documentation settings, training data for canonical morpheme segmentation is scarce, making it difficult to train high quality models. However, translation data is often much more abundant, and, in this work, we present a method that attempts to leverage translation data in the canonical segmentation task. We propose a character-level sequence-to-sequence model that incorporates representations of translations obtained from pretrained high-resource monolingual language models as an additional signal. Our model outperforms the baseline in a super-low resource setting but yields mixed results on training splits with more data. Additionally, we find that we can achieve strong performance even without needing difficult-to-obtain word level alignments. While further work is needed to make translations useful in higher-resource settings, our model shows promise in severely resource-constrained settings.", }
Canonical morphological segmentation is the process of analyzing words into the standard (\textit{aka} underlying) forms of their constituent morphemes. This is a core task in endangered language documentation, and NLP systems have the potential to dramatically speed up this process. In typical language documentation settings, training data for canonical morpheme segmentation is scarce, making it difficult to train high quality models. However, translation data is often much more abundant, and, in this work, we present a method that attempts to leverage translation data in the canonical segmentation task. We propose a character-level sequence-to-sequence model that incorporates representations of translations obtained from pretrained high-resource monolingual language models as an additional signal. Our model outperforms the baseline in a super-low resource setting but yields mixed results on training splits with more data. Additionally, we find that we can achieve strong performance even without needing difficult-to-obtain word level alignments. While further work is needed to make translations useful in higher-resource settings, our model shows promise in severely resource-constrained settings.
[ "Rice, Enora", "Marashian, Ali", "Gessler, Luke", "Palmer, Alexis", "Wense, Katharina" ]
TAMS: Translation-Assisted Morphological Segmentation
acl-long.366
Poster
2403.14840
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.366/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.367.bib
@inproceedings{khan-etal-2024-xcodeeval, title = "{XC}ode{E}val: An Execution-based Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval", author = "Khan, Mohammad Abdullah Matin and Bari, M Saiful and Long, Do and Wang, Weishi and Parvez, Md Rizwan and Joty, Shafiq", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.367", pages = "6766--6805", abstract = "Recently, pre-trained large language models (LLMs) have shown impressive abilities in generating codes from natural language descriptions, repairing buggy codes, translating codes between languages, and retrieving relevant code segments. However, the evaluation of these models has often been performed in a scattered way on only one or two specific tasks, in a few languages, at a partial granularity (e.g., function) level, and in many cases without proper training data. Even more concerning is that in most cases the evaluation of generated codes has been done in terms of mere lexical overlap with a reference code rather than actual execution. We introduce *xCodeEval*, the largest executable multilingual multitask benchmark to date consisting of 25 M document-level coding examples (16.5 B tokens) from about 7.5 K unique problems covering up to 11 programming languages with execution-level parallelism. It features a total of 7 tasks involving code understanding, generation, translation and retrieval. *xCodeEval* adopts an execution-based evaluation and offers a multilingual code execution engine, *ExecEval* that supports unit test based execution in all the 11 languages. To address the challenge of balancing the distributions of text-code samples over multiple attributes in validation/test sets, we propose a novel data splitting and a data selection schema based on the geometric mean and graph-theoretic principle. Our experiments with OpenAI{'}s LLMs (zero-shot) and open-LLMs (zero-shot and fine-tuned) on the tasks and languages demonstrate *xCodeEval* to be quite challenging given the current advancements in language models.", }
Recently, pre-trained large language models (LLMs) have shown impressive abilities in generating codes from natural language descriptions, repairing buggy codes, translating codes between languages, and retrieving relevant code segments. However, the evaluation of these models has often been performed in a scattered way on only one or two specific tasks, in a few languages, at a partial granularity (e.g., function) level, and in many cases without proper training data. Even more concerning is that in most cases the evaluation of generated codes has been done in terms of mere lexical overlap with a reference code rather than actual execution. We introduce *xCodeEval*, the largest executable multilingual multitask benchmark to date consisting of 25 M document-level coding examples (16.5 B tokens) from about 7.5 K unique problems covering up to 11 programming languages with execution-level parallelism. It features a total of 7 tasks involving code understanding, generation, translation and retrieval. *xCodeEval* adopts an execution-based evaluation and offers a multilingual code execution engine, *ExecEval* that supports unit test based execution in all the 11 languages. To address the challenge of balancing the distributions of text-code samples over multiple attributes in validation/test sets, we propose a novel data splitting and a data selection schema based on the geometric mean and graph-theoretic principle. Our experiments with OpenAI{'}s LLMs (zero-shot) and open-LLMs (zero-shot and fine-tuned) on the tasks and languages demonstrate *xCodeEval* to be quite challenging given the current advancements in language models.
[ "Khan, Mohammad Abdullah Matin", "Bari, M Saiful", "Long, Do", "Wang, Weishi", "Parvez, Md Rizwan", "Joty, Shafiq" ]
XCodeEval: An Execution-based Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval
acl-long.367
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.367/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.368.bib
@inproceedings{tan-etal-2024-proxyqa, title = "{P}roxy{QA}: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models", author = "Tan, Haochen and Guo, Zhijiang and Shi, Zhan and Xu, Lu and Liu, Zhili and Feng, Yunlong and Li, Xiaoguang and Wang, Yasheng and Shang, Lifeng and Liu, Qun and Song, Linqi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.368", pages = "6806--6827", abstract = "Large Language Models (LLMs) have succeeded remarkably in understanding long-form contents. However, exploring their capability for generating long-form contents, such as reports and articles, has been relatively unexplored and inadequately assessed by existing benchmarks. The prevalent evaluation methods, which predominantly rely on crowdsourcing, are recognized for their labor-intensive nature and lack of efficiency, whereas automated metrics, such as the ROUGE score, demonstrate discordance with human judgment criteria. In this paper, we propose ProxyQA, an innovative framework dedicated to assessing long-text generation. ProxyQA comprises in-depth human-curated meta-questions spanning various domains, each accompanied by specific proxy-questions with pre-annotated answers. LLMs are tasked to generate extensive content in response to these meta-questions, by engaging an evaluator and incorporating the generated texts as contextual background, ProxyQA assesses the generated content{'}s quality through the evaluator{'}s accuracy in addressing the proxy-questions. We examine multiple LLMs, emphasizing ProxyQA{'}s demanding nature as a high-quality assessment tool. Human evaluation demonstrates that the proxy-question method is notably self-consistent and aligns closely with human evaluative standards. The dataset and leaderboard is available at \url{https://proxy-qa.com}.", }
Large Language Models (LLMs) have succeeded remarkably in understanding long-form content. However, their capability for generating long-form content, such as reports and articles, remains relatively unexplored and inadequately assessed by existing benchmarks. The prevalent evaluation methods, which predominantly rely on crowdsourcing, are recognized for their labor-intensive nature and lack of efficiency, whereas automated metrics, such as the ROUGE score, demonstrate discordance with human judgment criteria. In this paper, we propose ProxyQA, an innovative framework dedicated to assessing long-text generation. ProxyQA comprises in-depth human-curated meta-questions spanning various domains, each accompanied by specific proxy-questions with pre-annotated answers. LLMs are tasked to generate extensive content in response to these meta-questions. By engaging an evaluator and incorporating the generated texts as contextual background, ProxyQA assesses the generated content's quality through the evaluator's accuracy in addressing the proxy-questions. We examine multiple LLMs, emphasizing ProxyQA's demanding nature as a high-quality assessment tool. Human evaluation demonstrates that the proxy-question method is notably self-consistent and aligns closely with human evaluative standards. The dataset and leaderboard are available at https://proxy-qa.com.
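The scoring idea behind ProxyQA can be summarized compactly: generate long-form text for a meta-question, then measure how accurately an evaluator answers the pre-annotated proxy-questions using that text as context. Below is a hedged sketch; `ask_evaluator` and the exact-match comparison are illustrative placeholders, not the paper's implementation (which uses an LLM evaluator and more robust answer checking).

```python
from typing import Callable

def proxyqa_score(
    generated_text: str,
    proxy_questions: list[tuple[str, str]],    # (question, pre-annotated answer)
    ask_evaluator: Callable[[str, str], str],  # (context, question) -> answer
) -> float:
    """Score long-form output by evaluator accuracy on the proxy-questions."""
    correct = 0
    for question, gold in proxy_questions:
        predicted = ask_evaluator(generated_text, question)
        # Exact match keeps the sketch self-contained; the real framework
        # uses more robust answer matching.
        correct += int(predicted.strip().lower() == gold.strip().lower())
    return correct / len(proxy_questions)

# Toy stand-in evaluator: answers correctly only if the fact appears in context.
def toy_evaluator(context: str, question: str) -> str:
    return "1989" if "1989" in context else "unknown"

score = proxyqa_score(
    "The Berlin Wall fell in 1989, reshaping European politics.",
    [("In which year did the Berlin Wall fall?", "1989")],
    toy_evaluator,
)
print(score)  # 1.0
```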
[ "Tan, Haochen", "Guo, Zhijiang", "Shi, Zhan", "Xu, Lu", "Liu, Zhili", "Feng, Yunlong", "Li, Xiaoguang", "Wang, Yasheng", "Shang, Lifeng", "Liu, Qun", "Song, Linqi" ]
ProxyQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models
acl-long.368
Poster
2401.15042
[ "https://github.com/namco0816/proxyqa" ]
https://huggingface.co/papers/2401.15042
1
0
0
10
https://aclanthology.org/2024.acl-long.368/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.369.bib
@inproceedings{monea-etal-2024-glitch, title = "A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia", author = "Monea, Giovanni and Peyrard, Maxime and Josifoski, Martin and Chaudhary, Vishrav and Eisner, Jason and Kiciman, Emre and Palangi, Hamid and Patra, Barun and West, Robert", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.369", pages = "6828--6844", abstract = "Large language models (LLMs) have an impressive ability to draw on novel information supplied in their context. Yet the mechanisms underlying this contextual grounding remain unknown, especially in situations where contextual information contradicts factual knowledge stored in the parameters, which LLMs also excel at recalling. Favoring the contextual information is critical for retrieval-augmented generation methods, which enrich the context with up-to-date information, hoping that grounding can rectify outdated or noisy stored knowledge. We present a novel method to study grounding abilities using Fakepedia, a novel dataset of counterfactual texts constructed to clash with a model{'}s internal parametric knowledge. In this study, we introduce Fakepedia, a counterfactual dataset designed to evaluate grounding abilities when the internal parametric knowledge clashes with the contextual information. We benchmark various LLMs with Fakepedia and conduct a causal mediation analysis of LLM components when answering Fakepedia queries, based on our Masked Grouped Causal Tracing (MGCT) method. Through this analysis, we identify distinct computational patterns between grounded and ungrounded responses. We finally demonstrate that distinguishing grounded from ungrounded responses is achievable through computational analysis alone. Our results, together with existing findings about factual recall mechanisms, provide a coherent narrative of how grounding and factual recall mechanisms interact within LLMs.", }
Large language models (LLMs) have an impressive ability to draw on novel information supplied in their context. Yet the mechanisms underlying this contextual grounding remain unknown, especially in situations where contextual information contradicts factual knowledge stored in the parameters, which LLMs also excel at recalling. Favoring the contextual information is critical for retrieval-augmented generation methods, which enrich the context with up-to-date information, hoping that grounding can rectify outdated or noisy stored knowledge. In this study, we introduce Fakepedia, a dataset of counterfactual texts constructed to clash with a model's internal parametric knowledge, designed to evaluate grounding abilities when parametric knowledge conflicts with contextual information. We benchmark various LLMs with Fakepedia and conduct a causal mediation analysis of LLM components when answering Fakepedia queries, based on our Masked Grouped Causal Tracing (MGCT) method. Through this analysis, we identify distinct computational patterns between grounded and ungrounded responses. We finally demonstrate that distinguishing grounded from ungrounded responses is achievable through computational analysis alone. Our results, together with existing findings about factual recall mechanisms, provide a coherent narrative of how grounding and factual recall mechanisms interact within LLMs.
[ "Monea, Giovanni", "Peyrard, Maxime", "Josifoski, Martin", "Chaudhary, Vishrav", "Eisner, Jason", "Kiciman, Emre", "Palangi, Hamid", "Patra, Barun", "West, Robert" ]
A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia
acl-long.369
Poster
2312.02073
[ "https://github.com/epfl-dlab/llm-grounding-analysis" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.369/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.370.bib
@inproceedings{fan-etal-2024-muffin, title = "Muffin or {C}hihuahua? Challenging Multimodal Large Language Models with Multipanel {VQA}", author = "Fan, Yue and Gu, Jing and Zhou, Kaiwen and Yan, Qianqi and Jiang, Shan and Kuo, Ching-Chen and Zhao, Yang and Guan, Xinze and Wang, Xin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.370", pages = "6845--6863", abstract = "Multipanel images, commonly seen as web screenshots, posters, etc., pervade our daily lives. These images, characterized by their composition of multiple subfigures in distinct layouts, effectively convey information to people. Toward building advanced multimodal AI applications, such as agents that understand complex scenes and navigate through webpages, the skill of multipanel visual reasoning is essential, and a comprehensive evaluation of models in this regard is important. Therefore, we introduce Multipanel Visual Question Answering (MultipanelVQA), a novel benchmark comprising 6,600 triplets of questions, answers, and multipanel images that specifically challenge models in comprehending multipanel images. Our evaluation shows that questions in the MultipanelVQA benchmark pose significant challenges to the state-of-the-art Multimodal Large Language Models (MLLMs) tested, even though humans can attain approximately 99{\%} accuracy on these questions. Distinctively, the MultipanelVQA benchmark features synthetically generated multipanel images specifically crafted to isolate and assess the impact of various factors, such as the layout, on MLLMs{'} multipanel image comprehension abilities. As a result, in addition to benchmarking the capabilities of MLLMs in understanding multipanel images, we analyze various factors of the multipanel image that affect MLLMs{'} performance with synthetic data and offer insights for enhancement.", }
Multipanel images, commonly seen as web screenshots, posters, etc., pervade our daily lives. These images, characterized by their composition of multiple subfigures in distinct layouts, effectively convey information to people. Toward building advanced multimodal AI applications, such as agents that understand complex scenes and navigate through webpages, the skill of multipanel visual reasoning is essential, and a comprehensive evaluation of models in this regard is important. Therefore, we introduce Multipanel Visual Question Answering (MultipanelVQA), a novel benchmark comprising 6,600 triplets of questions, answers, and multipanel images that specifically challenge models in comprehending multipanel images. Our evaluation shows that questions in the MultipanelVQA benchmark pose significant challenges to the state-of-the-art Multimodal Large Language Models (MLLMs) tested, even though humans can attain approximately 99% accuracy on these questions. Distinctively, the MultipanelVQA benchmark features synthetically generated multipanel images specifically crafted to isolate and assess the impact of various factors, such as the layout, on MLLMs' multipanel image comprehension abilities. As a result, in addition to benchmarking the capabilities of MLLMs in understanding multipanel images, we analyze various factors of the multipanel image that affect MLLMs' performance with synthetic data and offer insights for enhancement.
[ "Fan, Yue", "Gu, Jing", "Zhou, Kaiwen", "Yan, Qianqi", "Jiang, Shan", "Kuo, Ching-Chen", "Zhao, Yang", "Guan, Xinze", "Wang, Xin" ]
Muffin or Chihuahua? Challenging Multimodal Large Language Models with Multipanel VQA
acl-long.370
Poster
2401.15847
[ "" ]
https://huggingface.co/papers/2401.15847
2
2
0
8
https://aclanthology.org/2024.acl-long.370/
[]
[ "yfan1997/MultipanelVQA_real-world", "yfan1997/MultipanelVQA_synthetic" ]
[]
1
https://aclanthology.org/2024.acl-long.371.bib
@inproceedings{he-etal-2024-webvoyager, title = "{W}eb{V}oyager: Building an End-to-End Web Agent with Large Multimodal Models", author = "He, Hongliang and Yao, Wenlin and Ma, Kaixin and Yu, Wenhao and Dai, Yong and Zhang, Hongming and Lan, Zhenzhong and Yu, Dong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.371", pages = "6864--6890", abstract = "The rapid advancement of large language models (LLMs) has led to a new era marked by the development of autonomous applications in real-world scenarios, which drives innovation in creating advanced web agents. Existing web agents typically only handle one input modality and are evaluated only in simplified web simulators or static web snapshots, greatly limiting their applicability in real-world scenarios. To bridge this gap, we introduce WebVoyager, an innovative Large Multimodal Model (LMM) powered web agent that can complete user instructions end-to-end by interacting with real-world websites. Moreover, we establish a new benchmark by compiling real-world tasks from 15 popular websites and introduce an automatic evaluation protocol leveraging multimodal understanding abilities of GPT-4V to evaluate open-ended web agents. We show that WebVoyager achieves a 59.1{\%} task success rate on our benchmark, significantly surpassing the performance of both GPT-4 (All Tools) and the WebVoyager (text-only) setups, underscoring the exceptional capability of WebVoyager. The proposed automatic evaluation metric achieves 85.3{\%} agreement with human judgment, indicating its effectiveness in providing reliable and accurate assessments of web agents.", }
The rapid advancement of large language models (LLMs) has led to a new era marked by the development of autonomous applications in real-world scenarios, which drives innovation in creating advanced web agents. Existing web agents typically only handle one input modality and are evaluated only in simplified web simulators or static web snapshots, greatly limiting their applicability in real-world scenarios. To bridge this gap, we introduce WebVoyager, an innovative Large Multimodal Model (LMM) powered web agent that can complete user instructions end-to-end by interacting with real-world websites. Moreover, we establish a new benchmark by compiling real-world tasks from 15 popular websites and introduce an automatic evaluation protocol leveraging multimodal understanding abilities of GPT-4V to evaluate open-ended web agents. We show that WebVoyager achieves a 59.1% task success rate on our benchmark, significantly surpassing the performance of both GPT-4 (All Tools) and the WebVoyager (text-only) setups, underscoring the exceptional capability of WebVoyager. The proposed automatic evaluation metric achieves 85.3% agreement with human judgment, indicating its effectiveness in providing reliable and accurate assessments of web agents.
[ "He, Hongliang", "Yao, Wenlin", "Ma, Kaixin", "Yu, Wenhao", "Dai, Yong", "Zhang, Hongming", "Lan, Zhenzhong", "Yu, Dong" ]
WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models
acl-long.371
Poster
2401.13919
[ "https://github.com/minorjerry/webvoyager" ]
https://huggingface.co/papers/2401.13919
5
23
3
8
https://aclanthology.org/2024.acl-long.371/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.372.bib
@inproceedings{li-etal-2024-translation, title = "Translation-based Lexicalization Generation and Lexical Gap Detection: Application to Kinship Terms", author = "Li, Senyu and Hauer, Bradley and Shi, Ning and Kondrak, Grzegorz", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.372", pages = "6891--6900", abstract = "Constructing lexicons with explicitly identified lexical gaps is a vital part of building multilingual lexical resources. Prior work has leveraged bilingual dictionaries and linguistic typologies for semi-automatic identification of lexical gaps. Instead, we propose a generally-applicable algorithmic method to automatically generate concept lexicalizations, which is based on machine translation and hypernymy relations between concepts. The absence of a lexicalization implies a lexical gap. We apply our method to kinship terms, which make a suitable case study because of their explicit definitions and regular structure. Empirical evaluations demonstrate that our approach yields higher accuracy than BabelNet and ChatGPT. Our error analysis indicates that enhancing the quality of translations can further improve the accuracy of our method.", }
Constructing lexicons with explicitly identified lexical gaps is a vital part of building multilingual lexical resources. Prior work has leveraged bilingual dictionaries and linguistic typologies for semi-automatic identification of lexical gaps. Instead, we propose a generally-applicable algorithmic method to automatically generate concept lexicalizations, which is based on machine translation and hypernymy relations between concepts. The absence of a lexicalization implies a lexical gap. We apply our method to kinship terms, which make a suitable case study because of their explicit definitions and regular structure. Empirical evaluations demonstrate that our approach yields higher accuracy than BabelNet and ChatGPT. Our error analysis indicates that enhancing the quality of translations can further improve the accuracy of our method.
[ "Li, Senyu", "Hauer, Bradley", "Shi, Ning", "Kondrak, Grzegorz" ]
Translation-based Lexicalization Generation and Lexical Gap Detection: Application to Kinship Terms
acl-long.372
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.372/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.373.bib
@inproceedings{dutt-etal-2024-leveraging, title = "Leveraging Machine-Generated Rationales to Facilitate Social Meaning Detection in Conversations", author = "Dutt, Ritam and Wu, Zhen and Shi, Jiaxin and Sheth, Divyanshu and Gupta, Prakhar and Rose, Carolyn", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.373", pages = "6901--6929", abstract = "We present a generalizable classification approach that leverages Large Language Models (LLMs) to facilitate the detection of implicitly encoded social meaning in conversations. We design a multi-faceted prompt to extract a textual explanation of the reasoning that connects visible cues to underlying social meanings. These extracted explanations or rationales serve as augmentations to the conversational text to facilitate dialogue understanding and transfer. Our empirical results over 2,340 experimental settings demonstrate the significant positive impact of adding these rationales. Our findings hold true for in-domain classification, zero-shot, and few-shot domain transfer for two different social meaning detection tasks, each spanning two different corpora.", }
We present a generalizable classification approach that leverages Large Language Models (LLMs) to facilitate the detection of implicitly encoded social meaning in conversations. We design a multi-faceted prompt to extract a textual explanation of the reasoning that connects visible cues to underlying social meanings. These extracted explanations or rationales serve as augmentations to the conversational text to facilitate dialogue understanding and transfer. Our empirical results over 2,340 experimental settings demonstrate the significant positive impact of adding these rationales. Our findings hold true for in-domain classification, zero-shot, and few-shot domain transfer for two different social meaning detection tasks, each spanning two different corpora.
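The augmentation step described here reduces to concatenating each utterance with its machine-generated rationale before classification. A minimal sketch follows; `get_rationale` is a hypothetical stand-in for the paper's multi-faceted LLM prompt.

```python
def get_rationale(utterance: str) -> str:
    # Hypothetical placeholder: a real system would prompt an LLM to explain
    # which visible cues in the utterance signal the underlying social meaning.
    return "The speaker softens the request with a question form, signaling politeness."

def build_augmented_input(utterance: str) -> str:
    """Concatenate the utterance with its rationale for the downstream classifier."""
    return f"Utterance: {utterance}\nRationale: {get_rationale(utterance)}"

print(build_augmented_input("Could you maybe take a look at this when you get a chance?"))
```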
[ "Dutt, Ritam", "Wu, Zhen", "Shi, Jiaxin", "Sheth, Divyanshu", "Gupta, Prakhar", "Rose, Carolyn" ]
Leveraging Machine-Generated Rationales to Facilitate Social Meaning Detection in Conversations
acl-long.373
Poster
2406.19545
[ "https://github.com/shorit/ratdial" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.373/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.374.bib
@inproceedings{devasier-etal-2024-robust, title = "Robust Frame-Semantic Models with Lexical Unit Trees and Negative Samples", author = "Devasier, Jacob and Gurjar, Yogesh and Li, Chengkai", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.374", pages = "6930--6941", abstract = "We present novel advancements in frame-semantic parsing, specifically focusing on target identification and frame identification. Our target identification model employs a novel prefix tree modification to enable robust support for multi-word lexical units, resulting in a coverage of 99.4{\%} of the targets in the FrameNet 1.7 fulltext annotations. It utilizes a RoBERTa-based filter to achieve an F1 score of 0.775, surpassing the previous state-of-the-art solution by +0.012. For frame identification, we introduce a modification to the standard multiple-choice classification paradigm by incorporating additional negative frames for targets with limited candidate frames, resulting in a +0.014 accuracy improvement over the frame-only model of FIDO, the previous state-of-the-art system, and +0.002 over its full system. Our approach significantly enhances performance on rare frames, exhibiting an improvement of +0.044 over FIDO{'}s accuracy on frames with 5 or fewer samples, and on under-utilized frames, with an improvement of +0.139 on targets with a single candidate frame. Overall, our contributions address critical challenges and advance the state-of-the-art in frame-semantic parsing.", }
We present novel advancements in frame-semantic parsing, specifically focusing on target identification and frame identification. Our target identification model employs a novel prefix tree modification to enable robust support for multi-word lexical units, resulting in a coverage of 99.4% of the targets in the FrameNet 1.7 fulltext annotations. It utilizes a RoBERTa-based filter to achieve an F1 score of 0.775, surpassing the previous state-of-the-art solution by +0.012. For frame identification, we introduce a modification to the standard multiple-choice classification paradigm by incorporating additional negative frames for targets with limited candidate frames, resulting in a +0.014 accuracy improvement over the frame-only model of FIDO, the previous state-of-the-art system, and +0.002 over its full system. Our approach significantly enhances performance on rare frames, exhibiting an improvement of +0.044 over FIDO's accuracy on frames with 5 or fewer samples, and on under-utilized frames, with an improvement of +0.139 on targets with a single candidate frame. Overall, our contributions address critical challenges and advance the state-of-the-art in frame-semantic parsing.
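The prefix-tree idea for multi-word target identification is easy to make concrete: insert every lexical unit into a trie over tokens, then scan the sentence for spans that reach a unit-final node. The sketch below covers only contiguous, exact-match units, a simplification relative to the paper's modification; the candidate spans it emits would then be pruned by a filter such as the RoBERTa-based one.

```python
class LUTrie:
    """Prefix tree over tokens for matching (multi-word) lexical units."""
    def __init__(self):
        self.children: dict[str, "LUTrie"] = {}
        self.is_unit = False

    def insert(self, tokens: list[str]) -> None:
        node = self
        for tok in tokens:
            node = node.children.setdefault(tok, LUTrie())
        node.is_unit = True

def find_targets(trie: LUTrie, tokens: list[str]) -> list[tuple[int, int]]:
    """Return (start, end) spans (end exclusive) that match known lexical units."""
    spans = []
    for i in range(len(tokens)):
        node, j = trie, i
        while j < len(tokens) and tokens[j] in node.children:
            node = node.children[tokens[j]]
            j += 1
            if node.is_unit:
                spans.append((i, j))
    return spans

trie = LUTrie()
for lu in [["give", "up"], ["take", "into", "account"], ["account"]]:
    trie.insert(lu)
print(find_targets(trie, "they take into account every detail".split()))
# [(1, 4), (3, 4)] -- overlapping candidates; a learned filter would prune these.
```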
[ "Devasier, Jacob", "Gurjar, Yogesh", "Li, Chengkai" ]
Robust Frame-Semantic Models with Lexical Unit Trees and Negative Samples
acl-long.374
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.374/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.375.bib
@inproceedings{yang-etal-2024-harnessing, title = "Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation", author = "Yang, Yuan and Xiong, Siheng and Payani, Ali and Shareghi, Ehsan and Fekri, Faramarz", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.375", pages = "6942--6959", abstract = "Advancements in logical reasoning, utilizing LLMs to convert natural language into logical symbolism, combined with the use of external theorem provers, have repositioned the symbolic approach as a central point of interest. The main challenge within this paradigm lies in the LLMs{'} capability to accurately translate natural language (NL) statements into first-order-logic (FOL) expressions. Although LLMs have shown notable success, there remains a gap in understanding the limitations and challenges they encounter in NL-FOL translation. This is primarily due to the absence of datasets and evaluation test beds at the required fine-grained level. We present MALLS, a dataset of 28K diverse and verified sentence-level NL-FOL pairs collected from GPT4. We utilize a combined strategy of FOL rule parsing, human annotation, and automatic filtering to ensure quality. We also present LogicLLaMA, a LLaMA2-7B/13B fine-tuned on MALLS for NL-FOL translation, which can be used standalone or to correct previously generated rules by GPT3.5 after being further fine-tuned via a novel reinforcement learning with human feedback (RLHF) framework. We benchmark a wide range of LLMs on MALLS and previous datasets, highlighting weaknesses in them in NL-FOL translation and demonstrating the advantages of MALLS. We also show that LogicLLaMA achieves GPT4-level performance and can generalize to other datasets. Project repo is available at https://github.com/gblackout/LogicLLaMA", }
Advancements in logical reasoning, utilizing LLMs to convert natural language into logical symbolism, combined with the use of external theorem provers, have repositioned the symbolic approach as a central point of interest. The main challenge within this paradigm lies in the LLMs' capability to accurately translate natural language (NL) statements into first-order logic (FOL) expressions. Although LLMs have shown notable success, there remains a gap in understanding the limitations and challenges they encounter in NL-FOL translation. This is primarily due to the absence of datasets and evaluation test beds at the required fine-grained level. We present MALLS, a dataset of 28K diverse and verified sentence-level NL-FOL pairs collected from GPT4. We utilize a combined strategy of FOL rule parsing, human annotation, and automatic filtering to ensure quality. We also present LogicLLaMA, a LLaMA2-7B/13B fine-tuned on MALLS for NL-FOL translation, which can be used standalone or to correct previously generated rules by GPT3.5 after being further fine-tuned via a novel reinforcement learning with human feedback (RLHF) framework. We benchmark a wide range of LLMs on MALLS and previous datasets, highlighting their weaknesses in NL-FOL translation and demonstrating the advantages of MALLS. We also show that LogicLLaMA achieves GPT4-level performance and can generalize to other datasets. The project repo is available at https://github.com/gblackout/LogicLLaMA.
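One ingredient mentioned above, automatic filtering of candidate FOL rules, can be illustrated with a crude well-formedness check: reject strings that use symbols outside a small FOL vocabulary or have unbalanced parentheses. This sketch is purely illustrative and far weaker than real FOL rule parsing.

```python
import re

# Allowed tokens: quantifiers, connectives, identifiers, parentheses, commas.
FOL_TOKEN = re.compile(r"∀|∃|¬|∧|∨|→|↔|[A-Za-z_][A-Za-z0-9_]*|[(),]")

def plausible_fol(formula: str) -> bool:
    """A weak syntactic filter: known symbols only, balanced parentheses."""
    joined = "".join(FOL_TOKEN.findall(formula))
    if joined != formula.replace(" ", ""):
        return False  # contains characters outside the FOL vocabulary
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(plausible_fol("∀x (Cat(x) → Animal(x))"))  # True
print(plausible_fol("∀x (Cat(x → Animal(x))"))   # False (unbalanced)
```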
[ "Yang, Yuan", "Xiong, Siheng", "Payani, Ali", "Shareghi, Ehsan", "Fekri, Faramarz" ]
Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation
acl-long.375
Poster
2305.15541
[ "https://github.com/gblackout/logicllama" ]
https://huggingface.co/papers/2305.15541
0
0
0
5
https://aclanthology.org/2024.acl-long.375/
[ "yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0" ]
[]
[]
1
https://aclanthology.org/2024.acl-long.376.bib
@inproceedings{jain-etal-2024-lightweight, title = "Lightweight reranking for language model generations", author = "Jain, Siddhartha and Ma, Xiaofei and Deoras, Anoop and Xiang, Bing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.376", pages = "6960--6984", abstract = "Large Language Models (LLMs) can exhibit considerable variation in the quality of their sampled outputs. Reranking and selecting the best generation from the sampled set is a popular way of obtaining strong gains in generation quality. In this paper, we present a novel approach for reranking LLM generations. Unlike other techniques that might involve additional inferences or training a specialized reranker, our approach relies on easy to compute pairwise statistics between the generations that have minimal compute overhead. We show that our approach can be formalized as an extension of self-consistency and analyze its performance in that framework, theoretically as well as via simulations. We show strong improvements for selecting the best k generations for code generation tasks as well as robust improvements for the best generation for the tasks of autoformalization, summarization, and translation. While our approach only assumes black-box access to LLMs, we show that additional access to token probabilities can improve performance even further.", }
Large Language Models (LLMs) can exhibit considerable variation in the quality of their sampled outputs. Reranking and selecting the best generation from the sampled set is a popular way of obtaining strong gains in generation quality. In this paper, we present a novel approach for reranking LLM generations. Unlike other techniques that might involve additional inferences or training a specialized reranker, our approach relies on easy-to-compute pairwise statistics between the generations, incurring minimal compute overhead. We show that our approach can be formalized as an extension of self-consistency and analyze its performance in that framework, theoretically as well as via simulations. We show strong improvements for selecting the best k generations for code generation tasks as well as robust improvements for the best generation for the tasks of autoformalization, summarization, and translation. While our approach only assumes black-box access to LLMs, we show that additional access to token probabilities can improve performance even further.
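The essence of the method, reranking by cheap pairwise agreement between samples, can be sketched in a few lines: score each generation by its mean similarity to the other samples and return them in descending order. Unigram Jaccard similarity is an arbitrary illustrative choice here; the paper studies several pairwise statistics.

```python
def jaccard(a: str, b: str) -> float:
    """Unigram Jaccard similarity between two whitespace-tokenized strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def rerank(generations: list[str]) -> list[str]:
    """Order generations by mean pairwise similarity to the other samples."""
    scores = []
    for i, gi in enumerate(generations):
        sims = [jaccard(gi, gj) for j, gj in enumerate(generations) if j != i]
        scores.append(sum(sims) / len(sims))
    order = sorted(range(len(generations)), key=lambda i: scores[i], reverse=True)
    return [generations[i] for i in order]

samples = [
    "def add(a, b): return a + b",
    "def add(a, b): return a - b",   # buggy outlier
    "def add(x, y): return x + y",
]
print(rerank(samples)[0])  # the "a + b" variant, supported by its semantic twin
```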
[ "Jain, Siddhartha", "Ma, Xiaofei", "Deoras, Anoop", "Xiang, Bing" ]
Lightweight reranking for language model generations
acl-long.376
Oral
2307.06857
[ "" ]
https://huggingface.co/papers/2307.06857
0
9
0
4
https://aclanthology.org/2024.acl-long.376/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.377.bib
@inproceedings{darcy-etal-2024-aries, title = "{ARIES}: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews", author = "D{'}Arcy, Mike and Ross, Alexis and Bransom, Erin and Kuehl, Bailey and Bragg, Jonathan and Hope, Tom and Downey, Doug", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.377", pages = "6985--7001", abstract = "We introduce the task of automatically revising scientific papers based on peer feedback and release ARIES, a dataset of review comments and their corresponding paper edits. The data is drawn from real reviewer-author interactions from computer science, and we provide labels linking each reviewer comment to the specific paper edits made by the author in response. We automatically create a high-precision silver training set, as well as an expert-labeled test set that shows high inter-annotator agreement. In experiments with 10 models covering the state of the art, we find that they struggle even to identify which edits correspond to a comment{---}especially when the relationship between the edit and the comment is indirect and requires reasoning to uncover. We also extensively analyze GPT-4{'}s ability to generate edits given a comment and the original paper. We find that it often succeeds on a superficial level, but tends to rigidly follow the wording of the feedback rather than the underlying intent, and lacks technical details compared to human-written edits.", }
We introduce the task of automatically revising scientific papers based on peer feedback and release ARIES, a dataset of review comments and their corresponding paper edits. The data is drawn from real reviewer-author interactions from computer science, and we provide labels linking each reviewer comment to the specific paper edits made by the author in response. We automatically create a high-precision silver training set, as well as an expert-labeled test set that shows high inter-annotator agreement. In experiments with 10 models covering the state of the art, we find that they struggle even to identify which edits correspond to a comment, especially when the relationship between the edit and the comment is indirect and requires reasoning to uncover. We also extensively analyze GPT-4's ability to generate edits given a comment and the original paper. We find that it often succeeds on a superficial level, but tends to rigidly follow the wording of the feedback rather than the underlying intent, and lacks technical details compared to human-written edits.
[ "D{'}Arcy, Mike", "Ross, Alexis", "Bransom, Erin", "Kuehl, Bailey", "Bragg, Jonathan", "Hope, Tom", "Downey, Doug" ]
ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews
acl-long.377
Poster
2306.12587
[ "https://github.com/allenai/aries" ]
https://huggingface.co/papers/2306.12587
0
1
0
7
https://aclanthology.org/2024.acl-long.377/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.378.bib
@inproceedings{hase-etal-2024-unreasonable, title = "The Unreasonable Effectiveness of Easy Training Data for Hard Tasks", author = "Hase, Peter and Bansal, Mohit and Clark, Peter and Wiegreffe, Sarah", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.378", pages = "7002--7024", abstract = "How can we train models to perform well on hard test data when hard training data is by definition difficult to label correctly? This question has been termed the scalable oversight problem and has drawn increasing attention as language models have continually improved. In this paper, we present the surprising conclusion that current pretrained language models often generalize relatively well from easy to hard data, even performing as well as oracle models finetuned on hard data. We demonstrate this kind of easy-to-hard generalization using simple finetuning methods like in-context learning, linear classifier heads, and QLoRA for seven different measures of datapoint hardness, including six empirically diverse human hardness measures (like grade level) and one model-based measure (loss-based). Furthermore, we show that even if one cares most about model performance on hard data, it can be better to collect easy data rather than hard data for finetuning, since hard data is generally noisier and costlier to collect. Our experiments use open models up to 70b in size and four publicly available question-answering datasets with questions ranging in difficulty from 3rd grade science questions to college level STEM questions and general-knowledge trivia. We conclude that easy-to-hard generalization in LMs is surprisingly strong for the tasks studied.", }
How can we train models to perform well on hard test data when hard training data is by definition difficult to label correctly? This question has been termed the scalable oversight problem and has drawn increasing attention as language models have continually improved. In this paper, we present the surprising conclusion that current pretrained language models often generalize relatively well from easy to hard data, even performing as well as oracle models finetuned on hard data. We demonstrate this kind of easy-to-hard generalization using simple finetuning methods like in-context learning, linear classifier heads, and QLoRA for seven different measures of datapoint hardness, including six empirically diverse human hardness measures (like grade level) and one model-based measure (loss-based). Furthermore, we show that even if one cares most about model performance on hard data, it can be better to collect easy data rather than hard data for finetuning, since hard data is generally noisier and costlier to collect. Our experiments use open models up to 70b in size and four publicly available question-answering datasets with questions ranging in difficulty from 3rd grade science questions to college level STEM questions and general-knowledge trivia. We conclude that easy-to-hard generalization in LMs is surprisingly strong for the tasks studied.
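The experimental setup reduces to a simple data-selection rule: finetune only on examples below a hardness cutoff and evaluate only above it. A minimal sketch, using grade level (one of the paper's six human hardness measures) with a hypothetical cutoff:

```python
def easy_to_hard_split(examples: list[dict], grade_cutoff: int = 8):
    """Train on easy data only, test on hard data only (cutoff is illustrative)."""
    train = [ex for ex in examples if ex["grade_level"] <= grade_cutoff]
    test = [ex for ex in examples if ex["grade_level"] > grade_cutoff]
    return train, test

data = [
    {"q": "What gas do plants absorb from the air?", "grade_level": 3},
    {"q": "Balance the redox reaction of permanganate with oxalate.", "grade_level": 12},
]
train, test = easy_to_hard_split(data)
print(len(train), len(test))  # 1 1
```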
[ "Hase, Peter", "Bansal, Mohit", "Clark, Peter", "Wiegreffe, Sarah" ]
The Unreasonable Effectiveness of Easy Training Data for Hard Tasks
acl-long.378
Poster
2401.06751
[ "https://github.com/allenai/easy-to-hard-generalization" ]
https://huggingface.co/papers/2401.06751
2
1
0
4
https://aclanthology.org/2024.acl-long.378/
[]
[ "jdpressman/retro-weave-eval-jdp-v0.1" ]
[]
1
https://aclanthology.org/2024.acl-long.379.bib
@inproceedings{zhang-etal-2024-plug, title = "{PLUG}: Leveraging Pivot Language in Cross-Lingual Instruction Tuning", author = "Zhang, Zhihan and Lee, Dong-Ho and Fang, Yuwei and Yu, Wenhao and Jia, Mengzhao and Jiang, Meng and Barbieri, Francesco", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.379", pages = "7025--7046", abstract = "Instruction tuning has remarkably advanced large language models (LLMs) in understanding and responding to diverse human instructions. Despite the success in high-resource languages, its application in lower-resource ones faces challenges due to the imbalanced foundational abilities of LLMs across different languages, stemming from the uneven language distribution in their pre-training data. To tackle this issue, we propose pivot language guided generation (PLUG), an approach that utilizes a high-resource language, primarily English, as the pivot to enhance instruction tuning in lower-resource languages. It trains the model to first process instructions in the pivot language, and then produce responses in the target language. To evaluate our approach, we introduce a benchmark, X-AlpacaEval, of instructions in 4 languages (Chinese, Korean, Italian, and Spanish), each annotated by professional translators. Our approach demonstrates a significant improvement in the instruction-following abilities of LLMs by 29{\%} on average, compared to directly responding in the target language alone. Further experiments validate the versatility of our approach by employing alternative pivot languages beyond English to assist languages where LLMs exhibit lower proficiency. Code and data are available at https://github.com/ytyz1307zzh/PLUG.", }
Instruction tuning has remarkably advanced large language models (LLMs) in understanding and responding to diverse human instructions. Despite the success in high-resource languages, its application in lower-resource ones faces challenges due to the imbalanced foundational abilities of LLMs across different languages, stemming from the uneven language distribution in their pre-training data. To tackle this issue, we propose pivot language guided generation (PLUG), an approach that utilizes a high-resource language, primarily English, as the pivot to enhance instruction tuning in lower-resource languages. It trains the model to first process instructions in the pivot language, and then produce responses in the target language. To evaluate our approach, we introduce a benchmark, X-AlpacaEval, of instructions in 4 languages (Chinese, Korean, Italian, and Spanish), each annotated by professional translators. Our approach demonstrates a significant improvement in the instruction-following abilities of LLMs by 29% on average, compared to directly responding in the target language alone. Further experiments validate the versatility of our approach by employing alternative pivot languages beyond English to assist languages where LLMs exhibit lower proficiency. Code and data are available at https://github.com/ytyz1307zzh/PLUG.
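The training format at the heart of PLUG is straightforward to illustrate: the supervision target first contains pivot-language (English) processing of the instruction, followed by the response in the target language. The field names and section headers below are illustrative assumptions, not the paper's exact schema.

```python
def build_plug_example(instruction_target_lang: str,
                       pivot_response_en: str,
                       final_response_target_lang: str) -> dict:
    """Build one training pair: pivot-language reasoning, then target-language answer."""
    completion = (
        "### English (pivot) processing:\n"
        f"{pivot_response_en}\n\n"
        "### Response in the target language:\n"
        f"{final_response_target_lang}"
    )
    return {"prompt": instruction_target_lang, "completion": completion}

example = build_plug_example(
    "Spiega la fotosintesi in una frase.",  # Italian instruction
    "The user asks for a one-sentence explanation of photosynthesis.",
    "La fotosintesi è il processo con cui le piante convertono la luce in energia chimica.",
)
print(example["completion"])
```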
[ "Zhang, Zhihan", "Lee, Dong-Ho", "Fang, Yuwei", "Yu, Wenhao", "Jia, Mengzhao", "Jiang, Meng", "Barbieri, Francesco" ]
PLUG: Leveraging Pivot Language in Cross-Lingual Instruction Tuning
acl-long.379
Poster
2311.08711
[ "https://github.com/ytyz1307zzh/plug" ]
https://huggingface.co/papers/2311.08711
2
0
0
7
https://aclanthology.org/2024.acl-long.379/
[]
[ "zhihz0535/X-AlpacaEval", "zhihz0535/X-SVAMP_en_zh_ko_it_es", "zhihz0535/X-TruthfulQA_en_zh_ko_it_es" ]
[]
1
https://aclanthology.org/2024.acl-long.380.bib
@inproceedings{nair-wang-2024-midgard, title = "{MIDGARD}: Self-Consistency Using Minimum Description Length for Structured Commonsense Reasoning", author = "Nair, Inderjeet and Wang, Lu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.380", pages = "7047--7065", abstract = "We study the task of conducting structured reasoning as generating a reasoning graph from natural language input using large language models (LLMs). Previous approaches have explored various prompting schemes, yet they suffer from error propagation due to the autoregressive nature and single-pass-based decoding, which lack error correction capability. Additionally, relying solely on a single sample may result in the omission of true nodes and edges. To counter this, we draw inspiration from self-consistency (SC), which involves sampling a diverse set of reasoning chains and taking the majority vote as the final answer. To tackle the substantial challenge of applying SC on generated graphs, we propose MIDGARD (MInimum Description length Guided Aggregation of Reasoning in Directed acyclic graph) that leverages Minimum Description Length (MDL)-based formulation to identify consistent properties among the different graph samples generated by an LLM. This formulation helps reject properties that appear in only a few samples, which are likely to be erroneous, while enabling the inclusion of missing elements without compromising precision. Our method demonstrates superior performance than comparisons across various structured reasoning tasks, including argument structure extraction, explanation graph generation, inferring dependency relations among actions for everyday tasks, and semantic graph generation from natural texts.", }
We study the task of conducting structured reasoning as generating a reasoning graph from natural language input using large language models (LLMs). Previous approaches have explored various prompting schemes, yet they suffer from error propagation due to the autoregressive nature and single-pass-based decoding, which lack error correction capability. Additionally, relying solely on a single sample may result in the omission of true nodes and edges. To counter this, we draw inspiration from self-consistency (SC), which involves sampling a diverse set of reasoning chains and taking the majority vote as the final answer. To tackle the substantial challenge of applying SC on generated graphs, we propose MIDGARD (MInimum Description length Guided Aggregation of Reasoning in Directed acyclic graph), which leverages a Minimum Description Length (MDL)-based formulation to identify consistent properties among the different graph samples generated by an LLM. This formulation helps reject properties that appear in only a few samples, which are likely to be erroneous, while enabling the inclusion of missing elements without compromising precision. Our method demonstrates superior performance over comparison approaches across various structured reasoning tasks, including argument structure extraction, explanation graph generation, inferring dependency relations among actions for everyday tasks, and semantic graph generation from natural texts.
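The aggregation step can be approximated with a frequency vote over edges from the sampled graphs: properties supported by many samples are kept, rare ones rejected. Note that MIDGARD derives this keep/reject decision from an MDL objective rather than the fixed threshold used in this simplified sketch.

```python
from collections import Counter

def aggregate_graphs(samples: list[set[tuple[str, str]]],
                     threshold: float = 0.5) -> set[tuple[str, str]]:
    """Keep edges present in at least `threshold` fraction of sampled graphs."""
    counts = Counter(edge for g in samples for edge in g)
    n = len(samples)
    return {edge for edge, c in counts.items() if c / n >= threshold}

samples = [
    {("boil water", "add pasta"), ("add pasta", "drain")},
    {("boil water", "add pasta"), ("add pasta", "drain"), ("drain", "boil water")},
    {("boil water", "add pasta"), ("add pasta", "drain")},
]
print(sorted(aggregate_graphs(samples)))
# [('add pasta', 'drain'), ('boil water', 'add pasta')]
# The edge appearing in one sample (likely erroneous) is rejected.
```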
[ "Nair, Inderjeet", "Wang, Lu" ]
MIDGARD: Self-Consistency Using Minimum Description Length for Structured Commonsense Reasoning
acl-long.380
Oral
2405.05189
[ "https://github.com/launchnlp/midgard" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.380/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.381.bib
@inproceedings{chen-etal-2024-reconcile, title = "{R}e{C}oncile: Round-Table Conference Improves Reasoning via Consensus among Diverse {LLM}s", author = "Chen, Justin and Saha, Swarnadeep and Bansal, Mohit", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.381", pages = "7066--7085", abstract = "Large Language Models (LLMs) still struggle with natural language reasoning tasks. Motivated by the society of minds (Minsky, 1988), we propose ReConcile, a multi-model multi-agent framework designed as a round table conference among diverse LLM agents. ReConcile enhances collaborative reasoning between LLM agents via multiple rounds of discussion, learning to convince other agents to improve their answers, and employing a confidence-weighted voting mechanism that leads to a better consensus. In each round, ReConcile initiates discussion between agents via a {`}discussion prompt{'} that consists of (a) grouped answers and explanations generated by each agent in the previous round, (b) their confidence scores, and (c) demonstrations of answer-rectifying human explanations, used for convincing other agents. Experiments on seven benchmarks demonstrate that ReConcile significantly improves LLMs{'} reasoning {--} both individually and as a team {--} surpassing prior single-agent and multi-agent baselines by up to 11.4{\%} and even outperforming GPT-4 on three datasets. ReConcile also flexibly incorporates different combinations of agents, including API-based, open-source, and domain-specific models, leading to an 8{\%} improvement on MATH. Finally, we analyze the individual components of ReConcile, demonstrating that the diversity originating from different models is critical to its superior performance.", }
Large Language Models (LLMs) still struggle with natural language reasoning tasks. Motivated by the society of minds (Minsky, 1988), we propose ReConcile, a multi-model multi-agent framework designed as a round table conference among diverse LLM agents. ReConcile enhances collaborative reasoning between LLM agents via multiple rounds of discussion, learning to convince other agents to improve their answers, and employing a confidence-weighted voting mechanism that leads to a better consensus. In each round, ReConcile initiates discussion between agents via a 'discussion prompt' that consists of (a) grouped answers and explanations generated by each agent in the previous round, (b) their confidence scores, and (c) demonstrations of answer-rectifying human explanations, used for convincing other agents. Experiments on seven benchmarks demonstrate that ReConcile significantly improves LLMs' reasoning, both individually and as a team, surpassing prior single-agent and multi-agent baselines by up to 11.4% and even outperforming GPT-4 on three datasets. ReConcile also flexibly incorporates different combinations of agents, including API-based, open-source, and domain-specific models, leading to an 8% improvement on MATH. Finally, we analyze the individual components of ReConcile, demonstrating that the diversity originating from different models is critical to its superior performance.
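The confidence-weighted voting mechanism is the most directly codeable piece: each agent's answer contributes its self-reported confidence as a weight, and the answer with the largest total wins. A minimal sketch follows (the full method additionally recalibrates raw confidences, which is omitted here).

```python
from collections import defaultdict

def confidence_weighted_vote(votes: list[tuple[str, float]]) -> str:
    """votes: (answer, confidence in [0, 1]) pairs from the participating agents."""
    totals: dict[str, float] = defaultdict(float)
    for answer, conf in votes:
        totals[answer] += conf
    return max(totals, key=totals.get)

# Illustrative round with three agents disagreeing on an answer.
round_votes = [("42", 0.9), ("41", 0.8), ("42", 0.6)]
print(confidence_weighted_vote(round_votes))  # "42"
```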
[ "Chen, Justin", "Saha, Swarnadeep", "Bansal, Mohit" ]
ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs
acl-long.381
Poster
2309.13007
[ "https://github.com/dinobby/reconcile" ]
https://huggingface.co/papers/2309.13007
1
1
0
3
https://aclanthology.org/2024.acl-long.381/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.382.bib
@inproceedings{yan-etal-2024-mirror, title = "Mirror: Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning", author = "Yan, Hanqi and Zhu, Qinglin and Wang, Xinyu and Gui, Lin and He, Yulan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.382", pages = "7086--7103", abstract = "While Large language models (LLMs) have the capability to iteratively reflect on their own outputs, recent studies have observed their struggles with knowledge-rich problems without access to external resources. In addition to the inefficiency of LLMs in self-assessment, we also observe that LLMs struggle to revisit their predictions despite receiving explicit negative feedback. Therefore, We propose Mirror, a Multiple-perspective self-reflection method for knowledge-rich reasoning, to avoid getting stuck at a particular reflection iteration. Mirror enables LLMs to reflect from multiple-perspective clues, achieved through a heuristic interaction between a Navigator and a Reasoner. It guides agents toward diverse yet plausibly reliable reasoning trajectory without access to ground truth by encouraging (1) diversity of directions generated by Navigator and (2) agreement among strategically induced perturbations in responses generated by the Reasoner. The experiments on five reasoning datasets demonstrate that Mirror{'}s superiority over several contemporary self-reflection approaches. Additionally, the ablation study studies clearly indicate that our strategies alleviate the aforementioned challenges.", }
While large language models (LLMs) have the capability to iteratively reflect on their own outputs, recent studies have observed their struggles with knowledge-rich problems without access to external resources. In addition to the inefficiency of LLMs in self-assessment, we also observe that LLMs struggle to revisit their predictions despite receiving explicit negative feedback. Therefore, we propose Mirror, a Multiple-perspective self-reflection method for knowledge-rich reasoning, to avoid getting stuck at a particular reflection iteration. Mirror enables LLMs to reflect from multiple-perspective clues, achieved through a heuristic interaction between a Navigator and a Reasoner. It guides agents toward diverse yet plausibly reliable reasoning trajectories without access to ground truth by encouraging (1) diversity of directions generated by the Navigator and (2) agreement among strategically induced perturbations in responses generated by the Reasoner. Experiments on five reasoning datasets demonstrate Mirror's superiority over several contemporary self-reflection approaches. Additionally, the ablation studies clearly indicate that our strategies alleviate the aforementioned challenges.
[ "Yan, Hanqi", "Zhu, Qinglin", "Wang, Xinyu", "Gui, Lin", "He, Yulan" ]
Mirror: Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning
acl-long.382
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.382/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.383.bib
@inproceedings{antoniak-etal-2024-people, title = "Where Do People Tell Stories Online? Story Detection Across Online Communities", author = "Antoniak, Maria and Mire, Joel and Sap, Maarten and Ash, Elliott and Piper, Andrew", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.383", pages = "7104--7130", abstract = "Story detection in online communities is a challenging task as stories are scattered across communities and interwoven with non-storytelling spans within a single text. We address this challenge by building and releasing the StorySeeker toolkit, including a richly annotated dataset of 502 Reddit posts and comments, a detailed codebook adapted to the social media context, and models to predict storytelling at the document and span levels. Our dataset is sampled from hundreds of popular English-language Reddit communities ranging across 33 topic categories, and it contains fine-grained expert annotations, including binary story labels, story spans, and event spans. We evaluate a range of detection methods using our data, and we identify the distinctive textual features of online storytelling, focusing on storytelling spans, which we introduce as a new task. We illuminate distributional characteristics of storytelling on a large community-centric social media platform, and we also conduct a case study on r/ChangeMyView, where storytelling is used as one of many persuasive strategies, illustrating that our data and models can be used for both inter- and intra-community research. Finally, we discuss implications of our tools and analyses for narratology and the study of online communities.", }
Story detection in online communities is a challenging task as stories are scattered across communities and interwoven with non-storytelling spans within a single text. We address this challenge by building and releasing the StorySeeker toolkit, including a richly annotated dataset of 502 Reddit posts and comments, a detailed codebook adapted to the social media context, and models to predict storytelling at the document and span levels. Our dataset is sampled from hundreds of popular English-language Reddit communities ranging across 33 topic categories, and it contains fine-grained expert annotations, including binary story labels, story spans, and event spans. We evaluate a range of detection methods using our data, and we identify the distinctive textual features of online storytelling, focusing on storytelling spans, which we introduce as a new task. We illuminate distributional characteristics of storytelling on a large community-centric social media platform, and we also conduct a case study on r/ChangeMyView, where storytelling is used as one of many persuasive strategies, illustrating that our data and models can be used for both inter- and intra-community research. Finally, we discuss implications of our tools and analyses for narratology and the study of online communities.
[ "Antoniak, Maria", "Mire, Joel", "Sap, Maarten", "Ash, Elliott", "Piper, Andrew" ]
Where Do People Tell Stories Online? Story Detection Across Online Communities
acl-long.383
Poster
2311.09675
[ "https://github.com/maria-antoniak/storyseeker" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.383/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.384.bib
@inproceedings{tian-etal-2024-large, title = "Large Language Models Are No Longer Shallow Parsers", author = "Tian, Yuanhe and Xia, Fei and Song, Yan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.384", pages = "7131--7142", abstract = "The development of large language models (LLMs) brings significant changes to the field of natural language processing (NLP), enabling remarkable performance in various high-level tasks, such as machine translation, question-answering, dialogue generation, etc., under end-to-end settings without requiring much training data. Meanwhile, fundamental NLP tasks, particularly syntactic parsing, are also essential for language study as well as evaluating the capability of LLMs for instruction understanding and usage. In this paper, we focus on analyzing and improving the capability of current state-of-the-art LLMs on a classic fundamental task, namely constituency parsing, which is the representative syntactic task in both linguistics and natural language processing. We observe that these LLMs are effective in shallow parsing but struggle with creating correct full parse trees. To improve the performance of LLMs on deep syntactic parsing, we propose a three-step approach that firstly prompts LLMs for chunking, then filters out low-quality chunks, and finally adds the remaining chunks to prompts to instruct LLMs for parsing, with later enhancement by chain-of-thought prompting. Experimental results on English and Chinese benchmark datasets demonstrate the effectiveness of our approach on improving LLMs{'} performance on constituency parsing.", }
The development of large language models (LLMs) brings significant changes to the field of natural language processing (NLP), enabling remarkable performance in various high-level tasks, such as machine translation, question-answering, dialogue generation, etc., under end-to-end settings without requiring much training data. Meanwhile, fundamental NLP tasks, particularly syntactic parsing, are also essential for language study as well as evaluating the capability of LLMs for instruction understanding and usage. In this paper, we focus on analyzing and improving the capability of current state-of-the-art LLMs on a classic fundamental task, namely constituency parsing, which is the representative syntactic task in both linguistics and natural language processing. We observe that these LLMs are effective in shallow parsing but struggle with creating correct full parse trees. To improve the performance of LLMs on deep syntactic parsing, we propose a three-step approach that firstly prompts LLMs for chunking, then filters out low-quality chunks, and finally adds the remaining chunks to prompts to instruct LLMs for parsing, with later enhancement by chain-of-thought prompting. Experimental results on English and Chinese benchmark datasets demonstrate the effectiveness of our approach on improving LLMs' performance on constituency parsing.
[ "Tian, Yuanhe", "Xia, Fei", "Song, Yan" ]
Large Language Models Are No Longer Shallow Parsers
acl-long.384
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.384/
[]
[]
[]
0
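The abstract of acl-long.384 above describes a concrete three-step prompting pipeline (chunk, filter, parse). A minimal sketch follows, assuming a generic `call_llm` chat-completion helper, invented prompt wording, and a self-consistency frequency filter; it illustrates the control flow only and is not the authors' implementation.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (an assumption, not the paper's API)."""
    raise NotImplementedError

def chunk_then_parse(sentence: str, n_samples: int = 5, min_agreement: float = 0.6) -> str:
    # Step 1: sample several shallow chunkings of the sentence.
    samples = [
        call_llm(f"List the base phrases of this sentence, one per line, as '[LABEL words]':\n{sentence}")
        for _ in range(n_samples)
    ]
    # Step 2: filter low-quality chunks; here we keep chunks that recur in at
    # least `min_agreement` of the samples (a simple self-consistency proxy).
    counts = Counter(line.strip() for s in samples for line in s.splitlines() if line.strip())
    kept = [chunk for chunk, n in counts.items() if n / n_samples >= min_agreement]
    # Step 3: feed the surviving chunks back as hints for full (deep) parsing.
    hints = "\n".join(kept)
    return call_llm(
        f"Using these reliable chunks as hints:\n{hints}\n"
        f"Produce a full bracketed constituency tree for:\n{sentence}"
    )
```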
https://aclanthology.org/2024.acl-long.385.bib
@inproceedings{tian-etal-2024-dialogue, title = "Dialogue Summarization with Mixture of Experts based on Large Language Models", author = "Tian, Yuanhe and Xia, Fei and Song, Yan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.385", pages = "7143--7155", abstract = "Dialogue summarization is an important task that requires to generate highlights for a conversation from different aspects (e.g., content of various speakers). While several studies successfully employ large language models (LLMs) and achieve satisfying results, they are limited by using one model at a time or treat it as a black box, which makes it hard to discriminatively learn essential content in a dialogue from different aspects, therefore may lead to anticipation bias and potential loss of information in the produced summaries. In this paper, we propose an LLM-based approach with role-oriented routing and fusion generation to utilize mixture of experts (MoE) for dialogue summarization. Specifically, the role-oriented routing is an LLM-based module that selects appropriate experts to process different information; fusion generation is another LLM-based module to locate salient information and produce finalized dialogue summaries. The proposed approach offers an alternative solution to employing multiple LLMs for dialogue summarization by leveraging their capabilities of in-context processing and generation in an effective manner. We run experiments on widely used benchmark datasets for this task, where the results demonstrate the superiority of our approach in producing informative and accurate dialogue summarization.", }
Dialogue summarization is an important task that requires generating highlights for a conversation from different aspects (e.g., content of various speakers). While several studies successfully employ large language models (LLMs) and achieve satisfying results, they are limited by using one model at a time or treating it as a black box, which makes it hard to discriminatively learn essential content in a dialogue from different aspects and may therefore lead to anticipation bias and potential loss of information in the produced summaries. In this paper, we propose an LLM-based approach with role-oriented routing and fusion generation to utilize mixture of experts (MoE) for dialogue summarization. Specifically, the role-oriented routing is an LLM-based module that selects appropriate experts to process different information; fusion generation is another LLM-based module to locate salient information and produce finalized dialogue summaries. The proposed approach offers an alternative solution to employing multiple LLMs for dialogue summarization by leveraging their capabilities of in-context processing and generation in an effective manner. We run experiments on widely used benchmark datasets for this task, where the results demonstrate the superiority of our approach in producing informative and accurate dialogue summaries.
[ "Tian, Yuanhe", "Xia, Fei", "Song, Yan" ]
Dialogue Summarization with Mixture of Experts based on Large Language Models
acl-long.385
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.385/
[]
[]
[]
0
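The acl-long.385 abstract above names two LLM-based modules: role-oriented routing and fusion generation. The sketch below renders that control flow under stated assumptions: `call_llm` and all prompt templates are stand-ins, not the paper's implementation.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (an assumption, not the paper's API)."""
    raise NotImplementedError

def summarize_dialogue(dialogue: str, roles: list[str]) -> str:
    # Role-oriented routing: an LLM call selects which role-specific experts
    # should process this dialogue.
    routed = call_llm(
        f"Roles: {', '.join(roles)}\nDialogue:\n{dialogue}\n"
        "Which roles carry summary-worthy content? Answer with a comma-separated subset."
    )
    selected = [r.strip() for r in routed.split(",") if r.strip() in roles]
    # Each selected expert summarizes the dialogue from its role's perspective.
    notes = {
        role: call_llm(f"Summarize this dialogue from the perspective of {role}:\n{dialogue}")
        for role in selected
    }
    # Fusion generation: a final LLM call locates salient content across the
    # per-role notes and produces the finished summary.
    merged = "\n".join(f"[{role}] {text}" for role, text in notes.items())
    return call_llm(f"Fuse these per-role notes into one coherent summary:\n{merged}")
```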
https://aclanthology.org/2024.acl-long.386.bib
@inproceedings{tian-etal-2024-chimed, title = "{C}hi{M}ed-{GPT}: A {C}hinese Medical Large Language Model with Full Training Regime and Better Alignment to Human Preferences", author = "Tian, Yuanhe and Gan, Ruyi and Song, Yan and Zhang, Jiaxing and Zhang, Yongdong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.386", pages = "7156--7173", abstract = "Recently, the increasing demand for superior medical services has highlighted the discrepancies in the medical infrastructure. With big data, especially texts, forming the foundation of medical services, there is an exigent need for effective natural language processing (NLP) solutions tailored to the healthcare domain. Conventional approaches leveraging pre-trained models present promising results in this domain and current large language models (LLMs) offer advanced foundation for medical text processing. However, most medical LLMs are trained only with supervised fine-tuning (SFT), even though it efficiently empowers LLMs to understand and respond to medical instructions but is ineffective in learning domain knowledge and aligning with human preference. In this work, we propose ChiMed-GPT, a new benchmark LLM designed explicitly for Chinese medical domain, and undergoes a comprehensive training regime with pre-training, SFT, and RLHF. Evaluations on tasks including information extraction, question answering, and dialogue generation demonstrate ChiMed-GPT{'}s superior performance over general domain LLMs. Furthermore, we analyze possible biases through prompting ChiMed-GPT to perform attitude scales regarding discrimination of patients, so as to contribute to further responsible development of LLMs in the medical domain.", }
Recently, the increasing demand for superior medical services has highlighted the discrepancies in the medical infrastructure. With big data, especially texts, forming the foundation of medical services, there is an exigent need for effective natural language processing (NLP) solutions tailored to the healthcare domain. Conventional approaches leveraging pre-trained models present promising results in this domain, and current large language models (LLMs) offer an advanced foundation for medical text processing. However, most medical LLMs are trained only with supervised fine-tuning (SFT); although SFT efficiently empowers LLMs to understand and respond to medical instructions, it is ineffective in learning domain knowledge and aligning with human preferences. In this work, we propose ChiMed-GPT, a new benchmark LLM designed explicitly for the Chinese medical domain, which undergoes a comprehensive training regime with pre-training, SFT, and RLHF. Evaluations on tasks including information extraction, question answering, and dialogue generation demonstrate ChiMed-GPT's superior performance over general-domain LLMs. Furthermore, we analyze possible biases by prompting ChiMed-GPT to complete attitude scales regarding discrimination of patients, so as to contribute to the further responsible development of LLMs in the medical domain.
[ "Tian, Yuanhe", "Gan, Ruyi", "Song, Yan", "Zhang, Jiaxing", "Zhang, Yongdong" ]
ChiMed-GPT: A Chinese Medical Large Language Model with Full Training Regime and Better Alignment to Human Preferences
acl-long.386
Poster
2311.06025
[ "https://github.com/synlp/chimed-gpt" ]
https://huggingface.co/papers/2311.06025
1
1
0
5
https://aclanthology.org/2024.acl-long.386/
[ "SYNLP/ChiMed-GPT-1.0" ]
[]
[ "samirkhan/SYNLP-ChiMed-GPT-1.0" ]
1
https://aclanthology.org/2024.acl-long.387.bib
@inproceedings{rai-yao-2024-investigation, title = "An Investigation of Neuron Activation as a Unified Lens to Explain Chain-of-Thought Eliciting Arithmetic Reasoning of {LLM}s", author = "Rai, Daking and Yao, Ziyu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.387", pages = "7174--7193", abstract = "Large language models (LLMs) have shown strong arithmetic reasoning capabilities when prompted with Chain-of-Thought (CoT) prompts. However, we have only a limited understanding of how they are processed by LLMs. To demystify it, prior work has primarily focused on ablating different components in the CoT prompt and empirically observing their resulting LLM performance change. Yet, the reason why these components are important to LLM reasoning is not explored. To fill this gap, in this work, we investigate {``}neuron activation{''} as a lens to provide a unified explanation to observations made by prior work. Specifically, we look into neurons within the feed-forward layers of LLMs that may have activated their arithmetic reasoning capabilities, using Llama2 as an example. To facilitate this investigation, we also propose an approach based on GPT-4 to automatically identify neurons that imply arithmetic reasoning. Our analyses revealed that the activation of reasoning neurons in the feed-forward layers of an LLM can explain the importance of various components in a CoT prompt, and future research can extend it for a more complete understanding.", }
Large language models (LLMs) have shown strong arithmetic reasoning capabilities when prompted with Chain-of-Thought (CoT) prompts. However, we have only a limited understanding of how they are processed by LLMs. To demystify it, prior work has primarily focused on ablating different components in the CoT prompt and empirically observing their resulting LLM performance change. Yet, the reason why these components are important to LLM reasoning is not explored. To fill this gap, in this work, we investigate "neuron activation" as a lens to provide a unified explanation to observations made by prior work. Specifically, we look into neurons within the feed-forward layers of LLMs that may have activated their arithmetic reasoning capabilities, using Llama2 as an example. To facilitate this investigation, we also propose an approach based on GPT-4 to automatically identify neurons that imply arithmetic reasoning. Our analyses revealed that the activation of reasoning neurons in the feed-forward layers of an LLM can explain the importance of various components in a CoT prompt, and future research can extend it for a more complete understanding.
[ "Rai, Daking", "Yao, Ziyu" ]
An Investigation of Neuron Activation as a Unified Lens to Explain Chain-of-Thought Eliciting Arithmetic Reasoning of LLMs
acl-long.387
Poster
2406.12288
[ "https://github.com/dakingrai/neuron-analysis-cot-arithmetic-reasoning" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.387/
[]
[]
[]
0
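Because the acl-long.387 abstract above centers on feed-forward neuron activations, a short sketch of how such activations can be captured may help. It assumes the Hugging Face Llama-2 module layout (`model.model.layers[i].mlp.act_fn`) and skips the paper's GPT-4-based neuron labeling step entirely; treat it as one plausible instrumentation, not the authors' code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # any Llama-2 checkpoint you have access to
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16)
model.eval()

acts: dict[int, torch.Tensor] = {}

def make_hook(layer_idx: int):
    def capture(module, inputs, output):
        # output: (batch, seq_len, intermediate_size) post-activation values
        # that feed the feed-forward down-projection.
        acts[layer_idx] = output.detach().float().mean(dim=(0, 1)).cpu()
    return capture

handles = [layer.mlp.act_fn.register_forward_hook(make_hook(i))
           for i, layer in enumerate(model.model.layers)]

with torch.no_grad():
    model(**tok("Q: 17 + 25 = ? A: Let's think step by step.", return_tensors="pt"))

for h in handles:
    h.remove()

# Comparing per-neuron activation profiles between CoT and non-CoT prompts is
# one way to surface candidate "reasoning neurons" in the spirit of the paper.
print("layer 0, most active FF neurons:", torch.topk(acts[0], k=10).indices.tolist())
```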
https://aclanthology.org/2024.acl-long.388.bib
@inproceedings{jiang-etal-2024-leveraging, title = "Leveraging Large Language Models for Learning Complex Legal Concepts through Storytelling", author = "Jiang, Hang and Zhang, Xiajie and Mahari, Robert and Kessler, Daniel and Ma, Eric and August, Tal and Li, Irene and Pentland, Alex and Kim, Yoon and Roy, Deb and Kabbara, Jad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.388", pages = "7194--7219", abstract = "Making legal knowledge accessible to non-experts is crucial for enhancing general legal literacy and encouraging civic participation in democracy. However, legal documents are often challenging to understand for people without legal backgrounds. In this paper, we present a novel application of large language models (LLMs) in legal education to help non-experts learn intricate legal concepts through storytelling, an effective pedagogical tool in conveying complex and abstract concepts. We also introduce a new dataset LegalStories, which consists of 294 complex legal doctrines, each accompanied by a story and a set of multiple-choice questions generated by LLMs. To construct the dataset, we experiment with various LLMs to generate legal stories explaining these concepts. Furthermore, we use an expert-in-the-loop approach to iteratively design multiple-choice questions. Then, we evaluate the effectiveness of storytelling with LLMs through randomized controlled trials (RCTs) with legal novices on 10 samples from the dataset. We find that LLM-generated stories enhance comprehension of legal concepts and interest in law among non-native speakers compared to only definitions. Moreover, stories consistently help participants relate legal concepts to their lives. Finally, we find that learning with stories shows a higher retention rate for non-native speakers in the follow-up assessment. Our work has strong implications for using LLMs in promoting teaching and learning in the legal field and beyond.", }
Making legal knowledge accessible to non-experts is crucial for enhancing general legal literacy and encouraging civic participation in democracy. However, legal documents are often challenging to understand for people without legal backgrounds. In this paper, we present a novel application of large language models (LLMs) in legal education to help non-experts learn intricate legal concepts through storytelling, an effective pedagogical tool in conveying complex and abstract concepts. We also introduce a new dataset LegalStories, which consists of 294 complex legal doctrines, each accompanied by a story and a set of multiple-choice questions generated by LLMs. To construct the dataset, we experiment with various LLMs to generate legal stories explaining these concepts. Furthermore, we use an expert-in-the-loop approach to iteratively design multiple-choice questions. Then, we evaluate the effectiveness of storytelling with LLMs through randomized controlled trials (RCTs) with legal novices on 10 samples from the dataset. We find that LLM-generated stories enhance comprehension of legal concepts and interest in law among non-native speakers compared to only definitions. Moreover, stories consistently help participants relate legal concepts to their lives. Finally, we find that learning with stories shows a higher retention rate for non-native speakers in the follow-up assessment. Our work has strong implications for using LLMs in promoting teaching and learning in the legal field and beyond.
[ "Jiang, Hang", "Zhang, Xiajie", "Mahari, Robert", "Kessler, Daniel", "Ma, Eric", "August, Tal", "Li, Irene", "Pentl", ", Alex", "Kim, Yoon", "Roy, Deb", "Kabbara, Jad" ]
Leveraging Large Language Models for Learning Complex Legal Concepts through Storytelling
acl-long.388
Poster
2402.17019
[ "https://github.com/hjian42/legalstories" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.388/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.389.bib
@inproceedings{chen-etal-2024-intrinsic, title = "Intrinsic Task-based Evaluation for Referring Expression Generation", author = "Chen, Guanyi and Same, Fahime and Van Deemter, Kees", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.389", pages = "7220--7231", abstract = "Recently, a human evaluation study of Referring Expression Generation (REG) models had an unexpected conclusion: on WEBNLG, Referring Expressions (REs) generated by the state-of-the-art neural models were not only indistinguishable from the REs in WEBNLG but also from the REs generated by a simple rule-based system. Here, we argue that this limitation could stem from the use of a purely ratings-based human evaluation (which is a common practice in Natural Language Generation). To investigate these issues, we propose an intrinsic task-based evaluation for REG models, in which, in addition to rating the quality of REs, participants were asked to accomplish two meta-level tasks. One of these tasks concerns the referential success of each RE; the other task asks participants to suggest a better alternative for each RE. The outcomes suggest that, in comparison to previous evaluations, the new evaluation protocol assesses the performance of each REG model more comprehensively and makes the participants{'} ratings more reliable and discriminable.", }
Recently, a human evaluation study of Referring Expression Generation (REG) models had an unexpected conclusion: on WEBNLG, Referring Expressions (REs) generated by the state-of-the-art neural models were not only indistinguishable from the REs in WEBNLG but also from the REs generated by a simple rule-based system. Here, we argue that this limitation could stem from the use of a purely ratings-based human evaluation (which is a common practice in Natural Language Generation). To investigate these issues, we propose an intrinsic task-based evaluation for REG models, in which, in addition to rating the quality of REs, participants were asked to accomplish two meta-level tasks. One of these tasks concerns the referential success of each RE; the other task asks participants to suggest a better alternative for each RE. The outcomes suggest that, in comparison to previous evaluations, the new evaluation protocol assesses the performance of each REG model more comprehensively and makes the participants' ratings more reliable and discriminable.
[ "Chen, Guanyi", "Same, Fahime", "Van Deemter, Kees" ]
Intrinsic Task-based Evaluation for Referring Expression Generation
acl-long.389
Poster
2402.07432
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.389/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.390.bib
@inproceedings{hu-etal-2024-moments, title = "From Moments to Milestones: Incremental Timeline Summarization Leveraging Large Language Models", author = "Hu, Qisheng and Moon, Geonsik and Ng, Hwee Tou", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.390", pages = "7232--7246", abstract = "Timeline summarization (TLS) is essential for distilling coherent narratives from a vast collection of texts, tracing the progression of events and topics over time. Prior research typically focuses on either event or topic timeline summarization, neglecting the potential synergy of these two forms. In this study, we bridge this gap by introducing a novel approach that leverages large language models (LLMs) for generating both event and topic timelines. Our approach diverges from conventional TLS by prioritizing event detection, leveraging LLMs as pseudo-oracles for incremental event clustering and the construction of timelines from a text stream. As a result, it produces a more interpretable pipeline. Empirical evaluation across four TLS benchmarks reveals that our approach outperforms the best prior published approaches, highlighting the potential of LLMs in timeline summarization for real-world applications.", }
Timeline summarization (TLS) is essential for distilling coherent narratives from a vast collection of texts, tracing the progression of events and topics over time. Prior research typically focuses on either event or topic timeline summarization, neglecting the potential synergy of these two forms. In this study, we bridge this gap by introducing a novel approach that leverages large language models (LLMs) for generating both event and topic timelines. Our approach diverges from conventional TLS by prioritizing event detection, leveraging LLMs as pseudo-oracles for incremental event clustering and the construction of timelines from a text stream. As a result, it produces a more interpretable pipeline. Empirical evaluation across four TLS benchmarks reveals that our approach outperforms the best prior published approaches, highlighting the potential of LLMs in timeline summarization for real-world applications.
[ "Hu, Qisheng", "Moon, Geonsik", "Ng, Hwee Tou" ]
From Moments to Milestones: Incremental Timeline Summarization Leveraging Large Language Models
acl-long.390
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.390/
[]
[]
[]
0
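The acl-long.390 abstract above describes using an LLM as a pseudo-oracle for incremental event clustering. Here is a minimal sketch of that loop, assuming a placeholder `call_llm` and an invented reply format (an event index or the token NEW); the paper's actual prompts and cluster bookkeeping are not reproduced.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (an assumption, not the paper's API)."""
    raise NotImplementedError

def update_timeline(clusters: list[dict], article: str) -> list[dict]:
    # Ask the LLM, as a pseudo-oracle, whether the incoming article continues
    # a known event or opens a new one.
    menu = "\n".join(f"{i}: {c['summary']}" for i, c in enumerate(clusters))
    verdict = call_llm(
        f"Known events:\n{menu}\n\nNew article:\n{article}\n\n"
        "Reply with the matching event number, or NEW."
    ).strip()
    if not clusters or verdict.upper() == "NEW":
        clusters.append({
            "articles": [article],
            "summary": call_llm(f"Write a one-line event summary:\n{article}"),
        })
    else:
        cluster = clusters[int(verdict)]  # sketch: assumes a well-formed reply
        cluster["articles"].append(article)
        cluster["summary"] = call_llm(
            f"Update this event summary:\n{cluster['summary']}\nwith:\n{article}"
        )
    return clusters  # milestone timelines can then be read off the summaries
```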
https://aclanthology.org/2024.acl-long.391.bib
@inproceedings{qi-etal-2024-end, title = "End-to-end Learning of Logical Rules for Enhancing Document-level Relation Extraction", author = "Qi, Kunxun and Du, Jianfeng and Wan, Hai", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.391", pages = "7247--7263", abstract = "Document-level relation extraction (DocRE) aims to extract relations between entities in a whole document. One of the pivotal challenges of DocRE is to capture the intricate interdependencies between relations of entity pairs. Previous methods have shown that logical rules can explicitly help capture such interdependencies. These methods either learn logical rules to refine the output of a trained DocRE model, or first learn logical rules from annotated data and then inject the learnt rules into a DocRE model using an auxiliary training objective. However, these learning pipelines may suffer from the issue of error propagation. To mitigate this issue, we propose \textit{Joint Modeling Relation extraction and Logical rules} or \textit{JMRL} for short, a novel rule-based framework that jointly learns both a DocRE model and logical rules in an end-to-end fashion. Specifically, we parameterize a rule reasoning module in JMRL to simulate the inference of logical rules, thereby explicitly modeling the reasoning process. We also introduce an auxiliary loss and a residual connection mechanism in JMRL to better reconcile the DocRE model and the rule reasoning module. Experimental results on four benchmark datasets demonstrate that our proposed JMRL framework is consistently superior to existing rule-based frameworks, improving five baseline models for DocRE by a significant margin.", }
Document-level relation extraction (DocRE) aims to extract relations between entities in a whole document. One of the pivotal challenges of DocRE is to capture the intricate interdependencies between relations of entity pairs. Previous methods have shown that logical rules can explicitly help capture such interdependencies. These methods either learn logical rules to refine the output of a trained DocRE model, or first learn logical rules from annotated data and then inject the learnt rules into a DocRE model using an auxiliary training objective. However, these learning pipelines may suffer from the issue of error propagation. To mitigate this issue, we propose Joint Modeling Relation extraction and Logical rules, or JMRL for short, a novel rule-based framework that jointly learns both a DocRE model and logical rules in an end-to-end fashion. Specifically, we parameterize a rule reasoning module in JMRL to simulate the inference of logical rules, thereby explicitly modeling the reasoning process. We also introduce an auxiliary loss and a residual connection mechanism in JMRL to better reconcile the DocRE model and the rule reasoning module. Experimental results on four benchmark datasets demonstrate that our proposed JMRL framework is consistently superior to existing rule-based frameworks, improving five baseline models for DocRE by a significant margin.
[ "Qi, Kunxun", "Du, Jianfeng", "Wan, Hai" ]
End-to-end Learning of Logical Rules for Enhancing Document-level Relation Extraction
acl-long.391
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.391/
[]
[]
[]
0
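To make the acl-long.391 abstract's "rule reasoning module" concrete, the toy below scores a composition rule r1(x,y) AND r2(y,z) -> r3(x,z) with a product t-norm over relation probabilities, so rule satisfaction is differentiable and can drive an auxiliary loss. The t-norm choice and loss form are assumptions, not the paper's parameterization.

```python
import torch

def rule_body_score(p_r1_xy: torch.Tensor, p_r2_yz: torch.Tensor) -> torch.Tensor:
    # Product t-norm: soft truth value of the rule body r1(x,y) AND r2(y,z).
    return p_r1_xy * p_r2_yz

# Toy relation probabilities from a DocRE model (scalars for illustration).
p_r1 = torch.tensor(0.9, requires_grad=True)  # e.g. P(head_of(x, y))
p_r2 = torch.tensor(0.8, requires_grad=True)  # e.g. P(part_of(y, z))
p_r3 = torch.tensor(0.4, requires_grad=True)  # model's current P(related(x, z))

# Auxiliary loss: the head should be at least as true as the body, so a
# confident body with a doubtful head is penalized.
body = rule_body_score(p_r1, p_r2)
aux_loss = torch.relu(body - p_r3) ** 2
aux_loss.backward()
print(float(aux_loss), float(p_r3.grad))  # negative grad: descent raises p_r3
```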
https://aclanthology.org/2024.acl-long.392.bib
@inproceedings{fang-etal-2024-achieve, title = "Can We Achieve High-quality Direct Speech-to-Speech Translation without Parallel Speech Data?", author = "Fang, Qingkai and Zhang, Shaolei and Ma, Zhengrui and Zhang, Min and Feng, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.392", pages = "7264--7277", abstract = "Recently proposed two-pass direct speech-to-speech translation (S2ST) models decompose the task into speech-to-text translation (S2TT) and text-to-speech (TTS) within an end-to-end model, yielding promising results. However, the training of these models still relies on parallel speech data, which is extremely challenging to collect. In contrast, S2TT and TTS have accumulated a large amount of data and pretrained models, which have not been fully utilized in the development of S2ST models. Inspired by this, in this paper, we first introduce a composite S2ST model named ComSpeech, which can seamlessly integrate any pretrained S2TT and TTS models into a direct S2ST model. Furthermore, to eliminate the reliance on parallel speech data, we propose a novel training method ComSpeech-ZS that solely utilizes S2TT and TTS data. It aligns representations in the latent space through contrastive learning, enabling the speech synthesis capability learned from the TTS data to generalize to S2ST in a zero-shot manner. Experimental results on the CVSS dataset show that when the parallel speech data is available, ComSpeech surpasses previous two-pass models like UnitY and Translatotron 2 in both translation quality and decoding speed. When there is no parallel speech data, ComSpeech-ZS lags behind by only 0.7 ASR-BLEU and outperforms the cascaded models.", }
Recently proposed two-pass direct speech-to-speech translation (S2ST) models decompose the task into speech-to-text translation (S2TT) and text-to-speech (TTS) within an end-to-end model, yielding promising results. However, the training of these models still relies on parallel speech data, which is extremely challenging to collect. In contrast, S2TT and TTS have accumulated a large amount of data and pretrained models, which have not been fully utilized in the development of S2ST models. Inspired by this, in this paper, we first introduce a composite S2ST model named ComSpeech, which can seamlessly integrate any pretrained S2TT and TTS models into a direct S2ST model. Furthermore, to eliminate the reliance on parallel speech data, we propose a novel training method ComSpeech-ZS that solely utilizes S2TT and TTS data. It aligns representations in the latent space through contrastive learning, enabling the speech synthesis capability learned from the TTS data to generalize to S2ST in a zero-shot manner. Experimental results on the CVSS dataset show that when the parallel speech data is available, ComSpeech surpasses previous two-pass models like UnitY and Translatotron 2 in both translation quality and decoding speed. When there is no parallel speech data, ComSpeech-ZS lags behind by only 0.7 ASR-BLEU and outperforms the cascaded models.
[ "Fang, Qingkai", "Zhang, Shaolei", "Ma, Zhengrui", "Zhang, Min", "Feng, Yang" ]
Can We Achieve High-quality Direct Speech-to-Speech Translation without Parallel Speech Data?
acl-long.392
Poster
2406.07289
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.392/
[]
[]
[]
0
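The acl-long.392 abstract above attributes ComSpeech-ZS's zero-shot ability to contrastive alignment of latent representations. A standard symmetric InfoNCE loss is one minimal way to write that alignment down; the pooling, dimensions, and temperature here are assumptions rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(s2tt_h: torch.Tensor,
                               tts_h: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """s2tt_h, tts_h: (batch, dim) pooled representations of the same text."""
    a = F.normalize(s2tt_h, dim=-1)
    b = F.normalize(tts_h, dim=-1)
    logits = a @ b.t() / temperature       # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0))      # the i-th pair is the positive
    # Symmetric InfoNCE: each side must pick its partner among in-batch negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

s2tt = torch.randn(8, 512, requires_grad=True)  # stand-in S2TT decoder states
tts = torch.randn(8, 512, requires_grad=True)   # stand-in TTS text-encoder states
loss = contrastive_alignment_loss(s2tt, tts)
loss.backward()
print(float(loss))
```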
https://aclanthology.org/2024.acl-long.393.bib
@inproceedings{wang-etal-2024-enhancing-eeg, title = "Enhancing {EEG}-to-Text Decoding through Transferable Representations from Pre-trained Contrastive {EEG}-Text Masked Autoencoder", author = "Wang, Jiaqi and Song, Zhenxi and Ma, Zhengyu and Qiu, Xipeng and Zhang, Min and Zhang, Zhiguo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.393", pages = "7278--7292", abstract = "Reconstructing natural language from non-invasive electroencephalography (EEG) holds great promise as a language decoding technology for brain-computer interfaces (BCIs). However, EEG-based language decoding is still in its nascent stages, facing several technical issues such as: 1) Absence of a hybrid strategy that can effectively integrate cross-modality (between EEG and text) self-learning with intra-modality self-reconstruction of EEG features or textual sequences; 2) Under-utilization of large language models (LLMs) to enhance EEG-based language decoding. To address above issues, we propose the Contrastive EEG-Text Masked Autoencoder (CET-MAE), a novel model that orchestrates compound self-supervised learning across and within EEG and text through a dedicated multi-stream encoder. Furthermore, we develop a framework called E2T-PTR (EEG-to-Text decoding using Pretrained Transferable Representations), which leverages pre-trained modules alongside the EEG stream from CET-MAE and further enables an LLM (specifically BART) to decode text from EEG sequences. Comprehensive experiments conducted on the popular text-evoked EEG database, ZuCo, demonstrate the superiority of E2T-PTR, which outperforms the baseline framework in ROUGE-1 F1 and BLEU-4 scores by 8.34{\%} and 32.21{\%}, respectively. Our proposed pre-trained EEG-Text model shows the potential to improve downstream tasks involving EEG and text. This opens up promising avenues for its application in inner speech BCI paradigms, meriting further investigation.", }
Reconstructing natural language from non-invasive electroencephalography (EEG) holds great promise as a language decoding technology for brain-computer interfaces (BCIs). However, EEG-based language decoding is still in its nascent stages, facing several technical issues such as: 1) Absence of a hybrid strategy that can effectively integrate cross-modality (between EEG and text) self-learning with intra-modality self-reconstruction of EEG features or textual sequences; 2) Under-utilization of large language models (LLMs) to enhance EEG-based language decoding. To address the above issues, we propose the Contrastive EEG-Text Masked Autoencoder (CET-MAE), a novel model that orchestrates compound self-supervised learning across and within EEG and text through a dedicated multi-stream encoder. Furthermore, we develop a framework called E2T-PTR (EEG-to-Text decoding using Pretrained Transferable Representations), which leverages pre-trained modules alongside the EEG stream from CET-MAE and further enables an LLM (specifically BART) to decode text from EEG sequences. Comprehensive experiments conducted on the popular text-evoked EEG database, ZuCo, demonstrate the superiority of E2T-PTR, which outperforms the baseline framework in ROUGE-1 F1 and BLEU-4 scores by 8.34% and 32.21%, respectively. Our proposed pre-trained EEG-Text model shows the potential to improve downstream tasks involving EEG and text. This opens up promising avenues for its application in inner speech BCI paradigms, meriting further investigation.
[ "Wang, Jiaqi", "Song, Zhenxi", "Ma, Zhengyu", "Qiu, Xipeng", "Zhang, Min", "Zhang, Zhiguo" ]
Enhancing EEG-to-Text Decoding through Transferable Representations from Pre-trained Contrastive EEG-Text Masked Autoencoder
acl-long.393
Poster
2402.17433
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.393/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.394.bib
@inproceedings{zou-etal-2024-cqil, title = "{CQIL}: Inference Latency Optimization with Concurrent Computation of Quasi-Independent Layers", author = "Zou, Longwei and Wang, Qingyang and Zhao, Han and Jiangangkong, Jiangangkong and Yang, Yi and Deng, Yangdong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.394", pages = "7293--7307", abstract = "The fast-growing large scale language models are delivering unprecedented performance on almost all natural language processing tasks. However, the effectiveness of large language models are reliant on an exponentially increasing number of parameters. The overwhelming computation complexity incurs a high inference latency that negatively affects user experience. Existing methods to improve inference efficiency, such as tensor parallelism and quantization, target to reduce per-layer computing latency, yet overlook the cumulative latency due to the number of layers. Recent works on reducing the cumulative latency through layer removing, however, lead to significant performance drop. Motivated by the similarity of inputs among adjacent layers, we propose to identify quasi-independent layers, which can be concurrently computed to significantly decrease inference latency. We also introduce a bypassing technique to mitigate the effect of information loss. Empirical experiments of the proposed approach on the LLaMA models confirm that Concurrent Computation of Quasi-Independent Layers (CQIL) can reduce latency by up to 48.3{\%} on LLaMA-33B, while maintaining a close level of performance.", }
Fast-growing large-scale language models are delivering unprecedented performance on almost all natural language processing tasks. However, the effectiveness of large language models is reliant on an exponentially increasing number of parameters. The overwhelming computational complexity incurs a high inference latency that negatively affects user experience. Existing methods to improve inference efficiency, such as tensor parallelism and quantization, aim to reduce per-layer computing latency, yet overlook the cumulative latency due to the number of layers. Recent works on reducing the cumulative latency through layer removal, however, lead to a significant performance drop. Motivated by the similarity of inputs among adjacent layers, we propose to identify quasi-independent layers, which can be concurrently computed to significantly decrease inference latency. We also introduce a bypassing technique to mitigate the effect of information loss. Empirical experiments of the proposed approach on the LLaMA models confirm that Concurrent Computation of Quasi-Independent Layers (CQIL) can reduce latency by up to 48.3% on LLaMA-33B, while maintaining a close level of performance.
[ "Zou, Longwei", "Wang, Qingyang", "Zhao, Han", "Jiangangkong, Jiangangkong", "Yang, Yi", "Deng, Yangdong" ]
CQIL: Inference Latency Optimization with Concurrent Computation of Quasi-Independent Layers
acl-long.394
Poster
2404.06709
[ "https://github.com/photooon/cqil" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.394/
[]
[]
[]
0
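The core approximation behind CQIL (acl-long.394 above) is that adjacent layers with near-identical inputs can be computed concurrently: the sequential residual update x -> x + f2(x + f1(x)) is replaced by x -> x + f1(x) + f2(x). The toy below only checks that arithmetic on random small-residual blocks; real latency gains require placing f1 and f2 on different devices, and the paper's bypassing technique is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
f1 = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
f2 = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))

# Shrink the residual branches, mimicking trained LLMs where per-layer
# updates are small relative to the residual stream.
for block in (f1, f2):
    for p in block.parameters():
        p.data *= 0.1

x = torch.randn(4, 64)
h = x + f1(x)                    # sequential: layer 2 sees layer 1's output
sequential = h + f2(h)
concurrent = x + f1(x) + f2(x)   # quasi-independent: both layers see x

rel_err = (sequential - concurrent).norm() / sequential.norm()
print(f"relative error of the concurrent approximation: {rel_err:.4f}")
```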
https://aclanthology.org/2024.acl-long.395.bib
@inproceedings{long-etal-2024-prompt, title = "Prompt Optimization via Adversarial In-Context Learning", author = "Long, Do and Zhao, Yiran and Brown, Hannah and Xie, Yuxi and Zhao, James and Chen, Nancy and Kawaguchi, Kenji and Shieh, Michael and He, Junxian", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.395", pages = "7308--7327", abstract = "We propose a new method, Adversarial In-Context Learning (adv-ICL), to optimize prompts for in-context learning (ICL). Inspired by adversarial learning, adv-ICL is implemented as a two-player game between a generator and discriminator, with LLMs acting as both. In each round, given an input prefixed by task instructions and several exemplars, the generator produces an output. The discriminator then classifies the generator{'}s input-output pair as model-generated or real data. Based on the discriminator{'}s loss, a prompt modifier LLM proposes possible edits to the generator and discriminator prompts, and the edits that most improve the adversarial loss are selected. We show that applying adv-ICL results in significant improvements over state-of-the-art prompt optimization techniques for both open and closed-source models on 13 generation and classification tasks including summarization, arithmetic reasoning, machine translation, data-to-text generation, and the MMLU and big-bench hard benchmarks. In addition, our method is computationally efficient, easily extensible to other LLMs and tasks, and effective in low-resource settings.", }
We propose a new method, Adversarial In-Context Learning (adv-ICL), to optimize prompts for in-context learning (ICL). Inspired by adversarial learning, adv-ICL is implemented as a two-player game between a generator and discriminator, with LLMs acting as both. In each round, given an input prefixed by task instructions and several exemplars, the generator produces an output. The discriminator then classifies the generator's input-output pair as model-generated or real data. Based on the discriminator's loss, a prompt modifier LLM proposes possible edits to the generator and discriminator prompts, and the edits that most improve the adversarial loss are selected. We show that applying adv-ICL results in significant improvements over state-of-the-art prompt optimization techniques for both open and closed-source models on 13 generation and classification tasks including summarization, arithmetic reasoning, machine translation, data-to-text generation, and the MMLU and big-bench hard benchmarks. In addition, our method is computationally efficient, easily extensible to other LLMs and tasks, and effective in low-resource settings.
[ "Long, Do", "Zhao, Yiran", "Brown, Hannah", "Xie, Yuxi", "Zhao, James", "Chen, Nancy", "Kawaguchi, Kenji", "Shieh, Michael", "He, Junxian" ]
Prompt Optimization via Adversarial In-Context Learning
acl-long.395
Oral
2312.02614
[ "https://github.com/zhaoyiran924/adv-in-context-learning" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.395/
[]
[]
[]
0
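Since the acl-long.395 abstract above spells out adv-ICL's two-player game, here is a minimal sketch of one optimization round, assuming a placeholder `call_llm` and a crude fool-rate in place of the paper's adversarial loss; prompts and selection heuristics are invented for illustration.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (an assumption, not the paper's API)."""
    raise NotImplementedError

def fool_rate(g_prompt: str, d_prompt: str, x: str, real_y: str) -> float:
    """Fraction of (real, generated) outputs the discriminator misclassifies."""
    fake_y = call_llm(f"{g_prompt}\nInput: {x}\nOutput:")
    wrong = 0
    for truth, y in (("real", real_y), ("generated", fake_y)):
        verdict = call_llm(f"{d_prompt}\nInput: {x}\nOutput: {y}\nReal or generated?")
        wrong += ("real" in verdict.lower()) != (truth == "real")
    return wrong / 2

def adv_icl_round(g_prompt: str, d_prompt: str, x: str, real_y: str, k: int = 3):
    # The prompt-modifier LLM proposes k edits per player; the generator keeps
    # the edit that fools the discriminator most, the discriminator the one
    # that gets fooled least (a minimax step).
    g_edits = [call_llm(f"Propose an improved variant of this prompt:\n{g_prompt}") for _ in range(k)]
    g_prompt = max(g_edits + [g_prompt], key=lambda g: fool_rate(g, d_prompt, x, real_y))
    d_edits = [call_llm(f"Propose an improved variant of this prompt:\n{d_prompt}") for _ in range(k)]
    d_prompt = min(d_edits + [d_prompt], key=lambda d: fool_rate(g_prompt, d, x, real_y))
    return g_prompt, d_prompt
```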
https://aclanthology.org/2024.acl-long.396.bib
@inproceedings{wang-etal-2024-streamvoice, title = "{S}tream{V}oice: Streamable Context-Aware Language Modeling for Real-time Zero-Shot Voice Conversion", author = "Wang, Zhichao and Chen, Yuanzhe and Wang, Xinsheng and Xie, Lei and Wang, Yuping", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.396", pages = "7328--7338", abstract = "Recent language model (LM) advancements have showcased impressive zero-shot voice conversion (VC) performance. However, existing LM-based VC models usually apply offline conversion from source semantics to acoustic features, demanding the complete source speech and limiting their deployment to real-time applications. In this paper, we introduce StreamVoice, a novel streaming LM-based model for zero-shot VC, facilitating real-time conversion given arbitrary speaker prompts and source speech. Specifically, to enable streaming capability, StreamVoice employs a fully causal context-aware LM with a temporal-independent acoustic predictor, while alternately processing semantic and acoustic features at each time step of autoregression which eliminates the dependence on complete source speech. To address the potential performance degradation from the incomplete context in streaming processing, we enhance the context-awareness of the LM through two strategies: 1) teacher-guided context foresight, using a teacher model to summarize the present and future semantic context during training to guide the model{'}s forecasting for missing context; 2) semantic masking strategy, promoting acoustic prediction from preceding corrupted semantic and acoustic input, enhancing context-learning ability. Notably, StreamVoice is the first LM-based streaming zero-shot VC model without any future look-ahead. Experiments demonstrate StreamVoice{'}s streaming conversion capability while achieving zero-shot performance comparable to non-streaming VC systems.", }
Recent language model (LM) advancements have showcased impressive zero-shot voice conversion (VC) performance. However, existing LM-based VC models usually apply offline conversion from source semantics to acoustic features, demanding the complete source speech and limiting their deployment to real-time applications. In this paper, we introduce StreamVoice, a novel streaming LM-based model for zero-shot VC, facilitating real-time conversion given arbitrary speaker prompts and source speech. Specifically, to enable streaming capability, StreamVoice employs a fully causal context-aware LM with a temporal-independent acoustic predictor, while alternately processing semantic and acoustic features at each time step of autoregression, which eliminates the dependence on complete source speech. To address the potential performance degradation from the incomplete context in streaming processing, we enhance the context-awareness of the LM through two strategies: 1) teacher-guided context foresight, using a teacher model to summarize the present and future semantic context during training to guide the model's forecasting for missing context; 2) semantic masking strategy, promoting acoustic prediction from preceding corrupted semantic and acoustic input, enhancing context-learning ability. Notably, StreamVoice is the first LM-based streaming zero-shot VC model without any future look-ahead. Experiments demonstrate StreamVoice's streaming conversion capability while achieving zero-shot performance comparable to non-streaming VC systems.
[ "Wang, Zhichao", "Chen, Yuanzhe", "Wang, Xinsheng", "Xie, Lei", "Wang, Yuping" ]
StreamVoice: Streamable Context-Aware Language Modeling for Real-time Zero-Shot Voice Conversion
acl-long.396
Oral
2401.11053
[ "" ]
https://huggingface.co/papers/2401.11053
0
8
1
7
https://aclanthology.org/2024.acl-long.396/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.397.bib
@inproceedings{shi-etal-2024-generate, title = "Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering", author = "Shi, Zhengliang and Zhang, Shuo and Sun, Weiwei and Gao, Shen and Ren, Pengjie and Chen, Zhumin and Ren, Zhaochun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.397", pages = "7339--7353", abstract = "Multi-Hop Question Answering (MHQA) task presents a significant challenge for large language models (LLMs) due to the intensive knowledge required. Current solutions, like Retrieval-Augmented Generation, typically retrieve potential documents from an external corpus to read an answer. However, the performance of this retrieve-then-read paradigm is constrained by the retriever and the inevitable noise in the retrieved documents. To mitigate these challenges, we introduce a novel generate-then-ground (GenGround) framework, synergizing the parametric knowledge of LLMs and external documents to solve a multi-hop question. GenGround empowers LLMs to alternate two phases until the final answer is derived: (1) formulate a simpler, single-hop question and directly generate the answer; (2) ground the question-answer pair into retrieved documents, amending any wrong predictions in the answer. We also propose an instructional grounding distillation method to generalize our method into smaller models. Extensive experiments conducted on four datasets illustrate the superiority of our method. To further facilitate future research, we have collected a dataset that traces the reasoning process.", }
The Multi-Hop Question Answering (MHQA) task presents a significant challenge for large language models (LLMs) due to the intensive knowledge required. Current solutions, like Retrieval-Augmented Generation, typically retrieve potential documents from an external corpus to read an answer. However, the performance of this retrieve-then-read paradigm is constrained by the retriever and the inevitable noise in the retrieved documents. To mitigate these challenges, we introduce a novel generate-then-ground (GenGround) framework, synergizing the parametric knowledge of LLMs and external documents to solve a multi-hop question. GenGround empowers LLMs to alternate two phases until the final answer is derived: (1) formulate a simpler, single-hop question and directly generate the answer; (2) ground the question-answer pair into retrieved documents, amending any wrong predictions in the answer. We also propose an instructional grounding distillation method to generalize our method into smaller models. Extensive experiments conducted on four datasets illustrate the superiority of our method. To further facilitate future research, we have collected a dataset that traces the reasoning process.
[ "Shi, Zhengliang", "Zhang, Shuo", "Sun, Weiwei", "Gao, Shen", "Ren, Pengjie", "Chen, Zhumin", "Ren, Zhaochun" ]
Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering
acl-long.397
Poster
2406.14891
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.397/
[]
[]
[]
0
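The acl-long.397 abstract above alternates two phases (generate a single-hop answer, then ground it in retrieved documents). A minimal sketch of that loop follows; `call_llm`, `retrieve`, the FINAL: convention, and the hop budget are all assumptions for illustration.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (an assumption, not the paper's API)."""
    raise NotImplementedError

def retrieve(query: str, k: int = 3) -> list[str]:
    """Placeholder retriever over an external corpus (an assumption)."""
    raise NotImplementedError

def gen_ground(question: str, max_hops: int = 4) -> str:
    evidence: list[str] = []
    for _ in range(max_hops):
        # Phase 1 (generate): pose the next single-hop sub-question and answer
        # it directly from parametric knowledge.
        step = call_llm(
            f"Question: {question}\nKnown so far: {evidence}\n"
            "State the next single-hop sub-question and your best answer, "
            "or 'FINAL: <answer>' if you can finish."
        )
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        # Phase 2 (ground): check the claim against retrieved documents and
        # amend any wrong prediction before keeping it.
        docs = "\n".join(retrieve(step))
        evidence.append(call_llm(
            f"Docs:\n{docs}\n\nClaim: {step}\n"
            "Correct the claim if the docs contradict it, then restate it."
        ))
    return call_llm(f"Question: {question}\nEvidence: {evidence}\nFinal answer:")
```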
https://aclanthology.org/2024.acl-long.398.bib
@inproceedings{voas-etal-2024-multimodal, title = "Multimodal Contextualized Semantic Parsing from Speech", author = "Voas, Jordan and Harwath, David and Mooney, Ray", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.398", pages = "7354--7369", abstract = "We introduce Semantic Parsing in Contextual Environments (SPICE), a task designed to enhance artificial agents{'} contextual awareness by integrating multimodal inputs with prior contexts. SPICE goes beyond traditional semantic parsing by offering a structured, interpretable framework for dynamically updating an agent{'}s knowledge with new information, mirroring the complexity of human communication. We develop the VG-SPICE dataset, crafted to challenge agents with visual scene graph construction from spoken conversational exchanges, highlighting speech and visual data integration. We also present the Audio-Vision Dialogue Scene Parser (AViD-SP) developed for use on VG-SPICE. These innovations aim to improve multimodal information processing and integration. Both the VG-SPICE dataset and the AViD-SP model are publicly available.", }
We introduce Semantic Parsing in Contextual Environments (SPICE), a task designed to enhance artificial agents{'} contextual awareness by integrating multimodal inputs with prior contexts. SPICE goes beyond traditional semantic parsing by offering a structured, interpretable framework for dynamically updating an agent{'}s knowledge with new information, mirroring the complexity of human communication. We develop the VG-SPICE dataset, crafted to challenge agents with visual scene graph construction from spoken conversational exchanges, highlighting speech and visual data integration. We also present the Audio-Vision Dialogue Scene Parser (AViD-SP) developed for use on VG-SPICE. These innovations aim to improve multimodal information processing and integration. Both the VG-SPICE dataset and the AViD-SP model are publicly available.
[ "Voas, Jordan", "Harwath, David", "Mooney, Ray" ]
Multimodal Contextualized Semantic Parsing from Speech
acl-long.398
Poster
2406.06438
[ "https://github.com/jvoas655/VG-SPICE" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.398/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.399.bib
@inproceedings{salemi-etal-2024-lamp, title = "{L}a{MP}: When Large Language Models Meet Personalization", author = "Salemi, Alireza and Mysore, Sheshera and Bendersky, Michael and Zamani, Hamed", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.399", pages = "7370--7392", abstract = "This paper highlights the importance of personalization in large language models and introduces the LaMP benchmark {---} a novel benchmark for training and evaluating language models for producing personalized outputs. LaMP offers a comprehensive evaluation framework with diverse language tasks and multiple entries for each user profile. It consists of seven personalized tasks, spanning three text classification and four text generation tasks. We additionally propose two retrieval augmentation approaches that retrieve personal items from each user profile for personalizing language model outputs. To this aim, we study various retrieval models, including term matching, semantic matching, and time-aware methods. Extensive experiments on LaMP for zero-shot and fine-tuned language models demonstrate the efficacy of the proposed retrieval augmentation approach and highlight the impact of personalization in various natural language tasks.", }
This paper highlights the importance of personalization in large language models and introduces the LaMP benchmark, a novel benchmark for training and evaluating language models for producing personalized outputs. LaMP offers a comprehensive evaluation framework with diverse language tasks and multiple entries for each user profile. It consists of seven personalized tasks, spanning three text classification and four text generation tasks. We additionally propose two retrieval augmentation approaches that retrieve personal items from each user profile for personalizing language model outputs. To this aim, we study various retrieval models, including term matching, semantic matching, and time-aware methods. Extensive experiments on LaMP for zero-shot and fine-tuned language models demonstrate the efficacy of the proposed retrieval augmentation approach and highlight the impact of personalization in various natural language tasks.
[ "Salemi, Alireza", "Mysore, Sheshera", "Bendersky, Michael", "Zamani, Hamed" ]
LaMP: When Large Language Models Meet Personalization
acl-long.399
Poster
2304.11406
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.399/
[]
[]
[]
0
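The acl-long.399 abstract above mentions term-matching retrieval over a user profile as one personalization strategy. The sketch below implements only that simplest variant with a toy overlap score and an invented prompt template; the paper's semantic and time-aware retrievers are not shown.

```python
def term_overlap(query: str, item: str) -> float:
    # Toy term-matching score: fraction of query terms present in the item.
    q, d = set(query.lower().split()), set(item.lower().split())
    return len(q & d) / (len(q) or 1)

def personalize_prompt(task_input: str, profile: list[str], k: int = 2) -> str:
    # Retrieve the k profile items most similar to the current input and
    # prepend them so the language model can imitate the user.
    top = sorted(profile, key=lambda item: term_overlap(task_input, item), reverse=True)[:k]
    context = "\n".join(f"- {item}" for item in top)
    return (f"Past items by this user:\n{context}\n\n"
            f"Following the user's style and interests, complete the task:\n{task_input}")

profile = ["review: the pacing was glacial but the score soared",
           "review: crisp dialogue, soggy third act"]
print(personalize_prompt("Write a review title for a slow but beautiful film.", profile))
```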
https://aclanthology.org/2024.acl-long.400.bib
@inproceedings{lucy-etal-2024-aboutme, title = "{A}bout{M}e: Using Self-Descriptions in Webpages to Document the Effects of {E}nglish Pretraining Data Filters", author = "Lucy, Li and Gururangan, Suchin and Soldaini, Luca and Strubell, Emma and Bamman, David and Klein, Lauren and Dodge, Jesse", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.400", pages = "7393--7420", abstract = "Large language models{'} (LLMs) abilities are drawn from their pretraining data, and model development begins with data curation. However, decisions around what data is retained or removed during this initial stage are under-scrutinized. In our work, we ground web text, which is a popular pretraining data source, to its social and geographic contexts. We create a new dataset of 10.3 million self-descriptions of website creators, and extract information about who they are and where they are from: their topical interests, social roles, and geographic affiliations. Then, we conduct the first study investigating how ten {``}quality{''} and English language identification (langID) filters affect webpages that vary along these social dimensions. Our experiments illuminate a range of implicit preferences in data curation: we show that some quality classifiers act like topical domain filters, and langID can overlook English content from some regions of the world. Overall, we hope that our work will encourage a new line of research on pretraining data curation practices and its social implications.", }
Large language models' (LLMs) abilities are drawn from their pretraining data, and model development begins with data curation. However, decisions around what data is retained or removed during this initial stage are under-scrutinized. In our work, we ground web text, which is a popular pretraining data source, to its social and geographic contexts. We create a new dataset of 10.3 million self-descriptions of website creators, and extract information about who they are and where they are from: their topical interests, social roles, and geographic affiliations. Then, we conduct the first study investigating how ten "quality" and English language identification (langID) filters affect webpages that vary along these social dimensions. Our experiments illuminate a range of implicit preferences in data curation: we show that some quality classifiers act like topical domain filters, and langID can overlook English content from some regions of the world. Overall, we hope that our work will encourage a new line of research on pretraining data curation practices and its social implications.
[ "Lucy, Li", "Gururangan, Suchin", "Soldaini, Luca", "Strubell, Emma", "Bamman, David", "Klein, Lauren", "Dodge, Jesse" ]
AboutMe: Using Self-Descriptions in Webpages to Document the Effects of English Pretraining Data Filters
acl-long.400
Poster
2401.06408
[ "https://github.com/lucy3/whos_filtered" ]
https://huggingface.co/papers/2401.06408
2
1
0
7
https://aclanthology.org/2024.acl-long.400/
[ "lucy3/roberta_social_roles" ]
[ "allenai/aboutme" ]
[]
1