Datasets:

column                       dtype            min   max
bibtex_url                   stringlengths    41    50
bibtext                      stringlengths    693   2.88k
abstract                     stringlengths    0     2k
authors                      sequencelengths  1     45
title                        stringlengths    21    199
id                           stringlengths    7     16
type                         stringclasses    2 values
arxiv_id                     stringlengths    0     10
GitHub                       sequencelengths  1     1
paper_page                   stringlengths    0     40
n_linked_authors             int64            -1    28
upvotes                      int64            -1    255
num_comments                 int64            -1    23
n_authors                    int64            -1    35
proceedings                  stringlengths    38    47
Models                       sequencelengths  0     57
Datasets                     sequencelengths  0     19
Spaces                       sequencelengths  0     100
paper_page_exists_pre_conf   int64            0     1
https://aclanthology.org/2024.acl-long.401.bib
@inproceedings{bai-etal-2024-mt, title = "{MT}-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues", author = "Bai, Ge and Liu, Jie and Bu, Xingyuan and He, Yancheng and Liu, Jiaheng and Zhou, Zhanhui and Lin, Zhuoran and Su, Wenbo and Ge, Tiezheng and Zheng, Bo and Ouyang, Wanli", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.401", pages = "7421--7454", abstract = "The advent of Large Language Models (LLMs) has drastically enhanced dialogue systems. However, comprehensively evaluating the dialogue abilities of LLMs remains a challenge. Previous benchmarks have primarily focused on single-turn dialogues or provided coarse-grained and incomplete assessments of multi-turn dialogues, overlooking the complexity and fine-grained nuances of real-life dialogues. To address this issue, we introduce MT-Bench-101, specifically designed to evaluate the fine-grained abilities of LLMs in multi-turn dialogues. By conducting a detailed analysis of real multi-turn dialogue data, we construct a three-tier hierarchical ability taxonomy comprising 4208 turns across 1388 multi-turn dialogues in 13 distinct tasks. We then evaluate 21 popular LLMs based on MT-Bench-101, conducting comprehensive analyses from both ability and task perspectives and observing differing trends in LLMs performance across dialogue turns within various tasks. Further analysis indicates that neither utilizing common alignment techniques nor chat-specific designs has led to obvious enhancements in the multi-turn abilities of LLMs. Extensive case studies suggest that our designed tasks accurately assess the corresponding multi-turn abilities. The data and code are available at https://github.com/mtbench101/mt-bench-101.", }
The advent of Large Language Models (LLMs) has drastically enhanced dialogue systems. However, comprehensively evaluating the dialogue abilities of LLMs remains a challenge. Previous benchmarks have primarily focused on single-turn dialogues or provided coarse-grained and incomplete assessments of multi-turn dialogues, overlooking the complexity and fine-grained nuances of real-life dialogues. To address this issue, we introduce MT-Bench-101, specifically designed to evaluate the fine-grained abilities of LLMs in multi-turn dialogues. By conducting a detailed analysis of real multi-turn dialogue data, we construct a three-tier hierarchical ability taxonomy comprising 4208 turns across 1388 multi-turn dialogues in 13 distinct tasks. We then evaluate 21 popular LLMs based on MT-Bench-101, conducting comprehensive analyses from both ability and task perspectives and observing differing trends in LLMs' performance across dialogue turns within various tasks. Further analysis indicates that neither utilizing common alignment techniques nor chat-specific designs has led to obvious enhancements in the multi-turn abilities of LLMs. Extensive case studies suggest that our designed tasks accurately assess the corresponding multi-turn abilities. The data and code are available at https://github.com/mtbench101/mt-bench-101.
[ "Bai, Ge", "Liu, Jie", "Bu, Xingyuan", "He, Yancheng", "Liu, Jiaheng", "Zhou, Zhanhui", "Lin, Zhuoran", "Su, Wenbo", "Ge, Tiezheng", "Zheng, Bo", "Ouyang, Wanli" ]
MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues
acl-long.401
Poster
2402.14762
[ "https://github.com/mtbench101/mt-bench-101" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.401/
[]
[]
[]
0
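The MT-Bench-101 record above evaluates models turn by turn across a task taxonomy. As a rough illustration of that evaluation loop (not the paper's actual harness), here is a minimal sketch; the dialogue format, the 1-10 `judge` rubric, and the `model_reply` callable are all assumptions:

```python
# Hedged sketch: turn-by-turn scoring of a multi-turn dialogue.
from statistics import mean

def judge(task: str, history: list[dict], response: str) -> float:
    """Stand-in for an LLM judge returning a 1-10 score (assumed rubric)."""
    return 7.0  # placeholder

def evaluate_dialogue(model_reply, dialogue: dict) -> list[float]:
    """Score a model on every turn of one multi-turn dialogue."""
    history, scores = [], []
    for user_turn in dialogue["turns"]:
        history.append({"role": "user", "content": user_turn})
        response = model_reply(history)  # model under test
        scores.append(judge(dialogue["task"], history, response))
        history.append({"role": "assistant", "content": response})
    return scores

# Example: a trivial echo model evaluated on a 2-turn dialogue.
dialogue = {"task": "context_memory", "turns": ["Hi, I'm Ada.", "What is my name?"]}
print(mean(evaluate_dialogue(lambda h: "...", dialogue)))
```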
https://aclanthology.org/2024.acl-long.402.bib
@inproceedings{chen-etal-2024-efsa, title = "{EFSA}: Towards Event-Level Financial Sentiment Analysis", author = "Chen, Tianyu and Zhang, Yiming and Yu, Guoxin and Zhang, Dapeng and Zeng, Li and He, Qing and Ao, Xiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.402", pages = "7455--7467", abstract = "In this paper, we extend financial sentiment analysis (FSA) to event-level since events usually serve as the subject of the sentiment in financial text. Though extracting events from the financial text may be conducive to accurate sentiment predictions, it has specialized challenges due to the lengthy and discontinuity of events in a financial text. To this end, we reconceptualize the event extraction as a classification task by designing a categorization comprising coarse-grained and fine-grained event categories. Under this setting, we formulate the Event-Level Financial Sentiment Analysis(EFSA for short) task that outputs quintuples consisting of (company, industry, coarse-grained event, fine-grained event, sentiment) from financial text. A large-scale Chinese dataset containing 12,160 news articles and 13,725 quintuples is publicized as a brand new testbed for our task. A four-hop Chain-of-Thought LLM-based approach is devised for this task. Systematically investigations are conducted on our dataset, and the empirical results demonstrate the benchmarking scores of existing methods and our proposed method can reach the current state-of-the-art. Our dataset and framework implementation are available at https://github.com/cty1934/EFSA", }
In this paper, we extend financial sentiment analysis (FSA) to the event level, since events usually serve as the subject of the sentiment in financial text. Though extracting events from financial text may be conducive to accurate sentiment predictions, it poses specialized challenges due to the length and discontinuity of events in financial text. To this end, we reconceptualize event extraction as a classification task by designing a categorization comprising coarse-grained and fine-grained event categories. Under this setting, we formulate the Event-Level Financial Sentiment Analysis (EFSA for short) task, which outputs quintuples consisting of (company, industry, coarse-grained event, fine-grained event, sentiment) from financial text. A large-scale Chinese dataset containing 12,160 news articles and 13,725 quintuples is publicized as a brand-new testbed for our task. A four-hop Chain-of-Thought LLM-based approach is devised for this task. Systematic investigations are conducted on our dataset, and the empirical results establish benchmark scores for existing methods and show that our proposed method reaches the current state-of-the-art. Our dataset and framework implementation are available at https://github.com/cty1934/EFSA.
[ "Chen, Tianyu", "Zhang, Yiming", "Yu, Guoxin", "Zhang, Dapeng", "Zeng, Li", "He, Qing", "Ao, Xiang" ]
EFSA: Towards Event-Level Financial Sentiment Analysis
acl-long.402
Poster
2404.08681
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.402/
[]
[]
[]
0
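The EFSA record above defines a quintuple output and a four-hop Chain-of-Thought pipeline. A minimal sketch of how such a pipeline could be wired, assuming an `ask_llm` client, the hop wording, and a "coarse|fine" answer format, none of which are specified in the abstract:

```python
# Hedged sketch of the (company, industry, coarse event, fine event,
# sentiment) quintuple and a four-hop prompt chain.
from dataclasses import dataclass

@dataclass
class EFSAQuintuple:
    company: str
    industry: str
    coarse_event: str
    fine_event: str
    sentiment: str  # e.g. "positive" / "neutral" / "negative"

HOPS = [
    "Which company is the subject of this news? Text: {text}",
    "Given company {company}, which industry does it belong to?",
    "Classify the event: first a coarse-grained category, then a fine-grained one.",
    "Given the event, what is the sentiment toward {company}?",
]

def extract_quintuple(text: str, ask_llm) -> EFSAQuintuple:
    company = ask_llm(HOPS[0].format(text=text))
    industry = ask_llm(HOPS[1].format(company=company))
    coarse, fine = ask_llm(HOPS[2]).split("|")  # assumed "coarse|fine" reply
    sentiment = ask_llm(HOPS[3].format(company=company))
    return EFSAQuintuple(company, industry, coarse.strip(), fine.strip(), sentiment)
```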
https://aclanthology.org/2024.acl-long.403.bib
@inproceedings{wan-etal-2024-evidence, title = "What Evidence Do Language Models Find Convincing?", author = "Wan, Alexander and Wallace, Eric and Klein, Dan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.403", pages = "7468--7484", abstract = "Retrieval-augmented language models are being increasingly tasked with subjective, contentious, and conflicting queries such as {``}is aspartame linked to cancer{''}. To resolve these ambiguous queries, one must search through a large range of websites and consider {``}which, if any, of this evidence do I find convincing?{''}. In this work, we study how LLMs answer this question. In particular, we construct ConflictingQA, a dataset that pairs controversial queries with a series of real-world evidence documents that contain different facts (e.g., quantitative results), argument styles (e.g., appeals to authority), and answers (Yes or No). We use this dataset to perform sensitivity and counterfactual analyses to explore which text features most affect LLM predictions. Overall, we find that current models rely heavily on the relevance of a website to the query, while largely ignoring stylistic features that humans find important such as whether a text contains scientific references or is written with a neutral tone. Taken together, these results highlight the importance of RAG corpus quality (e.g., the need to filter misinformation), and possibly even a shift in how LLMs are trained to better align with human judgements.", }
Retrieval-augmented language models are being increasingly tasked with subjective, contentious, and conflicting queries such as "is aspartame linked to cancer". To resolve these ambiguous queries, one must search through a large range of websites and consider "which, if any, of this evidence do I find convincing?". In this work, we study how LLMs answer this question. In particular, we construct ConflictingQA, a dataset that pairs controversial queries with a series of real-world evidence documents that contain different facts (e.g., quantitative results), argument styles (e.g., appeals to authority), and answers (Yes or No). We use this dataset to perform sensitivity and counterfactual analyses to explore which text features most affect LLM predictions. Overall, we find that current models rely heavily on the relevance of a website to the query, while largely ignoring stylistic features that humans find important such as whether a text contains scientific references or is written with a neutral tone. Taken together, these results highlight the importance of RAG corpus quality (e.g., the need to filter misinformation), and possibly even a shift in how LLMs are trained to better align with human judgements.
[ "Wan, Alex", "er", "Wallace, Eric", "Klein, Dan" ]
What Evidence Do Language Models Find Convincing?
acl-long.403
Poster
2402.11782
[ "https://github.com/alexwan0/rag-convincingness" ]
https://huggingface.co/papers/2402.11782
0
2
0
3
https://aclanthology.org/2024.acl-long.403/
[]
[]
[]
1
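The ConflictingQA record above runs counterfactual analyses over text features. A minimal sketch of one such test, holding the query fixed, editing a single stylistic feature, and counting answer flips; the `add_references` edit and the `answer_yes_no` client are assumptions:

```python
# Hedged sketch of a counterfactual sensitivity test over evidence style.

def add_references(doc: str) -> str:
    """Minimal stylistic edit: append citation-like markers (assumption)."""
    return doc + " [1] [2] (peer-reviewed sources)"

def sensitivity(query: str, docs: list[str], answer_yes_no) -> float:
    """Fraction of documents whose edit flips the model's Yes/No answer."""
    flips = 0
    for doc in docs:
        base = answer_yes_no(query, doc)
        edited = answer_yes_no(query, add_references(doc))
        flips += base != edited
    return flips / len(docs)

# Toy answerer that says Yes iff the document carries a citation marker.
toy = lambda q, d: "[1]" in d
print(sensitivity("is aspartame linked to cancer?", ["doc one", "doc two"], toy))
# -> 1.0: every document flips once references are appended.
```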
https://aclanthology.org/2024.acl-long.404.bib
@inproceedings{ai-etal-2024-advancement, title = "Advancement in Graph Understanding: A Multimodal Benchmark and Fine-Tuning of Vision-Language Models", author = "Ai, Qihang and Li, Jiafan and Dai, Jincheng and Zhou, Jianwu and Liu, Lemao and Jiang, Haiyun and Shi, Shuming", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.404", pages = "7485--7501", abstract = "Graph data organizes complex relationships and interactions between objects, facilitating advanced analysis and decision-making across different fields. In this paper, we propose a new paradigm for interactive and instructional graph data understanding and reasoning.Instead of adopting complex graph neural models or heuristic graph-to-text instruction design, we leverage Vision-Language Models (VLMs) to encode the graph images with varying structures across different domains. This paper first evaluates the capabilities of public VLMs in graph learning from multiple aspects. Then it introduces a novel instruction-following dataset for multimodal graph understanding and reasoning in English and Chinese. Besides, by fine-tuning MiniGPT-4 and LLaVA on our dataset, we achieved an accuracy increase of 5{\%}-15{\%} compared to baseline models, with the best-performing model attaining scores comparable to Gemini in GPT-asissted Evaluation. This research not only showcases the potential of integrating VLMs with graph data but also opens new avenues for advancements in graph data understanding.", }
Graph data organizes complex relationships and interactions between objects, facilitating advanced analysis and decision-making across different fields. In this paper, we propose a new paradigm for interactive and instructional graph data understanding and reasoning. Instead of adopting complex graph neural models or heuristic graph-to-text instruction design, we leverage Vision-Language Models (VLMs) to encode the graph images with varying structures across different domains. This paper first evaluates the capabilities of public VLMs in graph learning from multiple aspects. Then it introduces a novel instruction-following dataset for multimodal graph understanding and reasoning in English and Chinese. Besides, by fine-tuning MiniGPT-4 and LLaVA on our dataset, we achieved an accuracy increase of 5%-15% compared to baseline models, with the best-performing model attaining scores comparable to Gemini in GPT-assisted Evaluation. This research not only showcases the potential of integrating VLMs with graph data but also opens new avenues for advancements in graph data understanding.
[ "Ai, Qihang", "Li, Jiafan", "Dai, Jincheng", "Zhou, Jianwu", "Liu, Lemao", "Jiang, Haiyun", "Shi, Shuming" ]
Advancement in Graph Understanding: A Multimodal Benchmark and Fine-Tuning of Vision-Language Models
acl-long.404
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.404/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.405.bib
@inproceedings{yoon-etal-2024-langbridge, title = "{L}ang{B}ridge: Multilingual Reasoning Without Multilingual Supervision", author = "Yoon, Dongkeun and Jang, Joel and Kim, Sungdong and Kim, Seungone and Shafayat, Sheikh and Seo, Minjoon", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.405", pages = "7502--7522", abstract = "We introduce LangBridge, a $\textit{zero-shot}$ approach to adapt language models for multilingual reasoning tasks without multilingual supervision. LangBridge operates by bridging two models, each specialized in different aspects: (1) one specialized in understanding multiple languages (e.g., mT5 encoder) and (2) one specialized in reasoning (e.g., MetaMath). LangBridge connects the two models by introducing minimal trainable parameters between them. Despite utilizing only English data for training, LangBridge considerably enhances the performance of language models on low-resource languages across mathematical reasoning, code completion, logical reasoning, and commonsense reasoning. Our analysis suggests that the efficacy of LangBridge stems from the language-agnostic characteristics of multilingual representations. We publicly release our code and models.", }
We introduce LangBridge, a zero-shot approach to adapt language models for multilingual reasoning tasks without multilingual supervision. LangBridge operates by bridging two models, each specialized in different aspects: (1) one specialized in understanding multiple languages (e.g., mT5 encoder) and (2) one specialized in reasoning (e.g., MetaMath). LangBridge connects the two models by introducing minimal trainable parameters between them. Despite utilizing only English data for training, LangBridge considerably enhances the performance of language models on low-resource languages across mathematical reasoning, code completion, logical reasoning, and commonsense reasoning. Our analysis suggests that the efficacy of LangBridge stems from the language-agnostic characteristics of multilingual representations. We publicly release our code and models.
[ "Yoon, Dongkeun", "Jang, Joel", "Kim, Sungdong", "Kim, Seungone", "Shafayat, Sheikh", "Seo, Minjoon" ]
LangBridge: Multilingual Reasoning Without Multilingual Supervision
acl-long.405
Poster
2401.10695
[ "https://github.com/kaistAI/LangBridge" ]
https://huggingface.co/papers/2401.10695
4
4
0
6
https://aclanthology.org/2024.acl-long.405/
[ "kaist-ai/llama2-langbridge-9b", "kaist-ai/langbridge_encoder_tokenizer", "kaist-ai/orca2-langbridge-9b", "kaist-ai/llemma-langbrige-9b", "kaist-ai/codellama-langbridge-15b", "kaist-ai/metamath-langbridge-9b", "kaist-ai/codellama-langbridge-9b", "kaist-ai/metamath-langbridge-15b", "kaist-ai/metamath-langbridge-20b", "kaist-ai/codellama-langbridge-20b", "kaist-ai/orca2-langbridge-20b", "kaist-ai/orca2-langbridge-15b" ]
[]
[ "kevinpro/Open-Multilingual-Reasoning-Leaderboard" ]
1
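The LangBridge record above connects a frozen multilingual encoder to a frozen reasoning LM through minimal trainable parameters. A minimal sketch of that bridging idea, assuming a single linear projection and illustrative dimensions (the paper's actual connector may differ):

```python
# Hedged sketch: a small trainable projection maps a frozen multilingual
# encoder's hidden states into the embedding space of a frozen reasoning LM.
import torch
import torch.nn as nn

class Bridge(nn.Module):
    def __init__(self, enc_dim: int = 1024, lm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(enc_dim, lm_dim)  # the only trainable part

    def forward(self, enc_hidden: torch.Tensor) -> torch.Tensor:
        # enc_hidden: (batch, seq, enc_dim) from e.g. an mT5 encoder
        return self.proj(enc_hidden)            # soft prompts for the LM

bridge = Bridge()
fake_encoder_out = torch.randn(2, 16, 1024)  # stands in for mT5 output
soft_prompts = bridge(fake_encoder_out)      # (2, 16, 4096)
# These would be prepended to the reasoning LM's input embeddings; only
# bridge.proj receives gradients while both backbone models stay frozen.
```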
https://aclanthology.org/2024.acl-long.406.bib
@inproceedings{wang-etal-2024-llms, title = "Can {LLM}s Reason with Rules? Logic Scaffolding for Stress-Testing and Improving {LLM}s", author = "Wang, Siyuan and Wei, Zhongyu and Choi, Yejin and Ren, Xiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.406", pages = "7523--7543", abstract = "Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks. However, their mastery of underlying inferential rules still falls short of human capabilities. To investigate this, we propose a logic scaffolding inferential rule generation framework, to construct an inferential rule base, ULogic, comprising both primitive and compositional rules across five domains. Our analysis of GPT-series models over a rule subset reveals significant gaps in LLMs{'} logic understanding compared to human performance, especially in compositional and structural complex rules with certain bias patterns. We further distill these rules into a smaller-scale inference engine for flexible rule generation and enhancing downstream reasoning. Through a multi-judger evaluation, our inference engine proves effective in generating accurate, complex and abstract conclusions and premises, and improve various commonsense reasoning tasks. Overall, our work sheds light on LLMs{'} limitations in grasping inferential rule and suggests ways to enhance their logical reasoning abilities .", }
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks. However, their mastery of underlying inferential rules still falls short of human capabilities. To investigate this, we propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic, comprising both primitive and compositional rules across five domains. Our analysis of GPT-series models over a rule subset reveals significant gaps in LLMs' logic understanding compared to human performance, especially in compositional and structurally complex rules with certain bias patterns. We further distill these rules into a smaller-scale inference engine for flexible rule generation and enhancing downstream reasoning. Through a multi-judger evaluation, our inference engine proves effective in generating accurate, complex and abstract conclusions and premises, and improves various commonsense reasoning tasks. Overall, our work sheds light on LLMs' limitations in grasping inferential rules and suggests ways to enhance their logical reasoning abilities.
[ "Wang, Siyuan", "Wei, Zhongyu", "Choi, Yejin", "Ren, Xiang" ]
Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs
acl-long.406
Oral
2402.11442
[ "https://github.com/siyuanwangw/ulogic" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.406/
[]
[]
[]
0
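The ULogic record above distinguishes primitive rules from compositional ones built by chaining them. A minimal sketch of that composition step; the rule encoding and the example rules are invented for illustration:

```python
# Hedged sketch: a compositional rule chains primitive rules whose
# conclusion discharges a premise of the next rule.
from dataclasses import dataclass

@dataclass
class Rule:
    premises: list[str]
    conclusion: str

def compose(r1: Rule, r2: Rule) -> Rule:
    """Chain r1 into r2 where r1's conclusion discharges one of r2's premises."""
    assert r1.conclusion in r2.premises
    remaining = [p for p in r2.premises if p != r1.conclusion]
    return Rule(r1.premises + remaining, r2.conclusion)

buy = Rule(["X buys item Y"], "X owns item Y")
sell = Rule(["X owns item Y", "X sells item Y"], "X no longer owns item Y")
print(compose(buy, sell))  # premises: buys Y, sells Y -> no longer owns Y
```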
https://aclanthology.org/2024.acl-long.407.bib
@inproceedings{zhao-etal-2024-sego, title = "{SEGO}: Sequential Subgoal Optimization for Mathematical Problem-Solving", author = "Zhao, Xueliang and Huang, Xinting and Bi, Wei and Kong, Lingpeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.407", pages = "7544--7565", abstract = "Large Language Models (LLMs) have driven substantial progress in artificial intelligence in recent years, exhibiting impressive capabilities across a wide range of tasks, including mathematical problem-solving. Inspired by the success of subgoal-based methods, we propose a novel framework called \textbf{SE}quential sub\textbf{G}oal \textbf{O}ptimization (SEGO) to enhance LLMs{'} ability to solve mathematical problems. By establishing a connection between the subgoal breakdown process and the probability of solving problems, SEGO aims to identify better subgoals with theoretical guarantees. Addressing the challenge of identifying suitable subgoals in a large solution space, our framework generates problem-specific subgoals and adjusts them according to carefully designed criteria. Incorporating these optimized subgoals into the policy model training leads to significant improvements in problem-solving performance. We validate SEGO{'}s efficacy through experiments on two benchmarks, GSM8K and MATH, where our approach outperforms existing methods, highlighting the potential of SEGO in AI-driven mathematical problem-solving.", }
Large Language Models (LLMs) have driven substantial progress in artificial intelligence in recent years, exhibiting impressive capabilities across a wide range of tasks, including mathematical problem-solving. Inspired by the success of subgoal-based methods, we propose a novel framework called SEquential subGoal Optimization (SEGO) to enhance LLMs' ability to solve mathematical problems. By establishing a connection between the subgoal breakdown process and the probability of solving problems, SEGO aims to identify better subgoals with theoretical guarantees. Addressing the challenge of identifying suitable subgoals in a large solution space, our framework generates problem-specific subgoals and adjusts them according to carefully designed criteria. Incorporating these optimized subgoals into the policy model training leads to significant improvements in problem-solving performance. We validate SEGO's efficacy through experiments on two benchmarks, GSM8K and MATH, where our approach outperforms existing methods, highlighting the potential of SEGO in AI-driven mathematical problem-solving.
[ "Zhao, Xueliang", "Huang, Xinting", "Bi, Wei", "Kong, Lingpeng" ]
SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving
acl-long.407
Poster
2310.12960
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.407/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.408.bib
@inproceedings{jiang-etal-2024-unlocking, title = "Unlocking the Power of Large Language Models for Entity Alignment", author = "Jiang, Xuhui and Shen, Yinghan and Shi, Zhichao and Xu, Chengjin and Li, Wei and Li, Zixuan and Guo, Jian and Shen, Huawei and Wang, Yuanzhuo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.408", pages = "7566--7583", abstract = "Entity Alignment (EA) is vital for integrating diverse knowledge graph (KG) data, playing a crucial role in data-driven AI applications. Traditional EA methods primarily rely on comparing entity embeddings, but their effectiveness is constrained by the limited input KG data and the capabilities of the representation learning techniques. Against this backdrop, we introduce ChatEA, an innovative framework that incorporates large language models (LLMs) to improve EA. To address the constraints of limited input KG data, ChatEA introduces a KG-code translation module that translates KG structures into a format understandable by LLMs, thereby allowing LLMs to utilize their extensive background knowledge to improve EA accuracy. To overcome the over-reliance on entity embedding comparisons, ChatEA implements a two-stage EA strategy that capitalizes on LLMs{'} capability for multi-step reasoning in a dialogue format, thereby enhancing accuracy while preserving efficiency. Our experimental results affirm ChatEA{'}s superior performance, highlighting LLMs{'} potential in facilitating EA tasks.The source code is available at https://anonymous.4open.science/r/ChatEA/.", }
Entity Alignment (EA) is vital for integrating diverse knowledge graph (KG) data, playing a crucial role in data-driven AI applications. Traditional EA methods primarily rely on comparing entity embeddings, but their effectiveness is constrained by the limited input KG data and the capabilities of the representation learning techniques. Against this backdrop, we introduce ChatEA, an innovative framework that incorporates large language models (LLMs) to improve EA. To address the constraints of limited input KG data, ChatEA introduces a KG-code translation module that translates KG structures into a format understandable by LLMs, thereby allowing LLMs to utilize their extensive background knowledge to improve EA accuracy. To overcome the over-reliance on entity embedding comparisons, ChatEA implements a two-stage EA strategy that capitalizes on LLMs' capability for multi-step reasoning in a dialogue format, thereby enhancing accuracy while preserving efficiency. Our experimental results affirm ChatEA's superior performance, highlighting LLMs' potential in facilitating EA tasks. The source code is available at https://anonymous.4open.science/r/ChatEA/.
[ "Jiang, Xuhui", "Shen, Yinghan", "Shi, Zhichao", "Xu, Chengjin", "Li, Wei", "Li, Zixuan", "Guo, Jian", "Shen, Huawei", "Wang, Yuanzhuo" ]
Unlocking the Power of Large Language Models for Entity Alignment
acl-long.408
Poster
2402.15048
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.408/
[]
[]
[]
0
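The ChatEA record above describes a KG-code translation module that renders KG structures in a format LLMs read well. One plausible rendering, sketched below; the class-like output format is an assumption, not ChatEA's documented scheme:

```python
# Hedged sketch: render an entity's triples as a code-like snippet.

def entity_to_code(name: str, triples: list[tuple[str, str, str]]) -> str:
    lines = [f"class Entity_{name.replace(' ', '_')}:"]
    lines.append(f'    name = "{name}"')
    for head, relation, tail in triples:
        if head == name:
            lines.append(f'    {relation} = "{tail}"')
    return "\n".join(lines)

print(entity_to_code("Alan Turing", [
    ("Alan Turing", "field", "computer science"),
    ("Alan Turing", "born_in", "London"),
]))
```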
https://aclanthology.org/2024.acl-long.409.bib
@inproceedings{song-etal-2024-trial, title = "Trial and Error: Exploration-Based Trajectory Optimization of {LLM} Agents", author = "Song, Yifan and Yin, Da and Yue, Xiang and Huang, Jie and Li, Sujian and Lin, Bill Yuchen", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.409", pages = "7584--7600", abstract = "Large Language Models (LLMs) have become integral components in various autonomous agent systems.In this study, we present an exploration-based trajectory optimization approach, referred to as ETO. This learning method is designed to enhance the performance of open LLM agents. Contrary to previous studies that exclusively train on successful expert trajectories, our method allows agents to learn from their exploration failures. This leads to improved performance through an iterative optimization framework. During the exploration phase, the agent interacts with the environment while completing given tasks, gathering failure trajectories to create contrastive trajectory pairs. In the subsequent training phase, the agent utilizes these trajectory preference pairs to update its policy using contrastive learning methods like DPO. This iterative cycle of exploration and training fosters continued improvement in the agents. Our experiments on three complex tasks demonstrate that ETO consistently surpasses baseline performance by a large margin. Furthermore, an examination of task-solving efficiency and potential in scenarios lacking expert trajectory underscores the effectiveness of our approach.", }
Large Language Models (LLMs) have become integral components in various autonomous agent systems. In this study, we present an exploration-based trajectory optimization approach, referred to as ETO. This learning method is designed to enhance the performance of open LLM agents. Contrary to previous studies that exclusively train on successful expert trajectories, our method allows agents to learn from their exploration failures. This leads to improved performance through an iterative optimization framework. During the exploration phase, the agent interacts with the environment while completing given tasks, gathering failure trajectories to create contrastive trajectory pairs. In the subsequent training phase, the agent utilizes these trajectory preference pairs to update its policy using contrastive learning methods like DPO. This iterative cycle of exploration and training fosters continued improvement in the agents. Our experiments on three complex tasks demonstrate that ETO consistently surpasses baseline performance by a large margin. Furthermore, an examination of task-solving efficiency and potential in scenarios lacking expert trajectories underscores the effectiveness of our approach.
[ "Song, Yifan", "Yin, Da", "Yue, Xiang", "Huang, Jie", "Li, Sujian", "Lin, Bill Yuchen" ]
Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents
acl-long.409
Poster
[ "" ]
https://huggingface.co/papers/2403.02502
3
3
0
6
https://aclanthology.org/2024.acl-long.409/
[]
[ "agent-eto/eto-sft-trajectory" ]
[]
1
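The ETO record above pairs a successful expert trajectory (winner) with a failed exploration trajectory (loser) and applies a DPO-style preference loss. A minimal sketch of that training signal; trajectory-level log-probs stand in for sums of token log-probs, and `beta` plus the toy numbers are assumptions:

```python
# Hedged sketch of the standard DPO objective on (success, failure) pairs.
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta: float = 0.1):
    """DPO loss: reward margin of policy over a frozen reference model."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# Toy values standing in for trajectory log-probs under policy / reference.
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss.item())
```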
https://aclanthology.org/2024.acl-long.410.bib
@inproceedings{trung-etal-2024-reft, title = "{R}e{FT}: Reasoning with Reinforced Fine-Tuning", author = "Trung, Luong and Zhang, Xinbo and Jie, Zhanming and Sun, Peng and Jin, Xiaoran and Li, Hang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.410", pages = "7601--7614", abstract = "One way to enhance the reasoning capability of Large Language Models (LLMs) is to conduct Supervised Fine-Tuning (SFT) using Chain-of-Thought (CoT) annotations. This approach does not show sufficiently strong generalization ability, however, because the training only relies on the given CoT data. In math problem-solving, for example, there is usually only one annotated reasoning path for each question in the training data. Intuitively, it would be better for the algorithm to learn from multiple annotated reasoning paths given a question. To address this issue, we propose a simple yet effective approach called Reinforced Fine-Tuning (ReFT) to enhance the generalizability of learning LLMs for reasoning, with math problem-solving as an example. ReFT first warmups the model with SFT, and then employs on-line reinforcement learning, specifically the PPO algorithm in this paper, to further fine-tune the model, where an abundance of reasoning paths are automatically sampled given the question and the rewards are naturally derived from the ground-truth answers. Extensive experiments on GSM8K, MathQA, and SVAMP datasets show that ReFT significantly outperforms SFT, and the performance can be potentially further boosted by combining inference-time strategies such as majority voting and re-ranking. Note that ReFT obtains the improvement by learning from the same training questions as SFT, without relying on extra or augmented training questions. This indicates a superior generalization ability for ReFT.", }
One way to enhance the reasoning capability of Large Language Models (LLMs) is to conduct Supervised Fine-Tuning (SFT) using Chain-of-Thought (CoT) annotations. This approach does not show sufficiently strong generalization ability, however, because the training only relies on the given CoT data. In math problem-solving, for example, there is usually only one annotated reasoning path for each question in the training data. Intuitively, it would be better for the algorithm to learn from multiple annotated reasoning paths given a question. To address this issue, we propose a simple yet effective approach called Reinforced Fine-Tuning (ReFT) to enhance the generalizability of learning LLMs for reasoning, with math problem-solving as an example. ReFT first warms up the model with SFT, and then employs online reinforcement learning, specifically the PPO algorithm in this paper, to further fine-tune the model, where an abundance of reasoning paths are automatically sampled given the question and the rewards are naturally derived from the ground-truth answers. Extensive experiments on GSM8K, MathQA, and SVAMP datasets show that ReFT significantly outperforms SFT, and the performance can be potentially further boosted by combining inference-time strategies such as majority voting and re-ranking. Note that ReFT obtains the improvement by learning from the same training questions as SFT, without relying on extra or augmented training questions. This indicates a superior generalization ability for ReFT.
[ "Trung, Luong", "Zhang, Xinbo", "Jie, Zhanming", "Sun, Peng", "Jin, Xiaoran", "Li, Hang" ]
ReFT: Reasoning with Reinforced Fine-Tuning
acl-long.410
Poster
2401.08967
[ "https://github.com/lqtrung1998/mwp_reft" ]
https://huggingface.co/papers/2401.08967
2
27
2
6
https://aclanthology.org/2024.acl-long.410/
[ "lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k", "lqtrung1998/Codellama-7b-hf-ReFT-GSM8k", "lqtrung1998/Codellama-7b-hf-SFT-GSM8k", "lqtrung1998/galactica-6.7b-SFT-warmup-GSM8k", "lqtrung1998/Codellama-7b-hf-SFT-warmup-GSM8k", "lqtrung1998/galactica-6.7b-ReFT-GSM8k", "lqtrung1998/galactica-6.7b-SFT-GSM8k", "lqtrung1998/galactica-6.7b-ReFT-Rerank-GSM8k", "lqtrung1998/galactica-6.7b-SFT-Rerank-GSM8k", "lqtrung1998/Codellama-7b-hf-SFT-Rerank-GSM8k" ]
[]
[]
1
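The ReFT record above derives rewards directly from ground-truth answers on sampled reasoning paths. A minimal sketch of such a binary reward; the "The answer is X" extraction format is an assumption for illustration, and the paper's actual reward shaping may be finer-grained:

```python
# Hedged sketch: reward a sampled reasoning path by final-answer match.
import re

def reward(sampled_path: str, gold_answer: str) -> float:
    match = re.search(r"The answer is\s*(-?[\d.,]+)", sampled_path)
    if match is None:
        return 0.0  # unparseable path gets no reward
    predicted = match.group(1).replace(",", "")
    return 1.0 if predicted == gold_answer else 0.0

print(reward("... so 3 * 16 = 48. The answer is 48", "48"))  # 1.0
```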
https://aclanthology.org/2024.acl-long.411.bib
@inproceedings{li-etal-2024-cognitive, title = "Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment", author = "Li, Yunxin and Chen, Xinyu and Hu, Baotian and Shi, Haoyuan and Zhang, Min", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.411", pages = "7615--7626", abstract = "Evaluating and Rethinking the current landscape of Large Multimodal Models (LMMs), we observe that widely-used visual-language projection approaches (e.g., Q-former or MLP) focus on the alignment of image-text descriptions yet ignore the visual knowledge-dimension alignment, i.e., connecting visuals to their relevant knowledge. Visual knowledge plays a significant role in analyzing, inferring, and interpreting information from visuals, helping improve the accuracy of answers to knowledge-based visual questions. In this paper, we mainly explore improving LMMs with visual-language knowledge alignment, especially aimed at challenging knowledge-based visual question answering (VQA). To this end, we present a Cognitive Visual-Language Mapper (CVLM), which contains a pretrained Visual Knowledge Aligner (VKA) and a Fine-grained Knowledge Adapter (FKA) used in the multimodal instruction tuning stage. Specifically, we design the VKA based on the interaction between a small language model and a visual encoder, training it on collected image-knowledge pairs to achieve visual knowledge acquisition and projection. FKA is employed to distill the fine-grained visual knowledge of an image and inject it into Large Language Models (LLMs). We conduct extensive experiments on knowledge-based VQA benchmarks and experimental results show that CVLM significantly improves the performance of LMMs on knowledge-based VQA (average gain by 5.0{\%}). Ablation studies also verify the effectiveness of VKA and FKA, respectively.", }
Evaluating and rethinking the current landscape of Large Multimodal Models (LMMs), we observe that widely-used visual-language projection approaches (e.g., Q-former or MLP) focus on the alignment of image-text descriptions yet ignore the visual knowledge-dimension alignment, i.e., connecting visuals to their relevant knowledge. Visual knowledge plays a significant role in analyzing, inferring, and interpreting information from visuals, helping improve the accuracy of answers to knowledge-based visual questions. In this paper, we mainly explore improving LMMs with visual-language knowledge alignment, especially aimed at challenging knowledge-based visual question answering (VQA). To this end, we present a Cognitive Visual-Language Mapper (CVLM), which contains a pretrained Visual Knowledge Aligner (VKA) and a Fine-grained Knowledge Adapter (FKA) used in the multimodal instruction tuning stage. Specifically, we design the VKA based on the interaction between a small language model and a visual encoder, training it on collected image-knowledge pairs to achieve visual knowledge acquisition and projection. FKA is employed to distill the fine-grained visual knowledge of an image and inject it into Large Language Models (LLMs). We conduct extensive experiments on knowledge-based VQA benchmarks and experimental results show that CVLM significantly improves the performance of LMMs on knowledge-based VQA (average gain of 5.0%). Ablation studies also verify the effectiveness of VKA and FKA, respectively.
[ "Li, Yunxin", "Chen, Xinyu", "Hu, Baotian", "Shi, Haoyuan", "Zhang, Min" ]
Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment
acl-long.411
Poster
2402.13561
[ "https://github.com/hitsz-tmg/cognitive-visual-language-mapper" ]
https://huggingface.co/papers/2402.13561
1
0
0
5
https://aclanthology.org/2024.acl-long.411/
[]
[ "Ghaser/Wikipedia-Knowledge-2M" ]
[]
1
https://aclanthology.org/2024.acl-long.412.bib
@inproceedings{feng-etal-2024-freectrl, title = "{F}ree{C}trl: Constructing Control Centers with Feedforward Layers for Learning-Free Controllable Text Generation", author = "Feng, Zijian and Zhou, Hanzhang and Mao, Kezhi and Zhu, Zixiao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.412", pages = "7627--7640", abstract = "Controllable text generation (CTG) seeks to craft texts adhering to specific attributes, traditionally employing learning-based techniques such as training, fine-tuning, or prefix-tuning with attribute-specific datasets. These approaches, while effective, demand extensive computational and data resources. In contrast, some proposed learning-free alternatives circumvent learning but often yield inferior results, exemplifying the fundamental machine learning trade-off between computational expense and model efficacy. To overcome these limitations, we propose FreeCtrl, a learning-free approach that dynamically adjusts the weights of selected feedforward neural network (FFN) vectors to steer the outputs of large language models (LLMs). FreeCtrl hinges on the principle that the weights of different FFN vectors influence the likelihood of different tokens appearing in the output. By identifying and adaptively adjusting the weights of attribute-related FFN vectors, FreeCtrl can control the output likelihood of attribute keywords in the generated content. Extensive experiments on single- and multi-attribute control reveal that the learning-free FreeCtrl outperforms other learning-free and learning-based methods, successfully resolving the dilemma between learning costs and model performance.", }
Controllable text generation (CTG) seeks to craft texts adhering to specific attributes, traditionally employing learning-based techniques such as training, fine-tuning, or prefix-tuning with attribute-specific datasets. These approaches, while effective, demand extensive computational and data resources. In contrast, some proposed learning-free alternatives circumvent learning but often yield inferior results, exemplifying the fundamental machine learning trade-off between computational expense and model efficacy. To overcome these limitations, we propose FreeCtrl, a learning-free approach that dynamically adjusts the weights of selected feedforward neural network (FFN) vectors to steer the outputs of large language models (LLMs). FreeCtrl hinges on the principle that the weights of different FFN vectors influence the likelihood of different tokens appearing in the output. By identifying and adaptively adjusting the weights of attribute-related FFN vectors, FreeCtrl can control the output likelihood of attribute keywords in the generated content. Extensive experiments on single- and multi-attribute control reveal that the learning-free FreeCtrl outperforms other learning-free and learning-based methods, successfully resolving the dilemma between learning costs and model performance.
[ "Feng, Zijian", "Zhou, Hanzhang", "Mao, Kezhi", "Zhu, Zixiao" ]
FreeCtrl: Constructing Control Centers with Feedforward Layers for Learning-Free Controllable Text Generation
acl-long.412
Poster
2406.09688
[ "https://github.com/zijian678/FreeCtrl" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.412/
[]
[]
[]
0
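The FreeCtrl record above steers generation by adjusting the weights of attribute-related FFN value vectors. A minimal sketch of that mechanism; the neuron indices, scale factor, and module shapes are illustrative, not FreeCtrl's actual control law:

```python
# Hedged sketch: scale selected FFN neuron activations so their value
# vectors (rows of w_out) contribute more to the residual stream.
import torch
import torch.nn as nn

class SteerableFFN(nn.Module):
    def __init__(self, d_model: int = 64, d_ff: int = 256):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_ff)
        self.w_out = nn.Linear(d_ff, d_model)  # rows act as "value vectors"
        self.register_buffer("scale", torch.ones(d_ff))

    def steer(self, neuron_ids, factor: float):
        self.scale[neuron_ids] = factor  # amplify chosen neurons

    def forward(self, x):
        return self.w_out(torch.relu(self.w_in(x)) * self.scale)

ffn = SteerableFFN()
ffn.steer([3, 17], 5.0)            # boost two attribute-related neurons
out = ffn(torch.randn(1, 8, 64))   # (1, 8, 64), steered output
```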
https://aclanthology.org/2024.acl-long.413.bib
@inproceedings{liu-etal-2024-hd, title = "{HD}-Eval: Aligning Large Language Model Evaluators Through Hierarchical Criteria Decomposition", author = "Liu, Yuxuan and Yang, Tianchi and Huang, Shaohan and Zhang, Zihan and Huang, Haizhen and Wei, Furu and Deng, Weiwei and Sun, Feng and Zhang, Qi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.413", pages = "7641--7660", abstract = "Large language models (LLMs) have emerged as a promising alternative to expensive human evaluations. However, the alignment and coverage of LLM-based evaluations are often limited by the scope and potential bias of the evaluation prompts and criteria. To address this challenge, we propose HD-Eval, a novel framework that iteratively aligns LLM-based evaluators with human preference via Hierarchical Criteria Decomposition. HD-Eval inherits the essence from the evaluation mindset of human experts and enhances the alignment of LLM-based evaluators by decomposing a given evaluation task into finer-grained criteria, aggregating them according to estimated human preferences, pruning insignificant criteria with attribution, and further decomposing significant criteria. By integrating these steps within an iterative alignment training process, we obtain a hierarchical decomposition of criteria that comprehensively captures aspects of natural language at multiple levels of granularity. Implemented as a white box, the human preference-guided aggregator is efficient to train and more explainable than relying solely on prompting, and its independence from model parameters makes it applicable to closed-source LLMs. Extensive experiments on three evaluation domains demonstrate the superiority of HD-Eval in further aligning state-of-the-art evaluators and providing deeper insights into the explanation of evaluation results and the task itself.", }
Large language models (LLMs) have emerged as a promising alternative to expensive human evaluations. However, the alignment and coverage of LLM-based evaluations are often limited by the scope and potential bias of the evaluation prompts and criteria. To address this challenge, we propose HD-Eval, a novel framework that iteratively aligns LLM-based evaluators with human preference via Hierarchical Criteria Decomposition. HD-Eval inherits the essence from the evaluation mindset of human experts and enhances the alignment of LLM-based evaluators by decomposing a given evaluation task into finer-grained criteria, aggregating them according to estimated human preferences, pruning insignificant criteria with attribution, and further decomposing significant criteria. By integrating these steps within an iterative alignment training process, we obtain a hierarchical decomposition of criteria that comprehensively captures aspects of natural language at multiple levels of granularity. Implemented as a white box, the human preference-guided aggregator is efficient to train and more explainable than relying solely on prompting, and its independence from model parameters makes it applicable to closed-source LLMs. Extensive experiments on three evaluation domains demonstrate the superiority of HD-Eval in further aligning state-of-the-art evaluators and providing deeper insights into the explanation of evaluation results and the task itself.
[ "Liu, Yuxuan", "Yang, Tianchi", "Huang, Shaohan", "Zhang, Zihan", "Huang, Haizhen", "Wei, Furu", "Deng, Weiwei", "Sun, Feng", "Zhang, Qi" ]
HD-Eval: Aligning Large Language Model Evaluators Through Hierarchical Criteria Decomposition
acl-long.413
Poster
2402.15754
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.413/
[]
[]
[]
0
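The HD-Eval record above aggregates decomposed criterion scores with a white-box, human-preference-guided aggregator. A minimal sketch of one such roll-up; the criteria tree and weights below are invented for illustration, and the paper additionally prunes and re-decomposes criteria iteratively:

```python
# Hedged sketch: roll leaf criterion scores up a two-level hierarchy
# with weights assumed to be fit to human preferences.

criteria_tree = {
    "fluency": {"grammar": 0.6, "word_choice": 0.4},
    "coverage": {"key_points": 0.7, "no_redundancy": 0.3},
}
top_weights = {"fluency": 0.45, "coverage": 0.55}

def aggregate(leaf_scores: dict[str, float]) -> float:
    total = 0.0
    for parent, children in criteria_tree.items():
        parent_score = sum(w * leaf_scores[c] for c, w in children.items())
        total += top_weights[parent] * parent_score
    return total

print(aggregate({"grammar": 0.9, "word_choice": 0.7,
                 "key_points": 0.8, "no_redundancy": 1.0}))  # ~0.842
```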
https://aclanthology.org/2024.acl-long.414.bib
@inproceedings{li-ng-2024-conundrums, title = "Conundrums in Cross-Prompt Automated Essay Scoring: Making Sense of the State of the Art", author = "Li, Shengjie and Ng, Vincent", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.414", pages = "7661--7681", abstract = "Cross-prompt automated essay scoring (AES), an under-investigated but challenging task that has gained increasing popularity in the AES community, aims to train an AES system that can generalize well to prompts that are unseen during model training. While recently-developed cross-prompt AES models have combined essay representations that are learned via sophisticated neural architectures with so-called prompt-independent features, an intriguing question is: are complex neural models needed to achieve state-of-the-art results? We answer this question by abandoning sophisticated neural architectures and developing a purely feature-based approach to cross-prompt AES that adopts a simple neural architecture. Experiments on the ASAP dataset demonstrate that our simple approach to cross-prompt AES can achieve state-of-the-art results.", }
Cross-prompt automated essay scoring (AES), an under-investigated but challenging task that has gained increasing popularity in the AES community, aims to train an AES system that can generalize well to prompts that are unseen during model training. While recently-developed cross-prompt AES models have combined essay representations that are learned via sophisticated neural architectures with so-called prompt-independent features, an intriguing question is: are complex neural models needed to achieve state-of-the-art results? We answer this question by abandoning sophisticated neural architectures and developing a purely feature-based approach to cross-prompt AES that adopts a simple neural architecture. Experiments on the ASAP dataset demonstrate that our simple approach to cross-prompt AES can achieve state-of-the-art results.
[ "Li, Shengjie", "Ng, Vincent" ]
Conundrums in Cross-Prompt Automated Essay Scoring: Making Sense of the State of the Art
acl-long.414
Oral
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.414/
[]
[]
[]
0
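The record above argues a simple architecture over prompt-independent features suffices for cross-prompt AES. A minimal sketch of that idea; the feature set and layer sizes are illustrative assumptions, not the paper's feature inventory:

```python
# Hedged sketch: a small feed-forward regressor over handcrafted,
# prompt-independent essay features.
import torch
import torch.nn as nn

def essay_features(text: str) -> torch.Tensor:
    words = text.split()
    return torch.tensor([
        len(words),                                        # essay length
        sum(len(w) for w in words) / max(len(words), 1),   # avg word length
        text.count(","),                                   # comma count
        text.count("."),                                   # sentence proxy
    ], dtype=torch.float32)

scorer = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
score = scorer(essay_features("A short essay, written plainly."))
```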
https://aclanthology.org/2024.acl-long.415.bib
@inproceedings{plaza-del-arco-etal-2024-angry, title = "Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution", author = "Plaza-del-Arco, Flor and Curry, Amanda and Cercas Curry, Alba and Abercrombie, Gavin and Hovy, Dirk", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.415", pages = "7682--7696", abstract = "Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men{'}s anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes. We prompt the models to adopt a gendered persona and attribute emotions to an event like {`}When I had a serious argument with a dear person{'}. We then analyze the emotions generated by the models in relation to the gender-event pairs. We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes. These findings are in line with established research in psychology and gender studies. Our study sheds light on the complex societal interplay between language, gender, and emotion. The reproduction of emotion stereotypes in LLMs allows us to use those models to study the topic in detail, but raises questions about the predictive use of those same LLMs for emotion applications.", }
Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes. We prompt the models to adopt a gendered persona and attribute emotions to an event like 'When I had a serious argument with a dear person'. We then analyze the emotions generated by the models in relation to the gender-event pairs. We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes. These findings are in line with established research in psychology and gender studies. Our study sheds light on the complex societal interplay between language, gender, and emotion. The reproduction of emotion stereotypes in LLMs allows us to use those models to study the topic in detail, but raises questions about the predictive use of those same LLMs for emotion applications.
[ "Plaza-del-Arco, Flor", "Curry, Am", "a", "Cercas Curry, Alba", "Abercrombie, Gavin", "Hovy, Dirk" ]
Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
acl-long.415
Poster
2403.03121
[ "https://github.com/milanlproc/emotion_gendered_stereotypes" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.415/
[]
[]
[]
0
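The record above elicits emotion attributions by having the model adopt a gendered persona for a fixed event. A minimal sketch of that prompting setup; the template wording is an approximation of the paper's protocol, not a quotation of it:

```python
# Hedged sketch: generate persona-by-event prompts for emotion attribution.
PERSONAS = ["a woman", "a man"]
EVENTS = ["When I had a serious argument with a dear person"]

def build_prompts():
    for persona in PERSONAS:
        for event in EVENTS:
            yield (f"Imagine you are {persona}. {event}, "
                   f"what is the main emotion you feel? Answer with one word.")

for prompt in build_prompts():
    print(prompt)
```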
https://aclanthology.org/2024.acl-long.416.bib
@inproceedings{paletto-etal-2024-label, title = "Label Augmentation for Zero-Shot Hierarchical Text Classification", author = "Paletto, Lorenzo and Basile, Valerio and Esposito, Roberto", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.416", pages = "7697--7706", abstract = "Hierarchical Text Classification poses the difficult challenge of classifying documents into multiple labels organized in a hierarchy. The vast majority of works aimed to address this problem relies on supervised methods which are difficult to implement due to the scarcity of labeled data in many real world applications. This paper focuses on strict Zero-Shot Classification, the setting in which the system lacks both labeled instances and training data.We propose a novel approach that uses a Large Language Model to augment the deepest layer of the labels hierarchy in order to enhance its specificity. We achieve this by generating semantically relevant labels as children connected to the existing branches, creating a deeper taxonomy that better overlaps with the input texts. We leverage the enriched hierarchy to perform Zero-Shot Hierarchical Classification by using the Upward score Propagation technique. We test our method on four public datasets, obtaining new state-of-the art results on three of them. We introduce two cosine similarity-based metrics to quantify the density and granularity of a label taxonomy and we show a strong correlation between the metric values and the classification performance of our method on the datasets.", }
Hierarchical Text Classification poses the difficult challenge of classifying documents into multiple labels organized in a hierarchy. The vast majority of works aimed at addressing this problem rely on supervised methods, which are difficult to implement due to the scarcity of labeled data in many real-world applications. This paper focuses on strict Zero-Shot Classification, the setting in which the system lacks both labeled instances and training data. We propose a novel approach that uses a Large Language Model to augment the deepest layer of the label hierarchy in order to enhance its specificity. We achieve this by generating semantically relevant labels as children connected to the existing branches, creating a deeper taxonomy that better overlaps with the input texts. We leverage the enriched hierarchy to perform Zero-Shot Hierarchical Classification by using the Upward score Propagation technique. We test our method on four public datasets, obtaining new state-of-the-art results on three of them. We introduce two cosine similarity-based metrics to quantify the density and granularity of a label taxonomy and we show a strong correlation between the metric values and the classification performance of our method on the datasets.
[ "Paletto, Lorenzo", "Basile, Valerio", "Esposito, Roberto" ]
Label Augmentation for Zero-Shot Hierarchical Text Classification
acl-long.416
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.416/
[]
[]
[]
0
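The record above scores documents against augmented leaf labels and lifts parent scores via Upward score Propagation. A minimal sketch of that lifting step; using max as the propagation operator and the toy taxonomy are assumptions:

```python
# Hedged sketch: propagate leaf-label scores up to their parents.

taxonomy = {"sports": ["padel", "biathlon"], "finance": ["bonds", "forex"]}

def upward_propagate(leaf_scores: dict[str, float]) -> dict[str, float]:
    return {parent: max(leaf_scores[c] for c in children)
            for parent, children in taxonomy.items()}

# Leaf scores might come from zero-shot cosine similarity with label names.
print(upward_propagate({"padel": 0.71, "biathlon": 0.22,
                        "bonds": 0.10, "forex": 0.05}))
# -> {'sports': 0.71, 'finance': 0.10}
```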
https://aclanthology.org/2024.acl-long.417.bib
@inproceedings{zhang-etal-2024-stickerconv, title = "{STICKERCONV}: Generating Multimodal Empathetic Responses from Scratch", author = "Zhang, Yiqun and Kong, Fanheng and Wang, Peidong and Sun, Shuang and SWangLing, SWangLing and Feng, Shi and Wang, Daling and Zhang, Yifei and Song, Kaisong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.417", pages = "7707--7733", abstract = "Stickers, while widely recognized for enhancing empathetic communication in online interactions, remain underexplored in current empathetic dialogue research, notably due to the challenge of a lack of comprehensive datasets. In this paper, we introduce the Agent for STICKERCONV (Agent4SC), which uses collaborative agent interactions to realistically simulate human behavior with sticker usage, thereby enhancing multimodal empathetic communication. Building on this foundation, we develop a multimodal empathetic dialogue dataset, STICKERCONV, comprising 12.9K dialogue sessions, 5.8K unique stickers, and 2K diverse conversational scenarios. This dataset serves as a benchmark for multimodal empathetic generation. To advance further, we propose PErceive and Generate Stickers (PEGS), a multimodal empathetic response generation framework, complemented by a comprehensive set of empathy evaluation metrics based on LLM. Our experiments demonstrate PEGS{'}s effectiveness in generating contextually relevant and emotionally resonant multimodal empathetic responses, contributing to the advancement of more nuanced and engaging empathetic dialogue systems.", }
Stickers, while widely recognized for enhancing empathetic communication in online interactions, remain underexplored in current empathetic dialogue research, notably due to the lack of comprehensive datasets. In this paper, we introduce the Agent for STICKERCONV (Agent4SC), which uses collaborative agent interactions to realistically simulate human behavior with sticker usage, thereby enhancing multimodal empathetic communication. Building on this foundation, we develop a multimodal empathetic dialogue dataset, STICKERCONV, comprising 12.9K dialogue sessions, 5.8K unique stickers, and 2K diverse conversational scenarios. This dataset serves as a benchmark for multimodal empathetic generation. To advance further, we propose PErceive and Generate Stickers (PEGS), a multimodal empathetic response generation framework, complemented by a comprehensive set of LLM-based empathy evaluation metrics. Our experiments demonstrate PEGS's effectiveness in generating contextually relevant and emotionally resonant multimodal empathetic responses, contributing to the advancement of more nuanced and engaging empathetic dialogue systems.
[ "Zhang, Yiqun", "Kong, Fanheng", "Wang, Peidong", "Sun, Shuang", "SWangLing, SWangLing", "Feng, Shi", "Wang, Daling", "Zhang, Yifei", "Song, Kaisong" ]
STICKERCONV: Generating Multimodal Empathetic Responses from Scratch
acl-long.417
Poster
2402.01679
[ "https://github.com/ZhangYiqun018/StickerConv" ]
https://huggingface.co/papers/2402.01679
0
0
0
9
https://aclanthology.org/2024.acl-long.417/
[]
[ "Estwld/StickerConv_llm" ]
[]
1
https://aclanthology.org/2024.acl-long.418.bib
@inproceedings{zheng-etal-2024-eit, title = "{EIT}: Enhanced Interactive Transformer", author = "Zheng, Tong and Li, Bei and Bao, Huiwen and Xiao, Tong and Zhu, JingBo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.418", pages = "7734--7751", abstract = "Two principles: the complementary principle and the consensus principle are widely acknowledged in the literature of multi-view learning. However, the current design of multi-head self-attention, an instance of multi-view learning, prioritizes the complementarity while ignoring the consensus. To address this problem, we propose an enhanced multi-head self-attention (EMHA). First, to satisfy the complementary principle, EMHA removes the one-to-one mapping constraint among queries and keys in multiple subspaces and allows each query to attend to multiple keys. On top of that, we develop a method to fully encourage consensus among heads by introducing two interaction models, namely inner-subspace interaction and cross-subspace interaction. Extensive experiments on a wide range of language tasks (e.g., machine translation, abstractive summarization and grammar correction, language modeling), show its superiority, with a very modest increase in model size. Our code would be available at: https://github.com/zhengkid/EIT-Enhanced-Interactive-Transformer.", }
Two principles, the complementary principle and the consensus principle, are widely acknowledged in the literature on multi-view learning. However, the current design of multi-head self-attention, an instance of multi-view learning, prioritizes complementarity while ignoring consensus. To address this problem, we propose an enhanced multi-head self-attention (EMHA). First, to satisfy the complementary principle, EMHA removes the one-to-one mapping constraint among queries and keys in multiple subspaces and allows each query to attend to multiple keys. On top of that, we develop a method to fully encourage consensus among heads by introducing two interaction models, namely inner-subspace interaction and cross-subspace interaction. Extensive experiments on a wide range of language tasks (e.g., machine translation, abstractive summarization, grammar correction, and language modeling) show its superiority, with a very modest increase in model size. Our code is available at: https://github.com/zhengkid/EIT-Enhanced-Interactive-Transformer.
[ "Zheng, Tong", "Li, Bei", "Bao, Huiwen", "Xiao, Tong", "Zhu, JingBo" ]
EIT: Enhanced Interactive Transformer
acl-long.418
Poster
2212.10197
[ "https://github.com/zhengkid/eit-enhanced-interactive-transformer" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.418/
[]
[]
[]
0
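A numpy sketch of one possible reading of the zheng-etal-2024-eit record above: instead of each head's queries attending only to that head's keys, every query scores against the pooled keys and values of all subspaces. The dimensions and pooling scheme are illustrative assumptions, not the paper's exact EMHA formulation.

```python
# Sketch: each query attends to keys from every subspace (head),
# relaxing the usual one-to-one query/key mapping. Illustrative only.
import numpy as np

H, T, d = 4, 6, 8                       # heads, tokens, head dim
rng = np.random.default_rng(0)
Q = rng.normal(size=(H, T, d))
K = rng.normal(size=(H, T, d))
V = rng.normal(size=(H, T, d))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Pool keys/values across all subspaces: (H*T, d)
K_all = K.reshape(H * T, d)
V_all = V.reshape(H * T, d)

outputs = []
for h in range(H):
    scores = Q[h] @ K_all.T / np.sqrt(d)     # (T, H*T): all heads' keys
    outputs.append(softmax(scores) @ V_all)  # (T, d)
out = np.concatenate(outputs, axis=-1)       # (T, H*d)
print(out.shape)
```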
https://aclanthology.org/2024.acl-long.419.bib
@inproceedings{bakman-etal-2024-mars, title = "{MARS}: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative {LLM}s", author = "Bakman, Yavuz Faruk and Yaldiz, Duygu Nur and Buyukates, Baturalp and Tao, Chenyang and Dimitriadis, Dimitrios and Avestimehr, Salman", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.419", pages = "7752--7767", abstract = "Generative Large Language Models (LLMs) are widely utilized for their excellence in various tasks. However, their tendency to produce inaccurate or misleading outputs poses a potential risk, particularly in high-stakes environments. Therefore, estimating the correctness of generative LLM outputs is an important task for enhanced reliability. Uncertainty Estimation (UE) in generative LLMs is an evolving domain, where SOTA probability-based methods commonly employ length-normalized scoring. In this work, we propose Meaning-Aware Response Scoring (MARS) as an alternative to length-normalized scoring for UE methods. MARS is a novel scoring function that considers the semantic contribution of each token in the generated sequence in the context of the question. We demonstrate that integrating MARS into UE methods results in a universal and significant improvement in UE performance. We conduct experiments using three distinct closed-book question-answering datasets across five popular pre-trained LLMs. Lastly, we validate the efficacy of MARS on a Medical QA dataset. Code can be found here.", }
Generative Large Language Models (LLMs) are widely utilized for their excellence in various tasks. However, their tendency to produce inaccurate or misleading outputs poses a potential risk, particularly in high-stakes environments. Therefore, estimating the correctness of generative LLM outputs is an important task for enhanced reliability. Uncertainty Estimation (UE) in generative LLMs is an evolving domain, where SOTA probability-based methods commonly employ length-normalized scoring. In this work, we propose Meaning-Aware Response Scoring (MARS) as an alternative to length-normalized scoring for UE methods. MARS is a novel scoring function that considers the semantic contribution of each token in the generated sequence in the context of the question. We demonstrate that integrating MARS into UE methods results in a universal and significant improvement in UE performance. We conduct experiments using three distinct closed-book question-answering datasets across five popular pre-trained LLMs. Lastly, we validate the efficacy of MARS on a Medical QA dataset. Code can be found here.
[ "Bakman, Yavuz Faruk", "Yaldiz, Duygu Nur", "Buyukates, Baturalp", "Tao, Chenyang", "Dimitriadis, Dimitrios", "Avestimehr, Salman" ]
MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs
acl-long.419
Poster
2402.11756
[ "https://github.com/ybakman/llm_uncertainity" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.419/
[]
[]
[]
0
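A minimal sketch contrasting standard length-normalized scoring with the meaning-aware scoring idea in the bakman-etal-2024-mars record above: weight each generated token's log-probability by an importance weight before aggregating. The weights here are invented; MARS derives them from each token's semantic contribution in the context of the question.

```python
# Sketch: length-normalized vs meaning-aware sequence scoring.
# The importance weights are hypothetical placeholders.
import numpy as np

logprobs = np.array([-0.1, -2.3, -0.2, -1.5])   # per generated token
weights  = np.array([0.10, 0.45, 0.10, 0.35])   # hypothetical importance

length_normalized = logprobs.mean()
meaning_aware = (weights / weights.sum()) @ logprobs

print(f"length-normalized: {length_normalized:.3f}")
print(f"meaning-aware:     {meaning_aware:.3f}")
```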
https://aclanthology.org/2024.acl-long.420.bib
@inproceedings{das-etal-2024-exams, title = "{EXAMS}-{V}: A Multi-Discipline Multilingual Multimodal Exam Benchmark for Evaluating Vision Language Models", author = "Das, Rocktim and Hristov, Simeon and Li, Haonan and Dimitrov, Dimitar and Koychev, Ivan and Nakov, Preslav", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.420", pages = "7768--7791", abstract = "We introduce EXAMS-V, a new challenging multi-discipline multimodal multilingual exam benchmark for evaluating vision language models. It consists of 20,932 multiple-choice questions across 20 school disciplines covering natural science, social science, and other miscellaneous studies, e.g., religion, fine arts, business, etc. EXAMS-V includes a variety of multimodal features such as text, images, tables, figures, diagrams, maps, scientific symbols, and equations. The questions come in 11 languages from 7 language families. Unlike existing benchmarks, EXAMS-V is uniquely curated by gathering school exam questions from various countries, with a variety of education systems. This distinctive approach calls for intricate reasoning across diverse languages and relies on region-specific knowledge. Solving the problems in the dataset requires advanced perception and joint reasoning over the text and the visual content in the image. Our evaluation results demonstrate that this is a challenging dataset, which is difficult even for advanced vision{--}text models such as GPT-4V and Gemini; this underscores the inherent complexity of the dataset and its significance as a future benchmark.", }
We introduce EXAMS-V, a new and challenging multi-discipline, multimodal, multilingual exam benchmark for evaluating vision language models. It consists of 20,932 multiple-choice questions across 20 school disciplines covering natural science, social science, and other miscellaneous studies such as religion, fine arts, and business. EXAMS-V includes a variety of multimodal features such as text, images, tables, figures, diagrams, maps, scientific symbols, and equations. The questions come in 11 languages from 7 language families. Unlike existing benchmarks, EXAMS-V is uniquely curated by gathering school exam questions from various countries with a variety of education systems. This distinctive approach calls for intricate reasoning across diverse languages and relies on region-specific knowledge. Solving the problems in the dataset requires advanced perception and joint reasoning over the text and the visual content in the image. Our evaluation results demonstrate that this is a challenging dataset, which is difficult even for advanced vision-text models such as GPT-4V and Gemini; this underscores the inherent complexity of the dataset and its significance as a future benchmark.
[ "Das, Rocktim", "Hristov, Simeon", "Li, Haonan", "Dimitrov, Dimitar", "Koychev, Ivan", "Nakov, Preslav" ]
EXAMS-V: A Multi-Discipline Multilingual Multimodal Exam Benchmark for Evaluating Vision Language Models
acl-long.420
Poster
2403.10378
[ "https://github.com/rocktimjyotidas/exams-v" ]
https://huggingface.co/papers/2403.10378
0
0
0
6
https://aclanthology.org/2024.acl-long.420/
[]
[ "Rocktim/EXAMS-V" ]
[]
1
https://aclanthology.org/2024.acl-long.421.bib
@inproceedings{wang-etal-2024-order, title = "Order-Agnostic Data Augmentation for Few-Shot Named Entity Recognition", author = "Wang, Huiming and Cheng, Liying and Zhang, Wenxuan and Soh, De Wen and Bing, Lidong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.421", pages = "7792--7807", abstract = "Data augmentation (DA) methods have been proven to be effective for pre-trained language models (PLMs) in low-resource settings, including few-shot named entity recognition (NER). However, existing NER DA techniques either perform rule-based manipulations on words that break the semantic coherence of the sentence, or exploit generative models for entity or context substitution, which requires a substantial amount of labeled data and contradicts the objective of operating in low-resource settings. In this work, we propose order-agnostic data augmentation (OaDA), an alternative solution that exploits the often overlooked order-agnostic property in the training data construction phase of sequence-to-sequence NER methods for data augmentation. To effectively utilize the augmented data without suffering from the one-to-many issue, where multiple augmented target sequences exist for one single sentence, we further propose the use of ordering instructions and an innovative OaDA-XE loss. Specifically, by treating each permutation of entity types as an ordering instruction, we rearrange the entity set accordingly, ensuring a distinct input-output pair, while OaDA-XE assigns loss based on the best match between the target sequence and model predictions. We conduct comprehensive experiments and analyses across three major NER benchmarks and significantly enhance the few-shot capabilities of PLMs with OaDA.", }
Data augmentation (DA) methods have been proven to be effective for pre-trained language models (PLMs) in low-resource settings, including few-shot named entity recognition (NER). However, existing NER DA techniques either perform rule-based manipulations on words that break the semantic coherence of the sentence, or exploit generative models for entity or context substitution, which requires a substantial amount of labeled data and contradicts the objective of operating in low-resource settings. In this work, we propose order-agnostic data augmentation (OaDA), an alternative solution that exploits the often overlooked order-agnostic property in the training data construction phase of sequence-to-sequence NER methods for data augmentation. To effectively utilize the augmented data without suffering from the one-to-many issue, where multiple augmented target sequences exist for one single sentence, we further propose the use of ordering instructions and an innovative OaDA-XE loss. Specifically, by treating each permutation of entity types as an ordering instruction, we rearrange the entity set accordingly, ensuring a distinct input-output pair, while OaDA-XE assigns loss based on the best match between the target sequence and model predictions. We conduct comprehensive experiments and analyses across three major NER benchmarks and significantly enhance the few-shot capabilities of PLMs with OaDA.
[ "Wang, Huiming", "Cheng, Liying", "Zhang, Wenxuan", "Soh, De Wen", "Bing, Lidong" ]
Order-Agnostic Data Augmentation for Few-Shot Named Entity Recognition
acl-long.421
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.421/
[]
[]
[]
0
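A toy sketch of the order-agnostic augmentation described in the wang-etal-2024-order record above: each permutation of the entity types becomes an ordering instruction, and the target entity sequence is rearranged to match, so one sentence yields several distinct input-output pairs. The prompt and target formats are hypothetical placeholders.

```python
# Sketch: permutations of entity types as ordering instructions,
# producing distinct seq2seq training pairs from a single sentence.
from itertools import permutations

sentence = "Alice visited Paris with Acme Corp."
entities = {"PER": ["Alice"], "LOC": ["Paris"], "ORG": ["Acme Corp."]}

augmented = []
for order in permutations(entities):
    instruction = "Output entities in order: " + ", ".join(order)
    target = " | ".join(f"{t}: {'; '.join(entities[t])}" for t in order)
    augmented.append((f"{instruction}\n{sentence}", target))

for src, tgt in augmented[:2]:
    print(src, "->", tgt)
print(f"{len(augmented)} pairs from one sentence")  # 3! = 6
```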
https://aclanthology.org/2024.acl-long.422.bib
@inproceedings{chen-etal-2024-text, title = "Text Embedding Inversion Security for Multilingual Language Models", author = "Chen, Yiyi and Lent, Heather and Bjerva, Johannes", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.422", pages = "7808--7827", abstract = "Textual data is often represented as real-numbered embeddings in NLP, particularly with the popularity of large language models (LLMs) and Embeddings as a Service (EaaS). However, storing sensitive information as embeddings can be susceptible to security breaches, as research shows that text can be reconstructed from embeddings, even without knowledge of the underlying model. While defence mechanisms have been explored, these are exclusively focused on English, leaving other languages potentially exposed to attacks. This work explores LLM security through multilingual embedding inversion. We define the problem of black-box multilingual and crosslingual inversion attacks, and explore their potential implications. Our findings suggest that multilingual LLMs may be more vulnerable to inversion attacks, in part because English-based defences may be ineffective. To alleviate this, we propose a simple masking defense effective for both monolingual and multilingual models. This study is the first to investigate multilingual inversion attacks, shedding light on the differences in attacks and defenses across monolingual and multilingual settings.", }
Textual data is often represented as real-valued embeddings in NLP, particularly with the popularity of large language models (LLMs) and Embeddings as a Service (EaaS). However, storing sensitive information as embeddings can be susceptible to security breaches, as research shows that text can be reconstructed from embeddings, even without knowledge of the underlying model. While defense mechanisms have been explored, these are exclusively focused on English, leaving other languages potentially exposed to attacks. This work explores LLM security through multilingual embedding inversion. We define the problem of black-box multilingual and crosslingual inversion attacks, and explore their potential implications. Our findings suggest that multilingual LLMs may be more vulnerable to inversion attacks, in part because English-based defenses may be ineffective. To alleviate this, we propose a simple masking defense effective for both monolingual and multilingual models. This study is the first to investigate multilingual inversion attacks, shedding light on the differences in attacks and defenses across monolingual and multilingual settings.
[ "Chen, Yiyi", "Lent, Heather", "Bjerva, Johannes" ]
Text Embedding Inversion Security for Multilingual Language Models
acl-long.422
Poster
2401.12192
[ "https://github.com/siebeniris/multivec2text" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.422/
[]
[]
[]
0
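A minimal sketch of the kind of masking defense named in the chen-etal-2024-text record above: zero out a fixed, secret subset of embedding dimensions before an embedding leaves the service, degrading what an inversion model can reconstruct. The masking fraction and mechanics are illustrative assumptions, not the paper's exact defense.

```python
# Sketch: mask a secret subset of embedding dimensions before serving.
import numpy as np

rng = np.random.default_rng(42)

def mask_embedding(emb: np.ndarray, frac: float = 0.25) -> np.ndarray:
    masked = emb.copy()
    idx = rng.choice(emb.shape[-1], size=int(frac * emb.shape[-1]),
                     replace=False)
    masked[..., idx] = 0.0   # hidden dimensions never leave the service
    return masked

emb = rng.normal(size=768)           # embedding from an EaaS endpoint
served = mask_embedding(emb)
print(np.count_nonzero(served == 0))  # ~192 dimensions masked out
```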
https://aclanthology.org/2024.acl-long.423.bib
@inproceedings{lu-etal-2024-large, title = "Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment", author = "Lu, Keming and Yu, Bowen and Zhou, Chang and Zhou, Jingren", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.423", pages = "7828--7840", abstract = "Considerable efforts have been invested in augmenting the role-playing proficiency of open-source large language models (LLMs) by emulating proprietary counterparts. Nevertheless, we posit that LLMs inherently harbor role-play capabilities, owing to the extensive knowledge of characters and potential dialogues ingrained in their vast training corpora. Thus, we introduce Ditto, the first self-alignment method for role-play, which encourages an instruction-following LLM to simulate role-play dialogues as a variant of reading comprehension, and creates a role-play training set comprising 4000 characters, surpassing the scale of currently available datasets by tenfold regarding the number of roles. Subsequently, we fine-tune the LLM using this self-generated dataset to augment its role-playing capabilities. Upon evaluating our meticulously constructed role-play benchmark and the roleplay subset of MT-Bench, Ditto, in various parameter scales, consistently maintains a consistent role identity and provides accurate role-specific knowledge in multi-turn role-play conversations, outperforming all open-source role-play baselines. Furthermore, we present the first cross-supervision role-play experiment, revealing that the role-play styles can be easily acquired, while the intrinsic capabilities of LLMs confine the knowledge within role-play.", }
Considerable efforts have been invested in augmenting the role-playing proficiency of open-source large language models (LLMs) by emulating proprietary counterparts. Nevertheless, we posit that LLMs inherently harbor role-play capabilities, owing to the extensive knowledge of characters and potential dialogues ingrained in their vast training corpora. Thus, we introduce Ditto, the first self-alignment method for role-play, which encourages an instruction-following LLM to simulate role-play dialogues as a variant of reading comprehension, and creates a role-play training set comprising 4,000 characters, surpassing the scale of currently available datasets tenfold in the number of roles. Subsequently, we fine-tune the LLM using this self-generated dataset to augment its role-playing capabilities. Upon evaluation on our meticulously constructed role-play benchmark and the role-play subset of MT-Bench, Ditto, at various parameter scales, consistently maintains role identity and provides accurate role-specific knowledge in multi-turn role-play conversations, outperforming all open-source role-play baselines. Furthermore, we present the first cross-supervision role-play experiment, revealing that role-play styles can be easily acquired, while the intrinsic capabilities of LLMs confine the knowledge within role-play.
[ "Lu, Keming", "Yu, Bowen", "Zhou, Chang", "Zhou, Jingren" ]
Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment
acl-long.423
Poster
2401.12474
[ "https://github.com/ofa-sys/ditto" ]
https://huggingface.co/papers/2401.12474
4
33
1
4
https://aclanthology.org/2024.acl-long.423/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.424.bib
@inproceedings{kong-etal-2024-platolm, title = "{P}lato{LM}: Teaching {LLM}s in Multi-Round Dialogue via a User Simulator", author = "Kong, Chuyi and Fan, Yaxin and Wan, Xiang and Jiang, Feng and Wang, Benyou", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.424", pages = "7841--7863", abstract = "The unparalleled performance of closed-sourced ChatGPT has sparked efforts towards its democratization, with notable strides made by leveraging real user and ChatGPT dialogues, as evidenced by Vicuna. However, due to challenges in gathering dialogues involving human participation, current endeavors like Baize and UltraChat rely on ChatGPT conducting roleplay to simulate humans based on instructions, resulting in overdependence on seeds, diminished human-likeness, limited topic diversity, and an absence of genuine multi-round conversational dynamics. To address the above issues, we propose a paradigm to simulate human behavior better and explore the benefits of incorporating more human-like questions in multi-turn conversations. Specifically, we directly target human questions extracted from genuine human-machine conversations as a learning goal and provide a novel user simulator called {`}Socratic{`}. The experimental results show our response model, {`}PlatoLM{`}, achieves SoTA performance among LLaMA-based 7B models in MT-Bench. Our findings further demonstrate that our method introduces highly human-like questioning patterns and rich topic structures, which can teach the response model better than previous works in multi-round conversations.", }
The unparalleled performance of closed-source ChatGPT has sparked efforts towards its democratization, with notable strides made by leveraging real user and ChatGPT dialogues, as evidenced by Vicuna. However, due to challenges in gathering dialogues involving human participation, current endeavors like Baize and UltraChat rely on ChatGPT conducting roleplay to simulate humans based on instructions, resulting in overdependence on seeds, diminished human-likeness, limited topic diversity, and an absence of genuine multi-round conversational dynamics. To address the above issues, we propose a paradigm to better simulate human behavior and explore the benefits of incorporating more human-like questions in multi-turn conversations. Specifically, we directly target human questions extracted from genuine human-machine conversations as a learning goal and provide a novel user simulator called 'Socratic'. The experimental results show our response model, 'PlatoLM', achieves SoTA performance among LLaMA-based 7B models in MT-Bench. Our findings further demonstrate that our method introduces highly human-like questioning patterns and rich topic structures, which can teach the response model better than previous works in multi-round conversations.
[ "Kong, Chuyi", "Fan, Yaxin", "Wan, Xiang", "Jiang, Feng", "Wang, Benyou" ]
PlatoLM: Teaching LLMs in Multi-Round Dialogue via a User Simulator
acl-long.424
Poster
2308.11534
[ "" ]
https://huggingface.co/papers/2308.11534
0
2
0
5
https://aclanthology.org/2024.acl-long.424/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.425.bib
@inproceedings{yang-etal-2024-synthesizing, title = "Synthesizing Text-to-{SQL} Data from Weak and Strong {LLM}s", author = "Yang, Jiaxi and Hui, Binyuan and Yang, Min and Yang, Jian and Lin, Junyang and Zhou, Chang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.425", pages = "7864--7875", abstract = "The capability gap between open-source and closed-source large language models (LLMs) remains a challenge in text-to-SQL tasks. In this paper, we introduce a synthetic data approach that combines data produced by larger, more powerful models (strong models) with error information data generated by smaller, not well-aligned models (weak models). The method not only enhances the domain generalization of text-to-SQL models but also explores the potential of error data supervision through preference learning. Furthermore, we employ the synthetic data approach for instruction tuning on open-source LLMs, resulting SENSE, a specialized text-to-SQL model. The effectiveness of SENSE is demonstrated through state-of-the-art results on the SPIDER and BIRD benchmarks, bridging the performance gap between open-source models and methods prompted by closed-source models.", }
The capability gap between open-source and closed-source large language models (LLMs) remains a challenge in text-to-SQL tasks. In this paper, we introduce a synthetic data approach that combines data produced by larger, more powerful models (strong models) with error-information data generated by smaller, not well-aligned models (weak models). The method not only enhances the domain generalization of text-to-SQL models but also explores the potential of error-data supervision through preference learning. Furthermore, we employ the synthetic data approach for instruction tuning on open-source LLMs, resulting in SENSE, a specialized text-to-SQL model. The effectiveness of SENSE is demonstrated through state-of-the-art results on the SPIDER and BIRD benchmarks, bridging the performance gap between open-source models and methods prompted by closed-source models.
[ "Yang, Jiaxi", "Hui, Binyuan", "Yang, Min", "Yang, Jian", "Lin, Junyang", "Zhou, Chang" ]
Synthesizing Text-to-SQL Data from Weak and Strong LLMs
acl-long.425
Poster
2408.03256
[ "" ]
https://huggingface.co/papers/2408.03256
3
6
2
6
https://aclanthology.org/2024.acl-long.425/
[]
[]
[]
1
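A toy sketch in the spirit of the yang-etal-2024-synthesizing record above: pair strong-model and weak-model SQL generations into preference data, with executable strong-model SQL as the "chosen" answer and erroneous weak-model SQL as the "rejected" one. The record fields and the DPO-style pair format are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: build preference pairs from strong/weak model generations.
def build_preference_pairs(examples):
    pairs = []
    for ex in examples:
        # keep cases where the strong model succeeded and the weak failed
        if ex["strong_sql_executes"] and not ex["weak_sql_executes"]:
            pairs.append({
                "prompt": ex["question"],
                "chosen": ex["strong_sql"],
                "rejected": ex["weak_sql"],
            })
    return pairs

examples = [{
    "question": "How many users signed up in 2023?",
    "strong_sql": "SELECT COUNT(*) FROM users WHERE year = 2023;",
    "weak_sql": "SELECT COUNT(year) FROM user WHERE 2023;",
    "strong_sql_executes": True,
    "weak_sql_executes": False,
}]
print(build_preference_pairs(examples))
```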
https://aclanthology.org/2024.acl-long.426.bib
@inproceedings{jain-etal-2024-structsum, title = "{STRUCTSUM} Generation for Faster Text Comprehension", author = "Jain, Parag and Marzoca, Andreea and Piccinno, Francesco", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.426", pages = "7876--7896", abstract = "We consider the task of generating structured representations of text using large language models (LLMs). We focus on tables and mind maps as representative modalities. Tables are more organized way of representing data, while mind maps provide a visually dynamic and flexible approach, particularly suitable for sparse content. Despite the effectiveness of LLMs on different tasks, we show that current models struggle with generating structured outputs. In response, we present effective prompting strategies for both of these tasks. We introduce a taxonomy of problems around factuality, global and local structure, common to both modalities and propose a set of critiques to tackle these issues resulting in an absolute improvement in accuracy of $+37$pp (79{\%}) for mind maps and $+15$pp (78{\%}) for tables. To evaluate semantic coverage of generated structured representations we propose Auto-QA, and we verify the adequacy of Auto-QA using SQuAD dataset. We further evaluate the usefulness of structured representations via a text comprehension user study. The results show a significant reduction in comprehension time compared to text when using table (42.9{\%}) and mind map (31.9{\%}), without loss in accuracy.", }
We consider the task of generating structured representations of text using large language models (LLMs). We focus on tables and mind maps as representative modalities. Tables are a more organized way of representing data, while mind maps provide a visually dynamic and flexible approach, particularly suitable for sparse content. Despite the effectiveness of LLMs on different tasks, we show that current models struggle with generating structured outputs. In response, we present effective prompting strategies for both of these tasks. We introduce a taxonomy of problems around factuality and global and local structure, common to both modalities, and propose a set of critiques to tackle these issues, resulting in an absolute improvement in accuracy of +37pp (79%) for mind maps and +15pp (78%) for tables. To evaluate the semantic coverage of generated structured representations, we propose Auto-QA, and we verify the adequacy of Auto-QA using the SQuAD dataset. We further evaluate the usefulness of structured representations via a text comprehension user study. The results show a significant reduction in comprehension time compared to text when using tables (42.9%) and mind maps (31.9%), without loss in accuracy.
[ "Jain, Parag", "Marzoca, Andreea", "Piccinno, Francesco" ]
STRUCTSUM Generation for Faster Text Comprehension
acl-long.426
Poster
2401.06837
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.426/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.427.bib
@inproceedings{zhao-etal-2024-analysing, title = "Analysing The Impact of Sequence Composition on Language Model Pre-Training", author = "Zhao, Yu and Qu, Yuanbin and Staniszewski, Konrad and Tworkowski, Szymon and Liu, Wei and Mi{\l}o{\'s}, Piotr and Wu, Yuxiang and Minervini, Pasquale", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.427", pages = "7897--7912", abstract = "Most language model pre-training frameworks concatenate multiple documents into fixed-length sequences and use \textit{causal masking} to compute the likelihood of each token given its context; this strategy is widely adopted due to its simplicity and efficiency. However, to this day, the influence of the pre-training sequence composition strategy on the generalisation properties of the model remains under-explored.In this work, we find that applying causal masking can lead to the inclusion of distracting information from previous documents during pre-training, which negatively impacts the performance of the models on language modelling and downstream tasks. In \textit{intra-document causal masking}, the likelihood of each token is only conditioned on the previous tokens in the same document, eliminating potential distracting information from previous documents and significantly improving performance. Furthermore, we find that concatenating related documents can reduce some potential distractions during pre-training, and our proposed efficient retrieval-based sequence construction method, Bm25Chunk, can improve in-context learning (+11.6{\%}), knowledge memorisation (+9.8{\%}), and context utilisation (+7.2{\%}) abilities of language models without sacrificing efficiency.", }
Most language model pre-training frameworks concatenate multiple documents into fixed-length sequences and use causal masking to compute the likelihood of each token given its context; this strategy is widely adopted due to its simplicity and efficiency. However, to this day, the influence of the pre-training sequence composition strategy on the generalisation properties of the model remains under-explored. In this work, we find that applying causal masking can lead to the inclusion of distracting information from previous documents during pre-training, which negatively impacts the performance of the models on language modelling and downstream tasks. In intra-document causal masking, the likelihood of each token is only conditioned on the previous tokens in the same document, eliminating potential distracting information from previous documents and significantly improving performance. Furthermore, we find that concatenating related documents can reduce some potential distractions during pre-training, and our proposed efficient retrieval-based sequence construction method, Bm25Chunk, can improve the in-context learning (+11.6%), knowledge memorisation (+9.8%), and context utilisation (+7.2%) abilities of language models without sacrificing efficiency.
[ "Zhao, Yu", "Qu, Yuanbin", "Staniszewski, Konrad", "Tworkowski, Szymon", "Liu, Wei", "Mi{\\l}o{\\'s}, Piotr", "Wu, Yuxiang", "Minervini, Pasquale" ]
Analysing The Impact of Sequence Composition on Language Model Pre-Training
acl-long.427
Oral
2402.13991
[ "https://github.com/yuzhaouoe/pretraining-data-packing" ]
https://huggingface.co/papers/2402.13991
2
1
0
8
https://aclanthology.org/2024.acl-long.427/
[]
[]
[]
1
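A minimal sketch of the intra-document causal masking described in the zhao-etal-2024-analysing record above: when several documents are packed into one sequence, token i may attend to token j only if j comes before i and both belong to the same document, so earlier documents cannot distract. The packing layout below is a toy example.

```python
# Sketch: build an intra-document causal attention mask for a packed
# training sequence (True = attention allowed).
import numpy as np

def intra_doc_causal_mask(doc_ids: np.ndarray) -> np.ndarray:
    n = len(doc_ids)
    causal = np.tril(np.ones((n, n), dtype=bool))       # j <= i
    same_doc = doc_ids[:, None] == doc_ids[None, :]     # same document
    return causal & same_doc

# three documents packed into one 8-token training sequence
doc_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2])
print(intra_doc_causal_mask(doc_ids).astype(int))
```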
https://aclanthology.org/2024.acl-long.428.bib
@inproceedings{chen-etal-2024-nacl, title = "{NACL}: A General and Effective {KV} Cache Eviction Framework for {LLM} at Inference Time", author = "Chen, Yilong and Wang, Guoxia and Shang, Junyuan and Cui, Shiyao and Zhang, Zhenyu and Liu, Tingwen and Wang, Shuohuan and Sun, Yu and Yu, Dianhai and Wu, Hua", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.428", pages = "7913--7926", abstract = "Large Language Models (LLMs) have ignited an innovative surge of AI applications, marking a new era of exciting possibilities equipped with extended context windows. However, hosting these models is cost-prohibitive mainly due to the extensive memory consumption of KV Cache involving long-context modeling. Despite several works proposing to evict unnecessary tokens from the KV Cache, most of them rely on the biased local statistics of accumulated attention scores and report performance using unconvincing metric like perplexity on inadequate short-text evaluation. In this paper, we propose NACL, a general framework for long-context KV cache eviction that achieves more optimal and efficient eviction in a single operation during the encoding phase. Due to NACL{'}s efficiency, we combine more accurate attention score statistics in Proxy-Tokens Eviction with the diversified random eviction strategy of Random Eviction, aiming to alleviate the issue of attention bias and enhance the robustness in maintaining pivotal tokens for long-context modeling tasks. Notably, our method significantly improves the performance on short- and long-text tasks by 80{\%} and 76{\%} respectively, reducing KV Cache by up to $5\times$ with over 95{\%} performance maintenance. Code available at https://github.com/PaddlePaddle/Research/tree/master/NLP/ACL2024-NACL.", }
Large Language Models (LLMs) have ignited an innovative surge of AI applications, marking a new era of exciting possibilities equipped with extended context windows. However, hosting these models is cost-prohibitive, mainly due to the extensive memory consumption of the KV Cache in long-context modeling. Despite several works proposing to evict unnecessary tokens from the KV Cache, most of them rely on the biased local statistics of accumulated attention scores and report performance using unconvincing metrics like perplexity on inadequate short-text evaluations. In this paper, we propose NACL, a general framework for long-context KV cache eviction that achieves more optimal and efficient eviction in a single operation during the encoding phase. Owing to NACL's efficiency, we combine more accurate attention score statistics in Proxy-Tokens Eviction with the diversified random eviction strategy of Random Eviction, aiming to alleviate the issue of attention bias and enhance robustness in maintaining pivotal tokens for long-context modeling tasks. Notably, our method significantly improves performance on short- and long-text tasks by 80% and 76% respectively, reducing the KV Cache by up to 5x while maintaining over 95% of performance. Code available at https://github.com/PaddlePaddle/Research/tree/master/NLP/ACL2024-NACL.
[ "Chen, Yilong", "Wang, Guoxia", "Shang, Junyuan", "Cui, Shiyao", "Zhang, Zhenyu", "Liu, Tingwen", "Wang, Shuohuan", "Sun, Yu", "Yu, Dianhai", "Wu, Hua" ]
NACL: A General and Effective KV Cache Eviction Framework for LLM at Inference Time
acl-long.428
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.428/
[]
[]
[]
0
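A toy sketch of single-pass KV cache eviction combining the two signals named in the chen-etal-2024-nacl record above: attention mass from a set of proxy tokens plus a randomized component for robustness. The budget split and scoring are illustrative assumptions, not NACL's exact algorithm.

```python
# Sketch: keep top proxy-scored KV entries plus a random remainder,
# evicting everything else in one operation.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, budget = 32, 8
# attention mass each cached token receives from the proxy tokens
proxy_scores = rng.random(n_tokens)

n_top = budget // 2
keep = set(np.argsort(proxy_scores)[-n_top:])        # proxy-token picks
rest = [i for i in range(n_tokens) if i not in keep]
keep |= set(rng.choice(rest, size=budget - n_top,    # random eviction
                       replace=False))

evicted = [i for i in range(n_tokens) if i not in keep]
print(sorted(keep))                       # KV entries retained
print(len(evicted), "entries evicted")    # single-pass eviction
```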
https://aclanthology.org/2024.acl-long.429.bib
@inproceedings{wang-etal-2024-spikevoice, title = "{S}pike{V}oice: High-Quality Text-to-Speech Via Efficient Spiking Neural Network", author = "Wang, Kexin and Zhang, Jiahong and Ren, Yong and Yao, Man and Shang, Di and Xu, Bo and Li, Guoqi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.429", pages = "7927--7940", abstract = "Brain-inspired Spiking Neural Network (SNN) has demonstrated its effectiveness and efficiency in vision, natural language, and speech understanding tasks, indicating their capacity to {``}see{''}, {``}listen{''}, and {``}read{''}. In this paper, we design SpikeVoice, which performs high-quality Text-To-Speech (TTS) via SNN, to explore the potential of SNN to {``}speak{''}. A major obstacle to using SNN for such generative tasks lies in the demand for models to grasp long-term dependencies. The serial nature of spiking neurons, however, leads to the invisibility of information at future spiking time steps, limiting SNN models to capture sequence dependencies solely within the same time step. We term this phenomenon {``}partial-time dependency{''}. To address this issue, we introduce Spiking Temporal-Sequential Attention (STSA) in the SpikeVoice. To the best of our knowledge, SpikeVoice is the first TTS work in the SNN field. We perform experiments using four well-established datasets that cover both Chinese and English languages, encompassing scenarios with both single-speaker and multi-speaker configurations. The results demonstrate that SpikeVoice can achieve results comparable to Artificial Neural Networks (ANN) with only 10.5{\%} energy consumption of ANN. Both our demo and code are available as supplementary material.", }
Brain-inspired Spiking Neural Networks (SNNs) have demonstrated their effectiveness and efficiency in vision, natural language, and speech understanding tasks, indicating their capacity to "see", "listen", and "read". In this paper, we design SpikeVoice, which performs high-quality Text-To-Speech (TTS) via SNN, to explore the potential of SNNs to "speak". A major obstacle to using SNNs for such generative tasks lies in the demand for models to grasp long-term dependencies. The serial nature of spiking neurons, however, leads to the invisibility of information at future spiking time steps, limiting SNN models to capturing sequence dependencies solely within the same time step. We term this phenomenon "partial-time dependency". To address this issue, we introduce Spiking Temporal-Sequential Attention (STSA) in SpikeVoice. To the best of our knowledge, SpikeVoice is the first TTS work in the SNN field. We perform experiments using four well-established datasets that cover both Chinese and English, encompassing scenarios with both single-speaker and multi-speaker configurations. The results demonstrate that SpikeVoice can achieve results comparable to Artificial Neural Networks (ANNs) with only 10.5% of the ANN energy consumption. Both our demo and code are available as supplementary material.
[ "Wang, Kexin", "Zhang, Jiahong", "Ren, Yong", "Yao, Man", "Shang, Di", "Xu, Bo", "Li, Guoqi" ]
SpikeVoice: High-Quality Text-to-Speech Via Efficient Spiking Neural Network
acl-long.429
Poster
2408.00788
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.429/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.430.bib
@inproceedings{tu-etal-2024-context, title = "Context-aware Difference Distilling for Multi-change Captioning", author = "Tu, Yunbin and Li, Liang and Su, Li and Zha, Zheng-Jun and Yan, Chenggang and Huang, Qingming", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.430", pages = "7941--7956", abstract = "Multi-change captioning aims to describe complex and coupled changes within an image pair in natural language. Compared with single-change captioning, this task requires the model to have higher-level cognition ability to reason an arbitrary number of changes. In this paper, we propose a novel context-aware difference distilling (CARD) network to capture all genuine changes for yielding sentences. Given an image pair, CARD first decouples context features that aggregate all similar/dissimilar semantics, termed common/difference context features. Then, the consistency and independence constraints are designed to guarantee the alignment/discrepancy of common/difference context features. Further, the common context features guide the model to mine locally unchanged features, which are subtracted from the pair to distill locally difference features. Next, the difference context features augment the locally difference features to ensure that all changes are distilled. In this way, we obtain an omni-representation of all changes, which is translated into linguistic sentences by a transformer decoder. Extensive experiments on three public datasets show CARD performs favourably against state-of-the-art methods. The code is available at https://github.com/tuyunbin/CARD.", }
Multi-change captioning aims to describe complex and coupled changes within an image pair in natural language. Compared with single-change captioning, this task requires the model to have higher-level cognitive ability to reason about an arbitrary number of changes. In this paper, we propose a novel context-aware difference distilling (CARD) network to capture all genuine changes for yielding sentences. Given an image pair, CARD first decouples context features that aggregate all similar/dissimilar semantics, termed common/difference context features. Then, consistency and independence constraints are designed to guarantee the alignment/discrepancy of common/difference context features. Further, the common context features guide the model to mine locally unchanged features, which are subtracted from the pair to distill locally difference features. Next, the difference context features augment the locally difference features to ensure that all changes are distilled. In this way, we obtain an omni-representation of all changes, which is translated into linguistic sentences by a transformer decoder. Extensive experiments on three public datasets show CARD performs favourably against state-of-the-art methods. The code is available at https://github.com/tuyunbin/CARD.
[ "Tu, Yunbin", "Li, Liang", "Su, Li", "Zha, Zheng-Jun", "Yan, Chenggang", "Huang, Qingming" ]
Context-aware Difference Distilling for Multi-change Captioning
acl-long.430
Poster
2405.20810
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.430/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.431.bib
@inproceedings{cheng-etal-2024-dataflow, title = "Dataflow-Guided Retrieval Augmentation for Repository-Level Code Completion", author = "Cheng, Wei and Wu, Yuhan and Hu, Wei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.431", pages = "7957--7977", abstract = "Recent years have witnessed the deployment of code language models (LMs) in various code intelligence tasks such as code completion. Yet, it is challenging for pre-trained LMs to generate correct completions in private repositories. Previous studies retrieve cross-file context based on import relations or text similarity, which is insufficiently relevant to completion targets. In this paper, we propose a dataflow-guided retrieval augmentation approach, called DraCo, for repository-level code completion. DraCo parses a private repository into code entities and establishes their relations through an extended dataflow analysis, forming a repo-specific context graph. Whenever triggering code completion, DraCo precisely retrieves relevant background knowledge from the repo-specific context graph and generates well-formed prompts to query code LMs. Furthermore, we construct a large Python dataset, ReccEval, with more diverse completion targets. Our experiments demonstrate the superior accuracy and applicable efficiency of DraCo, improving code exact match by 3.43{\%} and identifier F1-score by 3.27{\%} on average compared to the state-of-the-art approach.", }
Recent years have witnessed the deployment of code language models (LMs) in various code intelligence tasks such as code completion. Yet, it is challenging for pre-trained LMs to generate correct completions in private repositories. Previous studies retrieve cross-file context based on import relations or text similarity, which is insufficiently relevant to completion targets. In this paper, we propose a dataflow-guided retrieval augmentation approach, called DraCo, for repository-level code completion. DraCo parses a private repository into code entities and establishes their relations through an extended dataflow analysis, forming a repo-specific context graph. Whenever code completion is triggered, DraCo precisely retrieves relevant background knowledge from the repo-specific context graph and generates well-formed prompts to query code LMs. Furthermore, we construct a large Python dataset, ReccEval, with more diverse completion targets. Our experiments demonstrate the superior accuracy and applicable efficiency of DraCo, improving code exact match by 3.43% and identifier F1-score by 3.27% on average compared to the state-of-the-art approach.
[ "Cheng, Wei", "Wu, Yuhan", "Hu, Wei" ]
Dataflow-Guided Retrieval Augmentation for Repository-Level Code Completion
acl-long.431
Poster
2405.19782
[ "https://github.com/nju-websoft/DraCo" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.431/
[]
[]
[]
0
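A toy sketch of the retrieval-then-prompt step described in the cheng-etal-2024-dataflow record above: a repo-specific context graph maps code entities to related background snippets, and the entries relevant to the identifier at the cursor are serialized into the completion prompt. The graph contents and prompt format are invented for illustration; DraCo builds the real graph via extended dataflow analysis.

```python
# Sketch: serialize retrieved context-graph entries into a prompt.
context_graph = {
    "UserStore": ["class UserStore: ...", "def get(self, uid): ..."],
    "uid": ["uid: int  # assigned in auth.login()"],
}

def build_prompt(unfinished_code: str, cursor_ident: str) -> str:
    background = "\n".join(context_graph.get(cursor_ident, []))
    return (f"# Relevant repository context:\n{background}\n\n"
            f"# Complete the following code:\n{unfinished_code}")

print(build_prompt("def handler(store: UserStore, uid):\n    user = ",
                   "UserStore"))
```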
https://aclanthology.org/2024.acl-long.432.bib
@inproceedings{luo-etal-2024-chain, title = "Chain-of-Exemplar: Enhancing Distractor Generation for Multimodal Educational Question Generation", author = "Luo, Haohao and Deng, Yang and Shen, Ying and Ng, See-Kiong and Chua, Tat-Seng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.432", pages = "7978--7993", abstract = "Multiple-choice questions (MCQs) are important in enhancing concept learning and student engagement for educational purposes. Despite the multimodal nature of educational content, current methods focus mainly on text-based inputs and often neglect the integration of visual information. In this work, we study the problem of multimodal educational question generation, which aims at generating subject-specific educational questions with plausible yet incorrect distractors based on multimodal educational content. To tackle this problem, we introduce a novel framework, named Chain-of-Exemplar (CoE), which utilizes multimodal large language models (MLLMs) with Chain-of-Thought reasoning to improve the generation of challenging distractors. Furthermore, CoE leverages three-stage contextualized exemplar retrieval to retrieve exemplary questions as guides for generating more subject-specific educational questions. Experimental results on the ScienceQA benchmark demonstrate the superiority of CoE in both question generation and distractor generation over existing methods across various subjects and educational levels.", }
Multiple-choice questions (MCQs) are important in enhancing concept learning and student engagement for educational purposes. Despite the multimodal nature of educational content, current methods focus mainly on text-based inputs and often neglect the integration of visual information. In this work, we study the problem of multimodal educational question generation, which aims at generating subject-specific educational questions with plausible yet incorrect distractors based on multimodal educational content. To tackle this problem, we introduce a novel framework, named Chain-of-Exemplar (CoE), which utilizes multimodal large language models (MLLMs) with Chain-of-Thought reasoning to improve the generation of challenging distractors. Furthermore, CoE leverages three-stage contextualized exemplar retrieval to retrieve exemplary questions as guides for generating more subject-specific educational questions. Experimental results on the ScienceQA benchmark demonstrate the superiority of CoE in both question generation and distractor generation over existing methods across various subjects and educational levels.
[ "Luo, Haohao", "Deng, Yang", "Shen, Ying", "Ng, See-Kiong", "Chua, Tat-Seng" ]
Chain-of-Exemplar: Enhancing Distractor Generation for Multimodal Educational Question Generation
acl-long.432
Oral
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.432/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.433.bib
@inproceedings{chunliu-etal-2024-llmembed, title = "{LLME}mbed: Rethinking Lightweight {LLM}{'}s Genuine Function in Text Classification", author = "ChunLiu, ChunLiu and Zhang, Hongguang and Zhao, Kainan and Ju, Xinghai and Yang, Lin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.433", pages = "7994--8004", abstract = "With the booming of Large Language Models (LLMs), prompt-learning has become a promising method mainly researched in various research areas. Recently, many attempts based on prompt-learning have been made to improve the performance of text classification. However, most of these methods are based on heuristic Chain-of-Thought (CoT), and tend to be more complex but less efficient. In this paper, we rethink the LLM-based text classification methodology, propose a simple and effective transfer learning strategy, namely LLMEmbed, to address this classical but challenging task. To illustrate, we first study how to properly extract and fuse the text embeddings via various lightweight LLMs at different network depths to improve their robustness and discrimination, then adapt such embeddings to train the classifier. We perform extensive experiments on publicly available datasets, and the results show that LLMEmbed achieves strong performance while enjoys low training overhead using lightweight LLM backbones compared to recent methods based on larger LLMs, *i.e.* GPT-3, and sophisticated prompt-based strategies. Our LLMEmbed achieves adequate accuracy on publicly available benchmarks without any fine-tuning while merely use 4{\%} model parameters, 1.8{\%} electricity consumption and 1.5{\%} runtime compared to its counterparts. Code is available at: https://github.com/ChunLiu-cs/LLMEmbed-ACL2024.", }
With the boom of Large Language Models (LLMs), prompt-learning has become a promising method researched across various areas. Recently, many attempts based on prompt-learning have been made to improve the performance of text classification. However, most of these methods are based on heuristic Chain-of-Thought (CoT), and tend to be more complex but less efficient. In this paper, we rethink LLM-based text classification methodology and propose a simple and effective transfer learning strategy, namely LLMEmbed, to address this classical but challenging task. To illustrate, we first study how to properly extract and fuse text embeddings via various lightweight LLMs at different network depths to improve their robustness and discrimination, then adapt such embeddings to train the classifier. We perform extensive experiments on publicly available datasets, and the results show that LLMEmbed achieves strong performance while enjoying low training overhead using lightweight LLM backbones, compared to recent methods based on larger LLMs, i.e., GPT-3, and sophisticated prompt-based strategies. Our LLMEmbed achieves adequate accuracy on publicly available benchmarks without any fine-tuning while using merely 4% of the model parameters, 1.8% of the electricity consumption, and 1.5% of the runtime of its counterparts. Code is available at: https://github.com/ChunLiu-cs/LLMEmbed-ACL2024.
[ "ChunLiu, ChunLiu", "Zhang, Hongguang", "Zhao, Kainan", "Ju, Xinghai", "Yang, Lin" ]
LLMEmbed: Rethinking Lightweight LLM's Genuine Function in Text Classification
acl-long.433
Poster
2406.03725
[ "https://github.com/chunliu-cs/llmembed-acl2024" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.433/
[]
[]
[]
0
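A minimal sketch of the LLMEmbed recipe as described in the chunliu-etal-2024-llmembed record above: take text embeddings from several network depths of a lightweight LLM, fuse them by concatenation, and train a simple classifier on top. Random features stand in for real hidden states here, and concatenation is one plausible fusion choice.

```python
# Sketch: fuse embeddings from multiple network depths, then train
# a lightweight classifier. Features are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_docs, hidden = 200, 64
# pretend these are mean-pooled hidden states from three depths
layer_embs = [rng.normal(size=(n_docs, hidden)) for _ in range(3)]
X = np.concatenate(layer_embs, axis=1)       # fused representation
y = rng.integers(0, 2, size=n_docs)          # toy binary labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"train accuracy: {clf.score(X, y):.2f}")
```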
https://aclanthology.org/2024.acl-long.434.bib
@inproceedings{chen-etal-2024-lemon, title = "{LEMON}: Reviving Stronger and Smaller {LM}s from Larger {LM}s with Linear Parameter Fusion", author = "Chen, Yilong and Shang, Junyuan and Zhang, Zhenyu and Cui, Shiyao and Liu, Tingwen and Wang, Shuohuan and Sun, Yu and Wu, Hua", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.434", pages = "8005--8019", abstract = "In the new era of language models, small models (with billions of parameter sizes) are receiving increasing attention due to their flexibility and cost-effectiveness in deployment. However, limited by the model size, the performance of small models trained from scratch may often be unsatisfactory. Learning a stronger and smaller model with the help of larger models is an intuitive idea. Inspired by the observing modular structures in preliminary analysis, we propose LEMON to learn competent initial points for smaller models by fusing parameters from larger models, thereby laying a solid foundation for subsequent training. Specifically, the parameter fusion process involves two operators for layer and dimension, respectively, and we also introduce controllable receptive fields to model the prior parameter characteristics. In this way, the larger model could be transformed into any specific smaller scale and architecture. Starting from LLaMA 2-7B, we revive two stronger and smaller models with 1.3B and 2.7B. Experimental results demonstrate that the fusion-based method exhibits flexibility and outperforms a series of competitive baselines in terms of both effectiveness and efficiency.", }
In the new era of language models, small models (with billions of parameters) are receiving increasing attention due to their flexibility and cost-effectiveness in deployment. However, limited by model size, the performance of small models trained from scratch is often unsatisfactory. Learning a stronger, smaller model with the help of larger models is an intuitive idea. Inspired by the modular structures observed in our preliminary analysis, we propose LEMON, which learns competent initial points for smaller models by fusing parameters from larger models, thereby laying a solid foundation for subsequent training. Specifically, the parameter fusion process involves two operators, for layers and dimensions respectively, and we also introduce controllable receptive fields to model prior parameter characteristics. In this way, the larger model can be transformed into any specific smaller scale and architecture. Starting from LLaMA 2-7B, we revive two stronger and smaller models with 1.3B and 2.7B parameters. Experimental results demonstrate that the fusion-based method exhibits flexibility and outperforms a series of competitive baselines in terms of both effectiveness and efficiency.
[ "Chen, Yilong", "Shang, Junyuan", "Zhang, Zhenyu", "Cui, Shiyao", "Liu, Tingwen", "Wang, Shuohuan", "Sun, Yu", "Wu, Hua" ]
LEMON: Reviving Stronger and Smaller LMs from Larger LMs with Linear Parameter Fusion
acl-long.434
Oral
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.434/
[]
[]
[]
0
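As a rough illustration of linear parameter fusion, the sketch below shrinks a larger stack of weight matrices into a smaller one by averaging neighbouring layers and truncating hidden dimensions. LEMON's actual layer and dimension operators (and its controllable receptive fields) are more sophisticated; grouping-by-average and truncation here are stand-in assumptions.

```python
import torch

def fuse_layers(layer_weights, target_layers):
    """Average groups of neighbouring layers down to `target_layers` matrices."""
    group = len(layer_weights) // target_layers
    return [
        torch.stack(layer_weights[i * group:(i + 1) * group]).mean(dim=0)
        for i in range(target_layers)
    ]

def shrink_dim(weight, d_small):
    """Naive dimension operator: keep the top-left d_small x d_small block."""
    return weight[:d_small, :d_small].clone()

# Toy usage: 32 large square layers fused into 16 smaller initial points.
big = [torch.randn(4096, 4096) for _ in range(32)]
small_init = [shrink_dim(w, 2048) for w in fuse_layers(big, 16)]
```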
https://aclanthology.org/2024.acl-long.435.bib
@inproceedings{yu-etal-2024-speech, title = "Speech Sense Disambiguation: Tackling Homophone Ambiguity in End-to-End Speech Translation", author = "Yu, Tengfei and Liu, Xuebo and Ding, Liang and Chen, Kehai and Tao, Dacheng and Zhang, Min", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.435", pages = "8020--8035", abstract = "End-to-end speech translation (ST) presents notable disambiguation challenges as it necessitates simultaneous cross-modal and cross-lingual transformations. While word sense disambiguation is an extensively investigated topic in textual machine translation, the exploration of disambiguation strategies for ST models remains limited. Addressing this gap, this paper introduces the concept of speech sense disambiguation (SSD), specifically emphasizing homophones - words pronounced identically but with different meanings. To facilitate this, we first create a comprehensive homophone dictionary and an annotated dataset rich with homophone information established based on speech-text alignment. Building on this unique dictionary, we introduce AmbigST, an innovative homophone-aware contrastive learning approach that integrates a homophone-aware masking strategy. Our experiments on different MuST-C and CoVoST ST benchmarks demonstrate that AmbigST sets new performance standards. Specifically, it achieves SOTA results on BLEU scores for English to German, Spanish, and French ST tasks, underlining its effectiveness in reducing speech sense ambiguity. Data, code and scripts are freely available at https://github.com/ytf-philp/AmbigST.", }
End-to-end speech translation (ST) presents notable disambiguation challenges as it necessitates simultaneous cross-modal and cross-lingual transformations. While word sense disambiguation is an extensively investigated topic in textual machine translation, the exploration of disambiguation strategies for ST models remains limited. Addressing this gap, this paper introduces the concept of speech sense disambiguation (SSD), specifically emphasizing homophones - words pronounced identically but with different meanings. To facilitate this, we first create a comprehensive homophone dictionary and an annotated dataset rich in homophone information, built on speech-text alignment. Building on this dictionary, we introduce AmbigST, an innovative homophone-aware contrastive learning approach that integrates a homophone-aware masking strategy. Our experiments on MuST-C and CoVoST ST benchmarks demonstrate that AmbigST sets new performance standards. Specifically, it achieves state-of-the-art BLEU scores on English-to-German, English-to-Spanish, and English-to-French ST tasks, underlining its effectiveness in reducing speech sense ambiguity. Data, code and scripts are freely available at https://github.com/ytf-philp/AmbigST.
[ "Yu, Tengfei", "Liu, Xuebo", "Ding, Liang", "Chen, Kehai", "Tao, Dacheng", "Zhang, Min" ]
Speech Sense Disambiguation: Tackling Homophone Ambiguity in End-to-End Speech Translation
acl-long.435
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.435/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.436.bib
@inproceedings{wang-utiyama-2024-continuous, title = "To be Continuous, or to be Discrete, Those are Bits of Questions", author = "Wang, Yiran and Utiyama, Masao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.436", pages = "8036--8049", abstract = "Recently, binary representation has been proposed as a novel representation that lies between continuous and discrete representations. It exhibits considerable information-preserving capability when being used to replace continuous input vectors. In this paper, we investigate the feasibility of further introducing it to the output side, aiming to allow models to output binary labels instead. To preserve the structural information on the output side along with label information, we extend the previous contrastive hashing method as structured contrastive hashing. More specifically, we upgrade CKY from label-level to bit-level, define a new similarity function with span marginal probabilities, and introduce a novel contrastive loss function with a carefully designed instance selection strategy. Our model achieves competitive performance on various structured prediction tasks, and demonstrates that binary representation can be considered a novel representation that further bridges the gap between the continuous nature of deep learning and the discrete intrinsic property of natural languages.", }
Recently, binary representation has been proposed as a novel representation that lies between continuous and discrete representations. It exhibits considerable information-preserving capability when used to replace continuous input vectors. In this paper, we investigate the feasibility of further introducing it on the output side, aiming to allow models to output binary labels instead. To preserve the structural information on the output side along with label information, we extend the previous contrastive hashing method to structured contrastive hashing. More specifically, we upgrade CKY from label-level to bit-level, define a new similarity function with span marginal probabilities, and introduce a novel contrastive loss function with a carefully designed instance selection strategy. Our model achieves competitive performance on various structured prediction tasks, and demonstrates that binary representation can be considered a novel representation that further bridges the gap between the continuous nature of deep learning and the discrete intrinsic property of natural languages.
[ "Wang, Yiran", "Utiyama, Masao" ]
To be Continuous, or to be Discrete, Those are Bits of Questions
acl-long.436
Poster
2406.07812
[ "https://github.com/speedcell4/parserker" ]
https://huggingface.co/papers/2406.07812
0
1
0
2
https://aclanthology.org/2024.acl-long.436/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.437.bib
@inproceedings{schneider-etal-2024-mousai, title = "Mo{\^u}sai: Efficient Text-to-Music Diffusion Models", author = {Schneider, Flavio and Kamal, Ojasv and Jin, Zhijing and Sch{\"o}lkopf, Bernhard}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.437", pages = "8050--8068", abstract = "Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another {``}language{''} of communication {--} music. Music, much like text, can convey emotions, stories, and ideas, and has its own unique structure and syntax. In our work, we bridge text and music via a text-to-music generation model that is highly efficient, expressive, and can handle long-term structure. Specifically, we develop Mo{\^u}sai, a cascading two-stage latent diffusion model that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions. Moreover, our model features high efficiency, which enables real-time inference on a single consumer GPU with a reasonable speed. Through experiments and property analyses, we show our model{'}s competence over a variety of criteria compared with existing music generation models. Lastly, to promote the open-source culture, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We open-source the following: Codes: https://github.com/archinetai/audio-diffusion-pytorch. Music samples for this paper: http://bit.ly/44ozWDH. Music samples for all models: https://bit.ly/audio-diffusion.", }
Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication: music. Music, much like text, can convey emotions, stories, and ideas, and has its own unique structure and syntax. In our work, we bridge text and music via a text-to-music generation model that is highly efficient, expressive, and can handle long-term structure. Specifically, we develop Moûsai, a cascading two-stage latent diffusion model that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions. Moreover, our model features high efficiency, enabling real-time inference on a single consumer GPU at reasonable speed. Through experiments and property analyses, we show our model's competence over a variety of criteria compared with existing music generation models. Lastly, to promote an open-source culture, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We open-source the following: Code: https://github.com/archinetai/audio-diffusion-pytorch. Music samples for this paper: http://bit.ly/44ozWDH. Music samples for all models: https://bit.ly/audio-diffusion.
[ "Schneider, Flavio", "Kamal, Ojasv", "Jin, Zhijing", "Sch{\\\"o}lkopf, Bernhard" ]
Moûsai: Efficient Text-to-Music Diffusion Models
acl-long.437
Oral
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.437/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.438.bib
@inproceedings{gu-etal-2024-pokemqa, title = "{P}oke{MQA}: Programmable knowledge editing for Multi-hop Question Answering", author = "Gu, Hengrui and Zhou, Kaixiong and Han, Xiaotian and Liu, Ninghao and Wang, Ruobing and Wang, Xin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.438", pages = "8069--8083", abstract = "Multi-hop question answering (MQA) is one of the challenging tasks to evaluate machine{'}s comprehension and reasoning abilities, where large language models (LLMs) have widely achieved the human-comparable performance. Due to the dynamics of knowledge facts in real world, knowledge editing has been explored to update model with the up-to-date facts while avoiding expensive re-training or fine-tuning. Starting from the edited fact, the updated model needs to provide cascading changes in the chain of MQA. The previous art simply adopts a mix-up prompt to instruct LLMs conducting multiple reasoning tasks sequentially, including question decomposition, answer generation, and conflict checking via comparing with edited facts. However, the coupling of these functionally-diverse reasoning tasks inhibits LLMs{'} advantages in comprehending and answering questions while disturbing them with the unskilled task of conflict checking. We thus propose a framework, Programmable knowledge editing for Multi-hop Question Answering (PokeMQA), to decouple the jobs. Specifically, we prompt LLMs to decompose knowledge-augmented multi-hop question, while interacting with a detached trainable scope detector to modulate LLMs behavior depending on external conflict signal. The experiments on three LLM backbones and two benchmark datasets validate our superiority in knowledge editing of MQA, outperforming all competitors by a large margin in almost all settings and consistently producing reliable reasoning process.", }
Multi-hop question answering (MQA) is a challenging task for evaluating machines' comprehension and reasoning abilities, on which large language models (LLMs) have widely achieved human-comparable performance. Because knowledge facts change in the real world, knowledge editing has been explored to update models with up-to-date facts while avoiding expensive re-training or fine-tuning. Starting from an edited fact, the updated model needs to provide cascading changes along the chain of MQA. Previous work simply adopts a mix-up prompt to instruct LLMs to conduct multiple reasoning tasks sequentially, including question decomposition, answer generation, and conflict checking via comparison with edited facts. However, coupling these functionally diverse reasoning tasks inhibits LLMs' advantages in comprehending and answering questions while burdening them with the unskilled task of conflict checking. We thus propose a framework, Programmable knowledge editing for Multi-hop Question Answering (PokeMQA), to decouple these jobs. Specifically, we prompt LLMs to decompose knowledge-augmented multi-hop questions, while interacting with a detached trainable scope detector that modulates LLM behavior depending on external conflict signals. Experiments on three LLM backbones and two benchmark datasets validate our superiority in knowledge editing for MQA, outperforming all competitors by a large margin in almost all settings and consistently producing reliable reasoning processes.
[ "Gu, Hengrui", "Zhou, Kaixiong", "Han, Xiaotian", "Liu, Ninghao", "Wang, Ruobing", "Wang, Xin" ]
PokeMQA: Programmable knowledge editing for Multi-hop Question Answering
acl-long.438
Poster
2312.15194
[ "https://github.com/hengrui-gu/pokemqa" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.438/
[]
[]
[]
0
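The PokeMQA decoupling described above can be summarized as a control loop: the LLM only decomposes and answers, while a detached scope detector flags sub-questions that fall within an edited fact's scope. A hedged sketch, where `llm` and `scope_detector` are placeholder callables rather than the paper's actual components, and the prompt wordings are our own:

```python
def pokemqa_answer(question, edited_facts, llm, scope_detector):
    """Decoupled multi-hop QA: the LLM never does conflict checking itself."""
    subquestions = llm(
        f"Decompose into single-hop questions: {question}"
    ).splitlines()
    answer = None
    for sub in subquestions:
        prefix = f"(answer to previous hop: {answer}) " if answer else ""
        hit = scope_detector(sub, edited_facts)  # an edited fact, or None
        if hit is not None:
            answer = llm(f"{prefix}Given the updated fact '{hit}', answer: {sub}")
        else:
            answer = llm(f"{prefix}Answer: {sub}")
    return answer
```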
https://aclanthology.org/2024.acl-long.439.bib
@inproceedings{jha-etal-2024-memeguard, title = "{M}eme{G}uard: An {LLM} and {VLM}-based Framework for Advancing Content Moderation via Meme Intervention", author = "Jha, Prince and Jain, Raghav and Mandal, Konika and Chadha, Aman and Saha, Sriparna and Bhattacharyya, Pushpak", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.439", pages = "8084--8104", abstract = "In the digital world, memes present a unique challenge for content moderation due to their potential to spread harmful content. Although detection methods have improved, proactive solutions such as intervention are still limited, with current research focusing mostly on text-based content, neglecting the widespread influence of multimodal content like memes. Addressing this gap, we present \textit{MemeGuard}, a comprehensive framework leveraging Large Language Models (LLMs) and Visual Language Models (VLMs) for meme intervention. \textit{MemeGuard} harnesses a specially fine-tuned VLM, \textit{VLMeme}, for meme interpretation, and a multimodal knowledge selection and ranking mechanism (\textit{MKS}) for distilling relevant knowledge. This knowledge is then employed by a general-purpose LLM to generate contextually appropriate interventions. Another key contribution of this work is the \textit{ \textbf{I}ntervening} \textit{ \textbf{C}yberbullying in \textbf{M}ultimodal \textbf{M}emes (ICMM)} dataset, a high-quality, labeled dataset featuring toxic memes and their corresponding human-annotated interventions. We leverage \textit{ICMM} to test \textit{MemeGuard}, demonstrating its proficiency in generating relevant and effective responses to toxic memes. red \textbf{Disclaimer}: \textit{This paper contains harmful content that may be disturbing to some readers.}", }
In the digital world, memes present a unique challenge for content moderation due to their potential to spread harmful content. Although detection methods have improved, proactive solutions such as intervention are still limited, with current research focusing mostly on text-based content, neglecting the widespread influence of multimodal content like memes. Addressing this gap, we present MemeGuard, a comprehensive framework leveraging Large Language Models (LLMs) and Visual Language Models (VLMs) for meme intervention. MemeGuard harnesses a specially fine-tuned VLM, VLMeme, for meme interpretation, and a multimodal knowledge selection and ranking mechanism (MKS) for distilling relevant knowledge. This knowledge is then employed by a general-purpose LLM to generate contextually appropriate interventions. Another key contribution of this work is the Intervening Cyberbullying in Multimodal Memes (ICMM) dataset, a high-quality, labeled dataset featuring toxic memes and their corresponding human-annotated interventions. We leverage ICMM to test MemeGuard, demonstrating its proficiency in generating relevant and effective responses to toxic memes. Disclaimer: This paper contains harmful content that may be disturbing to some readers.
[ "Jha, Prince", "Jain, Raghav", "M", "al, Konika", "Chadha, Aman", "Saha, Sriparna", "Bhattacharyya, Pushpak" ]
MemeGuard: An LLM and VLM-based Framework for Advancing Content Moderation via Meme Intervention
acl-long.439
Poster
2406.05344
[ "https://github.com/Jhaprince/MemeGuard" ]
https://huggingface.co/papers/2406.05344
1
0
0
6
https://aclanthology.org/2024.acl-long.439/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.440.bib
@inproceedings{carlson-etal-2024-efficient, title = "Efficient {OCR} for Building a Diverse Digital History", author = "Carlson, Jacob and Bryan, Tom and Dell, Melissa", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.440", pages = "8105--8115", abstract = "Many users consult digital archives daily, but the information they can access is unrepresentative of the diversity of documentary history. The sequence-to-sequence architecture typically used for optical character recognition (OCR) {--} which jointly learns a vision and language model {--} is poorly extensible to low-resource document collections, as learning a language-vision model requires extensive labeled sequences and compute. This study models OCR as a character level image retrieval problem, using a contrastively trained vision encoder. Because the model only learns characters{'} visual features, it is more sample efficient and extensible than existing architectures, enabling accurate OCR in settings where existing solutions fail. Crucially, it opens new avenues for community engagement in making digital history more representative of documentary history.", }
Many users consult digital archives daily, but the information they can access is unrepresentative of the diversity of documentary history. The sequence-to-sequence architecture typically used for optical character recognition (OCR), which jointly learns a vision and language model, is poorly extensible to low-resource document collections, as learning a joint language-vision model requires extensive labeled sequences and compute. This study models OCR as a character-level image retrieval problem, using a contrastively trained vision encoder. Because the model only learns characters' visual features, it is more sample-efficient and extensible than existing architectures, enabling accurate OCR in settings where existing solutions fail. Crucially, it opens new avenues for community engagement in making digital history more representative of documentary history.
[ "Carlson, Jacob", "Bryan, Tom", "Dell, Melissa" ]
Efficient OCR for Building a Diverse Digital History
acl-long.440
Oral
2304.02737
[ "https://github.com/dell-research-harvard/effocr" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.440/
[]
[]
[]
0
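Modeling OCR as character-level image retrieval reduces inference to a nearest-neighbour lookup over glyph embeddings. A minimal sketch under the assumption that character crops have already been localized and embedded by a contrastively trained encoder; random vectors stand in for real embeddings here:

```python
import string
import numpy as np

def recognize(crop_embs, ref_embs, ref_labels):
    """Label each character crop with its nearest reference glyph (cosine)."""
    crops = crop_embs / np.linalg.norm(crop_embs, axis=1, keepdims=True)
    refs = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    sims = crops @ refs.T
    return [ref_labels[i] for i in sims.argmax(axis=1)]

# Toy usage with random stand-in embeddings for 5 crops and 52 glyphs.
labels = list(string.ascii_letters)
chars = recognize(np.random.randn(5, 128), np.random.randn(52, 128), labels)
```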
https://aclanthology.org/2024.acl-long.441.bib
@inproceedings{wu-etal-2024-acquiring, title = "Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space", author = "Wu, Zongru and Zhang, Zhuosheng and Cheng, Pengzhou and Liu, Gongshen", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.441", pages = "8116--8134", abstract = "Despite the notable success of language models (LMs) in various natural language processing (NLP) tasks, the reliability of LMs is susceptible to backdoor attacks. Prior research attempts to mitigate backdoor learning while training the LMs on the poisoned dataset, yet struggles against complex backdoor attacks in real-world scenarios. In this paper, we investigate the learning mechanisms of backdoor LMs in the frequency space by Fourier analysis. Our findings indicate that the backdoor mapping presented on the poisoned datasets exhibits a more discernible inclination towards lower frequency compared to clean mapping, resulting in the faster convergence of backdoor mapping. To alleviate this dilemma, we propose \textbf{Mu}lti-\textbf{Sc}a\textbf{le} \textbf{Lo}w-\textbf{R}ank \textbf{A}daptation (MuScleLoRA), which deploys multiple radial scalings in the frequency space with low-rank adaptation to the target model and further aligns the gradients when updating parameters. Through downscaling in the frequency space, MuScleLoRA encourages the model to prioritize the learning of relatively high-frequency clean mapping, consequently mitigating backdoor learning. Experimental results demonstrate that MuScleLoRA outperforms baselines significantly. Notably, MuScleLoRA reduces the average success rate of diverse backdoor attacks to below 15{\%} across multiple datasets and generalizes to various backbone LMs, including BERT, RoBERTa, and Llama2. The codes are publicly available at Anonymous.", }
Despite the notable success of language models (LMs) in various natural language processing (NLP) tasks, the reliability of LMs is susceptible to backdoor attacks. Prior research attempts to mitigate backdoor learning while training LMs on a poisoned dataset, yet struggles against complex backdoor attacks in real-world scenarios. In this paper, we investigate the learning mechanisms of backdoored LMs in the frequency space via Fourier analysis. Our findings indicate that the backdoor mapping presented in poisoned datasets exhibits a more discernible inclination towards lower frequencies than the clean mapping, resulting in faster convergence of the backdoor mapping. To alleviate this dilemma, we propose Multi-Scale Low-Rank Adaptation (MuScleLoRA), which deploys multiple radial scalings in the frequency space with low-rank adaptation to the target model and further aligns the gradients when updating parameters. Through downscaling in the frequency space, MuScleLoRA encourages the model to prioritize learning the relatively high-frequency clean mapping, consequently mitigating backdoor learning. Experimental results demonstrate that MuScleLoRA outperforms baselines significantly. Notably, MuScleLoRA reduces the average success rate of diverse backdoor attacks to below 15% across multiple datasets and generalizes to various backbone LMs, including BERT, RoBERTa, and Llama2. The codes are publicly available at Anonymous.
[ "Wu, Zongru", "Zhang, Zhuosheng", "Cheng, Pengzhou", "Liu, Gongshen" ]
Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space
acl-long.441
Poster
2402.12026
[ "https://github.com/zrw00/musclelora" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.441/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.442.bib
@inproceedings{ji-etal-2024-anah, title = "{ANAH}: Analytical Annotation of Hallucinations in Large Language Models", author = "Ji, Ziwei and Gu, Yuzhe and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.442", pages = "8135--8158", abstract = "Reducing the \textit{hallucination} problem of Large Language Models (LLMs) is crucial for their wide applications. A comprehensive and fine-grained measurement of the hallucination is the first key step for the governance of this issue but is under-explored in the community.Thus, we present \textbf{ANAH}, a bilingual dataset that offers $\textbf{AN}$alytical $\textbf{A}$nnotation of $\textbf{H}$allucinations in LLMs within Generative Question Answering.Each answer sentence in our dataset undergoes rigorous annotation, involving the retrieval of a reference fragment, the judgment of the hallucination type, and the correction of hallucinated content. ANAH consists of {\textasciitilde}12k sentence-level annotations for {\textasciitilde}4.3k LLM responses covering over 700 topics, constructed by a human-in-the-loop pipeline.Thanks to the fine granularity of the hallucination annotations, we can quantitatively confirm that the hallucinations of LLMs progressively accumulate in the answer and use ANAH to train and evaluate hallucination annotators. We conduct extensive experiments on studying generative and discriminative annotators and show that, although current open-source LLMs have difficulties in fine-grained hallucination annotation, the generative annotator trained with ANAH can surpass all open-source LLMs and GPT-3.5, obtain performance competitive with GPT-4, and exhibits better generalization ability on unseen questions.", }
Reducing the hallucination problem of Large Language Models (LLMs) is crucial for their wide applications. A comprehensive and fine-grained measurement of hallucination is the first key step toward governing this issue but is under-explored in the community. Thus, we present ANAH, a bilingual dataset that offers ANalytical Annotation of Hallucinations in LLMs within generative question answering. Each answer sentence in our dataset undergoes rigorous annotation, involving the retrieval of a reference fragment, the judgment of the hallucination type, and the correction of hallucinated content. ANAH consists of ~12k sentence-level annotations for ~4.3k LLM responses covering over 700 topics, constructed by a human-in-the-loop pipeline. Thanks to the fine granularity of the hallucination annotations, we can quantitatively confirm that the hallucinations of LLMs progressively accumulate in the answer, and we use ANAH to train and evaluate hallucination annotators. We conduct extensive experiments studying generative and discriminative annotators and show that, although current open-source LLMs have difficulties with fine-grained hallucination annotation, the generative annotator trained with ANAH can surpass all open-source LLMs and GPT-3.5, obtain performance competitive with GPT-4, and exhibit better generalization ability on unseen questions.
[ "Ji, Ziwei", "Gu, Yuzhe", "Zhang, Wenwei", "Lyu, Chengqi", "Lin, Dahua", "Chen, Kai" ]
ANAH: Analytical Annotation of Hallucinations in Large Language Models
acl-long.442
Poster
2405.20315
[ "https://github.com/open-compass/anah" ]
https://huggingface.co/papers/2405.20315
1
0
0
6
https://aclanthology.org/2024.acl-long.442/
[ "opencompass/anah-20b", "opencompass/anah-7b" ]
[ "opencompass/anah" ]
[]
1
https://aclanthology.org/2024.acl-long.443.bib
@inproceedings{lu-etal-2024-aligning, title = "Aligning Large Language Models for Controllable Recommendations", author = "Lu, Wensheng and Lian, Jianxun and Zhang, Wei and Li, Guanghua and Zhou, Mingyang and Liao, Hao and Xie, Xing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.443", pages = "8159--8172", abstract = "Inspired by the exceptional general intelligence of Large Language Models (LLMs), researchers have begun to explore their application in pioneering the next generation of recommender systems {---} systems that are conversational, explainable, and controllable. However, existing literature primarily concentrates on integrating domain-specific knowledge into LLMs to enhance accuracy using a fixed task template, often overlooking the diversity of recommendation tasks and the ability of LLMs to follow recommendation-specific instructions. To address this gap, we first introduce a collection of supervised learning tasks, augmented with labels derived from a conventional recommender model, aimed at explicitly improving LLMs{'} proficiency in adhering to recommendation-specific instructions. Next, we propose a reinforcement learning-based alignment procedure to enhance LLMs{'} generalization ability. Extensive experiments on two real-world datasets demonstrate that our approach significantly improves the capability of LLMs to respond to instructions within recommender systems, reducing formatting errors while maintaining a high level of accuracy.", }
Inspired by the exceptional general intelligence of Large Language Models (LLMs), researchers have begun to explore their application in pioneering the next generation of recommender systems: systems that are conversational, explainable, and controllable. However, existing literature primarily concentrates on integrating domain-specific knowledge into LLMs to enhance accuracy using a fixed task template, often overlooking the diversity of recommendation tasks and the ability of LLMs to follow recommendation-specific instructions. To address this gap, we first introduce a collection of supervised learning tasks, augmented with labels derived from a conventional recommender model, aimed at explicitly improving LLMs' proficiency in adhering to recommendation-specific instructions. Next, we propose a reinforcement learning-based alignment procedure to enhance LLMs' generalization ability. Extensive experiments on two real-world datasets demonstrate that our approach significantly improves the capability of LLMs to respond to instructions within recommender systems, reducing formatting errors while maintaining a high level of accuracy.
[ "Lu, Wensheng", "Lian, Jianxun", "Zhang, Wei", "Li, Guanghua", "Zhou, Mingyang", "Liao, Hao", "Xie, Xing" ]
Aligning Large Language Models for Controllable Recommendations
acl-long.443
Poster
2403.05063
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.443/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.444.bib
@inproceedings{yu-etal-2024-revealing, title = "Revealing the Parametric Knowledge of Language Models: A Unified Framework for Attribution Methods", author = "Yu, Haeun and Atanasova, Pepa and Augenstein, Isabelle", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.444", pages = "8173--8186", abstract = "Language Models (LMs) acquire parametric knowledge from their training process, embedding it within their weights. The increasing scalability of LMs, however, poses significant challenges for understanding a model{'}s inner workings and further for updating or correcting this embedded knowledge without the significant cost of retraining. This underscores the importance of unveiling exactly what knowledge is stored and its association with specific model components. Instance Attribution (IA) and Neuron Attribution (NA) offer insights into this training-acquired knowledge, though they have not been compared systematically. Our study introduces a novel evaluation framework to quantify and compare the knowledge revealed by IA and NA. To align the results of the methods we introduce the attribution method NA-Instances to apply NA for retrieving influential training instances, and IA-Neurons to discover important neurons of influential instances discovered by IA. We further propose a comprehensive list of faithfulness tests to evaluate the comprehensiveness and sufficiency of the explanations provided by both methods. Through extensive experiments and analysis, we demonstrate that NA generally reveals more diverse and comprehensive information regarding the LM{'}s parametric knowledge compared to IA. Nevertheless, IA provides unique and valuable insights into the LM{'}s parametric knowledge, which are not revealed by NA. Our findings further suggest the potential of a synergistic approach of combining the diverse findings of IA and NA for a more holistic understanding of an LM{'}s parametric knowledge.", }
Language Models (LMs) acquire parametric knowledge from their training process, embedding it within their weights. The increasing scale of LMs, however, poses significant challenges for understanding a model's inner workings, and further for updating or correcting this embedded knowledge without the significant cost of retraining. This underscores the importance of unveiling exactly what knowledge is stored and how it is associated with specific model components. Instance Attribution (IA) and Neuron Attribution (NA) offer insights into this training-acquired knowledge, though they have not been compared systematically. Our study introduces a novel evaluation framework to quantify and compare the knowledge revealed by IA and NA. To align the results of the two methods, we introduce the attribution method NA-Instances, which applies NA to retrieve influential training instances, and IA-Neurons, which discovers important neurons of influential instances found by IA. We further propose a comprehensive list of faithfulness tests to evaluate the comprehensiveness and sufficiency of the explanations provided by both methods. Through extensive experiments and analysis, we demonstrate that NA generally reveals more diverse and comprehensive information regarding the LM's parametric knowledge than IA. Nevertheless, IA provides unique and valuable insights into the LM's parametric knowledge that are not revealed by NA. Our findings further suggest the potential of a synergistic approach combining the diverse findings of IA and NA for a more holistic understanding of an LM's parametric knowledge.
[ "Yu, Haeun", "Atanasova, Pepa", "Augenstein, Isabelle" ]
Revealing the Parametric Knowledge of Language Models: A Unified Framework for Attribution Methods
acl-long.444
Poster
2404.18655
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.444/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.445.bib
@inproceedings{lv-etal-2024-full, title = "Full Parameter Fine-tuning for Large Language Models with Limited Resources", author = "Lv, Kai and Yang, Yuqing and Liu, Tengxiao and Guo, Qipeng and Qiu, Xipeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.445", pages = "8187--8198", abstract = "Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) but demand massive GPU resources for training. Lowering the threshold for LLMs training would encourage greater participation from researchers, benefiting both academia and society. While existing approaches have focused on parameter-efficient fine-tuning, which tunes or adds a small number of parameters, few have addressed the challenge of tuning the full parameters of LLMs with limited resources. In this work, we propose a new optimizer, LOw-Memory Optimization (LOMO), which fuses the gradient computation and the parameter update in one step to reduce memory usage. By integrating LOMO with existing memory saving techniques, we reduce memory usage to 10.8{\%} compared to the standard approach (DeepSpeed solution). Consequently, our approach enables the full parameter fine-tuning of a 65B model on a single machine with 8 $\times$ RTX 3090, each with 24GB memory. Code and data are available at https://github.com/OpenLMLab/LOMO.", }
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) but demand massive GPU resources for training. Lowering the threshold for LLM training would encourage greater participation from researchers, benefiting both academia and society. While existing approaches have focused on parameter-efficient fine-tuning, which tunes or adds a small number of parameters, few have addressed the challenge of tuning the full parameters of LLMs with limited resources. In this work, we propose a new optimizer, LOw-Memory Optimization (LOMO), which fuses the gradient computation and the parameter update into one step to reduce memory usage. By integrating LOMO with existing memory-saving techniques, we reduce memory usage to 10.8% of that of the standard approach (the DeepSpeed solution). Consequently, our approach enables full-parameter fine-tuning of a 65B model on a single machine with 8 RTX 3090 GPUs, each with 24GB of memory. Code and data are available at https://github.com/OpenLMLab/LOMO.
[ "Lv, Kai", "Yang, Yuqing", "Liu, Tengxiao", "Guo, Qipeng", "Qiu, Xipeng" ]
Full Parameter Fine-tuning for Large Language Models with Limited Resources
acl-long.445
Oral
2306.09782
[ "https://github.com/openlmlab/lomo" ]
https://huggingface.co/papers/2306.09782
3
29
2
6
https://aclanthology.org/2024.acl-long.445/
[]
[]
[]
1
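LOMO's core idea, fusing the gradient computation with the parameter update, can be approximated with per-parameter backward hooks: each gradient is consumed by an in-place SGD step the moment it is produced, so the full gradient set never resides in memory at once. A simplified sketch of that mechanism (plain SGD, without the gradient normalization and loss scaling the real optimizer also handles):

```python
import torch

def attach_lomo(model, lr=1e-3):
    """Apply SGD to each parameter as soon as its gradient is produced."""
    for p in model.parameters():
        if p.requires_grad:
            def hook(grad, p=p):
                p.data.add_(grad, alpha=-lr)   # immediate in-place update
                return torch.zeros_like(grad)  # leave nothing to accumulate
            p.register_hook(hook)

# Usage: attach once; afterwards a backward pass both computes gradients
# and updates parameters, with no optimizer.step() and no stored grads.
model = torch.nn.Linear(4, 2)
attach_lomo(model)
loss = model(torch.randn(3, 4)).pow(2).mean()
loss.backward()
```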
https://aclanthology.org/2024.acl-long.446.bib
@inproceedings{chen-etal-2024-m3cot, title = "{M}$^3${C}o{T}: A Novel Benchmark for Multi-Domain Multi-step Multi-modal Chain-of-Thought", author = "Chen, Qiguang and Qin, Libo and Zhang, Jin and Chen, Zhi and Xu, Xiao and Che, Wanxiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.446", pages = "8199--8221", abstract = "Multi-modal Chain-of-Thought (MCoT) requires models to leverage knowledge from both textual and visual modalities for step-by-step reasoning, which gains increasing attention. Nevertheless, the current MCoT benchmark still faces some challenges: (1) absence of visual modal reasoning, (2) single-step visual modal reasoning, and (3) domain missing, thereby hindering the development of MCoT. Motivated by this, we introduce a novel benchmark (M$^3$CoT) to address the above challenges, advancing the multi-domain, multi-step, and multi-modal CoT. Additionally, we conduct a thorough evaluation involving abundant MCoT approaches on Vision Large Language Models (VLLMs). In addition, we highlight that the current VLLMs still struggle to correctly reason in M$^3$CoT and there is a large gap between VLLMs and human performance in M$^3$CoT, despite their superior results on previous MCoT benchmarks. To our knowledge, we take the first meaningful step toward the multi-domain, multi-step, and multi-modal scenario in MCoT. We hope that M$^3$CoT will serve as a valuable resource, providing a pioneering foundation in multi-domain, multi-step, multi-modal chain-of-thought research.", }
Multi-modal Chain-of-Thought (MCoT) requires models to leverage knowledge from both textual and visual modalities for step-by-step reasoning, and is gaining increasing attention. Nevertheless, current MCoT benchmarks still face several challenges: (1) absence of visual modal reasoning, (2) only single-step visual modal reasoning, and (3) missing domains, thereby hindering the development of MCoT. Motivated by this, we introduce a novel benchmark (M^3CoT) to address the above challenges, advancing multi-domain, multi-step, and multi-modal CoT. Additionally, we conduct a thorough evaluation involving abundant MCoT approaches on Vision Large Language Models (VLLMs). We highlight that current VLLMs still struggle to reason correctly in M^3CoT, and there remains a large gap between VLLM and human performance on M^3CoT, despite their superior results on previous MCoT benchmarks. To our knowledge, we take the first meaningful step toward the multi-domain, multi-step, and multi-modal scenario in MCoT. We hope that M^3CoT will serve as a valuable resource, providing a pioneering foundation for multi-domain, multi-step, multi-modal chain-of-thought research.
[ "Chen, Qiguang", "Qin, Libo", "Zhang, Jin", "Chen, Zhi", "Xu, Xiao", "Che, Wanxiang" ]
M^3CoT: A Novel Benchmark for Multi-Domain Multi-step Multi-modal Chain-of-Thought
acl-long.446
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.446/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.447.bib
@inproceedings{chen-etal-2024-long, title = "Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models", author = "Chen, Longze and Liu, Ziqiang and He, Wanwei and Zheng, Yinhe and Sun, Hao and Li, Yunshui and Luo, Run and Yang, Min", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.447", pages = "8222--8234", abstract = "Long-context modeling capabilities are important for large language models (LLMs) in various applications. However, directly training LLMs with long context windows is insufficient to enhance this capability since some training samples do not exhibit strong semantic dependencies across long contexts.In this study, we propose a data mining framework ProLong that can assign each training sample with a long dependency score, which can be used to rank and filter samples that are more advantageous for enhancing long-context modeling abilities in LLM training. Specifically, we first use delta perplexity scores to measure the Dependency Strength between text segments in a given document. Then, we refine this metric based on the Dependency Distance of these segments to incorporate spatial relationships across long contexts. Final results are calibrated with a Dependency Specificity metric to prevent trivial dependencies introduced by repetitive patterns. Moreover, a random sampling approach is proposed to optimize the computational efficiency of ProLong. Comprehensive experiments on multiple benchmarks indicate that ProLong effectively identifies documents that carry long dependencies, and LLMs trained on these documents exhibit significantly enhanced long-context modeling capabilities.", }
Long-context modeling capabilities are important for large language models (LLMs) in various applications. However, directly training LLMs with long context windows is insufficient to enhance this capability, since some training samples do not exhibit strong semantic dependencies across long contexts. In this study, we propose a data mining framework, ProLong, that assigns each training sample a long-dependency score, which can be used to rank and filter samples that are more advantageous for enhancing long-context modeling abilities in LLM training. Specifically, we first use delta perplexity scores to measure the Dependency Strength between text segments in a given document. Then, we refine this metric based on the Dependency Distance of these segments to incorporate spatial relationships across long contexts. Final results are calibrated with a Dependency Specificity metric to prevent trivial dependencies introduced by repetitive patterns. Moreover, a random sampling approach is proposed to optimize the computational efficiency of ProLong. Comprehensive experiments on multiple benchmarks indicate that ProLong effectively identifies documents that carry long dependencies, and LLMs trained on these documents exhibit significantly enhanced long-context modeling capabilities.
[ "Chen, Longze", "Liu, Ziqiang", "He, Wanwei", "Zheng, Yinhe", "Sun, Hao", "Li, Yunshui", "Luo, Run", "Yang, Min" ]
Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models
acl-long.447
Oral
2405.17915
[ "https://github.com/October2001/ProLong" ]
https://huggingface.co/papers/2405.17915
1
1
1
6
https://aclanthology.org/2024.acl-long.447/
[]
[]
[]
1
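The delta-perplexity measurement of Dependency Strength admits a compact sketch: score how much a candidate prefix segment reduces the perplexity of a later target segment. `nll` below is an assumed helper around a language model; the Dependency Distance and Specificity calibrations from the paper are omitted:

```python
import math

def dependency_strength(prefix, target, nll):
    """nll(text, context=None): assumed helper returning the mean negative
    log-likelihood of `text` under an LM, optionally given a context."""
    ppl_alone = math.exp(nll(target))                  # target on its own
    ppl_given = math.exp(nll(target, context=prefix))  # target after prefix
    return ppl_alone - ppl_given  # larger delta => stronger long dependency
```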
https://aclanthology.org/2024.acl-long.448.bib
@inproceedings{deng-woodland-2024-label, title = "Label-Synchronous Neural Transducer for {E}2{E} Simultaneous Speech Translation", author = "Deng, Keqi and Woodland, Phil", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.448", pages = "8235--8251", abstract = "While the neural transducer is popular for online speech recognition, simultaneous speech translation (SST) requires both streaming and re-ordering capabilities. This paper presents the LS-Transducer-SST, a label-synchronous neural transducer for SST, which naturally possesses these two properties. The LS-Transducer-SST dynamically decides when to emit translation tokens based on an Auto-regressive Integrate-and-Fire (AIF) mechanism. A latency-controllable AIF is also proposed, which can control the quality-latency trade-off either only during decoding, or it can be used in both decoding and training. The LS-Transducer-SST can naturally utilise monolingual text-only data via its prediction network which helps alleviate the key issue of data sparsity for E2E SST. During decoding, a chunk-based incremental joint decoding technique is designed to refine and expand the search space. Experiments on the Fisher-CallHome Spanish (Es-En) and MuST-C En-De data show that the LS-Transducer-SST gives a better quality-latency trade-off than existing popular methods. For example, the LS-Transducer-SST gives a 3.1/2.9 point BLEU increase (Es-En/En-De) relative to CAAT at a similar latency and a 1.4 s reduction in average lagging latency with similar BLEU scores relative to Wait-k.", }
While the neural transducer is popular for online speech recognition, simultaneous speech translation (SST) requires both streaming and re-ordering capabilities. This paper presents the LS-Transducer-SST, a label-synchronous neural transducer for SST, which naturally possesses these two properties. The LS-Transducer-SST dynamically decides when to emit translation tokens based on an Auto-regressive Integrate-and-Fire (AIF) mechanism. A latency-controllable AIF is also proposed, which can control the quality-latency trade-off either only during decoding, or it can be used in both decoding and training. The LS-Transducer-SST can naturally utilise monolingual text-only data via its prediction network which helps alleviate the key issue of data sparsity for E2E SST. During decoding, a chunk-based incremental joint decoding technique is designed to refine and expand the search space. Experiments on the Fisher-CallHome Spanish (Es-En) and MuST-C En-De data show that the LS-Transducer-SST gives a better quality-latency trade-off than existing popular methods. For example, the LS-Transducer-SST gives a 3.1/2.9 point BLEU increase (Es-En/En-De) relative to CAAT at a similar latency and a 1.4 s reduction in average lagging latency with similar BLEU scores relative to Wait-k.
[ "Deng, Keqi", "Woodl", ", Phil" ]
Label-Synchronous Neural Transducer for E2E Simultaneous Speech Translation
acl-long.448
Poster
2406.04541
[ "https://github.com/D-Keqi/LS-Transducer-SST" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.448/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.449.bib
@inproceedings{choi-etal-2024-hard, title = "Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with {RL}", author = "Choi, Yunseon and Bae, Sangmin and Ban, Seonghyun and Jeong, Minchan and Zhang, Chuheng and Song, Lei and Zhao, Li and Bian, Jiang and Kim, Kee-Eung", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.449", pages = "8252--8271", abstract = "With the advent of foundation models, prompt tuning has positioned itself as an important technique for directing model behaviors and eliciting desired responses. Prompt tuning regards selecting appropriate keywords included into the input, thereby adapting to the downstream task without adjusting or fine-tuning the model parameters. There is a wide range of work in prompt tuning, from approaches that directly harness the backpropagated gradient signals from the model, to those employing black-box optimization such as reinforcement learning (RL) methods. Our primary focus is on RLPrompt, which aims to find optimal prompt tokens leveraging soft Q-learning. While the results show promise, we have observed that the prompts frequently appear unnatural, which impedes their interpretability. We address this limitation by using sparse Tsallis entropy regularization, a principled approach to filtering out unlikely tokens from consideration. We extensively evaluate our approach across various tasks, including few-shot text classification, unsupervised text style transfer, and textual inversion from images. The results indicate a notable improvement over baselines, highlighting the efficacy of our approach in addressing the challenges of prompt tuning. Moreover, we show that the prompts discovered using our method are more natural and interpretable compared to those from other baselines.", }
With the advent of foundation models, prompt tuning has positioned itself as an important technique for directing model behaviors and eliciting desired responses. Prompt tuning involves selecting appropriate keywords to include in the input, thereby adapting to the downstream task without adjusting or fine-tuning the model parameters. There is a wide range of work in prompt tuning, from approaches that directly harness the backpropagated gradient signals from the model, to those employing black-box optimization such as reinforcement learning (RL) methods. Our primary focus is on RLPrompt, which aims to find optimal prompt tokens leveraging soft Q-learning. While the results show promise, we have observed that the prompts frequently appear unnatural, which impedes their interpretability. We address this limitation by using sparse Tsallis entropy regularization, a principled approach to filtering out unlikely tokens from consideration. We extensively evaluate our approach across various tasks, including few-shot text classification, unsupervised text style transfer, and textual inversion from images. The results indicate a notable improvement over baselines, highlighting the efficacy of our approach in addressing the challenges of prompt tuning. Moreover, we show that the prompts discovered using our method are more natural and interpretable compared to those from other baselines.
[ "Choi, Yunseon", "Bae, Sangmin", "Ban, Seonghyun", "Jeong, Minchan", "Zhang, Chuheng", "Song, Lei", "Zhao, Li", "Bian, Jiang", "Kim, Kee-Eung" ]
Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL
acl-long.449
Oral
2407.14733
[ "https://github.com/youseob/pin" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.449/
[]
[]
[]
0
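Sparse Tsallis entropy regularization with entropic index 2 yields the sparsemax transformation, which assigns exactly zero probability to unlikely tokens rather than the small-but-positive mass softmax gives them. A self-contained sparsemax over token logits (the connection to the paper's soft Q-learning objective is our framing):

```python
import torch

def sparsemax(logits):
    """Project logits onto the simplex; low-scoring entries get exactly 0."""
    z, _ = torch.sort(logits, descending=True)
    cums = z.cumsum(0)
    k = torch.arange(1, len(z) + 1, dtype=logits.dtype)
    support = 1 + k * z > cums          # entries kept in the support
    k_max = support.nonzero().max() + 1
    tau = (cums[k_max - 1] - 1) / k_max
    return torch.clamp(logits - tau, min=0.0)

probs = sparsemax(torch.tensor([1.0, 0.8, 0.1, -1.0]))  # -> [0.6, 0.4, 0., 0.]
```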
https://aclanthology.org/2024.acl-long.450.bib
@inproceedings{mahon-lapata-2024-modular, title = "A Modular Approach for Multimodal Summarization of {TV} Shows", author = "Mahon, Louis and Lapata, Mirella", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.450", pages = "8272--8291", abstract = "In this paper we address the task of summarizing television shows, which touches key areas in AI research: complex reasoning, multiple modalities, and long narratives. We present a modular approach where separate components perform specialized sub-tasks which we argue affords greater flexibility compared to end-to-end methods. Our modules involve detecting scene boundaries, reordering scenes so as to minimize the number of cuts between different events, converting visual information to text, summarizing the dialogue in each scene, and fusing the scene summaries into a final summary for the entire episode. We also present a new metric, PRISMA (**P**recision and **R**ecall Evaluat**i**on of **s**ummary F**a**cts), to measure both precision and recall of generated summaries, which we decompose into atomic facts. Tested on the recently released SummScreen3D dataset (Papalampidi {\&} Lapata, 2023), our method produces higher quality summaries than comparison models, as measured with ROUGE and our new fact-based metric.", }
In this paper we address the task of summarizing television shows, which touches key areas in AI research: complex reasoning, multiple modalities, and long narratives. We present a modular approach where separate components perform specialized sub-tasks, which we argue affords greater flexibility compared to end-to-end methods. Our modules involve detecting scene boundaries, reordering scenes so as to minimize the number of cuts between different events, converting visual information to text, summarizing the dialogue in each scene, and fusing the scene summaries into a final summary for the entire episode. We also present a new metric, PRISMA (Precision and Recall Evaluation of Summary Facts), to measure both precision and recall of generated summaries, which we decompose into atomic facts. Tested on the recently released SummScreen3D dataset (Papalampidi & Lapata, 2023), our method produces higher quality summaries than comparison models, as measured with ROUGE and our new fact-based metric.
[ "Mahon, Louis", "Lapata, Mirella" ]
A Modular Approach for Multimodal Summarization of TV Shows
acl-long.450
Poster
2403.03823
[ "https://github.com/lou1sm/modular_multimodal_summarization" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.450/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.451.bib
@inproceedings{wilf-etal-2024-think, title = "Think Twice: Perspective-Taking Improves Large Language Models{'} Theory-of-Mind Capabilities", author = "Wilf, Alex and Lee, Sihyun and Liang, Paul Pu and Morency, Louis-Philippe", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.451", pages = "8292--8308", abstract = "Human interactions are deeply rooted in the interplay of thoughts, beliefs, and desires made possible by Theory of Mind (ToM): our cognitive ability to understand the mental states of ourselves and others. Although ToM may come naturally to us, emulating it presents a challenge to even the most advanced Large Language Models (LLMs). Recent improvements to LLMs{'} reasoning capabilities from simple yet effective prompting techniques such as Chain-of-Thought (CoT) have seen limited applicability to ToM. In this paper, we turn to the prominent cognitive science theory {``}Simulation Theory{''} to bridge this gap. We introduce SimToM, a novel two-stage prompting framework inspired by Simulation Theory{'}s notion of perspective-taking. To implement this idea on current ToM benchmarks, SimToM first filters context based on what the character in question knows before answering a question about their mental state. Our approach, which requires no additional training and minimal prompt-tuning, shows substantial improvement over existing methods, and our analysis reveals the importance of perspective-taking to Theory-of-Mind capabilities. Our findings suggest perspective-taking as a promising direction for future research into improving LLMs{'} ToM capabilities.", }
Human interactions are deeply rooted in the interplay of thoughts, beliefs, and desires made possible by Theory of Mind (ToM): our cognitive ability to understand the mental states of ourselves and others. Although ToM may come naturally to us, emulating it presents a challenge to even the most advanced Large Language Models (LLMs). Recent improvements to LLMs{'} reasoning capabilities from simple yet effective prompting techniques such as Chain-of-Thought (CoT) have seen limited applicability to ToM. In this paper, we turn to the prominent cognitive science theory {``}Simulation Theory{''} to bridge this gap. We introduce SimToM, a novel two-stage prompting framework inspired by Simulation Theory{'}s notion of perspective-taking. To implement this idea on current ToM benchmarks, SimToM first filters context based on what the character in question knows before answering a question about their mental state. Our approach, which requires no additional training and minimal prompt-tuning, shows substantial improvement over existing methods, and our analysis reveals the importance of perspective-taking to Theory-of-Mind capabilities. Our findings suggest perspective-taking as a promising direction for future research into improving LLMs{'} ToM capabilities.
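A minimal sketch of the two-stage scheme described in this abstract, assuming a generic `llm(prompt) -> str` completion function (stubbed here); the prompt wording is a paraphrase, not the paper's templates.

```python
def llm(prompt: str) -> str:
    return "<model output>"  # stand-in for a real LLM API call

def simtom_answer(story: str, character: str, question: str) -> str:
    # Stage 1 (perspective-taking): keep only what the character knows.
    filtered_story = llm(
        f"Story:\n{story}\n\nRewrite the story, keeping only the events "
        f"that {character} knows about or directly observes."
    )
    # Stage 2: answer the ToM question from the filtered perspective.
    return llm(
        f"{filtered_story}\n\nAnswer from {character}'s perspective: {question}"
    )

print(simtom_answer(
    "Sally puts the ball in the basket and leaves. Anne moves it to the box.",
    "Sally",
    "Where will Sally look for the ball?",
))
```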
[ "Wilf, Alex", "Lee, Sihyun", "Liang, Paul Pu", "Morency, Louis-Philippe" ]
Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities
acl-long.451
Poster
2311.10227
[ "https://github.com/shawnsihyunlee/simulatedtom" ]
https://huggingface.co/papers/2311.10227
0
0
0
4
https://aclanthology.org/2024.acl-long.451/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.452.bib
@inproceedings{krumdick-etal-2024-bizbench, title = "{B}iz{B}ench: A Quantitative Reasoning Benchmark for Business and Finance", author = "Krumdick, Michael and Koncel-Kedziorski, Rik and Lai, Viet and Reddy, Varshini and Lovering, Charles and Tanner, Chris", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.452", pages = "8309--8332", abstract = "Answering questions within business and finance requires reasoning, precision, and a wide-breadth of technical knowledge. Together, these requirements make this domain difficult for large language models (LLMs). We introduce BizBench, a benchmark for evaluating models{'} ability to reason about realistic financial problems. BizBench comprises eight quantitative reasoning tasks, focusing on question-answering (QA) over financial data via program synthesis. We include three financially-themed code-generation tasks from newly collected and augmented QA data. Additionally, we isolate the reasoning capabilities required for financial QA: reading comprehension of financial text and tables for extracting intermediate values, and understanding financial concepts and formulas needed to calculate complex solutions. Collectively, these tasks evaluate a model{'}s financial background knowledge, ability to parse financial documents, and capacity to solve problems with code. We conduct an in-depth evaluation of open-source and commercial LLMs, comparing and contrasting the behavior of code-focused and language-focused models. We demonstrate that the current bottleneck in performance is due to LLMs{'} limited business and financial understanding, highlighting the value of a challenging benchmark for quantitative reasoning within this domain.", }
Answering questions within business and finance requires reasoning, precision, and a wide breadth of technical knowledge. Together, these requirements make this domain difficult for large language models (LLMs). We introduce BizBench, a benchmark for evaluating models{'} ability to reason about realistic financial problems. BizBench comprises eight quantitative reasoning tasks, focusing on question-answering (QA) over financial data via program synthesis. We include three financially-themed code-generation tasks from newly collected and augmented QA data. Additionally, we isolate the reasoning capabilities required for financial QA: reading comprehension of financial text and tables for extracting intermediate values, and understanding financial concepts and formulas needed to calculate complex solutions. Collectively, these tasks evaluate a model{'}s financial background knowledge, ability to parse financial documents, and capacity to solve problems with code. We conduct an in-depth evaluation of open-source and commercial LLMs, comparing and contrasting the behavior of code-focused and language-focused models. We demonstrate that the current bottleneck in performance is due to LLMs{'} limited business and financial understanding, highlighting the value of a challenging benchmark for quantitative reasoning within this domain.
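Program-synthesis QA of the kind this abstract describes is typically scored by executing the generated code and comparing its result to the gold answer. The harness below is a generic sketch of that idea under assumed conventions (a candidate program that sets an `answer` variable), not the benchmark's actual evaluation code.

```python
def score_program(generated_code: str, gold: float, tol: float = 1e-4) -> bool:
    """Run a model-generated Python snippet and check its `answer` variable."""
    scope: dict = {}
    try:
        exec(generated_code, scope)          # execute the candidate program
        return abs(float(scope["answer"]) - gold) <= tol
    except Exception:
        return False                         # crashes count as wrong

candidate = "revenue = 120.0\ncosts = 90.0\nanswer = revenue - costs"
print(score_program(candidate, gold=30.0))   # True
```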
[ "Krumdick, Michael", "Koncel-Kedziorski, Rik", "Lai, Viet", "Reddy, Varshini", "Lovering, Charles", "Tanner, Chris" ]
BizBench: A Quantitative Reasoning Benchmark for Business and Finance
acl-long.452
Poster
2311.06602
[ "" ]
https://huggingface.co/papers/2311.06602
0
0
0
6
https://aclanthology.org/2024.acl-long.452/
[]
[ "kensho/bizbench" ]
[]
1
https://aclanthology.org/2024.acl-long.453.bib
@inproceedings{takada-etal-2024-direct, title = "Direct Metric Optimization for Image Captioning through Reward-Weighted Augmented Data Utilization", author = "Takada, Takumi and Suzuki, Yuma and Takushima, Hiroki and Tanoue, Hayato and Sato, Haruki and Kumar, Aiswariya and Nishihara, Hiroki and Hori, Takayuki and Ueki, Kazuya", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.453", pages = "8333--8346", abstract = "While image captioning is an essential field of vision language models (VLM), a lack of continuity between the learning objective and final performance metrics of VLMs complicates their training and optimization. Reinforcement learning (RL) can directly optimize such metrics, but it is accompanied by a significant computational cost, making it difficult to apply to recent large-scale VLMs. In this paper, we propose Direct Metric Optimization (DMO), which is a lightweight final-metric-optimizing training method. We replace the computationally expensive exploration process in RL with an offline, diverse text data augmentation and show that self-supervised training on reward-weighted augmented data leads to direct and stable metric optimization. Our experiments demonstrate that DMO achieves performance comparable to those of the state-of-the-art RL method while saving hundreds of times more model forwarding iterations and greater amounts of computation time. This suggests that DMO constitutes a promising alternative for metric optimization in the era of large-scale VLMs.", }
While image captioning is an essential field of vision language models (VLMs), a lack of continuity between the learning objective and final performance metrics of VLMs complicates their training and optimization. Reinforcement learning (RL) can directly optimize such metrics, but it is accompanied by a significant computational cost, making it difficult to apply to recent large-scale VLMs. In this paper, we propose Direct Metric Optimization (DMO), a lightweight training method that optimizes the final metric directly. We replace the computationally expensive exploration process in RL with offline, diverse text data augmentation and show that self-supervised training on reward-weighted augmented data leads to direct and stable metric optimization. Our experiments demonstrate that DMO achieves performance comparable to that of the state-of-the-art RL method while requiring hundreds of times fewer model forward iterations and far less computation time. This suggests that DMO constitutes a promising alternative for metric optimization in the era of large-scale VLMs.
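The core training signal in this abstract (self-supervised learning on augmented captions, weighted by a reward) can be sketched as a reward-weighted sequence loss. The snippet below is a simplified reading: per-sequence cross-entropy weighted by normalized metric scores. The shapes and the softmax weighting are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def reward_weighted_loss(logits, targets, rewards):
    """logits: (B, T, V); targets: (B, T); rewards: (B,) metric scores."""
    token_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).view(targets.shape).mean(dim=1)          # per-sequence loss
    weights = torch.softmax(rewards, dim=0)    # normalize rewards to weights
    return (weights * token_loss).sum()

B, T, V = 4, 7, 100
loss = reward_weighted_loss(
    torch.randn(B, T, V, requires_grad=True),  # toy decoder logits
    torch.randint(0, V, (B, T)),               # augmented target captions
    torch.tensor([0.9, 0.4, 0.7, 0.1]),        # e.g. CIDEr-style scores
)
loss.backward()                                # ready for an optimizer step
```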
[ "Takada, Takumi", "Suzuki, Yuma", "Takushima, Hiroki", "Tanoue, Hayato", "Sato, Haruki", "Kumar, Aiswariya", "Nishihara, Hiroki", "Hori, Takayuki", "Ueki, Kazuya" ]
Direct Metric Optimization for Image Captioning through Reward-Weighted Augmented Data Utilization
acl-long.453
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.453/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.454.bib
@inproceedings{hossain-etal-2024-deciphering, title = "Deciphering Hate: Identifying Hateful Memes and Their Targets", author = "Hossain, Eftekhar and Sharif, Omar and Hoque, Mohammed Moshiul and Preum, Sarah Masud", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.454", pages = "8347--8359", abstract = "Internet memes have become a powerful means for individuals to express emotions, thoughts, and perspectives on social media. While often considered as a source of humor and entertainment, memes can also disseminate hateful content targeting individuals or communities. Most existing research focuses on the negative aspects of memes in high-resource languages, overlooking the distinctive challenges associated with low-resource languages like Bengali (also known as Bangla). Furthermore, while previous work on Bengali memes has focused on detecting hateful memes, there has been no work on detecting their targeted entities. To bridge this gap and facilitate research in this arena, we introduce a novel multimodal dataset for Bengali, BHM (Bengali Hateful Memes). The dataset consists of 7,148 memes with Bengali as well as code-mixed captions, tailored for two tasks: (i) detecting hateful memes, and (ii) detecting the social entities they target (i.e., Individual, Organization, Community, and Society). To solve these tasks, we propose DORA (Dual cO-attention fRAmework), a multimodal deep neural network that systematically extracts the significant modality features from the memes and jointly evaluates them with the modality-specific features to understand the context better. Our experiments show that DORA is generalizable on other low-resource hateful meme datasets and outperforms several state-of-the-art rivaling baselines.", }
Internet memes have become a powerful means for individuals to express emotions, thoughts, and perspectives on social media. While often considered a source of humor and entertainment, memes can also disseminate hateful content targeting individuals or communities. Most existing research focuses on the negative aspects of memes in high-resource languages, overlooking the distinctive challenges associated with low-resource languages like Bengali (also known as Bangla). Furthermore, while previous work on Bengali memes has focused on detecting hateful memes, there has been no work on detecting their targeted entities. To bridge this gap and facilitate research in this arena, we introduce a novel multimodal dataset for Bengali, BHM (Bengali Hateful Memes). The dataset consists of 7,148 memes with Bengali as well as code-mixed captions, tailored for two tasks: (i) detecting hateful memes, and (ii) detecting the social entities they target (i.e., Individual, Organization, Community, and Society). To solve these tasks, we propose DORA (Dual cO-attention fRAmework), a multimodal deep neural network that systematically extracts the significant modality features from the memes and jointly evaluates them with the modality-specific features to understand the context better. Our experiments show that DORA generalizes to other low-resource hateful meme datasets and outperforms several state-of-the-art baselines.
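Read generically, the dual co-attention idea in this abstract lets each modality attend over the other before fusion. The block below is a minimal stand-in with random features, not the released DORA implementation; the dimensions and the mean-pool fusion are assumptions.

```python
import torch

def co_attention(q_feats, kv_feats):
    """q_feats: (Lq, d); kv_feats: (Lk, d) -> (Lq, d) attended features."""
    scores = q_feats @ kv_feats.T / q_feats.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ kv_feats

text  = torch.randn(12, 64)   # caption token features
image = torch.randn(49, 64)   # visual patch features

text_attends_image = co_attention(text, image)   # text queries, image keys
image_attends_text = co_attention(image, text)   # image queries, text keys
fused = torch.cat([text_attends_image.mean(0), image_attends_text.mean(0)])
print(fused.shape)            # (128,) joint representation for a classifier
```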
[ "Hossain, Eftekhar", "Sharif, Omar", "Hoque, Mohammed Moshiul", "Preum, Sarah Masud" ]
Deciphering Hate: Identifying Hateful Memes and Their Targets
acl-long.454
Poster
2403.10829
[ "https://github.com/eftekhar-hossain/bengali-hateful-memes" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.454/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.455.bib
@inproceedings{jiang-etal-2024-inducing, title = "Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings", author = "Jiang, Yichen and Zhou, Xiang and Bansal, Mohit", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.455", pages = "8360--8383", abstract = "Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset, but easily overfit on datasets of insufficient complexity. We observe that when the training set is sufficiently complex, the model encodes structurally equivalent sentences using a systematic attention pattern. Inspired by this observation, we propose SQ-Transformer (Structurally Quantized) that explicitly encourages systematicity in the embeddings and attention layers even with low-complexity data. At the embedding level, we introduce Structure-oriented Vector Quantization (SoVQ) to cluster word embeddings into several classes of structurally equivalent entities. At the attention level, we devise the Systematic Attention Layer (SAL) and an alternative, Systematically Regularized Layer (SRL) that operate on the quantized word embeddings so that sentences of the same structure are encoded with invariant or similar attention patterns. Empirically, we show SQ-Transformer achieves stronger compositional generalization than the vanilla Transformer on multiple low-complexity semantic parsing and machine translation datasets. In our analysis, we show SoVQ indeed learns a syntactically clustered embedding space, and SAL/SRL induces generalizable attention patterns, altogether leading to improved systematicity.", }
Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset, but easily overfit on datasets of insufficient complexity. We observe that when the training set is sufficiently complex, the model encodes structurally equivalent sentences using a systematic attention pattern. Inspired by this observation, we propose SQ-Transformer (Structurally Quantized) that explicitly encourages systematicity in the embeddings and attention layers even with low-complexity data. At the embedding level, we introduce Structure-oriented Vector Quantization (SoVQ) to cluster word embeddings into several classes of structurally equivalent entities. At the attention level, we devise the Systematic Attention Layer (SAL) and an alternative, Systematically Regularized Layer (SRL) that operate on the quantized word embeddings so that sentences of the same structure are encoded with invariant or similar attention patterns. Empirically, we show SQ-Transformer achieves stronger compositional generalization than the vanilla Transformer on multiple low-complexity semantic parsing and machine translation datasets. In our analysis, we show SoVQ indeed learns a syntactically clustered embedding space, and SAL/SRL induces generalizable attention patterns, altogether leading to improved systematicity.
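The quantization step of SoVQ described in this abstract can be pictured as nearest-neighbor assignment of word embeddings to a small codebook, so that structurally equivalent words share a class vector. The sketch below shows only that assignment step with random tensors; the paper's structure-oriented training objective is not reproduced.

```python
import torch

vocab, dim, num_classes = 1000, 32, 16
embeddings = torch.randn(vocab, dim)      # word embeddings (random stand-ins)
codebook = torch.randn(num_classes, dim)  # one code per structural class

dists = torch.cdist(embeddings, codebook)  # (vocab, num_classes) distances
codes = dists.argmin(dim=1)                # class id for each word
quantized = codebook[codes]                # words in a class share a vector
print(codes[:10])
```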
[ "Jiang, Yichen", "Zhou, Xiang", "Bansal, Mohit" ]
Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
acl-long.455
Poster
2402.06492
[ "https://github.com/jiangyctarheel/sq-transformer" ]
https://huggingface.co/papers/2402.06492
0
0
0
3
https://aclanthology.org/2024.acl-long.455/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.456.bib
@inproceedings{ashury-tahan-etal-2024-label, title = "Label-Efficient Model Selection for Text Generation", author = "Ashury Tahan, Shir and Gera, Ariel and Sznajder, Benjamin and Choshen, Leshem and Ein-Dor, Liat and Shnarch, Eyal", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.456", pages = "8384--8402", abstract = "Model selection for a given target task can be costly, as it may entail extensive annotation of the quality of outputs of different models. We introduce DiffUse, an efficient method to make an informed decision between candidate text generation models based on preference annotations. DiffUse reduces the required amount of annotations, thus saving valuable time and resources in performing evaluation.DiffUse intelligently selects instances by clustering embeddings that represent the semantic differences between model outputs. Thus, it is able to identify a subset of examples that are more informative for preference decisions. Our method is model-agnostic, and can be applied to any text generation model for selecting between models, prompts and configurations. Moreover, we propose a practical iterative approach for dynamically determining how many instances to annotate. In a series of experiments over hundreds of model pairs, we demonstrate that DiffUse can dramatically reduce the required number of annotations {--} by up to 75{\%} {--} while maintaining high evaluation reliability.", }
Model selection for a given target task can be costly, as it may entail extensive annotation of the quality of outputs of different models. We introduce DiffUse, an efficient method to make an informed decision between candidate text generation models based on preference annotations. DiffUse reduces the required number of annotations, thus saving valuable time and resources in performing evaluation. DiffUse intelligently selects instances by clustering embeddings that represent the semantic differences between model outputs. Thus, it is able to identify a subset of examples that are more informative for preference decisions. Our method is model-agnostic and can be applied to any text generation model for selecting between models, prompts, and configurations. Moreover, we propose a practical iterative approach for dynamically determining how many instances to annotate. In a series of experiments over hundreds of model pairs, we demonstrate that DiffUse can dramatically reduce the required number of annotations {--} by up to 75{\%} {--} while maintaining high evaluation reliability.
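A minimal sketch of the instance-selection idea in this abstract: embed each pair of model outputs, cluster the embedding *differences*, and annotate one representative per cluster. Random vectors stand in for real output embeddings, and the choice of k-means and of the closest-to-centroid representative are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(200, 64))      # embeddings of model A's outputs
emb_b = rng.normal(size=(200, 64))      # embeddings of model B's outputs
diff = emb_a - emb_b                    # semantic difference per instance

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(diff)
# Pick the instance closest to each centroid as its representative.
to_annotate = [
    int(np.argmin(np.linalg.norm(diff - c, axis=1)))
    for c in kmeans.cluster_centers_
]
print(sorted(to_annotate))              # 10 instances to label instead of 200
```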
[ "Ashury Tahan, Shir", "Gera, Ariel", "Sznajder, Benjamin", "Choshen, Leshem", "Ein-Dor, Liat", "Shnarch, Eyal" ]
Label-Efficient Model Selection for Text Generation
acl-long.456
Poster
2402.07891
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.456/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.457.bib
@inproceedings{yao-etal-2024-machine, title = "Machine Unlearning of Pre-trained Large Language Models", author = "Yao, Jin and Chien, Eli and Du, Minxin and Niu, Xinyao and Wang, Tianhao and Cheng, Zezhou and Yue, Xiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.457", pages = "8403--8419", abstract = "This study investigates the concept of the {`}right to be forgotten{'} within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models{--}a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning performance, demonstrating that these methods are over $10^5$ times more computationally efficient than retraining. Our results show that integrating gradient ascent with gradient descent on in-distribution data improves hyperparameter robustness. We also provide detailed guidelines for efficient hyperparameter tuning in the unlearning process. Our findings advance the discourse on ethical AI practices, offering substantive insights into the mechanics of machine unlearning for pre-trained LLMs and underscoring the potential for responsible AI development.", }
This study investigates the concept of the {`}right to be forgotten{'} within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models{--}a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning performance, demonstrating that these methods are over $10^5$ times more computationally efficient than retraining. Our results show that integrating gradient ascent with gradient descent on in-distribution data improves hyperparameter robustness. We also provide detailed guidelines for efficient hyperparameter tuning in the unlearning process. Our findings advance the discourse on ethical AI practices, offering substantive insights into the mechanics of machine unlearning for pre-trained LLMs and underscoring the potential for responsible AI development.
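The "gradient ascent with gradient descent on in-distribution data" finding in this abstract can be expressed as a single combined objective: negate the loss on the forget set while keeping the ordinary loss on retain data. The toy loop below illustrates the objective on a linear model; it is a schematic, not the paper's benchmarked configuration.

```python
import torch

model = torch.nn.Linear(16, 2)           # toy stand-in for an LLM
xent = torch.nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

forget_x, forget_y = torch.randn(8, 16), torch.randint(0, 2, (8,))
retain_x, retain_y = torch.randn(8, 16), torch.randint(0, 2, (8,))

for _ in range(3):
    opt.zero_grad()
    # Ascend on the forget set (negated loss), descend on retain data.
    loss = -xent(model(forget_x), forget_y) + xent(model(retain_x), retain_y)
    loss.backward()
    opt.step()
```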
[ "Yao, Jin", "Chien, Eli", "Du, Minxin", "Niu, Xinyao", "Wang, Tianhao", "Cheng, Zezhou", "Yue, Xiang" ]
Machine Unlearning of Pre-trained Large Language Models
acl-long.457
Poster
2402.15159
[ "https://github.com/yaojin17/unlearning_llm" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.457/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.458.bib
@inproceedings{ortu-etal-2024-competition, title = "Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals", author = {Ortu, Francesco and Jin, Zhijing and Doimo, Diego and Sachan, Mrinmaya and Cazzaniga, Alberto and Sch{\"o}lkopf, Bernhard}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.458", pages = "8420--8436", abstract = "Interpretability research aims to bridge the gap between the empirical success and our scientific understanding of the inner workings of large language models (LLMs). However, most existing research in this area focused on analyzing a single mechanism, such as how models copy or recall factual knowledge. In this work, we propose the formulation of competition of mechanisms, which instead of individual mechanisms focuses on the interplay of multiple mechanisms, and traces how one of them becomes dominant in the final prediction. We uncover how and where the competition of mechanisms happens within LLMs using two interpretability methods, logit inspection and attention modification. Our findings show traces of the mechanisms and their competition across various model components, and reveal attention positions that effectively control the strength of certain mechanisms.", }
Interpretability research aims to bridge the gap between the empirical success and our scientific understanding of the inner workings of large language models (LLMs). However, most existing research in this area has focused on analyzing a single mechanism, such as how models copy or recall factual knowledge. In this work, we propose the formulation of competition of mechanisms, which, instead of individual mechanisms, focuses on the interplay of multiple mechanisms and traces how one of them becomes dominant in the final prediction. We uncover how and where the competition of mechanisms happens within LLMs using two interpretability methods, logit inspection and attention modification. Our findings show traces of the mechanisms and their competition across various model components, and reveal attention positions that effectively control the strength of certain mechanisms.
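"Logit inspection" as named in this abstract projects intermediate residual-stream states through the model's unembedding matrix to see which tokens a given layer promotes (the logit-lens idea). The sketch below shows the projection with random stand-in weights; with a real model, `hidden_state` would come from a forward hook and `unembed` from the output embedding matrix.

```python
import torch

d_model, vocab = 64, 50257
hidden_state = torch.randn(d_model)    # residual stream at some layer/position
unembed = torch.randn(vocab, d_model)  # stand-in for the unembedding matrix

logits = unembed @ hidden_state        # (vocab,) layer-wise token preferences
top = torch.topk(logits, k=5).indices
print(top)  # token ids this layer most strongly promotes at this position
```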
[ "Ortu, Francesco", "Jin, Zhijing", "Doimo, Diego", "Sachan, Mrinmaya", "Cazzaniga, Alberto", "Sch{\\\"o}lkopf, Bernhard" ]
Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals
acl-long.458
Poster
2402.11655
[ "https://github.com/francescortu/competition_of_mechanisms" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.458/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.459.bib
@inproceedings{joseph-etal-2024-factpico, title = "{F}act{PICO}: Factuality Evaluation for Plain Language Summarization of Medical Evidence", author = {Joseph, Sebastian and Chen, Lily and Trienes, Jan and G{\"o}ke, Hannah and Coers, Monika and Xu, Wei and Wallace, Byron and Li, Junyi Jessy}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.459", pages = "8437--8464", abstract = "Plain language summarization with LLMs can be useful for improving textual accessibility of technical content. But how factual are these summaries in a high-stakes domain like medicine? This paper presents FactPICO, a factuality benchmark for plain language summarization of medical texts describing randomized controlled trials (RCTs), which are the basis of evidence-based medicine and can directly inform patient treatment. FactPICO consists of 345 plain language summaries of RCT abstracts generated from three LLMs (i.e., GPT-4, Llama-2, and Alpaca), with fine-grained evaluation and natural language rationales from experts. We assess the factuality of critical elements of RCTs in those summaries: Populations, Interventions, Comparators, Outcomes (PICO), as well as the reported findings concerning these. We also evaluate the correctness of the extra information (e.g., explanations) added by LLMs. Using FactPICO, we benchmark a range of existing factuality metrics, including the newly devised ones based on LLMs. We find that plain language summarization of medical evidence is still challenging, especially when balancing between simplicity and factuality, and that existing metrics correlate poorly with expert judgments on the instance level.", }
Plain language summarization with LLMs can be useful for improving textual accessibility of technical content. But how factual are these summaries in a high-stakes domain like medicine? This paper presents FactPICO, a factuality benchmark for plain language summarization of medical texts describing randomized controlled trials (RCTs), which are the basis of evidence-based medicine and can directly inform patient treatment. FactPICO consists of 345 plain language summaries of RCT abstracts generated from three LLMs (i.e., GPT-4, Llama-2, and Alpaca), with fine-grained evaluation and natural language rationales from experts. We assess the factuality of critical elements of RCTs in those summaries: Populations, Interventions, Comparators, Outcomes (PICO), as well as the reported findings concerning these. We also evaluate the correctness of the extra information (e.g., explanations) added by LLMs. Using FactPICO, we benchmark a range of existing factuality metrics, including the newly devised ones based on LLMs. We find that plain language summarization of medical evidence is still challenging, especially when balancing between simplicity and factuality, and that existing metrics correlate poorly with expert judgments on the instance level.
[ "Joseph, Sebastian", "Chen, Lily", "Trienes, Jan", "G{\\\"o}ke, Hannah", "Coers, Monika", "Xu, Wei", "Wallace, Byron", "Li, Junyi Jessy" ]
FactPICO: Factuality Evaluation for Plain Language Summarization of Medical Evidence
acl-long.459
Poster
2402.11456
[ "https://github.com/lilywchen/factpico" ]
https://huggingface.co/papers/2402.11456
1
0
0
8
https://aclanthology.org/2024.acl-long.459/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.460.bib
@inproceedings{bai-etal-2024-bvsp, title = "{B}v{SP}: Broad-view Soft Prompting for Few-Shot Aspect Sentiment Quad Prediction", author = "Bai, Yinhao and Xie, Yalan and Liu, Xiaoyi and Zhao, Yuhua and Han, Zhixin and Hu, Mengting and Gao, Hang and Cheng, Renhong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.460", pages = "8465--8482", abstract = "Aspect sentiment quad prediction (ASQP) aims to predict four aspect-based elements, including aspect term, opinion term, aspect category, and sentiment polarity. In practice, unseen aspects, due to distinct data distribution, impose many challenges for a trained neural model. Motivated by this, this work formulates ASQP into the few-shot scenario, which aims for fast adaptation in real applications. Therefore, we first construct a few-shot ASQP dataset (FSQP) that contains richer categories and is more balanced for the few-shot study. Moreover, recent methods extract quads through a generation paradigm, which involves converting the input sentence into a templated target sequence. However, they primarily focus on the utilization of a single template or the consideration of different template orders, thereby overlooking the correlations among various templates. To tackle this issue, we further propose a Broad-view Soft Prompting (BvSP) method that aggregates multiple templates with a broader view by taking into account the correlation between the different templates. Specifically, BvSP uses the pre-trained language model to select the most relevant k templates with Jensen{--}Shannon divergence. BvSP further introduces soft prompts to guide the pre-trained language model using the selected templates. Then, we aggregate the results of multi-templates by voting mechanism. Empirical results demonstrate that BvSP significantly outperforms the state-of-the-art methods under four few-shot settings and other public datasets. Our code and dataset are available at https://github.com/byinhao/BvSP.", }
Aspect sentiment quad prediction (ASQP) aims to predict four aspect-based elements: aspect term, opinion term, aspect category, and sentiment polarity. In practice, unseen aspects with distinct data distributions pose many challenges for a trained neural model. Motivated by this, this work formulates ASQP as a few-shot scenario, aiming for fast adaptation in real applications. We therefore first construct a few-shot ASQP dataset (FSQP) that contains richer categories and is more balanced for few-shot study. Moreover, recent methods extract quads through a generation paradigm, which involves converting the input sentence into a templated target sequence. However, they primarily focus on utilizing a single template or considering different template orders, thereby overlooking the correlations among various templates. To tackle this issue, we further propose a Broad-view Soft Prompting (BvSP) method that aggregates multiple templates with a broader view by taking into account the correlations between different templates. Specifically, BvSP uses the pre-trained language model to select the k most relevant templates with Jensen{--}Shannon divergence. BvSP further introduces soft prompts to guide the pre-trained language model using the selected templates. Then, we aggregate the results from multiple templates via a voting mechanism. Empirical results demonstrate that BvSP significantly outperforms state-of-the-art methods under four few-shot settings and on other public datasets. Our code and dataset are available at https://github.com/byinhao/BvSP.
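The template-selection step in this abstract can be sketched as scoring each template's distribution over candidate outputs by its average Jensen-Shannon divergence from the other templates, then keeping the k most consistent ones. The Dirichlet-sampled distributions and the "lowest average divergence" criterion below are stand-ins for the paper's exact procedure.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Each template induces a (toy) distribution over six candidate outputs.
templates = {f"t{i}": np.random.dirichlet(np.ones(6)) for i in range(5)}

def select_templates(dists, k=3):
    names = list(dists)
    avg = {
        a: np.mean([js_divergence(dists[a], dists[b]) for b in names if b != a])
        for a in names
    }
    return sorted(names, key=avg.get)[:k]   # k templates that agree most

print(select_templates(templates))
```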
[ "Bai, Yinhao", "Xie, Yalan", "Liu, Xiaoyi", "Zhao, Yuhua", "Han, Zhixin", "Hu, Mengting", "Gao, Hang", "Cheng, Renhong" ]
BvSP: Broad-view Soft Prompting for Few-Shot Aspect Sentiment Quad Prediction
acl-long.460
Poster
2406.07365
[ "https://github.com/byinhao/bvsp" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.460/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.461.bib
@inproceedings{fu-etal-2024-safety, title = "Safety Alignment in {NLP} Tasks: Weakly Aligned Summarization as an In-Context Attack", author = "Fu, Yu and Li, Yufei and Xiao, Wen and Liu, Cong and Dong, Yue", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.461", pages = "8483--8502", abstract = "Recent developments in balancing the usefulness and safety of Large Language Models (LLMs) have raised a critical question: Are mainstream NLP tasks adequately aligned with safety consideration? Our study, focusing on safety-sensitive documents obtained through adversarial attacks, reveals significant disparities in the safety alignment of various NLP tasks. For instance, LLMs can effectively summarize malicious long documents but often refuse to translate them. This discrepancy highlights a previously unidentified vulnerability: attacks exploiting tasks with weaker safety alignment, like summarization, can potentially compromise the integrity of tasks traditionally deemed more robust, such as translation and question-answering (QA). Moreover, the concurrent use of multiple NLP tasks with lesser safety alignment increases the risk of LLMs inadvertently processing harmful content. We demonstrate these vulnerabilities in various safety-aligned LLMs, particularly Llama2 models, Gemini and GPT-4, indicating an urgent need for strengthening safety alignments across a broad spectrum of NLP tasks.", }
Recent developments in balancing the usefulness and safety of Large Language Models (LLMs) have raised a critical question: Are mainstream NLP tasks adequately aligned with safety considerations? Our study, focusing on safety-sensitive documents obtained through adversarial attacks, reveals significant disparities in the safety alignment of various NLP tasks. For instance, LLMs can effectively summarize malicious long documents but often refuse to translate them. This discrepancy highlights a previously unidentified vulnerability: attacks exploiting tasks with weaker safety alignment, like summarization, can potentially compromise the integrity of tasks traditionally deemed more robust, such as translation and question-answering (QA). Moreover, the concurrent use of multiple NLP tasks with lesser safety alignment increases the risk of LLMs inadvertently processing harmful content. We demonstrate these vulnerabilities in various safety-aligned LLMs, particularly Llama2 models, Gemini, and GPT-4, indicating an urgent need for strengthening safety alignments across a broad spectrum of NLP tasks.
[ "Fu, Yu", "Li, Yufei", "Xiao, Wen", "Liu, Cong", "Dong, Yue" ]
Safety Alignment in NLP Tasks: Weakly Aligned Summarization as an In-Context Attack
acl-long.461
Poster
2312.06924
[ "https://github.com/fyyfu/safetyalignnlp" ]
https://huggingface.co/papers/2312.06924
1
0
0
5
https://aclanthology.org/2024.acl-long.461/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.462.bib
@inproceedings{oota-etal-2024-speech, title = "Speech language models lack important brain-relevant semantics", author = "Oota, Subba Reddy and {\c{C}}elik, Emin and Deniz, Fatma and Toneva, Mariya", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.462", pages = "8503--8528", abstract = "Despite known differences between reading and listening in the brain, recent work has shown that text-based language models predict both text-evoked and speech-evoked brain activity to an impressive degree. This poses the question of what types of information language models truly predict in the brain. We investigate this question via a direct approach, in which we systematically remove specific low-level stimulus features (textual, speech, and visual) from language model representations to assess their impact on alignment with fMRI brain recordings during reading and listening. Comparing these findings with speech-based language models reveals starkly different effects of low-level features on brain alignment. While text-based models show reduced alignment in early sensory regions post-removal, they retain significant predictive power in late language regions. In contrast, speech-based models maintain strong alignment in early auditory regions even after feature removal but lose all predictive power in late language regions. These results suggest that speech-based models provide insights into additional information processed by early auditory regions, but caution is needed when using them to model processing in late language regions. We make our code publicly available. [https://github.com/subbareddy248/speech-llm-brain]", }
Despite known differences between reading and listening in the brain, recent work has shown that text-based language models predict both text-evoked and speech-evoked brain activity to an impressive degree. This poses the question of what types of information language models truly predict in the brain. We investigate this question via a direct approach, in which we systematically remove specific low-level stimulus features (textual, speech, and visual) from language model representations to assess their impact on alignment with fMRI brain recordings during reading and listening. Comparing these findings with speech-based language models reveals starkly different effects of low-level features on brain alignment. While text-based models show reduced alignment in early sensory regions post-removal, they retain significant predictive power in late language regions. In contrast, speech-based models maintain strong alignment in early auditory regions even after feature removal but lose all predictive power in late language regions. These results suggest that speech-based models provide insights into additional information processed by early auditory regions, but caution is needed when using them to model processing in late language regions. We make our code publicly available. [https://github.com/subbareddy248/speech-llm-brain]
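"Systematically removing" a low-level stimulus feature from model representations, as described in this abstract, is commonly done by regressing the feature out and keeping the residuals, which can then be re-fit to fMRI data. The sketch below shows that residualization step on random arrays; whether it matches the authors' exact removal method is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
reps = rng.normal(size=(500, 128))   # LM representations per stimulus chunk
feat = rng.normal(size=(500, 4))     # low-level feature (e.g. word rate)

# Least-squares fit from the feature to the representations, then
# keep only the residual: the feature-removed representations.
beta, *_ = np.linalg.lstsq(feat, reps, rcond=None)
residual = reps - feat @ beta
print(residual.shape)                # (500, 128), ready for brain alignment
```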
[ "Oota, Subba Reddy", "{\\c{C}}elik, Emin", "Deniz, Fatma", "Toneva, Mariya" ]
Speech language models lack important brain-relevant semantics
acl-long.462
Poster
2311.04664
[ "https://github.com/subbareddy248/speech-llm-brain" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.462/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.463.bib
@inproceedings{wang-etal-2024-docllm, title = "{D}oc{LLM}: A Layout-Aware Generative Language Model for Multimodal Document Understanding", author = "Wang, Dongsheng and Raman, Natraj and Sibue, Mathieu and Ma, Zhiqiang and Babkin, Petr and Kaur, Simerjot and Pei, Yulong and Nourbakhsh, Armineh and Liu, Xiaomo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.463", pages = "8529--8548", abstract = "Enterprise documents such as forms, receipts, reports, and other such records, often carry rich semantics at the intersection of textual and spatial modalities. The visual cues offered by their complex layouts play a crucial role in comprehending these documents effectively. In this paper, we present DocLLM, a lightweight extension to traditional large language models (LLMs) for reasoning over visual documents, taking into account both textual semantics and spatial layout. Our model differs from existing multimodal LLMs by avoiding expensive image encoders and focuses exclusively on bounding box information to incorporate the spatial layout structure. Specifically, the cross-alignment between text and spatial modalities is captured by decomposing the attention mechanism in classical transformers to a set of disentangled matrices. Furthermore, we devise a pre-training objective that learns to infill text segments. This approach allows us to address irregular layouts and heterogeneous content frequently encountered in visual documents. The pre-trained model is fine-tuned using a large-scale instruction dataset, covering four core document intelligence tasks. We demonstrate that our solution outperforms SotA LLMs on 14 out of 16 datasets across all tasks, and generalizes well to 4 out of 5 previously unseen datasets.", }
Enterprise documents such as forms, receipts, reports, and other such records often carry rich semantics at the intersection of textual and spatial modalities. The visual cues offered by their complex layouts play a crucial role in comprehending these documents effectively. In this paper, we present DocLLM, a lightweight extension to traditional large language models (LLMs) for reasoning over visual documents, taking into account both textual semantics and spatial layout. Our model differs from existing multimodal LLMs by avoiding expensive image encoders and focusing exclusively on bounding box information to incorporate the spatial layout structure. Specifically, the cross-alignment between text and spatial modalities is captured by decomposing the attention mechanism in classical transformers into a set of disentangled matrices. Furthermore, we devise a pre-training objective that learns to infill text segments. This approach allows us to address irregular layouts and heterogeneous content frequently encountered in visual documents. The pre-trained model is fine-tuned using a large-scale instruction dataset, covering four core document intelligence tasks. We demonstrate that our solution outperforms SotA LLMs on 14 out of 16 datasets across all tasks, and generalizes well to 4 out of 5 previously unseen datasets.
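The disentangled-attention idea in this abstract can be read as decomposing attention scores into a text-to-text term plus a layout-to-layout term computed from bounding-box projections. The block below is a generic rendering with random features; the mixing weight `lam` and the two-term decomposition are simplifications of the paper's full set of disentangled matrices.

```python
import torch

def disentangled_scores(text_q, text_k, box_q, box_k, lam=1.0):
    """Attention scores = text-to-text term + weighted bbox-to-bbox term."""
    d = text_q.size(-1)
    return (text_q @ text_k.T + lam * (box_q @ box_k.T)) / d ** 0.5

L, d = 10, 32
text_q, text_k = torch.randn(L, d), torch.randn(L, d)  # token projections
box_q, box_k = torch.randn(L, d), torch.randn(L, d)    # bounding-box projections

attn = torch.softmax(disentangled_scores(text_q, text_k, box_q, box_k), dim=-1)
print(attn.shape)   # (L, L) layout-aware attention pattern
```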
[ "Wang, Dongsheng", "Raman, Natraj", "Sibue, Mathieu", "Ma, Zhiqiang", "Babkin, Petr", "Kaur, Simerjot", "Pei, Yulong", "Nourbakhsh, Armineh", "Liu, Xiaomo" ]
DocLLM: A Layout-Aware Generative Language Model for Multimodal Document Understanding
acl-long.463
Poster
2401.00908
[ "" ]
https://huggingface.co/papers/2401.00908
5
177
23
9
https://aclanthology.org/2024.acl-long.463/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.464.bib
@inproceedings{wu-chandrasekaran-2024-bypassing, title = "Bypassing {LLM} Watermarks with Color-Aware Substitutions", author = "Wu, Qilong and Chandrasekaran, Varun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.464", pages = "8549--8581", abstract = "Watermarking approaches are proposed to identify if text being circulated is human- or large language model- (LLM) generated. The state-of-the-art watermarking strategy of Kirchenbauer et al. (2023a) biases the LLM to generate specific ({``}green{''}) tokens. However, determining the robustness of this watermarking method under finite (low) edit budgets is an open problem. Additionally, existing attack methods failto evade detection for longer text segments. We overcome these limitations, and propose Self Color Testing-based Substitution (SCTS), thefirst {``}color-aware{''} attack. SCTS obtains color information by strategically prompting the watermarked LLM and comparing output tokensfrequencies. It uses this information to determine token colors, and substitutes green tokens with non-green ones. In our experiments, SCTS successfully evades watermark detection using fewer number of edits than related work. Additionally, we show both theoretically and empirically that SCTS can remove the watermark for arbitrarily long watermarked text.", }
Watermarking approaches are proposed to identify if text being circulated is human- or large language model- (LLM) generated. The state-of-the-art watermarking strategy of Kirchenbauer et al. (2023a) biases the LLM to generate specific ({``}green{''}) tokens. However, determining the robustness of this watermarking method under finite (low) edit budgets is an open problem. Additionally, existing attack methods fail to evade detection for longer text segments. We overcome these limitations and propose Self Color Testing-based Substitution (SCTS), the first {``}color-aware{''} attack. SCTS obtains color information by strategically prompting the watermarked LLM and comparing output token frequencies. It uses this information to determine token colors, and substitutes green tokens with non-green ones. In our experiments, SCTS successfully evades watermark detection using fewer edits than related work. Additionally, we show both theoretically and empirically that SCTS can remove the watermark for arbitrarily long watermarked text.
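The color-testing intuition in this abstract can be demonstrated with a toy oracle: sample many continuations from a watermarked generator and flag tokens whose frequency exceeds what an unbiased model would produce. The biased sampler, the 1.3x threshold, and the four-token vocabulary below are all hypothetical stand-ins; SCTS's actual prompting strategy is more elaborate.

```python
import random
from collections import Counter

GREEN = {"cat", "sun"}   # the watermark's green list (hidden from a real attacker)

def sample_next(prefix: str) -> str:
    """Toy stand-in for querying the watermarked LM for one next token."""
    vocab = ["cat", "dog", "sun", "sky"]
    weights = [3 if t in GREEN else 1 for t in vocab]   # watermark bias
    return random.choices(vocab, weights=weights)[0]

counts = Counter(sample_next("The") for _ in range(4000))
expected = sum(counts.values()) / len(counts)           # unbiased per-token count
inferred_green = {t for t, c in counts.items() if c > 1.3 * expected}
print(inferred_green)    # recovers {"cat", "sun"} with high probability
```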
[ "Wu, Qilong", "Ch", "rasekaran, Varun" ]
Bypassing LLM Watermarks with Color-Aware Substitutions
acl-long.464
Poster
2403.14719
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.464/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.465.bib
@inproceedings{chen-etal-2024-parallel, title = "Parallel Structures in Pre-training Data Yield In-Context Learning", author = "Chen, Yanda and Zhao, Chen and Yu, Zhou and McKeown, Kathleen and He, He", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.465", pages = "8582--8592", abstract = "Pre-trained language models (LMs) are capable of in-context learning (ICL): they can adapt to a task with only a few examples given in the prompt without any parameter update. However, it is unclear where this capability comes from as there is a stark distribution shift between pre-training text and ICL prompts. In this work, we study what patterns of the pre-training data contribute to ICL. We find that LMs{'} ICL ability depends on $\textit{parallel structures}$ in the pre-training data{---}pairs of phrases following similar templates in the same context window. Specifically, we detect parallel structures by checking whether training on one phrase improves prediction of the other, and conduct ablation experiments to study their effect on ICL. We show that removing parallel structures in the pre-training data reduces LMs{'} ICL accuracy by $\textbf{51}${\%} (vs 2{\%} from random ablation). This drop persists even when excluding common patterns such as n-gram repetitions and long-range dependency, showing the diversity and generality of parallel structures. A closer look at the detected parallel structures indicates that they cover diverse linguistic tasks and span long distances in the data.", }
Pre-trained language models (LMs) are capable of in-context learning (ICL): they can adapt to a task with only a few examples given in the prompt without any parameter update. However, it is unclear where this capability comes from as there is a stark distribution shift between pre-training text and ICL prompts. In this work, we study what patterns of the pre-training data contribute to ICL. We find that LMs{'} ICL ability depends on $\textit{parallel structures}$ in the pre-training data{---}pairs of phrases following similar templates in the same context window. Specifically, we detect parallel structures by checking whether training on one phrase improves prediction of the other, and conduct ablation experiments to study their effect on ICL. We show that removing parallel structures in the pre-training data reduces LMs{'} ICL accuracy by $\textbf{51}${\%} (vs 2{\%} from random ablation). This drop persists even when excluding common patterns such as n-gram repetitions and long-range dependency, showing the diversity and generality of parallel structures. A closer look at the detected parallel structures indicates that they cover diverse linguistic tasks and span long distances in the data.
[ "Chen, Y", "a", "Zhao, Chen", "Yu, Zhou", "McKeown, Kathleen", "He, He" ]
Parallel Structures in Pre-training Data Yield In-Context Learning
acl-long.465
Poster
2402.12530
[ "" ]
https://huggingface.co/papers/2402.12530
0
0
0
5
https://aclanthology.org/2024.acl-long.465/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.466.bib
@inproceedings{xu-etal-2024-opentom, title = "{O}pen{T}o{M}: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models", author = "Xu, Hainiu and Zhao, Runcong and Zhu, Lixing and Du, Jinhua and He, Yulan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.466", pages = "8593--8623", abstract = "Neural Theory-of-Mind (N-ToM), machine{'}s ability to understand and keep track of the mental states of others, is pivotal in developing socially intelligent agents. However, prevalent N-ToM benchmarks have several shortcomings, including the presence of ambiguous and artificial narratives, absence of personality traits and preferences, a lack of questions addressing characters{'} psychological mental states, and limited diversity in the questions posed. In response to these issues, we construct OpenToM, a new benchmark for assessing N-ToM with (1) longer and clearer narrative stories, (2) characters with explicit personality traits, (3) actions that are triggered by character intentions, and (4) questions designed to challenge LLMs{'} capabilities of modeling characters{'} mental states of both the physical and psychological world. Using OpenToM, we reveal that state-of-the-art LLMs thrive at modeling certain aspects of mental states in the physical world but fall short when tracking characters{'} mental states in the psychological world.", }
Neural Theory-of-Mind (N-ToM), a machine{'}s ability to understand and keep track of the mental states of others, is pivotal in developing socially intelligent agents. However, prevalent N-ToM benchmarks have several shortcomings, including the presence of ambiguous and artificial narratives, the absence of personality traits and preferences, a lack of questions addressing characters{'} psychological mental states, and limited diversity in the questions posed. In response to these issues, we construct OpenToM, a new benchmark for assessing N-ToM with (1) longer and clearer narrative stories, (2) characters with explicit personality traits, (3) actions that are triggered by character intentions, and (4) questions designed to challenge LLMs{'} capabilities of modeling characters{'} mental states of both the physical and psychological world. Using OpenToM, we reveal that state-of-the-art LLMs thrive at modeling certain aspects of mental states in the physical world but fall short when tracking characters{'} mental states in the psychological world.
[ "Xu, Hainiu", "Zhao, Runcong", "Zhu, Lixing", "Du, Jinhua", "He, Yulan" ]
OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models
acl-long.466
Poster
2402.06044
[ "https://github.com/seacowx/opentom" ]
https://huggingface.co/papers/2402.06044
0
1
0
5
https://aclanthology.org/2024.acl-long.466/
[]
[ "SeacowX/OpenToM" ]
[]
1
https://aclanthology.org/2024.acl-long.467.bib
@inproceedings{rust-etal-2024-towards, title = "Towards Privacy-Aware Sign Language Translation at Scale", author = "Rust, Phillip and Shi, Bowen and Wang, Skyler and Camgoz, Necati Cihan and Maillard, Jean", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.467", pages = "8624--8641", abstract = "A major impediment to the advancement of sign language translation (SLT) is data scarcity. Much of the sign language data currently available on the web cannot be used for training supervised models due to the lack of aligned captions. Furthermore, scaling SLT using large-scale web-scraped datasets bears privacy risks due to the presence of biometric information, which the responsible development of SLT technologies should account for. In this work, we propose a two-stage framework for privacy-aware SLT at scale that addresses both of these issues. We introduce SSVP-SLT, which leverages self-supervised video pretraining on anonymized and unannotated videos, followed by supervised SLT finetuning on a curated parallel dataset. SSVP-SLT achieves state-of-the-art finetuned and zero-shot gloss-free SLT performance on the How2Sign dataset, outperforming the strongest respective baselines by over 3 BLEU-4. Based on controlled experiments, we further discuss the advantages and limitations of self-supervised pretraining and anonymization via facial obfuscation for SLT.", }
A major impediment to the advancement of sign language translation (SLT) is data scarcity. Much of the sign language data currently available on the web cannot be used for training supervised models due to the lack of aligned captions. Furthermore, scaling SLT using large-scale web-scraped datasets bears privacy risks due to the presence of biometric information, which the responsible development of SLT technologies should account for. In this work, we propose a two-stage framework for privacy-aware SLT at scale that addresses both of these issues. We introduce SSVP-SLT, which leverages self-supervised video pretraining on anonymized and unannotated videos, followed by supervised SLT finetuning on a curated parallel dataset. SSVP-SLT achieves state-of-the-art finetuned and zero-shot gloss-free SLT performance on the How2Sign dataset, outperforming the strongest respective baselines by over 3 BLEU-4. Based on controlled experiments, we further discuss the advantages and limitations of self-supervised pretraining and anonymization via facial obfuscation for SLT.
[ "Rust, Phillip", "Shi, Bowen", "Wang, Skyler", "Camgoz, Necati Cihan", "Maillard, Jean" ]
Towards Privacy-Aware Sign Language Translation at Scale
acl-long.467
Oral
2402.09611
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.467/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.468.bib
@inproceedings{wang-etal-2024-arithmetic, title = "Arithmetic Control of {LLM}s for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards", author = "Wang, Haoxiang and Lin, Yong and Xiong, Wei and Yang, Rui and Diao, Shizhe and Qiu, Shuang and Zhao, Han and Zhang, Tong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.468", pages = "8642--8655", abstract = "Fine-grained control over large language models (LLMs) remains a significant challenge, hindering their adaptability to diverse user needs. While Reinforcement Learning from Human Feedback (RLHF) shows promise in aligning LLMs, its reliance on scalar rewards often limits its ability to capture diverse user preferences in real-world applications. To address this limitation, we introduce the Directional Preference Alignment (DPA) framework. Unlike the scalar-reward RLHF, DPA incorporates multi-objective reward modeling to represent diverse preference profiles. Additionally, DPA models user preferences as directions (i.e., unit vectors) in the reward space to achieve user-dependent preference control. Our method involves training a multi-objective reward model and then fine-tuning the LLM with a preference-conditioned variant of Rejection Sampling Finetuning (RSF), an RLHF method adopted by Llama 2. This method enjoys a better performance trade-off across various reward objectives. In comparison with the scalar-reward RLHF, DPA offers users intuitive control over LLM generation: they can arithmetically specify their desired trade-offs (e.g., more helpfulness with less verbosity). We also validate the effectiveness of DPA with real-world alignment experiments on Mistral-7B. Our method provides straightforward arithmetic control over the trade-off between helpfulness and verbosity while maintaining competitive performance with strong baselines such as Direct Preference Optimization (DPO).", }
Fine-grained control over large language models (LLMs) remains a significant challenge, hindering their adaptability to diverse user needs. While Reinforcement Learning from Human Feedback (RLHF) shows promise in aligning LLMs, its reliance on scalar rewards often limits its ability to capture diverse user preferences in real-world applications. To address this limitation, we introduce the Directional Preference Alignment (DPA) framework. Unlike the scalar-reward RLHF, DPA incorporates multi-objective reward modeling to represent diverse preference profiles. Additionally, DPA models user preferences as directions (i.e., unit vectors) in the reward space to achieve user-dependent preference control. Our method involves training a multi-objective reward model and then fine-tuning the LLM with a preference-conditioned variant of Rejection Sampling Finetuning (RSF), an RLHF method adopted by Llama 2. This method enjoys a better performance trade-off across various reward objectives. In comparison with the scalar-reward RLHF, DPA offers users intuitive control over LLM generation: they can arithmetically specify their desired trade-offs (e.g., more helpfulness with less verbosity). We also validate the effectiveness of DPA with real-world alignment experiments on Mistral-7B. Our method provides straightforward arithmetic control over the trade-off between helpfulness and verbosity while maintaining competitive performance with strong baselines such as Direct Preference Optimization (DPO).
[ "Wang, Haoxiang", "Lin, Yong", "Xiong, Wei", "Yang, Rui", "Diao, Shizhe", "Qiu, Shuang", "Zhao, Han", "Zhang, Tong" ]
Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards
acl-long.468
Poster
2402.18571
[ "https://github.com/haoxiang-wang/directional-preference-alignment" ]
https://huggingface.co/papers/2402.18571
1
0
0
8
https://aclanthology.org/2024.acl-long.468/
[ "RLHFlow/ArmoRM-Llama3-8B-v0.1", "RLHFlow/DPA-v1-Mistral-7B", "RLHFlow/RewardModel-Mistral-7B-for-DPA-v1", "SteveTran/ArmoRM-Llama3-8B-v0.1-4bit", "SteveTran/ArmoRM-Llama3-8B-v0.1-8bit" ]
[]
[]
1
https://aclanthology.org/2024.acl-long.469.bib
@inproceedings{li-etal-2024-towards-real, title = "Towards Real-World Writing Assistance: A {C}hinese Character Checking Benchmark with Faked and Misspelled Characters", author = "Li, Yinghui and Xu, Zishan and Chen, Shaoshen and Huang, Haojing and Li, Yangning and Ma, Shirong and Jiang, Yong and Li, Zhongli and Zhou, Qingyu and Zheng, Hai-Tao and Shen, Ying", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.469", pages = "8656--8668", abstract = "Writing assistance aims to improve the correctness and quality of input texts, with character checking being crucial in detecting and correcting wrong characters. In the real world where handwriting occupies the vast majority, characters that humans get wrong include faked characters (i.e., untrue characters created due to writing errors) and misspelled characters (i.e., true characters used incorrectly due to spelling errors). However, existing datasets and related studies only focus on misspelled characters that can be represented by computer text encoding systems, thereby ignoring faked characters which are more common and difficult. To break through this dilemma, we present $\textbf{Visual-C}$$^3$, a human-annotated $\textbf{Visual}$ $\textbf{C}$hinese $\textbf{C}$haracter $\textbf{C}$hecking dataset with faked and misspelled Chinese characters. To the best of our knowledge, Visual-C$^3$ is the first real-world visual and the largest human-crafted dataset for the Chinese character checking scenario. Additionally, we also propose and evaluate novel baseline methods on Visual-C$^3$. Extensive empirical results and analyses show that Visual-C$^3$ is high-quality yet challenging. As the first study focusing on Chinese faked characters, the dataset and the baseline methods are publicly available at https://github.com/THUKElab/Visual-C3.", }
Writing assistance aims to improve the correctness and quality of input texts, with character checking being crucial in detecting and correcting wrong characters. In the real world, where handwriting accounts for the vast majority of written text, characters that humans get wrong include faked characters (i.e., untrue characters created due to writing errors) and misspelled characters (i.e., true characters used incorrectly due to spelling errors). However, existing datasets and related studies only focus on misspelled characters that can be represented by computer text encoding systems, thereby ignoring faked characters, which are more common and difficult. To break through this dilemma, we present Visual-C^3, a human-annotated Visual Chinese Character Checking dataset with faked and misspelled Chinese characters. To the best of our knowledge, Visual-C^3 is the first real-world visual and the largest human-crafted dataset for the Chinese character checking scenario. Additionally, we propose and evaluate novel baseline methods on Visual-C^3. Extensive empirical results and analyses show that Visual-C^3 is high-quality yet challenging. As the first study focusing on Chinese faked characters, the dataset and the baseline methods are publicly available at https://github.com/THUKElab/Visual-C3.
[ "Li, Yinghui", "Xu, Zishan", "Chen, Shaoshen", "Huang, Haojing", "Li, Yangning", "Ma, Shirong", "Jiang, Yong", "Li, Zhongli", "Zhou, Qingyu", "Zheng, Hai-Tao", "Shen, Ying" ]
Towards Real-World Writing Assistance: A Chinese Character Checking Benchmark with Faked and Misspelled Characters
acl-long.469
Poster
2311.11268
[ "https://github.com/thukelab/visual-c3" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.469/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.470.bib
@inproceedings{huang-etal-2024-ravel, title = "{RAVEL}: Evaluating Interpretability Methods on Disentangling Language Model Representations", author = "Huang, Jing and Wu, Zhengxuan and Potts, Christopher and Geva, Mor and Geiger, Atticus", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.470", pages = "8669--8687", abstract = "Individual neurons participate in the representation of multiple high-level concepts. To what extent can different interpretability methods successfully disentangle these roles? To help address this question, we introduce RAVEL (Resolving Attribute-Value Entanglements in Language Models), a dataset that enables tightly controlled, quantitative comparisons between a variety of existing interpretability methods. We use the resulting conceptual framework to define the new method of Multi-task Distributed Alignment Search (MDAS), which allows us to find distributed representations satisfying multiple causal criteria. With Llama2-7B as the target language model, MDAS achieves state-of-the-art results on RAVEL, demonstrating the importance of going beyond neuron-level analyses to identify features distributed across activations. We release our benchmark at https://github.com/explanare/ravel.", }
Individual neurons participate in the representation of multiple high-level concepts. To what extent can different interpretability methods successfully disentangle these roles? To help address this question, we introduce RAVEL (Resolving Attribute-Value Entanglements in Language Models), a dataset that enables tightly controlled, quantitative comparisons between a variety of existing interpretability methods. We use the resulting conceptual framework to define the new method of Multi-task Distributed Alignment Search (MDAS), which allows us to find distributed representations satisfying multiple causal criteria. With Llama2-7B as the target language model, MDAS achieves state-of-the-art results on RAVEL, demonstrating the importance of going beyond neuron-level analyses to identify features distributed across activations. We release our benchmark at https://github.com/explanare/ravel.
[ "Huang, Jing", "Wu, Zhengxuan", "Potts, Christopher", "Geva, Mor", "Geiger, Atticus" ]
RAVEL: Evaluating Interpretability Methods on Disentangling Language Model Representations
acl-long.470
Poster
2402.17700
[ "https://github.com/explanare/ravel" ]
https://huggingface.co/papers/2402.17700
3
1
0
5
https://aclanthology.org/2024.acl-long.470/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.471.bib
@inproceedings{li-etal-2024-large-language-models, title = "Large Language Models as Zero-shot Dialogue State Tracker through Function Calling", author = "Li, Zekun and Chen, Zhiyu and Ross, Mike and Huber, Patrick and Moon, Seungwhan and Lin, Zhaojiang and Dong, Xin and Sagar, Adithya and Yan, Xifeng and Crook, Paul", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.471", pages = "8688--8704", abstract = "Large language models (LLMs) are increasingly prevalent in conversational systems due to their advanced understanding and generative capabilities in general contexts. However, their effectiveness in task-oriented dialogues (TOD), which requires not only response generation but also effective dialogue state tracking (DST) within specific tasks and domains, remains less satisfying. In this work, we propose a novel approach FnCTOD for solving DST with LLMs through function calling. This method improves zero-shot DST, allowing adaptation to diverse domains without extensive data collection or model tuning. Our experimental results demonstrate that our approach achieves exceptional performance with both modestly sized open-source and also proprietary LLMs: with in-context prompting it enables various 7B or 13B parameter models to surpass the previous state-of-the-art (SOTA) achieved by ChatGPT, and improves ChatGPT{'}s performance beating the SOTA by 5.6{\%} average joint goal accuracy (JGA). Individual model results for GPT-3.5 and GPT-4 are boosted by 4.8{\%} and 14{\%}, respectively. We also show that by fine-tuning on a small collection of diverse task-oriented dialogues, we can equip modestly sized models, specifically a 13B parameter LLaMA2-Chat model, with function-calling capabilities and DST performance comparable to ChatGPT while maintaining their chat capabilities. We have made the code publicly available at https://github.com/facebookresearch/FnCTOD.", }
Large language models (LLMs) are increasingly prevalent in conversational systems due to their advanced understanding and generative capabilities in general contexts. However, their effectiveness in task-oriented dialogues (TOD), which requires not only response generation but also effective dialogue state tracking (DST) within specific tasks and domains, remains less satisfying. In this work, we propose FnCTOD, a novel approach for solving DST with LLMs through function calling. This method improves zero-shot DST, allowing adaptation to diverse domains without extensive data collection or model tuning. Our experimental results demonstrate that our approach achieves exceptional performance with both modestly sized open-source and proprietary LLMs: with in-context prompting, it enables various 7B or 13B parameter models to surpass the previous state-of-the-art (SOTA) achieved by ChatGPT, and improves ChatGPT's performance, beating the SOTA by 5.6% average joint goal accuracy (JGA). Individual model results for GPT-3.5 and GPT-4 are boosted by 4.8% and 14%, respectively. We also show that by fine-tuning on a small collection of diverse task-oriented dialogues, we can equip modestly sized models, specifically a 13B parameter LLaMA2-Chat model, with function-calling capabilities and DST performance comparable to ChatGPT while maintaining their chat capabilities. We have made the code publicly available at https://github.com/facebookresearch/FnCTOD.
[ "Li, Zekun", "Chen, Zhiyu", "Ross, Mike", "Huber, Patrick", "Moon, Seungwhan", "Lin, Zhaojiang", "Dong, Xin", "Sagar, Adithya", "Yan, Xifeng", "Crook, Paul" ]
Large Language Models as Zero-shot Dialogue State Tracker through Function Calling
acl-long.471
Poster
2402.10466
[ "https://github.com/facebookresearch/fnctod" ]
https://huggingface.co/papers/2402.10466
7
16
3
10
https://aclanthology.org/2024.acl-long.471/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.472.bib
@inproceedings{krichene-etal-2024-faithful, title = "Faithful Chart Summarization with {C}ha{TS}-Pi", author = "Krichene, Syrine and Piccinno, Francesco and Liu, Fangyu and Eisenschlos, Julian", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.472", pages = "8705--8723", abstract = "Chart-to-summary generation can help explore data, communicate insights, and help the visually impaired people. Multi-modal generative models have been used to produce fluent summaries, but they can suffer from factual and perceptual errors. In this work we present CHATS-CRITIC, a reference-free chart summarization metric for scoring faithfulness. CHATS-CRITIC is composed of an image-to-text model to recover the table from a chart, and a tabular entailment model applied to score the summary sentence by sentence. We find that CHATS-CRITIC evaluates the summary quality according to human ratings better than reference-based metrics, either learned or n-gram based, and can be further used to fix candidate summaries by removing not supported sentences. We then introduce CHATS-PI, a chart-to-summary pipeline that leverages CHATS-CRITIC during inference to fix and rank sampled candidates from any chart-summarization model. We evaluate CHATS-PI and CHATS-CRITIC using human raters, establishing state-of-the-art results on two popular chart-to-summary datasets.", }
Chart-to-summary generation can help explore data, communicate insights, and assist visually impaired people. Multi-modal generative models have been used to produce fluent summaries, but they can suffer from factual and perceptual errors. In this work we present CHATS-CRITIC, a reference-free chart summarization metric for scoring faithfulness. CHATS-CRITIC is composed of an image-to-text model to recover the table from a chart, and a tabular entailment model applied to score the summary sentence by sentence. We find that CHATS-CRITIC evaluates summary quality in closer agreement with human ratings than reference-based metrics, either learned or n-gram based, and can further be used to fix candidate summaries by removing unsupported sentences. We then introduce CHATS-PI, a chart-to-summary pipeline that leverages CHATS-CRITIC during inference to fix and rank sampled candidates from any chart-summarization model. We evaluate CHATS-PI and CHATS-CRITIC using human raters, establishing state-of-the-art results on two popular chart-to-summary datasets.
[ "Krichene, Syrine", "Piccinno, Francesco", "Liu, Fangyu", "Eisenschlos, Julian" ]
Faithful Chart Summarization with ChaTS-Pi
acl-long.472
Poster
2405.19094
[ "" ]
https://huggingface.co/papers/2405.19094
0
0
0
4
https://aclanthology.org/2024.acl-long.472/
[]
[]
[ "chats-pi/chats-pi" ]
1
https://aclanthology.org/2024.acl-long.473.bib
@inproceedings{niu-etal-2024-enhancing, title = "Enhancing Dialogue State Tracking Models through {LLM}-backed User-Agents Simulation", author = "Niu, Cheng and Wang, Xingguang and Cheng, Xuxin and Song, Juntong and Zhang, Tong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.473", pages = "8724--8741", abstract = "Dialogue State Tracking (DST) is designed to monitor the evolving dialogue state in the conversations and plays a pivotal role in developing task-oriented dialogue systems. However, obtaining the annotated data for the DST task is usually a costly endeavor. In this paper, we focus on employing LLMs to generate dialogue data to reduce dialogue collection and annotation costs. Specifically, GPT-4 is used to simulate the user and agent interaction, generating thousands of dialogues annotated with DST labels. Then a two-stage fine-tuning on LLaMA 2 is performed on the generated data and the real data for the DST prediction. Experimental results on two public DST benchmarks show that with the generated dialogue data, our model performs better than the baseline trained solely on real data. In addition, our approach is also capable of adapting to the dynamic demands in real-world scenarios, generating dialogues in new domains swiftly. After replacing dialogue segments in any domain with the corresponding generated ones, the model achieves comparable performance to the model trained on real data. The source code and generated dialogue data are available at https://github.com/ParticleMedia/LUAS.", }
Dialogue State Tracking (DST) is designed to monitor the evolving dialogue state in conversations and plays a pivotal role in developing task-oriented dialogue systems. However, obtaining the annotated data for the DST task is usually a costly endeavor. In this paper, we focus on employing LLMs to generate dialogue data to reduce dialogue collection and annotation costs. Specifically, GPT-4 is used to simulate the user and agent interaction, generating thousands of dialogues annotated with DST labels. Then a two-stage fine-tuning of LLaMA 2 is performed on the generated data and the real data for DST prediction. Experimental results on two public DST benchmarks show that with the generated dialogue data, our model performs better than the baseline trained solely on real data. In addition, our approach is also capable of adapting to the dynamic demands of real-world scenarios, generating dialogues in new domains swiftly. After replacing dialogue segments in any domain with the corresponding generated ones, the model achieves comparable performance to the model trained on real data. The source code and generated dialogue data are available at https://github.com/ParticleMedia/LUAS.
[ "Niu, Cheng", "Wang, Xingguang", "Cheng, Xuxin", "Song, Juntong", "Zhang, Tong" ]
Enhancing Dialogue State Tracking Models through LLM-backed User-Agents Simulation
acl-long.473
Poster
2405.13037
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.473/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.474.bib
@inproceedings{chen-etal-2024-metasumperceiver, title = "{M}eta{S}um{P}erceiver: Multimodal Multi-Document Evidence Summarization for Fact-Checking", author = "Chen, Ting-Chih and Tang, Chia-Wei and Thomas, Chris", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.474", pages = "8742--8757", abstract = "Fact-checking real-world claims often requires reviewing multiple multimodal documents in order to assess the claim{'}s truthfulness, a highly laborious and time-consuming task. In this paper, we present a summarization model crafted to generate claim-specific summaries useful for fact-checking from multimodal multi-document datasets. The model takes inputs in the form of documents, images, and a claim, with the objective of assisting in fact-checking tasks. We introduce a dynamic perceiver-based model that is able to handle inputs from multiple modalities of arbitrary lengths. To train our model, we leverage a novel reinforcement learning-based entailment objective in order to generate summaries that provide evidence distinguishing between different truthfulness labels. To assess the efficacy of our approach, we conduct experiments on both an existing benchmark as well as a new dataset of multi-document claims which we contribute. Our approach outperforms the SOTA approach by 4.6{\%} in the claim verification task on the MOCHEG dataset and demonstrates strong performance on our new Multi-News-Fact-Checking dataset.", }
Fact-checking real-world claims often requires reviewing multiple multimodal documents in order to assess the claim's truthfulness, a highly laborious and time-consuming task. In this paper, we present a summarization model crafted to generate claim-specific summaries useful for fact-checking from multimodal multi-document datasets. The model takes inputs in the form of documents, images, and a claim, with the objective of assisting in fact-checking tasks. We introduce a dynamic perceiver-based model that is able to handle inputs from multiple modalities of arbitrary lengths. To train our model, we leverage a novel reinforcement learning-based entailment objective in order to generate summaries that provide evidence distinguishing between different truthfulness labels. To assess the efficacy of our approach, we conduct experiments on both an existing benchmark as well as a new dataset of multi-document claims which we contribute. Our approach outperforms the SOTA approach by 4.6% in the claim verification task on the MOCHEG dataset and demonstrates strong performance on our new Multi-News-Fact-Checking dataset.
[ "Chen, Ting-Chih", "Tang, Chia-Wei", "Thomas, Chris" ]
MetaSumPerceiver: Multimodal Multi-Document Evidence Summarization for Fact-Checking
acl-long.474
Poster
2407.13089
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.474/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.475.bib
@inproceedings{li-etal-2024-knowcoder, title = "{K}now{C}oder: Coding Structured Knowledge into {LLM}s for Universal Information Extraction", author = "Li, Zixuan and Zeng, Yutao and Zuo, Yuxin and Ren, Weicheng and Liu, Wenxuan and Su, Miao and Guo, Yucan and Liu, Yantao and Lixiang, Lixiang and Hu, Zhilei and Bai, Long and Li, Wei and Liu, Yidan and Yang, Pan and Jin, Xiaolong and Guo, Jiafeng and Cheng, Xueqi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.475", pages = "8758--8779", abstract = "", }
[ "Li, Zixuan", "Zeng, Yutao", "Zuo, Yuxin", "Ren, Weicheng", "Liu, Wenxuan", "Su, Miao", "Guo, Yucan", "Liu, Yantao", "Lixiang, Lixiang", "Hu, Zhilei", "Bai, Long", "Li, Wei", "Liu, Yidan", "Yang, Pan", "Jin, Xiaolong", "Guo, Jiafeng", "Cheng, Xueqi" ]
KnowCoder: Coding Structured Knowledge into LLMs for Universal Information Extraction
acl-long.475
Poster
2403.07969
[ "" ]
https://huggingface.co/papers/2403.07969
1
0
0
17
https://aclanthology.org/2024.acl-long.475/
[]
[ "golaxy/KnowCoder-Schema-Understanding-Data", "golaxy/KnowCoder-Schema-Library", "golaxy/KnowCoder-Schema-Following-Data" ]
[]
1
https://aclanthology.org/2024.acl-long.476.bib
@inproceedings{liu-etal-2024-era, title = "{ERA}-{C}o{T}: Improving Chain-of-Thought through Entity Relationship Analysis", author = "Liu, Yanming and Peng, Xinyue and Du, Tianyu and Yin, Jianwei and Liu, Weihao and Zhang, Xuhong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.476", pages = "8780--8794", abstract = "Large language models (LLMs) have achieved commendable accomplishments in various natural language processing tasks. However, LLMs still encounter significant challenges when dealing with complex scenarios involving multiple entities. These challenges arise from the presence of implicit relationships that demand multi-step reasoning. In this paper, we propose a novel approach ERA-CoT, which aids LLMs in understanding context by capturing relationships between entities and supports the reasoning of diverse tasks through Chain-of-Thoughts (CoT).Experimental results show that ERA-CoT demonstrates the superior performance of our proposed method compared to current CoT prompting methods, achieving a significant improvement of an average of 5.1{\%} on GPT3.5 compared to previous SOTA baselines. Our analysis indicates that ERA-CoT increases the LLM{'}s understanding of entity relationships, significantly improves the accuracy of question answering, and enhances the reasoning ability of LLMs.", }
Large language models (LLMs) have achieved commendable accomplishments in various natural language processing tasks. However, LLMs still encounter significant challenges when dealing with complex scenarios involving multiple entities. These challenges arise from the presence of implicit relationships that demand multi-step reasoning. In this paper, we propose ERA-CoT, a novel approach that aids LLMs in understanding context by capturing relationships between entities and supports the reasoning of diverse tasks through Chain-of-Thought (CoT). Experimental results show that ERA-CoT outperforms current CoT prompting methods, achieving a significant improvement of an average of 5.1% on GPT-3.5 over previous SOTA baselines. Our analysis indicates that ERA-CoT increases the LLM's understanding of entity relationships, significantly improves the accuracy of question answering, and enhances the reasoning ability of LLMs.
[ "Liu, Yanming", "Peng, Xinyue", "Du, Tianyu", "Yin, Jianwei", "Liu, Weihao", "Zhang, Xuhong" ]
ERA-CoT: Improving Chain-of-Thought through Entity Relationship Analysis
acl-long.476
Poster
2403.06932
[ "https://github.com/oceanntwt/era-cot" ]
https://huggingface.co/papers/2403.06932
1
1
0
6
https://aclanthology.org/2024.acl-long.476/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.477.bib
@inproceedings{deng-etal-2024-multi, title = "On the Multi-turn Instruction Following for Conversational Web Agents", author = "Deng, Yang and Zhang, Xuan and Zhang, Wenxuan and Yuan, Yifei and Ng, See-Kiong and Chua, Tat-Seng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.477", pages = "8795--8812", abstract = "Web agents powered by Large Language Models (LLMs) have demonstrated remarkable abilities in planning and executing multi-step interactions within complex web-based environments, fulfilling a wide range of web navigation tasks. Despite these advancements, the potential for LLM-powered agents to effectively engage with sequential user instructions in real-world scenarios has not been fully explored. In this work, we introduce a new task of Conversational Web Navigation, which necessitates sophisticated interactions that span multiple turns with both the users and the environment, supported by a specially developed dataset named Multi-Turn Mind2Web (MT-Mind2Web). To tackle the limited context length of LLMs and the context-dependency issue of the conversational tasks, we further propose a novel framework, named self-reflective memory-augmented planning (Self-MAP), which employs memory utilization and self-reflection techniques. Extensive experiments are conducted to benchmark the MT-Mind2Web dataset, and validate the effectiveness of the proposed method.", }
Web agents powered by Large Language Models (LLMs) have demonstrated remarkable abilities in planning and executing multi-step interactions within complex web-based environments, fulfilling a wide range of web navigation tasks. Despite these advancements, the potential for LLM-powered agents to effectively engage with sequential user instructions in real-world scenarios has not been fully explored. In this work, we introduce a new task of Conversational Web Navigation, which necessitates sophisticated interactions that span multiple turns with both the users and the environment, supported by a specially developed dataset named Multi-Turn Mind2Web (MT-Mind2Web). To tackle the limited context length of LLMs and the context-dependency issue of the conversational tasks, we further propose a novel framework, named self-reflective memory-augmented planning (Self-MAP), which employs memory utilization and self-reflection techniques. Extensive experiments are conducted to benchmark the MT-Mind2Web dataset, and validate the effectiveness of the proposed method.
[ "Deng, Yang", "Zhang, Xuan", "Zhang, Wenxuan", "Yuan, Yifei", "Ng, See-Kiong", "Chua, Tat-Seng" ]
On the Multi-turn Instruction Following for Conversational Web Agents
acl-long.477
Poster
2402.15057
[ "https://github.com/magicgh/self-map" ]
https://huggingface.co/papers/2402.15057
0
0
0
6
https://aclanthology.org/2024.acl-long.477/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.478.bib
@inproceedings{deng-etal-2024-mobile, title = "Mobile-Bench: An Evaluation Benchmark for {LLM}-based Mobile Agents", author = "Deng, Shihan and Xu, Weikai and Sun, Hongda and Liu, Wei and Tan, Tao and Liujianfeng, Liujianfeng and Li, Ang and Luan, Jian and Wang, Bin and Yan, Rui and Shang, Shuo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.478", pages = "8813--8831", abstract = "With the remarkable advancements of large language models (LLMs), LLM-based agents have become a research hotspot in human-computer interaction.However, there is a scarcity of benchmarks available for LLM-based mobile agents.Benchmarking these agents generally faces three main challenges:(1) The inefficiency of UI-only operations imposes limitations to task evaluation.(2) Specific instructions within a singular application lack adequacy for assessing the multi-dimensional reasoning and decision-making capacities of LLM mobile agents.(3) Current evaluation metrics are insufficient to accurately assess the process of sequential actions. To this end, we propose Mobile-Bench, a novel benchmark for evaluating the capabilities of LLM-based mobile agents.First, we expand conventional UI operations by incorporating 103 collected APIs to accelerate the efficiency of task completion.Subsequently, we collect evaluation data by combining real user queries with augmentation from LLMs.To better evaluate different levels of planning capabilities for mobile agents, our data is categorized into three distinct groups: SAST, SAMT, and MAMT, reflecting varying levels of task complexity. Mobile-Bench comprises 832 data entries, with more than 200 tasks specifically designed to evaluate multi-APP collaboration scenarios.Furthermore, we introduce a more accurate evaluation metric, named CheckPoint, to assess whether LLM-based mobile agents reach essential points during their planning and reasoning steps. Dataset and platform will be released in the future.", }
With the remarkable advancements of large language models (LLMs), LLM-based agents have become a research hotspot in human-computer interaction. However, there is a scarcity of benchmarks available for LLM-based mobile agents. Benchmarking these agents generally faces three main challenges: (1) The inefficiency of UI-only operations imposes limitations on task evaluation. (2) Specific instructions within a singular application lack adequacy for assessing the multi-dimensional reasoning and decision-making capacities of LLM mobile agents. (3) Current evaluation metrics are insufficient to accurately assess the process of sequential actions. To this end, we propose Mobile-Bench, a novel benchmark for evaluating the capabilities of LLM-based mobile agents. First, we expand conventional UI operations by incorporating 103 collected APIs to accelerate the efficiency of task completion. Subsequently, we collect evaluation data by combining real user queries with augmentation from LLMs. To better evaluate different levels of planning capabilities for mobile agents, our data is categorized into three distinct groups: SAST, SAMT, and MAMT, reflecting varying levels of task complexity. Mobile-Bench comprises 832 data entries, with more than 200 tasks specifically designed to evaluate multi-APP collaboration scenarios. Furthermore, we introduce a more accurate evaluation metric, named CheckPoint, to assess whether LLM-based mobile agents reach essential points during their planning and reasoning steps. The dataset and platform will be released in the future.
[ "Deng, Shihan", "Xu, Weikai", "Sun, Hongda", "Liu, Wei", "Tan, Tao", "Liujianfeng, Liujianfeng", "Li, Ang", "Luan, Jian", "Wang, Bin", "Yan, Rui", "Shang, Shuo" ]
Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents
acl-long.478
Poster
2407.00993
[ "" ]
https://huggingface.co/papers/2407.00993
0
0
0
11
https://aclanthology.org/2024.acl-long.478/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.479.bib
@inproceedings{zhang-etal-2024-mc2, title = "{MC}$^2$: Towards Transparent and Culturally-Aware {NLP} for Minority Languages in {C}hina", author = "Zhang, Chen and Tao, Mingxu and Huang, Quzhe and Lin, Jiuheng and Chen, Zhibin and Feng, Yansong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.479", pages = "8832--8850", abstract = "Current large language models demonstrate deficiencies in understanding low-resource languages, particularly the minority languages in China. This limitation stems from the scarcity of available pre-training data. To address this accessibility challenge, we present MC$^2$, a Multilingual Corpus of Minority Languages in China, which is the largest open-source corpus of its kind so far. MC$^2$ includes four underrepresented languages: Tibetan, Uyghur, Kazakh, and Mongolian. Notably, we focus on the less common writing systems of Kazakh and Mongolian, i.e., Kazakh Arabic script and traditional Mongolian script, respectively, which have been long neglected in previous corpus construction efforts. Recognizing the prevalence of language contamination within existing corpora, we adopt a quality-centric solution for collecting MC$^2$, prioritizing accuracy while enhancing diversity. Furthermore, we underscore the importance of attending to the multiplicity of writing systems, which is closely related to the cultural awareness of the resulting models. The MC$^2$ corpus and related models are made public to the community.", }
Current large language models demonstrate deficiencies in understanding low-resource languages, particularly the minority languages in China. This limitation stems from the scarcity of available pre-training data. To address this accessibility challenge, we present MC^2, a Multilingual Corpus of Minority Languages in China, which is the largest open-source corpus of its kind so far. MC^2 includes four underrepresented languages: Tibetan, Uyghur, Kazakh, and Mongolian. Notably, we focus on the less common writing systems of Kazakh and Mongolian, i.e., Kazakh Arabic script and traditional Mongolian script, respectively, which have long been neglected in previous corpus construction efforts. Recognizing the prevalence of language contamination within existing corpora, we adopt a quality-centric solution for collecting MC^2, prioritizing accuracy while enhancing diversity. Furthermore, we underscore the importance of attending to the multiplicity of writing systems, which is closely related to the cultural awareness of the resulting models. The MC^2 corpus and related models are made public to the community.
[ "Zhang, Chen", "Tao, Mingxu", "Huang, Quzhe", "Lin, Jiuheng", "Chen, Zhibin", "Feng, Yansong" ]
MC^2: Towards Transparent and Culturally-Aware NLP for Minority Languages in China
acl-long.479
Poster
2311.08348
[ "https://github.com/luciusssss/mc2_corpus" ]
https://huggingface.co/papers/2311.08348
0
0
0
6
https://aclanthology.org/2024.acl-long.479/
[ "pkupie/mc2-llama-13b", "pkupie/mc2-xlmr-large" ]
[ "pkupie/mc2_corpus", "pkupie/mlic-eval" ]
[]
1
https://aclanthology.org/2024.acl-long.480.bib
@inproceedings{guo-etal-2024-decoder, title = "Decoder-only Streaming Transformer for Simultaneous Translation", author = "Guo, Shoutao and Zhang, Shaolei and Feng, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.480", pages = "8851--8864", abstract = "Simultaneous Machine Translation (SiMT) generates translation while reading source tokens, essentially producing the target prefix based on the source prefix. To achieve good performance, it leverages the relationship between source and target prefixes to exact a policy to guide the generation of translations. Although existing SiMT methods primarily focus on the Encoder-Decoder architecture, we explore the potential of Decoder-only architecture, owing to its superior performance in various tasks and its inherent compatibility with SiMT. However, directly applying the Decoder-only architecture to SiMT poses challenges in terms of training and inference. To alleviate the above problems, we propose the first Decoder-only SiMT model, named Decoder-only Streaming Transformer (DST). Specifically, DST separately encodes the positions of the source and target prefixes, ensuring that the position of the target prefix remains unaffected by the expansion of the source prefix. Furthermore, we propose a Streaming Self-Attention (SSA) mechanism tailored for the Decoder-only architecture. It is capable of obtaining translation policy by assessing the sufficiency of input source information and integrating with the soft-attention mechanism to generate translations. Experiments demonstrate that our approach achieves state-of-the-art performance on three translation tasks.", }
Simultaneous Machine Translation (SiMT) generates translation while reading source tokens, essentially producing the target prefix based on the source prefix. To achieve good performance, it leverages the relationship between source and target prefixes to extract a policy to guide the generation of translations. Although existing SiMT methods primarily focus on the Encoder-Decoder architecture, we explore the potential of the Decoder-only architecture, owing to its superior performance in various tasks and its inherent compatibility with SiMT. However, directly applying the Decoder-only architecture to SiMT poses challenges in terms of training and inference. To alleviate the above problems, we propose the first Decoder-only SiMT model, named Decoder-only Streaming Transformer (DST). Specifically, DST separately encodes the positions of the source and target prefixes, ensuring that the position of the target prefix remains unaffected by the expansion of the source prefix. Furthermore, we propose a Streaming Self-Attention (SSA) mechanism tailored for the Decoder-only architecture. It is capable of obtaining translation policy by assessing the sufficiency of input source information and integrating with the soft-attention mechanism to generate translations. Experiments demonstrate that our approach achieves state-of-the-art performance on three translation tasks.
[ "Guo, Shoutao", "Zhang, Shaolei", "Feng, Yang" ]
Decoder-only Streaming Transformer for Simultaneous Translation
acl-long.480
Poster
2406.03878
[ "https://github.com/ictnlp/DST" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.480/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.481.bib
@inproceedings{zhang-etal-2024-defending, title = "Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization", author = "Zhang, Zhexin and Yang, Junxiao and Ke, Pei and Mi, Fei and Wang, Hongning and Huang, Minlie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.481", pages = "8865--8887", abstract = "While significant attention has been dedicated to exploiting weaknesses in LLMs through jailbreaking attacks, there remains a paucity of effort in defending against these attacks. We point out a pivotal factor contributing to the success of jailbreaks: the intrinsic conflict between the goals of being helpful and ensuring safety. Accordingly, we propose to integrate goal prioritization at both training and inference stages to counteract. Implementing goal prioritization during inference substantially diminishes the Attack Success Rate (ASR) of jailbreaking from 66.4{\%} to 3.6{\%} for ChatGPT. And integrating goal prioritization into model training reduces the ASR from 71.0{\%} to 6.6{\%} for Llama2-13B. Remarkably, even in scenarios where no jailbreaking samples are included during training, our approach slashes the ASR by half. Additionally, our findings reveal that while stronger LLMs face greater safety risks, they also possess a greater capacity to be steered towards defending against such attacks, both because of their stronger ability in instruction following. Our work thus contributes to the comprehension of jailbreaking attacks and defenses, and sheds light on the relationship between LLMs{'} capability and safety. Our code is available at https://github.com/thu-coai/JailbreakDefense{\_}GoalPriority.", }
While significant attention has been dedicated to exploiting weaknesses in LLMs through jailbreaking attacks, there remains a paucity of effort in defending against these attacks. We point out a pivotal factor contributing to the success of jailbreaks: the intrinsic conflict between the goals of being helpful and ensuring safety. Accordingly, we propose to integrate goal prioritization at both the training and inference stages to counteract such attacks. Implementing goal prioritization during inference substantially diminishes the Attack Success Rate (ASR) of jailbreaking from 66.4% to 3.6% for ChatGPT. Integrating goal prioritization into model training reduces the ASR from 71.0% to 6.6% for Llama2-13B. Remarkably, even in scenarios where no jailbreaking samples are included during training, our approach slashes the ASR by half. Additionally, our findings reveal that while stronger LLMs face greater safety risks, they also possess a greater capacity to be steered towards defending against such attacks, owing to their stronger ability in instruction following. Our work thus contributes to the comprehension of jailbreaking attacks and defenses, and sheds light on the relationship between LLMs' capability and safety. Our code is available at https://github.com/thu-coai/JailbreakDefense_GoalPriority.
[ "Zhang, Zhexin", "Yang, Junxiao", "Ke, Pei", "Mi, Fei", "Wang, Hongning", "Huang, Minlie" ]
Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization
acl-long.481
Poster
2311.09096
[ "https://github.com/thu-coai/jailbreakdefense_goalpriority" ]
https://huggingface.co/papers/2311.09096
2
0
0
4
https://aclanthology.org/2024.acl-long.481/
[]
[]
[ "TrustSafeAI/Defensive-Prompt-Patch-Jailbreak-Defense" ]
1
https://aclanthology.org/2024.acl-long.482.bib
@inproceedings{thrush-etal-2024-strange, title = "{I} am a Strange Dataset: Metalinguistic Tests for Language Models", author = "Thrush, Tristan and Moore, Jared and Monares, Miguel and Potts, Christopher and Kiela, Douwe", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.482", pages = "8888--8907", abstract = "Statements involving metalinguistic self-reference ({``}This paper has six sections.{''}) are prevalent in many domains. Can large language models (LLMs) handle such language? In this paper, we present {``}I am a Strange Dataset{''}, a new dataset for addressing this question. There are two subtasks: generation and verification. In generation, models continue statements like {``}The penultimate word in this sentence is{''} (where a correct continuation is {``}is{''}). In verification, models judge the truth of statements like {``}The penultimate word in this sentence is sentence.{''} (false). We also provide minimally different metalinguistic non-self-reference examples to complement the main dataset by probing for whether models can handle metalinguistic language at all. The dataset is hand-crafted by experts and validated by non-expert annotators. We test a variety of open-source LLMs (7B to 70B parameters) as well as closed-source LLMs through APIs. All models perform close to chance across both subtasks and even on the non-self-referential metalinguistic control data, though we find some steady improvement with model scale. GPT 4 is the only model to consistently do significantly better than chance, and it is still only in the 60{\%} range, while our untrained human annotators score well in the 89-93{\%} range. The dataset and evaluation toolkit are available at https://github.com/TristanThrush/i-am-a-strange-dataset", }
Statements involving metalinguistic self-reference ("This paper has six sections.") are prevalent in many domains. Can large language models (LLMs) handle such language? In this paper, we present "I am a Strange Dataset", a new dataset for addressing this question. There are two subtasks: generation and verification. In generation, models continue statements like "The penultimate word in this sentence is" (where a correct continuation is "is"). In verification, models judge the truth of statements like "The penultimate word in this sentence is sentence." (false). We also provide minimally different metalinguistic non-self-reference examples to complement the main dataset by probing for whether models can handle metalinguistic language at all. The dataset is hand-crafted by experts and validated by non-expert annotators. We test a variety of open-source LLMs (7B to 70B parameters) as well as closed-source LLMs through APIs. All models perform close to chance across both subtasks and even on the non-self-referential metalinguistic control data, though we find some steady improvement with model scale. GPT-4 is the only model to consistently do significantly better than chance, and it is still only in the 60% range, while our untrained human annotators score well in the 89-93% range. The dataset and evaluation toolkit are available at https://github.com/TristanThrush/i-am-a-strange-dataset.
[ "Thrush, Tristan", "Moore, Jared", "Monares, Miguel", "Potts, Christopher", "Kiela, Douwe" ]
I am a Strange Dataset: Metalinguistic Tests for Language Models
acl-long.482
Poster
2401.05300
[ "https://github.com/tristanthrush/i-am-a-strange-dataset" ]
https://huggingface.co/papers/2401.05300
1
4
0
5
https://aclanthology.org/2024.acl-long.482/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.483.bib
@inproceedings{zhang-etal-2024-truthx, title = "{T}ruth{X}: Alleviating Hallucinations by Editing Large Language Models in Truthful Space", author = "Zhang, Shaolei and Yu, Tian and Feng, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.483", pages = "8908--8949", abstract = "Large Language Models (LLMs) sometimes suffer from producing hallucinations, especially LLMs may generate untruthful responses despite knowing the correct knowledge. Activating the truthfulness within LLM is the key to fully unlocking LLM{'}s knowledge potential. In this paper, we propose TruthX, an inference-time intervention method to activate the truthfulness of LLM by identifying and editing the features within LLM{'}s internal representations that govern the truthfulness. TruthX employs an auto-encoder to map LLM{'}s representations into semantic and truthful latent spaces respectively, and applies contrastive learning to identify a truthful editing direction within the truthful space. During inference, by editing LLM{'}s internal representations in truthful space, TruthX effectively enhances the truthfulness of LLM. Experiments show that TruthX improves the truthfulness of 13 advanced LLMs by an average of 20{\%} on TruthfulQA benchmark. Further analyses suggest that TruthX can control LLM to produce truthful or hallucinatory responses via editing only one vector in LLM{'}s internal representations.", }
Large Language Models (LLMs) sometimes suffer from producing hallucinations; in particular, LLMs may generate untruthful responses despite possessing the correct knowledge. Activating the truthfulness within an LLM is the key to fully unlocking its knowledge potential. In this paper, we propose TruthX, an inference-time intervention method to activate the truthfulness of an LLM by identifying and editing the features within the LLM's internal representations that govern truthfulness. TruthX employs an auto-encoder to map the LLM's representations into semantic and truthful latent spaces respectively, and applies contrastive learning to identify a truthful editing direction within the truthful space. During inference, by editing the LLM's internal representations in the truthful space, TruthX effectively enhances the truthfulness of the LLM. Experiments show that TruthX improves the truthfulness of 13 advanced LLMs by an average of 20% on the TruthfulQA benchmark. Further analyses suggest that TruthX can control an LLM to produce truthful or hallucinatory responses by editing only one vector in its internal representations.
[ "Zhang, Shaolei", "Yu, Tian", "Feng, Yang" ]
TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
acl-long.483
Poster
2402.17811
[ "https://github.com/ictnlp/truthx" ]
https://huggingface.co/papers/2402.17811
1
1
0
3
https://aclanthology.org/2024.acl-long.483/
[ "ICTNLP/Llama-2-7b-chat-TruthX", "ICTNLP/TruthX" ]
[]
[]
1
https://aclanthology.org/2024.acl-long.484.bib
@inproceedings{zhuo-etal-2024-protllm, title = "{P}rot{LLM}: An Interleaved Protein-Language {LLM} with Protein-as-Word Pre-Training", author = "Zhuo, Le and Chi, Zewen and Xu, Minghao and Huang, Heyan and Zhao, Jianan and Zheng, Heqi and He, Conghui and Mao, Xian-Ling and Zhang, Wentao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.484", pages = "8950--8963", abstract = "We propose ProtLLM, a versatile cross-modal large language model (LLM) for both protein-centric and protein-language tasks. ProtLLM features a unique dynamic protein mounting mechanism, enabling it to handle complex inputs where the natural language text is interspersed with an arbitrary number of proteins. Besides, we propose the protein-as-word language modeling approach to train ProtLLM. By developing a specialized protein vocabulary, we equip the model with the capability to predict not just natural language but also proteins from a vast pool of candidates. Additionally, we construct a large-scale interleaved protein-text dataset, named InterPT, for pre-training. This dataset comprehensively encompasses both (1) structured data sources like protein annotations and (2) unstructured data sources like biological research papers, thereby endowing ProtLLM with crucial knowledge for understanding proteins. We evaluate ProtLLM on classic supervised protein-centric tasks and explore its novel protein-language applications. Experimental results demonstrate that ProtLLM not only achieves superior performance against protein-specialized baselines on protein-centric tasks but also induces zero-shot and in-context learning capabilities on protein-language tasks.", }
We propose ProtLLM, a versatile cross-modal large language model (LLM) for both protein-centric and protein-language tasks. ProtLLM features a unique dynamic protein mounting mechanism, enabling it to handle complex inputs where the natural language text is interspersed with an arbitrary number of proteins. Besides, we propose the protein-as-word language modeling approach to train ProtLLM. By developing a specialized protein vocabulary, we equip the model with the capability to predict not just natural language but also proteins from a vast pool of candidates. Additionally, we construct a large-scale interleaved protein-text dataset, named InterPT, for pre-training. This dataset comprehensively encompasses both (1) structured data sources like protein annotations and (2) unstructured data sources like biological research papers, thereby endowing ProtLLM with crucial knowledge for understanding proteins. We evaluate ProtLLM on classic supervised protein-centric tasks and explore its novel protein-language applications. Experimental results demonstrate that ProtLLM not only achieves superior performance against protein-specialized baselines on protein-centric tasks but also induces zero-shot and in-context learning capabilities on protein-language tasks.
[ "Zhuo, Le", "Chi, Zewen", "Xu, Minghao", "Huang, Heyan", "Zhao, Jianan", "Zheng, Heqi", "He, Conghui", "Mao, Xian-Ling", "Zhang, Wentao" ]
ProtLLM: An Interleaved Protein-Language LLM with Protein-as-Word Pre-Training
acl-long.484
Poster
2403.07920
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.484/
[]
[]
[]
0
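The "protein-as-word" idea in the record above amounts to extending the text vocabulary with protein identifiers so a protein can be predicted as a single token. A minimal sketch, with invented token IDs and illustrative UniProt accessions:

```python
# Hedged sketch of a protein-as-word vocabulary: protein entries become
# ordinary vocabulary items that can be interleaved with text tokens.
text_vocab = {"the": 0, "binds": 1, "to": 2}
protein_ids = ["P69905", "P68871"]  # illustrative UniProt accessions

protein_vocab = {
    f"<protein:{pid}>": len(text_vocab) + i for i, pid in enumerate(protein_ids)
}
vocab = {**text_vocab, **protein_vocab}

# An interleaved training sample might then tokenize as:
# "the <protein:P69905> binds to <protein:P68871>"
print(vocab)
```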
https://aclanthology.org/2024.acl-long.485.bib
@inproceedings{zhang-etal-2024-streamspeech, title = "{S}tream{S}peech: Simultaneous Speech-to-Speech Translation with Multi-task Learning", author = "Zhang, Shaolei and Fang, Qingkai and Guo, Shoutao and Ma, Zhengrui and Zhang, Min and Feng, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.485", pages = "8964--8986", abstract = "Simultaneous speech-to-speech translation (Simul-S2ST, a.k.a. streaming speech translation) outputs target speech while receiving streaming speech inputs, which is critical for real-time communication. Beyond accomplishing translation between speech streams, Simul-S2ST requires a policy that controls the model to generate the corresponding target speech at the opportune moment within the speech inputs, thereby posing a double challenge of translation and policy. In this paper, we propose StreamSpeech, a direct Simul-S2ST model that jointly learns translation and simultaneous policy in a unified framework of multi-task learning. Adhering to a multi-task learning approach, StreamSpeech can perform offline and simultaneous speech recognition, speech translation and speech synthesis via an {``}All-in-One{''} seamless model. Experiments on the CVSS benchmark demonstrate that StreamSpeech achieves state-of-the-art performance in both offline S2ST and Simul-S2ST tasks. In addition, StreamSpeech is able to present high-quality intermediate results (i.e., ASR or translation results) during the simultaneous translation process, offering a more comprehensive real-time communication experience.", }
Simultaneous speech-to-speech translation (Simul-S2ST, a.k.a. streaming speech translation) outputs target speech while receiving streaming speech inputs, which is critical for real-time communication. Beyond accomplishing translation between speech streams, Simul-S2ST requires a policy that controls the model to generate the corresponding target speech at the opportune moment within the speech inputs, thereby posing a double challenge of translation and policy. In this paper, we propose StreamSpeech, a direct Simul-S2ST model that jointly learns translation and simultaneous policy in a unified framework of multi-task learning. Adhering to a multi-task learning approach, StreamSpeech can perform offline and simultaneous speech recognition, speech translation and speech synthesis via an {``}All-in-One{''} seamless model. Experiments on the CVSS benchmark demonstrate that StreamSpeech achieves state-of-the-art performance in both offline S2ST and Simul-S2ST tasks. In addition, StreamSpeech is able to present high-quality intermediate results (i.e., ASR or translation results) during the simultaneous translation process, offering a more comprehensive real-time communication experience.
[ "Zhang, Shaolei", "Fang, Qingkai", "Guo, Shoutao", "Ma, Zhengrui", "Zhang, Min", "Feng, Yang" ]
StreamSpeech: Simultaneous Speech-to-Speech Translation with Multi-task Learning
acl-long.485
Poster
2406.03049
[ "https://github.com/ictnlp/streamspeech" ]
https://huggingface.co/papers/2406.03049
0
0
0
6
https://aclanthology.org/2024.acl-long.485/
[ "ICTNLP/ComSpeech_Models" ]
[]
[]
1
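The policy/translation split in the record above can be pictured as a read/write loop: keep consuming source speech, and emit target units whenever the policy decides enough input has arrived. The sketch below is a generic simultaneous decoding loop under that assumption, not StreamSpeech's actual learned, alignment-based policy; all four callables are hypothetical.

```python
# Hedged sketch of a generic simultaneous read/write loop for Simul-S2ST.
from typing import Callable, Iterable, Iterator, List

def simul_s2st(
    chunks: Iterable[bytes],
    policy: Callable[[List[bytes], List[int]], bool],
    translator: Callable[[List[bytes], List[int]], int],
    vocoder: Callable[[int], bytes],
) -> Iterator[bytes]:
    source: List[bytes] = []
    outputs: List[int] = []
    for chunk in chunks:
        source.append(chunk)            # READ one incoming speech chunk
        while policy(source, outputs):  # WRITE while the policy allows it
            unit = translator(source, outputs)
            outputs.append(unit)
            yield vocoder(unit)         # stream target speech immediately
```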
https://aclanthology.org/2024.acl-long.486.bib
@inproceedings{ju-etal-2024-investigating, title = "Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models", author = "Ju, Tianjie and Chen, Yijin and Yuan, Xinwei and Zhang, Zhuosheng and Du, Wei and Zheng, Yubin and Liu, Gongshen", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.486", pages = "8987--9001", abstract = "Recent work has showcased the powerful capability of large language models (LLMs) in recalling knowledge and reasoning. However, the reliability of LLMs in combining these two capabilities into reasoning through multi-hop facts has not been widely explored. This paper systematically investigates the possibilities for LLMs to utilize shortcuts based on direct connections between the initial and terminal entities of multi-hop knowledge. We first explore the existence of factual shortcuts through Knowledge Neurons, revealing that: (i) the strength of factual shortcuts is highly correlated with the frequency of co-occurrence of initial and terminal entities in the pre-training corpora; (ii) few-shot prompting leverages more shortcuts in answering multi-hop questions compared to chain-of-thought prompting. Then, we analyze the risks posed by factual shortcuts from the perspective of multi-hop knowledge editing. Analysis shows that approximately 20{\%} of the failures are attributed to shortcuts, and the initial and terminal entities in these failure instances usually have higher co-occurrences in the pre-training corpus. Finally, we propose erasing shortcut neurons to mitigate the associated risks and find that this approach significantly reduces failures in multi-hop knowledge editing caused by shortcuts. Code is publicly available at https://github.com/Jometeorie/MultiHopShortcuts.", }
Recent work has showcased the powerful capability of large language models (LLMs) in recalling knowledge and reasoning. However, the reliability of LLMs in combining these two capabilities into reasoning through multi-hop facts has not been widely explored. This paper systematically investigates the possibilities for LLMs to utilize shortcuts based on direct connections between the initial and terminal entities of multi-hop knowledge. We first explore the existence of factual shortcuts through Knowledge Neurons, revealing that: (i) the strength of factual shortcuts is highly correlated with the frequency of co-occurrence of initial and terminal entities in the pre-training corpora; (ii) few-shot prompting leverages more shortcuts in answering multi-hop questions compared to chain-of-thought prompting. Then, we analyze the risks posed by factual shortcuts from the perspective of multi-hop knowledge editing. Analysis shows that approximately 20{\%} of the failures are attributed to shortcuts, and the initial and terminal entities in these failure instances usually have higher co-occurrences in the pre-training corpus. Finally, we propose erasing shortcut neurons to mitigate the associated risks and find that this approach significantly reduces failures in multi-hop knowledge editing caused by shortcuts. Code is publicly available at https://github.com/Jometeorie/MultiHopShortcuts.
[ "Ju, Tianjie", "Chen, Yijin", "Yuan, Xinwei", "Zhang, Zhuosheng", "Du, Wei", "Zheng, Yubin", "Liu, Gongshen" ]
Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models
acl-long.486
Poster
2402.11900
[ "https://github.com/jometeorie/multihopshortcuts" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.486/
[]
[]
[]
0
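The mitigation in the record above, erasing shortcut neurons, can be sketched as zeroing the columns of a transformer MLP down-projection that correspond to the attributed neurons. The layer shapes and neuron indices below are illustrative; the paper locates the neurons with Knowledge Neurons-style attribution.

```python
# Hedged sketch of "erasing shortcut neurons" in one MLP layer.
import torch

ffn_down = torch.nn.Linear(11008, 4096, bias=False)  # intermediate -> hidden
shortcut_neurons = [17, 805, 9421]                    # hypothetical attributed indices

with torch.no_grad():
    # Intermediate neuron i feeds column i of the down-projection, so zeroing
    # the column removes that neuron's contribution entirely.
    ffn_down.weight[:, shortcut_neurons] = 0.0
```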
https://aclanthology.org/2024.acl-long.487.bib
@inproceedings{zayed-etal-2024-dont, title = "Why Don{'}t Prompt-Based Fairness Metrics Correlate?", author = "Zayed, Abdelrahman and Mordido, Goncalo and Baldini, Ioana and Chandar, Sarath", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.487", pages = "9002--9019", abstract = "The widespread use of large language models has brought up essential questions about the potential biases these models might learn. This led to the development of several metrics aimed at evaluating and mitigating these biases. In this paper, we first demonstrate that prompt-based fairness metrics exhibit poor agreement, as measured by correlation, raising important questions about the reliability of fairness assessment using prompts. Then, we outline six relevant reasons why such a low correlation is observed across existing metrics. Based on these insights, we propose a method called Correlated Fairness Output (CAIRO) to enhance the correlation between fairness metrics. CAIRO augments the original prompts of a given fairness metric by using several pre-trained language models and then selects the combination of the augmented prompts that achieves the highest correlation across metrics. We show a significant improvement in Pearson correlation from 0.3 and 0.18 to 0.90 and 0.98 across metrics for gender and religion biases, respectively. Our code is available at https://github.com/chandar-lab/CAIRO.", }
The widespread use of large language models has brought up essential questions about the potential biases these models might learn. This led to the development of several metrics aimed at evaluating and mitigating these biases. In this paper, we first demonstrate that prompt-based fairness metrics exhibit poor agreement, as measured by correlation, raising important questions about the reliability of fairness assessment using prompts. Then, we outline six relevant reasons why such a low correlation is observed across existing metrics. Based on these insights, we propose a method called Correlated Fairness Output (CAIRO) to enhance the correlation between fairness metrics. CAIRO augments the original prompts of a given fairness metric by using several pre-trained language models and then selects the combination of the augmented prompts that achieves the highest correlation across metrics. We show a significant improvement in Pearson correlation from 0.3 and 0.18 to 0.90 and 0.98 across metrics for gender and religion biases, respectively. Our code is available at https://github.com/chandar-lab/CAIRO.
[ "Zayed, Abdelrahman", "Mordido, Goncalo", "Baldini, Ioana", "Ch", "ar, Sarath" ]
Why Don't Prompt-Based Fairness Metrics Correlate?
acl-long.487
Poster
2406.05918
[ "https://github.com/chandar-lab/cairo" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.487/
[]
[]
[]
0
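CAIRO's selection step, as described in the record above, searches over augmented-prompt combinations and keeps the one whose per-model bias scores agree best across metrics. A toy sketch with made-up scores for two candidate augmentation sets and two metrics:

```python
# Hedged sketch of a CAIRO-style selection criterion: maximize the mean
# pairwise Pearson correlation of metric scores across a panel of LLMs.
from itertools import combinations
from scipy.stats import pearsonr

# metric_scores[candidate][metric] = one bias score per evaluated LLM
metric_scores = {
    "aug_set_A": {"m1": [0.20, 0.50, 0.40], "m2": [0.10, 0.60, 0.30]},
    "aug_set_B": {"m1": [0.30, 0.30, 0.90], "m2": [0.80, 0.20, 0.40]},
}

def mean_pairwise_corr(scores: dict) -> float:
    pairs = list(combinations(scores.values(), 2))
    return sum(pearsonr(a, b)[0] for a, b in pairs) / len(pairs)

best = max(metric_scores, key=lambda c: mean_pairwise_corr(metric_scores[c]))
print(best)  # the augmentation set whose metrics correlate most
```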
https://aclanthology.org/2024.acl-long.488.bib
@inproceedings{tonneau-etal-2024-naijahate, title = "{N}aija{H}ate: Evaluating Hate Speech Detection on {N}igerian {T}witter Using Representative Data", author = "Tonneau, Manuel and Quinta De Castro, Pedro and Lasri, Karim and Farouq, Ibrahim and Subramanian, Lakshmi and Orozco-Olvera, Victor and Fraiberger, Samuel", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.488", pages = "9020--9040", abstract = "To address the global issue of online hate, hate speech detection (HSD) systems are typically developed on datasets from the United States, thereby failing to generalize to English dialects from the Majority World. Furthermore, HSD models are often evaluated on non-representative samples, raising concerns about overestimating model performance in real-world settings. In this work, we introduce NaijaHate, the first dataset annotated for HSD that contains a representative sample of Nigerian tweets. We demonstrate that HSD evaluated on biased datasets traditionally used in the literature consistently overestimates real-world performance by at least two-fold. We then propose NaijaXLM-T, a pretrained model tailored to the Nigerian Twitter context, and establish the key role played by domain-adaptive pretraining and finetuning in maximizing HSD performance. Finally, owing to the modest performance of HSD systems in real-world conditions, we find that content moderators would need to review about ten thousand Nigerian tweets flagged as hateful daily to moderate 60{\%} of all hateful content, highlighting the challenges of moderating hate speech at scale as social media usage continues to grow globally. Taken together, these results pave the way towards robust HSD systems and better protection of social media users from hateful content in low-resource settings.", }
To address the global issue of online hate, hate speech detection (HSD) systems are typically developed on datasets from the United States, thereby failing to generalize to English dialects from the Majority World. Furthermore, HSD models are often evaluated on non-representative samples, raising concerns about overestimating model performance in real-world settings. In this work, we introduce NaijaHate, the first dataset annotated for HSD that contains a representative sample of Nigerian tweets. We demonstrate that HSD evaluated on biased datasets traditionally used in the literature consistently overestimates real-world performance by at least two-fold. We then propose NaijaXLM-T, a pretrained model tailored to the Nigerian Twitter context, and establish the key role played by domain-adaptive pretraining and finetuning in maximizing HSD performance. Finally, owing to the modest performance of HSD systems in real-world conditions, we find that content moderators would need to review about ten thousand Nigerian tweets flagged as hateful daily to moderate 60{\%} of all hateful content, highlighting the challenges of moderating hate speech at scale as social media usage continues to grow globally. Taken together, these results pave the way towards robust HSD systems and better protection of social media users from hateful content in low-resource settings.
[ "Tonneau, Manuel", "Quinta De Castro, Pedro", "Lasri, Karim", "Farouq, Ibrahim", "Subramanian, Lakshmi", "Orozco-Olvera, Victor", "Fraiberger, Samuel" ]
NaijaHate: Evaluating Hate Speech Detection on Nigerian Twitter Using Representative Data
acl-long.488
Poster
2403.19260
[ "https://github.com/manueltonneau/naijahate" ]
https://huggingface.co/papers/2403.19260
0
0
0
7
https://aclanthology.org/2024.acl-long.488/
[]
[]
[]
1
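The moderation claim in the record above follows from simple arithmetic on classifier quality: at a given recall, the number of flagged posts a moderator must review scales with the hateful volume divided by precision. The numbers below are invented purely to reproduce the order of magnitude in the abstract, not taken from the paper.

```python
# Hedged back-of-the-envelope for daily review load under a HSD classifier.
def daily_review_load(volume: int, prevalence: float, recall: float, precision: float) -> float:
    caught_hateful = volume * prevalence * recall   # true positives per day
    return caught_hateful / precision               # total flagged posts to review

# Illustrative: 5M tweets/day, 0.2% hateful, 60% recall, 60% precision
print(daily_review_load(5_000_000, 0.002, 0.60, 0.60))  # -> 10000.0
```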
https://aclanthology.org/2024.acl-long.489.bib
@inproceedings{chen-etal-2024-m3av, title = "{M}$^3${AV}: A Multimodal, Multigenre, and Multipurpose Audio-Visual Academic Lecture Dataset", author = "Chen, Zhe and Liu, Heyang and Yu, Wenyi and Sun, Guangzhi and Liu, Hongcheng and Wu, Ji and Zhang, Chao and Wang, Yu and Wang, Yanfeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.489", pages = "9041--9060", abstract = "Publishing open-source academic video recordings is an emergent and prevalent approach to sharing knowledge online. Such videos carry rich multimodal information including speech, the facial and body movements of the speakers, as well as the texts and pictures in the slides and possibly even the papers. Although multiple academic video datasets have been constructed and released, few of them support both multimodal content recognition and understanding tasks, which is partially due to the lack of high-quality human annotations. In this paper, we propose a novel multimodal, multigenre, and multipurpose audio-visual academic lecture dataset (M$^3$AV), which has almost 367 hours of videos from five sources covering computer science, mathematics, and medical and biology topics. With high-quality human annotations of the slide text and spoken words, in particular high-value named entities, the dataset can be used for multiple audio-visual recognition and understanding tasks. Evaluations performed on contextual speech recognition, speech synthesis, and slide and script generation tasks demonstrate that the diversity of M$^3$AV makes it a challenging dataset.", }
Publishing open-source academic video recordings is an emergent and prevalent approach to sharing knowledge online. Such videos carry rich multimodal information including speech, the facial and body movements of the speakers, as well as the texts and pictures in the slides and possibly even the papers. Although multiple academic video datasets have been constructed and released, few of them support both multimodal content recognition and understanding tasks, which is partially due to the lack of high-quality human annotations. In this paper, we propose a novel multimodal, multigenre, and multipurpose audio-visual academic lecture dataset (M$^3$AV), which has almost 367 hours of videos from five sources covering computer science, mathematics, and medical and biology topics. With high-quality human annotations of the slide text and spoken words, in particular high-value named entities, the dataset can be used for multiple audio-visual recognition and understanding tasks. Evaluations performed on contextual speech recognition, speech synthesis, and slide and script generation tasks demonstrate that the diversity of M$^3$AV makes it a challenging dataset.
[ "Chen, Zhe", "Liu, Heyang", "Yu, Wenyi", "Sun, Guangzhi", "Liu, Hongcheng", "Wu, Ji", "Zhang, Chao", "Wang, Yu", "Wang, Yanfeng" ]
M^3AV: A Multimodal, Multigenre, and Multipurpose Audio-Visual Academic Lecture Dataset
acl-long.489
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.489/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.490.bib
@inproceedings{yang-etal-2024-mitigating, title = "Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination", author = "Yang, Nakyeong and Kang, Taegwan and Choi, Stanley Jungkyu and Lee, Honglak and Jung, Kyomin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.490", pages = "9061--9073", abstract = "Instruction-following language models often show undesirable biases. These undesirable biases may be accelerated in the real-world usage of language models, where a wide range of instructions is used through zero-shot example prompting. To solve this problem, we first define the bias neuron, which significantly affects biased outputs, and prove its existence empirically. Furthermore, we propose a novel and practical bias mitigation method, CRISPR, to eliminate bias neurons of language models in instruction-following settings. CRISPR automatically determines biased outputs and categorizes neurons that affect the biased outputs as bias neurons using an explainability method. Experimental results demonstrate the effectiveness of our method in mitigating biases under zero-shot instruction-following settings without losing the model{'}s task performance and existing knowledge. The experimental results reveal the generalizability of our method as it shows robustness under various instructions and datasets. Surprisingly, our method can mitigate the bias in language models by eliminating only a few neurons (at least three).", }
Instruction-following language models often show undesirable biases. These undesirable biases may be accelerated in the real-world usage of language models, where a wide range of instructions is used through zero-shot example prompting. To solve this problem, we first define the bias neuron, which significantly affects biased outputs, and prove its existence empirically. Furthermore, we propose a novel and practical bias mitigation method, CRISPR, to eliminate bias neurons of language models in instruction-following settings. CRISPR automatically determines biased outputs and categorizes neurons that affect the biased outputs as bias neurons using an explainability method. Experimental results demonstrate the effectiveness of our method in mitigating biases under zero-shot instruction-following settings without losing the model{'}s task performance and existing knowledge. The experimental results reveal the generalizability of our method as it shows robustness under various instructions and datasets. Surprisingly, our method can mitigate the bias in language models by eliminating only a few neurons (at least three).
[ "Yang, Nakyeong", "Kang, Taegwan", "Choi, Stanley Jungkyu", "Lee, Honglak", "Jung, Kyomin" ]
Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination
acl-long.490
Poster
2311.09627
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.490/
[]
[]
[]
0
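The record above identifies "bias neurons" via an explainability method and then eliminates them. A common attribution recipe, shown here only as a stand-in for CRISPR's actual procedure, scores each neuron by activation times gradient with respect to a biased output:

```python
# Hedged sketch: rank candidate bias neurons by |activation * gradient| on a
# biased output score; the top-scoring few would then be eliminated (zeroed).
import torch

activations = torch.randn(11008, requires_grad=True)  # one layer's neurons
readout = torch.randn(11008)                          # stand-in downstream weights

biased_logit = (activations * readout).sum()          # stand-in biased output
biased_logit.backward()

attribution = (activations.detach() * activations.grad).abs()
bias_neurons = attribution.topk(3).indices            # "only a few neurons"
print(bias_neurons)
```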
https://aclanthology.org/2024.acl-long.491.bib
@inproceedings{zhang-etal-2024-domain, title = "Domain Adaptation for Subjective Induction Questions Answering on Products by Adversarial Disentangled Learning", author = "Zhang, Yufeng and Yu, Jianxing and Rao, Yanghui and Zheng, Libin and Su, Qinliang and Zhu, Huaijie and Yin, Jian", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.491", pages = "9074--9089", abstract = "This paper focuses on answering subjective questions about products. Unlike factoid questions with a single answer span, subjective questions involve multiple viewpoints. For example, the question {`}how is the phone{'}s battery?{'} involves not only facts about battery capacity but also users{'} opinions on the battery{'}s pros and cons. A good answer should be able to integrate these heterogeneous and even inconsistent viewpoints, which is formalized as a subjective induction QA task. For this task, the data distributions are often imbalanced across different product domains. It is hard for traditional methods to work well without considering the shift of domain patterns. To address this problem, we propose a novel domain-adaptive model. Concretely, for each sample in the source and target domains, we first retrieve answer-related knowledge and represent it independently. To facilitate knowledge transfer, we then disentangle the representations into domain-invariant and domain-specific latent factors. Moreover, we develop an adversarial discriminator with contrastive learning to reduce the impact of out-of-domain bias. Based on the learned latent vectors in a target domain, we generate multi-perspective summaries as inductive answers. Experiments on popular datasets show the effectiveness of our method.", }
This paper focuses on answering subjective questions about products. Unlike factoid questions with a single answer span, subjective questions involve multiple viewpoints. For example, the question {`}how is the phone{'}s battery?{'} involves not only facts about battery capacity but also users{'} opinions on the battery{'}s pros and cons. A good answer should be able to integrate these heterogeneous and even inconsistent viewpoints, which is formalized as a subjective induction QA task. For this task, the data distributions are often imbalanced across different product domains. It is hard for traditional methods to work well without considering the shift of domain patterns. To address this problem, we propose a novel domain-adaptive model. Concretely, for each sample in the source and target domains, we first retrieve answer-related knowledge and represent it independently. To facilitate knowledge transfer, we then disentangle the representations into domain-invariant and domain-specific latent factors. Moreover, we develop an adversarial discriminator with contrastive learning to reduce the impact of out-of-domain bias. Based on the learned latent vectors in a target domain, we generate multi-perspective summaries as inductive answers. Experiments on popular datasets show the effectiveness of our method.
[ "Zhang, Yufeng", "Yu, Jianxing", "Rao, Yanghui", "Zheng, Libin", "Su, Qinliang", "Zhu, Huaijie", "Yin, Jian" ]
Domain Adaptation for Subjective Induction Questions Answering on Products by Adversarial Disentangled Learning
acl-long.491
Oral
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.491/
[]
[]
[]
0
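The adversarial discriminator in the record above is typically trained against the encoder so the shared factors become domain-invariant; the standard building block for this is a gradient reversal layer. The sketch below shows that generic trick, not the paper's full setup (which also adds contrastive learning):

```python
# Hedged sketch of a gradient reversal layer for adversarial domain-invariant
# representation learning.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam: float):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity on the forward pass, flipped (scaled) gradient on the
        # backward pass, so the encoder learns to fool the discriminator.
        return -ctx.lam * grad_output, None

def grad_reverse(x: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lam)
```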
https://aclanthology.org/2024.acl-long.492.bib
@inproceedings{peng-etal-2024-revisiting, title = "Revisiting Demonstration Selection Strategies in In-Context Learning", author = "Peng, Keqin and Ding, Liang and Yuan, Yancheng and Liu, Xuebo and Zhang, Min and Ouyang, Yuanxin and Tao, Dacheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.492", pages = "9090--9101", abstract = "Large language models (LLMs) have shown an impressive ability to perform a wide range of tasks using in-context learning (ICL), where a few examples are used to describe a task to the model. However, the performance of ICL varies significantly with the choice of demonstrations, and previous research usually focuses on the data aspect ignoring the model{'}s effect. In this work, we first revisit the factors contributing to this variance from the model aspect, and find that the demonstration choice is both data- and model-dependent. We further propose a conjecture that the performance of a demonstration positively correlates with its contribution to the model{'}s understanding of the test samples, and accordingly propose a data- and model-dependent demonstration selection method, TopK + ConE. Empirically, our method yields consistent improvements in both language understanding and generation tasks with different model scales. Further analyses confirm that, besides the generality and stability under different circumstances, our method provides a unified explanation for the effectiveness of previous methods. Code is publicly available at https://github.com/Romainpkq/revisit{\_}demon{\_}selection{\_}in{\_}ICL.", }
Large language models (LLMs) have shown an impressive ability to perform a wide range of tasks using in-context learning (ICL), where a few examples are used to describe a task to the model. However, the performance of ICL varies significantly with the choice of demonstrations, and previous research usually focuses on the data aspect ignoring the model{'}s effect. In this work, we first revisit the factors contributing to this variance from the model aspect, and find that the demonstration choice is both data- and model-dependent. We further propose a conjecture that the performance of a demonstration positively correlates with its contribution to the model{'}s understanding of the test samples, and accordingly propose a data- and model-dependent demonstration selection method, TopK + ConE. Empirically, our method yields consistent improvements in both language understanding and generation tasks with different model scales. Further analyses confirm that, besides the generality and stability under different circumstances, our method provides a unified explanation for the effectiveness of previous methods. Code is publicly available at https://github.com/Romainpkq/revisit{\_}demon{\_}selection{\_}in{\_}ICL.
[ "Peng, Keqin", "Ding, Liang", "Yuan, Yancheng", "Liu, Xuebo", "Zhang, Min", "Ouyang, Yuanxin", "Tao, Dacheng" ]
Revisiting Demonstration Selection Strategies in In-Context Learning
acl-long.492
Poster
2401.12087
[ "https://github.com/romainpkq/revisit_demon_selection_in_icl" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.492/
[]
[]
[]
0
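The data- and model-dependent selection in the record above can be approximated in two stages: shortlist demonstrations by embedding similarity (TopK), then prefer those whose inclusion most lowers the model's loss on the test input, one concrete proxy for "contribution to the model's understanding." All helpers are schematic; `lm_loss` is a hypothetical callable returning a language model's NLL on a string.

```python
# Hedged two-stage sketch of TopK retrieval plus model-dependent reranking.
import numpy as np

def topk_by_similarity(test_vec: np.ndarray, demo_vecs: np.ndarray, k: int = 8):
    sims = demo_vecs @ test_vec / (
        np.linalg.norm(demo_vecs, axis=1) * np.linalg.norm(test_vec))
    return np.argsort(-sims)[:k]

def select_demonstrations(test_input, demos, test_vec, demo_vecs, lm_loss, k=8, m=4):
    shortlist = topk_by_similarity(test_vec, demo_vecs, k)
    # Lower conditional loss on the test input when a demo is prepended is
    # taken as better model "understanding" of the test sample.
    ranked = sorted(shortlist, key=lambda i: lm_loss(demos[i] + "\n" + test_input))
    return [demos[i] for i in ranked[:m]]
```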
https://aclanthology.org/2024.acl-long.493.bib
@inproceedings{zheng-etal-2024-multimodal, title = "Multimodal Table Understanding", author = "Zheng, Mingyu and Feng, Xinwei and Si, Qingyi and She, Qiaoqiao and Lin, Zheng and Jiang, Wenbin and Wang, Weiping", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.493", pages = "9102--9124", abstract = "Although great progress has been made by previous table understanding methods including recent approaches based on large language models (LLMs), they rely heavily on the premise that given tables must be converted into a certain text sequence (such as Markdown or HTML) to serve as model input. However, it is difficult to access such high-quality textual table representations in some real-world scenarios, and table images are much more accessible. Therefore, how to directly understand tables using intuitive visual information is a crucial and urgent challenge for developing more practical applications. In this paper, we propose a new problem, multimodal table understanding, where the model needs to generate correct responses to various table-related requests based on the given table image. To facilitate both the model training and evaluation, we construct a large-scale dataset named MMTab, which covers a wide spectrum of table images, instructions and tasks. On this basis, we develop Table-LLaVA, a generalist tabular multimodal large language model (MLLM), which significantly outperforms recent open-source MLLM baselines on 23 benchmarks under held-in and held-out settings.", }
Although great progress has been made by previous table understanding methods including recent approaches based on large language models (LLMs), they rely heavily on the premise that given tables must be converted into a certain text sequence (such as Markdown or HTML) to serve as model input. However, it is difficult to access such high-quality textual table representations in some real-world scenarios, and table images are much more accessible. Therefore, how to directly understand tables using intuitive visual information is a crucial and urgent challenge for developing more practical applications. In this paper, we propose a new problem, multimodal table understanding, where the model needs to generate correct responses to various table-related requests based on the given table image. To facilitate both the model training and evaluation, we construct a large-scale dataset named MMTab, which covers a wide spectrum of table images, instructions and tasks. On this basis, we develop Table-LLaVA, a generalist tabular multimodal large language model (MLLM), which significantly outperforms recent open-source MLLM baselines on 23 benchmarks under held-in and held-out settings.
[ "Zheng, Mingyu", "Feng, Xinwei", "Si, Qingyi", "She, Qiaoqiao", "Lin, Zheng", "Jiang, Wenbin", "Wang, Weiping" ]
Multimodal Table Understanding
acl-long.493
Poster
2406.08100
[ "https://github.com/spursgozmy/table-llava" ]
https://huggingface.co/papers/2406.08100
0
0
0
7
https://aclanthology.org/2024.acl-long.493/
[ "SpursgoZmy/table-llava-v1.5-7b", "SpursgoZmy/table-llava-v1.5-13b", "adinath/tablellava" ]
[ "SpursgoZmy/MMTab" ]
[]
1
https://aclanthology.org/2024.acl-long.494.bib
@inproceedings{lei-etal-2024-ex3, title = "Ex3: Automatic Novel Writing by Extracting, Excelsior and Expanding", author = "Lei, Huang and Guo, Jiaming and He, Guanhua and Zhang, Xishan and Zhang, Rui and Peng, Shaohui and Liu, Shaoli and Chen, Tianshi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.494", pages = "9125--9146", abstract = "Generating long-form texts such as novels using artificial intelligence has always been a challenge. A common approach is to use large language models (LLMs) to construct a hierarchical framework that first plans and then writes. Although the generated novels reach a sufficient length, they exhibit poor logical coherence and appeal in their plots, as well as deficiencies in character and event depiction, ultimately compromising the overall narrative quality. In this paper, we propose a method named Extracting, Excelsior and Expanding (Ex3). Ex3 initially extracts structural information by learning from raw novel data. By combining this structural information with the novel data, an instruction-following dataset is meticulously crafted. This dataset is then utilized to fine-tune the LLM, aiming for excelsior generation performance. In the final stage, a tree-like expansion method is deployed to facilitate the generation of arbitrarily long novels. Evaluation against previous methods showcases Ex3{'}s ability to produce higher-quality long-form novels.", }
Generating long-form texts such as novels using artificial intelligence has always been a challenge. A common approach is to use large language models (LLMs) to construct a hierarchical framework that first plans and then writes. Although the generated novels reach a sufficient length, they exhibit poor logical coherence and appeal in their plots, as well as deficiencies in character and event depiction, ultimately compromising the overall narrative quality. In this paper, we propose a method named Extracting, Excelsior and Expanding (Ex3). Ex3 initially extracts structural information by learning from raw novel data. By combining this structural information with the novel data, an instruction-following dataset is meticulously crafted. This dataset is then utilized to fine-tune the LLM, aiming for excelsior generation performance. In the final stage, a tree-like expansion method is deployed to facilitate the generation of arbitrarily long novels. Evaluation against previous methods showcases Ex3{'}s ability to produce higher-quality long-form novels.
[ "Lei, Huang", "Guo, Jiaming", "He, Guanhua", "Zhang, Xishan", "Zhang, Rui", "Peng, Shaohui", "Liu, Shaoli", "Chen, Tianshi" ]
Ex3: Automatic Novel Writing by Extracting, Excelsior and Expanding
acl-long.494
Poster
2408.08506
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.494/
[]
[]
[]
0
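The final "Expanding" stage in the record above can be pictured as recursive outline expansion: each node of the plot tree is expanded into children until leaves are passage-length, and the leaves are concatenated in order. A minimal sketch with a hypothetical `llm_expand` helper:

```python
# Hedged sketch of tree-like expansion for arbitrarily long generation.
from typing import Callable, List

def expand(node_text: str, llm_expand: Callable[[str], List[str]],
           depth: int = 0, max_depth: int = 3) -> str:
    if depth == max_depth:
        return node_text                      # leaf: final passage text
    children = llm_expand(node_text)          # e.g. split a beat into sub-beats
    return "".join(
        expand(child, llm_expand, depth + 1, max_depth) for child in children
    )
```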
https://aclanthology.org/2024.acl-long.495.bib
@inproceedings{patidar-etal-2024-shot, title = "Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning", author = "Patidar, Mayur and Sawhney, Riya and Singh, Avinash and Chatterjee, Biswajit and ., Mausam and Bhattacharya, Indrajit", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.495", pages = "9147--9165", abstract = "Existing Knowledge Base Question Answering (KBQA) architectures are hungry for annotated data, which makes them costly and time-consuming to deploy. We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples, but a large labeled training dataset is available in a source domain. We propose a novel KBQA architecture called FuSIC-KBQA that performs KB-retrieval using multiple source-trained retrievers, re-ranks the results using an LLM, and uses this as input for LLM few-shot in-context learning to generate logical forms, which are further refined using execution-guided feedback. Experiments over four source-target KBQA pairs of varying complexity show that FuSIC-KBQA significantly outperforms adaptations of SoTA KBQA models for this setting. Additional experiments in the in-domain setting show that FuSIC-KBQA also outperforms SoTA KBQA models when training data is limited.", }
Existing Knowledge Base Question Answering (KBQA) architectures are hungry for annotated data, which makes them costly and time-consuming to deploy. We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples, but a large labeled training dataset is available in a source domain. We propose a novel KBQA architecture called FuSIC-KBQA that performs KB-retrieval using multiple source-trained retrievers, re-ranks the results using an LLM, and uses this as input for LLM few-shot in-context learning to generate logical forms, which are further refined using execution-guided feedback. Experiments over four source-target KBQA pairs of varying complexity show that FuSIC-KBQA significantly outperforms adaptations of SoTA KBQA models for this setting. Additional experiments in the in-domain setting show that FuSIC-KBQA also outperforms SoTA KBQA models when training data is limited.
[ "Patidar, Mayur", "Sawhney, Riya", "Singh, Avinash", "Chatterjee, Biswajit", "., Mausam", "Bhattacharya, Indrajit" ]
Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning
acl-long.495
Oral
2311.08894
[ "https://github.com/dair-iitd/fusic-kbqa" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.495/
[]
[]
[]
0
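The FuSIC-KBQA flow in the record above (multi-retriever KB retrieval, LLM re-ranking, few-shot logical-form generation, execution-guided refinement) reads naturally as a pipeline. The sketch below is only a schematic of that flow; every helper and the `result` shape are hypothetical.

```python
# Hedged pipeline sketch of a FuSIC-KBQA-style loop.
def fusic_kbqa(question, retrievers, llm_rerank, llm_generate, execute,
               few_shots, max_tries: int = 3):
    # 1) KB retrieval with multiple source-trained retrievers
    candidates = [item for r in retrievers for item in r(question)]
    # 2) LLM re-ranking of the retrieved KB elements
    context = llm_rerank(question, candidates)
    feedback = None
    for _ in range(max_tries):
        # 3) few-shot in-context generation of a logical form
        logical_form = llm_generate(question, context, few_shots, feedback)
        result = execute(logical_form)        # 4) run against the KB
        if result.ok:
            return result.answer
        feedback = result.error               # execution-guided refinement
    return None
```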
https://aclanthology.org/2024.acl-long.496.bib
@inproceedings{chen-etal-2024-watme, title = "{W}at{ME}: Towards Lossless Watermarking Through Lexical Redundancy", author = "Chen, Liang and Bian, Yatao and Deng, Yang and Cai, Deng and Li, Shuaiyi and Zhao, Peilin and Wong, Kam-Fai", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.496", pages = "9166--9180", abstract = "Text watermarking has emerged as a pivotal technique for identifying machine-generated text. However, existing methods often rely on arbitrary vocabulary partitioning during decoding to embed watermarks, which compromises the availability of suitable tokens and significantly degrades the quality of responses. This study assesses the impact of watermarking on different capabilities of large language models (LLMs) through a cognitive science lens. Our findings highlight a significant disparity: knowledge recall and logical reasoning are more adversely affected than language generation. These results suggest a more profound effect of watermarking on LLMs than previously understood. To address these challenges, we introduce Watermarking with Mutual Exclusion (WatME), a novel approach leveraging linguistic prior knowledge of inherent lexical redundancy in LLM vocabularies to seamlessly integrate watermarks. Specifically, WatME dynamically optimizes token usage during the decoding process by applying a mutually exclusive rule to the identified lexical redundancies. This strategy effectively prevents the unavailability of appropriate tokens and preserves the expressive power of LLMs. We provide both theoretical analysis and empirical evidence showing that WatME effectively preserves the diverse capabilities of LLMs while ensuring watermark detectability.", }
Text watermarking has emerged as a pivotal technique for identifying machine-generated text. However, existing methods often rely on arbitrary vocabulary partitioning during decoding to embed watermarks, which compromises the availability of suitable tokens and significantly degrades the quality of responses. This study assesses the impact of watermarking on different capabilities of large language models (LLMs) through a cognitive science lens. Our findings highlight a significant disparity: knowledge recall and logical reasoning are more adversely affected than language generation. These results suggest a more profound effect of watermarking on LLMs than previously understood. To address these challenges, we introduce Watermarking with Mutual Exclusion (WatME), a novel approach leveraging linguistic prior knowledge of inherent lexical redundancy in LLM vocabularies to seamlessly integrate watermarks. Specifically, WatME dynamically optimizes token usage during the decoding process by applying a mutually exclusive rule to the identified lexical redundancies. This strategy effectively prevents the unavailability of appropriate tokens and preserves the expressive power of LLMs. We provide both theoretical analysis and empirical evidence showing that WatME effectively preserves the diverse capabilities of LLMs while ensuring watermark detectability.
[ "Chen, Liang", "Bian, Yatao", "Deng, Yang", "Cai, Deng", "Li, Shuaiyi", "Zhao, Peilin", "Wong, Kam-Fai" ]
WatME: Towards Lossless Watermarking Through Lexical Redundancy
acl-long.496
Poster
2311.09832
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.496/
[]
[]
[]
0
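The mutual exclusion rule in the record above can be sketched as splitting each cluster of near-synonyms across the watermark's "green" (favored) and "red" lists, so some acceptable synonym always remains available at decode time. The clusters and seeding scheme below are illustrative:

```python
# Hedged sketch of a WatME-style mutually exclusive vocabulary split.
import random

clusters = [["big", "large", "huge"], ["buy", "purchase"]]  # toy synonym sets

def partition(clusters, seed: int):
    rng = random.Random(seed)
    green, red = set(), set()
    for cluster in clusters:
        toks = cluster[:]
        rng.shuffle(toks)
        half = max(1, len(toks) // 2)
        green.update(toks[:half])  # logits of green tokens get boosted
        red.update(toks[half:])    # red tokens are disfavored, but a synonym survives
    return green, red

green, red = partition(clusters, seed=42)
print(green, red)
```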
https://aclanthology.org/2024.acl-long.497.bib
@inproceedings{zhang-etal-2024-text, title = "Text-like Encoding of Collaborative Information in Large Language Models for Recommendation", author = "Zhang, Yang and Bao, Keqin and Yan, Ming and Wang, Wenjie and Feng, Fuli and He, Xiangnan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.497", pages = "9181--9191", abstract = "When adapting Large Language Models for Recommendation (LLMRec), it is crucial to integrate collaborative information. Existing methods achieve this by learning collaborative embeddings in LLMs{'} latent space from scratch or by mapping from external models. However, they fail to represent the information in a text-like format, which may not align optimally with LLMs. To bridge this gap, we introduce BinLLM, a novel LLMRec method that seamlessly integrates collaborative information through text-like encoding. BinLLM converts collaborative embeddings from external models into binary sequences {---} a specific text format that LLMs can understand and operate on directly, facilitating the direct usage of collaborative information in text-like format by LLMs. Additionally, BinLLM provides options to compress the binary sequence using dot-decimal notation to avoid excessively long lengths. Extensive experiments validate that BinLLM introduces collaborative information in a manner better aligned with LLMs, resulting in enhanced performance. We release our code at https://github.com/zyang1580/BinLLM.", }
When adapting Large Language Models for Recommendation (LLMRec), it is crucial to integrate collaborative information. Existing methods achieve this by learning collaborative embeddings in LLMs{'} latent space from scratch or by mapping from external models. However, they fail to represent the information in a text-like format, which may not align optimally with LLMs. To bridge this gap, we introduce BinLLM, a novel LLMRec method that seamlessly integrates collaborative information through text-like encoding. BinLLM converts collaborative embeddings from external models into binary sequences {---} a specific text format that LLMs can understand and operate on directly, facilitating the direct usage of collaborative information in text-like format by LLMs. Additionally, BinLLM provides options to compress the binary sequence using dot-decimal notation to avoid excessively long lengths. Extensive experiments validate that BinLLM introduces collaborative information in a manner better aligned with LLMs, resulting in enhanced performance. We release our code at https://github.com/zyang1580/BinLLM.
[ "Zhang, Yang", "Bao, Keqin", "Yan, Ming", "Wang, Wenjie", "Feng, Fuli", "He, Xiangnan" ]
Text-like Encoding of Collaborative Information in Large Language Models for Recommendation
acl-long.497
Poster
2406.03210
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.497/
[]
[]
[]
0
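BinLLM's text-like encoding, as described in the record above, is concrete enough to sketch directly: binarize a collaborative embedding (here by sign, an assumed thresholding rule) and optionally compress the bit string into dot-decimal notation, one byte per field, so it reads like an IP address.

```python
# Hedged sketch of binary + dot-decimal encoding of a collaborative embedding.
import numpy as np

rng = np.random.default_rng(0)
emb = rng.standard_normal(32)                   # embedding from an external model
bits = "".join("1" if v > 0 else "0" for v in emb)

def dot_decimal(bits: str) -> str:
    assert len(bits) % 8 == 0
    return ".".join(str(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

print(bits)               # e.g. "10110100...": directly readable text for an LLM
print(dot_decimal(bits))  # e.g. "180.77.12.9": shorter, IP-address-like form
```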
https://aclanthology.org/2024.acl-long.498.bib
@inproceedings{wang-etal-2024-mm, title = "{MM}-{SAP}: A Comprehensive Benchmark for Assessing Self-Awareness of Multimodal Large Language Models in Perception", author = "Wang, Yuhao and Liao, Yusheng and Liu, Heyang and Liu, Hongcheng and Wang, Yanfeng and Wang, Yu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.498", pages = "9192--9205", abstract = "Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated exceptional capabilities in visual perception and understanding. However, these models also suffer from hallucinations, which limit their reliability as AI systems. We believe that these hallucinations are partially due to the models{'} struggle with understanding what they can and cannot perceive from images, a capability we refer to as self-awareness in perception. Despite its importance, this aspect of MLLMs has been overlooked in prior studies. In this paper, we aim to define and evaluate the self-awareness of MLLMs in perception. To do this, we first introduce the knowledge quadrant in perception, which helps define what MLLMs know and do not know about images. Using this framework, we propose a novel benchmark, the Self-Awareness in Perception for MLLMs (MM-SAP), specifically designed to assess this capability. We apply MM-SAP to a variety of popular MLLMs, offering a comprehensive analysis of their self-awareness and providing detailed insights. The experiment results reveal that current MLLMs possess limited self-awareness capabilities, pointing to a crucial area for future advancement in the development of trustworthy MLLMs. Code and data are available at https://github.com/YHWmz/MM-SAP.", }
Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated exceptional capabilities in visual perception and understanding. However, these models also suffer from hallucinations, which limit their reliability as AI systems. We believe that these hallucinations are partially due to the models{'} struggle with understanding what they can and cannot perceive from images, a capability we refer to as self-awareness in perception. Despite its importance, this aspect of MLLMs has been overlooked in prior studies. In this paper, we aim to define and evaluate the self-awareness of MLLMs in perception. To do this, we first introduce the knowledge quadrant in perception, which helps define what MLLMs know and do not know about images. Using this framework, we propose a novel benchmark, the Self-Awareness in Perception for MLLMs (MM-SAP), specifically designed to assess this capability. We apply MM-SAP to a variety of popular MLLMs, offering a comprehensive analysis of their self-awareness and providing detailed insights. The experiment results reveal that current MLLMs possess limited self-awareness capabilities, pointing to a crucial area for future advancement in the development of trustworthy MLLMs. Code and data are available at https://github.com/YHWmz/MM-SAP.
[ "Wang, Yuhao", "Liao, Yusheng", "Liu, Heyang", "Liu, Hongcheng", "Wang, Yanfeng", "Wang, Yu" ]
MM-SAP: A Comprehensive Benchmark for Assessing Self-Awareness of Multimodal Large Language Models in Perception
acl-long.498
Poster
2401.07529
[ "https://github.com/yhwmz/mm-sap" ]
https://huggingface.co/papers/2401.07529
0
0
0
6
https://aclanthology.org/2024.acl-long.498/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.499.bib
@inproceedings{li-etal-2024-focus, title = "Focus on Your Question! Interpreting and Mitigating Toxic {C}o{T} Problems in Commonsense Reasoning", author = "Li, Jiachun and Cao, Pengfei and Wang, Chenhao and Jin, Zhuoran and Chen, Yubo and Zeng, Daojian and Liu, Kang and Zhao, Jun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.499", pages = "9206--9230", abstract = "Large language models exhibit high-level commonsense reasoning abilities, especially with enhancement methods like Chain-of-Thought (CoT). However, we find these CoT-like methods lead to a considerable number of originally correct answers turning wrong, which we define as the Toxic CoT problem. To interpret and mitigate this problem, we first utilize attribution tracing and causal tracing methods to probe the internal working mechanism of the LLM during CoT reasoning. Through comparisons, we prove that the model exhibits information loss from the question over the shallow attention layers when generating rationales or answers. Based on the probing findings, we design a novel method called RIDERS (Residual decodIng and sERial-position Swap), which compensates for the information deficit in the model from both decoding and serial-position perspectives. Through extensive experiments on multiple commonsense reasoning benchmarks, we validate that this method not only significantly mitigates the Toxic CoT problem (a decrease of $\textbf{23.6}${\%}), but also effectively improves the model{'}s overall commonsense reasoning performance (an increase of $\textbf{5.5}${\%}).", }
Large language models exhibit high-level commonsense reasoning abilities, especially with enhancement methods like Chain-of-Thought (CoT). However, we find these CoT-like methods lead to a considerable number of originally correct answers turning wrong, which we define as the Toxic CoT problem. To interpret and mitigate this problem, we first utilize attribution tracing and causal tracing methods to probe the internal working mechanism of the LLM during CoT reasoning. Through comparisons, we prove that the model exhibits information loss from the question over the shallow attention layers when generating rationales or answers. Based on the probing findings, we design a novel method called RIDERS (Residual decodIng and sERial-position Swap), which compensates for the information deficit in the model from both decoding and serial-position perspectives. Through extensive experiments on multiple commonsense reasoning benchmarks, we validate that this method not only significantly mitigates the Toxic CoT problem (a decrease of $\textbf{23.6}${\%}), but also effectively improves the model{'}s overall commonsense reasoning performance (an increase of $\textbf{5.5}${\%}).
[ "Li, Jiachun", "Cao, Pengfei", "Wang, Chenhao", "Jin, Zhuoran", "Chen, Yubo", "Zeng, Daojian", "Liu, Kang", "Zhao, Jun" ]
Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning
acl-long.499
Poster
2402.18344
[ "https://github.com/bugmakerzzz/toxic_cot" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.499/
[]
[]
[]
0
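The "Residual decodIng" half of RIDERS, per the record above, compensates for question information lost during CoT; one way to picture it is mixing the answer distribution conditioned on (question + rationale) with one conditioned on the question alone. The mixing rule and weight below are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of a residual-decoding mix of two answer distributions.
import torch

VOCAB = 32000
logits_with_cot = torch.randn(VOCAB)   # score(answer | question, rationale)
logits_question = torch.randn(VOCAB)   # score(answer | question only)

lam = 0.5  # illustrative mixing weight
mixed = torch.log_softmax(logits_with_cot, -1) + lam * torch.log_softmax(logits_question, -1)
answer_token = int(mixed.argmax())
```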
https://aclanthology.org/2024.acl-long.500.bib
@inproceedings{liu-etal-2024-multi, title = "Multi-Aspect Controllable Text Generation with Disentangled Counterfactual Augmentation", author = "Liu, Yi and Liu, Xiangyu and Zhu, Xiangrong and Hu, Wei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.500", pages = "9231--9253", abstract = "Multi-aspect controllable text generation aims to control the generated texts in attributes from multiple aspects (e.g., {``}positive{''} from sentiment and {``}sport{''} from topic). Existing works neglect attribute correlations formed by the intertwining of different attributes. Particularly, the stereotype formed by imbalanced attribute correlations significantly affects multi-aspect control. In this paper, we propose MAGIC, a new multi-aspect controllable text generation method with disentangled counterfactual augmentation. We alleviate the issue of imbalanced attribute correlations during training using counterfactual feature vectors in the attribute latent space by disentanglement. During inference, we enhance attribute correlations by target-guided counterfactual augmentation to further improve multi-aspect control. Experiments show that MAGIC outperforms state-of-the-art baselines in both imbalanced and balanced attribute correlation scenarios.", }
Multi-aspect controllable text generation aims to control the generated texts in attributes from multiple aspects (e.g., {``}positive{''} from sentiment and {``}sport{''} from topic). Existing works neglect attribute correlations formed by the intertwining of different attributes. Particularly, the stereotype formed by imbalanced attribute correlations significantly affects multi-aspect control. In this paper, we propose MAGIC, a new multi-aspect controllable text generation method with disentangled counterfactual augmentation. We alleviate the issue of imbalanced attribute correlations during training using counterfactual feature vectors in the attribute latent space by disentanglement. During inference, we enhance attribute correlations by target-guided counterfactual augmentation to further improve multi-aspect control. Experiments show that MAGIC outperforms state-of-the-art baselines in both imbalanced and balanced attribute correlation scenarios.
[ "Liu, Yi", "Liu, Xiangyu", "Zhu, Xiangrong", "Hu, Wei" ]
Multi-Aspect Controllable Text Generation with Disentangled Counterfactual Augmentation
acl-long.500
Poster
2405.19958
[ "https://github.com/nju-websoft/magic" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.500/
[]
[]
[]
0
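The counterfactual augmentation in the record above operates on disentangled factors; a toy way to picture it is recombining one sample's attribute factor with another's, producing a rarely co-occurring combination that counters imbalanced attribute correlations. The factor split and dimensions below are illustrative:

```python
# Hedged sketch of a disentangled counterfactual recombination.
import torch

# Disentangled latent factors for two training samples (illustrative dims).
z_a = {"sentiment": torch.randn(64), "topic": torch.randn(64)}
z_b = {"sentiment": torch.randn(64), "topic": torch.randn(64)}

# Counterfactual vector: sample A's sentiment with sample B's topic, a
# combination that may be rare under imbalanced attribute correlations.
counterfactual = torch.cat([z_a["sentiment"], z_b["topic"]])
print(counterfactual.shape)  # torch.Size([128])
```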