Datasets:

| Column | Dtype | Range |
| --- | --- | --- |
| bibtex_url | string | length 41–50 |
| bibtext | string | length 693–2.88k |
| abstract | string | length 0–2k |
| authors | sequence | length 1–45 |
| title | string | length 21–199 |
| id | string | length 7–16 |
| type | string | 2 classes |
| arxiv_id | string | length 0–10 |
| GitHub | sequence | length 1–1 |
| paper_page | string | length 0–40 |
| n_linked_authors | int64 | -1 to 28 |
| upvotes | int64 | -1 to 255 |
| num_comments | int64 | -1 to 23 |
| n_authors | int64 | -1 to 35 |
| proceedings | string | length 38–47 |
| Models | sequence | length 0–57 |
| Datasets | sequence | length 0–19 |
| Spaces | sequence | length 0–100 |
| paper_page_exists_pre_conf | int64 | 0–1 |
https://aclanthology.org/2024.acl-long.601.bib
@inproceedings{chen-etal-2024-fortify, title = "Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool Use", author = "Chen, Yuhan and Lv, Ang and Lin, Ting-En and Chen, Changyu and Wu, Yuchuan and Huang, Fei and Li, Yongbin and Yan, Rui", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.601", pages = "11160--11174", abstract = "In this paper, we demonstrate that an inherent waveform pattern in the attention allocation of large language models (LLMs) significantly affects their performance in tasks demanding a high degree of context awareness, such as utilizing LLMs for tool-use. Specifically, the crucial information in the context will be potentially overlooked by model when it is positioned in the trough zone of the attention waveform, leading to decreased performance. To address this issue, we propose a novel inference method named Attention Buckets. It allows LLMs to process their input through multiple parallel processes. Each process utilizes a distinct base angle for the rotary position embedding, thereby creating a unique attention waveform. By compensating an attention trough of a particular process with an attention peak of another process, our approach enhances LLM{'}s awareness to various contextual positions, thus mitigating the risk of overlooking crucial information. In the largest tool-use benchmark, our method elevates a 7B model to achieve state-of-the-art performance, comparable to that of GPT-4. On other benchmarks and some RAG tasks, which also demand a thorough understanding of contextual content, Attention Buckets also exhibited notable enhancements in performance.", }
In this paper, we demonstrate that an inherent waveform pattern in the attention allocation of large language models (LLMs) significantly affects their performance in tasks demanding a high degree of context awareness, such as utilizing LLMs for tool use. Specifically, crucial information in the context may be overlooked by the model when it is positioned in the trough zone of the attention waveform, leading to decreased performance. To address this issue, we propose a novel inference method named Attention Buckets. It allows LLMs to process their input through multiple parallel processes. Each process utilizes a distinct base angle for the rotary position embedding, thereby creating a unique attention waveform. By compensating for an attention trough of a particular process with an attention peak of another process, our approach enhances the LLM's awareness of various contextual positions, thus mitigating the risk of overlooking crucial information. On the largest tool-use benchmark, our method elevates a 7B model to state-of-the-art performance, comparable to that of GPT-4. On other benchmarks and some RAG tasks, which also demand a thorough understanding of contextual content, Attention Buckets also exhibited notable enhancements in performance.
[ "Chen, Yuhan", "Lv, Ang", "Lin, Ting-En", "Chen, Changyu", "Wu, Yuchuan", "Huang, Fei", "Li, Yongbin", "Yan, Rui" ]
Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool Use
acl-long.601
Poster
2312.04455
[ "https://github.com/fiorina1212/attention-buckets" ]
https://huggingface.co/papers/2312.04455
1
1
0
8
https://aclanthology.org/2024.acl-long.601/
[]
[]
[]
1
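The mechanism in the abstract above is concrete enough to sketch: the same input is run under several RoPE base angles, and the most confident run supplies the next token. A minimal sketch, where `forward_fn` (a callback accepting a `rope_base` argument) and the specific base values are assumptions, not the released implementation:

```python
# Sketch of the Attention Buckets idea: one forward pass per RoPE base
# angle, keep the next token from the most confident pass. `forward_fn`
# and the base values are illustrative assumptions.
import torch

def rope_tables(seq_len: int, head_dim: int, base: float):
    """Standard rotary-embedding cos/sin tables for one base angle."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.outer(torch.arange(seq_len).float(), inv_freq)
    return angles.cos(), angles.sin()

def attention_buckets_step(forward_fn, input_ids, bases=(10_000, 17_500, 25_000)):
    """Each base yields a different attention waveform; a trough in one run
    is compensated by a peak in another by keeping the most peaked output."""
    best_tok, best_conf = None, -1.0
    for base in bases:
        logits = forward_fn(input_ids, rope_base=base)   # (seq_len, vocab)
        probs = torch.softmax(logits[-1], dim=-1)
        if probs.max().item() > best_conf:
            best_tok, best_conf = int(probs.argmax()), probs.max().item()
    return best_tok
```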
https://aclanthology.org/2024.acl-long.602.bib
@inproceedings{wu-tu-2024-layer, title = "Layer-Condensed {KV} Cache for Efficient Inference of Large Language Models", author = "Wu, Haoyi and Tu, Kewei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.602", pages = "11175--11188", abstract = "Huge memory consumption has been a major bottleneck for deploying high-throughput large language models in real-world applications. In addition to the large number of parameters, the key-value (KV) cache for the attention mechanism in the transformer architecture consumes a significant amount of memory, especially when the number of layers is large for deep language models. In this paper, we propose a novel method that only computes and caches the KVs of a small number of layers, thus significantly saving memory consumption and improving inference throughput. Our experiments on large language models show that our method achieves up to 26$\times$ higher throughput than standard transformers and competitive performance in language modeling and downstream tasks. In addition, our method is orthogonal to existing transformer memory-saving techniques, so it is straightforward to integrate them with our model, achieving further improvement in inference efficiency. Our code is available at https://github.com/whyNLP/LCKV.", }
Huge memory consumption has been a major bottleneck for deploying high-throughput large language models in real-world applications. In addition to the large number of parameters, the key-value (KV) cache for the attention mechanism in the transformer architecture consumes a significant amount of memory, especially when the number of layers is large for deep language models. In this paper, we propose a novel method that only computes and caches the KVs of a small number of layers, thus significantly saving memory consumption and improving inference throughput. Our experiments on large language models show that our method achieves up to 26$\times$ higher throughput than standard transformers and competitive performance in language modeling and downstream tasks. In addition, our method is orthogonal to existing transformer memory-saving techniques, so it is straightforward to integrate them with our model, achieving further improvement in inference efficiency. Our code is available at https://github.com/whyNLP/LCKV.
[ "Wu, Haoyi", "Tu, Kewei" ]
Layer-Condensed KV Cache for Efficient Inference of Large Language Models
acl-long.602
Poster
2405.10637
[ "https://github.com/whyNLP/LCKV" ]
https://huggingface.co/papers/2405.10637
1
18
1
2
https://aclanthology.org/2024.acl-long.602/
[ "whynlp/tinyllama-lckv-w2-2.5T-ft-100b" ]
[]
[]
1
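The memory saving described above comes from writing KVs for only a small set of layers. A toy cache along those lines, with all names assumed (the released LCKV code differs in how non-cached layers attend to the kept KVs):

```python
# Toy layer-condensed cache: only whitelisted layers ever write KVs, so
# memory grows with len(kept_layers) rather than model depth. In the paper,
# other layers attend to the kept layers' KVs; here get() simply redirects
# reads to the nearest kept layer.
import torch

class CondensedKVCache:
    def __init__(self, kept_layers):
        self.kept = set(kept_layers)
        self.store = {layer: ([], []) for layer in self.kept}  # layer -> (keys, values)

    def update(self, layer: int, k: torch.Tensor, v: torch.Tensor):
        if layer in self.kept:                    # other layers are never cached
            ks, vs = self.store[layer]
            ks.append(k)
            vs.append(v)

    def get(self, layer: int):
        src = layer if layer in self.kept else max(self.kept)
        ks, vs = self.store[src]
        return torch.cat(ks, dim=0), torch.cat(vs, dim=0)
```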
https://aclanthology.org/2024.acl-long.603.bib
@inproceedings{zhang-etal-2024-enhancing-multilingual, title = "Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages", author = "Zhang, Yuanchi and Wang, Yile and Liu, Zijun and Wang, Shuo and Wang, Xiaolong and Li, Peng and Sun, Maosong and Liu, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.603", pages = "11189--11204", abstract = "While large language models (LLMs) have been pre-trained on multilingual corpora, their performance still lags behind in most languages compared to a few resource-rich languages. One common approach to mitigate this issue is to translate training data from resource-rich languages into other languages and then continue training. However, using the data obtained solely relying on translation while ignoring the original capabilities of LLMs across languages is not always effective, which we show will limit the performance of cross-lingual knowledge transfer. In this work, we propose SDRRL, a method based on Self-Distillation from Resource-Rich Languages that effectively improve multilingual performance by leveraging the internal capabilities of LLMs on resource-rich languages. We evaluate on different LLMs (LLaMA-2 and SeaLLM) and source languages (English and French) across various comprehension and generation tasks, experimental results demonstrate that SDRRL can significantly enhance multilingual capabilities while minimizing the impact on original performance in resource-rich languages.", }
While large language models (LLMs) have been pre-trained on multilingual corpora, their performance in most languages still lags behind a few resource-rich languages. One common approach to mitigate this issue is to translate training data from resource-rich languages into other languages and then continue training. However, using data obtained solely through translation, while ignoring the original capabilities of LLMs across languages, is not always effective; we show that this limits the performance of cross-lingual knowledge transfer. In this work, we propose SDRRL, a method based on Self-Distillation from Resource-Rich Languages that effectively improves multilingual performance by leveraging the internal capabilities of LLMs in resource-rich languages. We evaluate different LLMs (LLaMA-2 and SeaLLM) and source languages (English and French) across various comprehension and generation tasks; experimental results demonstrate that SDRRL can significantly enhance multilingual capabilities while minimizing the impact on original performance in resource-rich languages.
[ "Zhang, Yuanchi", "Wang, Yile", "Liu, Zijun", "Wang, Shuo", "Wang, Xiaolong", "Li, Peng", "Sun, Maosong", "Liu, Yang" ]
Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages
acl-long.603
Poster
2402.12204
[ "https://github.com/hiyouga/llama-factory" ]
https://huggingface.co/papers/2402.12204
2
1
0
8
https://aclanthology.org/2024.acl-long.603/
[ "sunatte/txt2sql" ]
[]
[ "Justinrune/LLaMA-Factory", "smarttang/blingsec" ]
1
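The data-construction loop implied by the SDRRL abstract is simple to write down. In this miniature, `generate` and `translate` are assumed helper callbacks, not the paper's actual components:

```python
# SDRRL-style pair construction in miniature: answer in the resource-rich
# source language, then distill that answer back into the target language
# as a fine-tuning pair. All helpers are assumptions.
def self_distilled_pair(instruction_tgt, generate, translate,
                        src="en", tgt="th"):
    instruction_src = translate(instruction_tgt, tgt, src)   # target -> source
    answer_src = generate(instruction_src)                   # strong-language answer
    answer_tgt = translate(answer_src, src, tgt)             # back to target
    return instruction_tgt, answer_tgt                       # distilled training pair
```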
https://aclanthology.org/2024.acl-long.604.bib
@inproceedings{sun-etal-2024-benchmarking-chinese, title = "Benchmarking {C}hinese Commonsense Reasoning of {LLM}s: From {C}hinese-Specifics to Reasoning-Memorization Correlations", author = "Sun, Jiaxing and Huang, Weiquan and Wu, Jiang and Gu, Chenya and Li, Wei and Zhang, Songyang and Yan, Hang and He, Conghui", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.604", pages = "11205--11228", abstract = "We introduce CHARM, the first benchmark for comprehensively and in-depth evaluating the commonsense reasoning ability of large language models (LLMs) in Chinese, which covers both globally known and Chinese-specific commonsense. We evaluated 7 English and 12 Chinese-oriented LLMs on CHARM, employing 5 representative prompt strategies for improving LLMs{'} reasoning ability, such as Chain-of-Thought. Our findings indicated that the LLM{'}s language orientation and the task{'}s domain influence the effectiveness of the prompt strategy, which enriches previous research findings. We built closely-interconnected reasoning and memorization tasks, and found that some LLMs struggle with memorizing Chinese commonsense, affecting their reasoning ability, while others show differences in reasoning despite similar memorization performance. We also evaluated the LLMs{'} memorization-independent reasoning abilities and analyzed the typical errors. Our study precisely identified the LLMs{'} strengths and weaknesses, providing the clear direction for optimization. It can also serve as a reference for studies in other fields. We will release CHARM at https://github.com/opendatalab/CHARM.", }
We introduce CHARM, the first benchmark for comprehensive and in-depth evaluation of the commonsense reasoning ability of large language models (LLMs) in Chinese, covering both globally known and Chinese-specific commonsense. We evaluated 7 English and 12 Chinese-oriented LLMs on CHARM, employing 5 representative prompt strategies for improving LLMs' reasoning ability, such as Chain-of-Thought. Our findings indicate that an LLM's language orientation and the task's domain influence the effectiveness of the prompt strategy, which enriches previous research findings. We built closely interconnected reasoning and memorization tasks, and found that some LLMs struggle with memorizing Chinese commonsense, affecting their reasoning ability, while others show differences in reasoning despite similar memorization performance. We also evaluated the LLMs' memorization-independent reasoning abilities and analyzed typical errors. Our study precisely identifies the LLMs' strengths and weaknesses, providing a clear direction for optimization. It can also serve as a reference for studies in other fields. We will release CHARM at https://github.com/opendatalab/CHARM.
[ "Sun, Jiaxing", "Huang, Weiquan", "Wu, Jiang", "Gu, Chenya", "Li, Wei", "Zhang, Songyang", "Yan, Hang", "He, Conghui" ]
Benchmarking Chinese Commonsense Reasoning of LLMs: From Chinese-Specifics to Reasoning-Memorization Correlations
acl-long.604
Poster
2403.14112
[ "https://github.com/opendatalab/charm" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.604/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.605.bib
@inproceedings{wang-etal-2024-browse, title = "Browse and Concentrate: Comprehending Multimodal Content via Prior-{LLM} Context Fusion", author = "Wang, Ziyue and Chen, Chi and Zhu, Yiqi and Luo, Fuwen and Li, Peng and Yan, Ming and Zhang, Ji and Huang, Fei and Sun, Maosong and Liu, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.605", pages = "11229--11245", abstract = "With the bloom of Large Language Models (LLMs), Multimodal Large Language Models (MLLMs) that incorporate LLMs with pre-trained vision models have recently demonstrated impressive performance across diverse vision-language tasks. However, they fall short to comprehend context involving multiple images. A primary reason for this shortcoming is that the visual features for each images are encoded individually by frozen encoders before feeding into the LLM backbone, lacking awareness of other images and the multimodal instructions. We term this issue as prior-LLM modality isolation and propose a two phase paradigm, browse-and-concentrate, to enable in-depth multimodal context fusion prior to feeding the features into LLMs. This paradigm initially {``}browses{''} through the inputs for essential insights, and then revisits the inputs to {``}concentrate{''} on crucial details, guided by these insights, to achieve a more comprehensive understanding of the multimodal inputs. Additionally, we develop training strategies specifically to enhance the understanding of multi-image inputs. Our method markedly boosts the performance on 7 multi-image scenarios, contributing to increments on average accuracy by 2.13{\%} and 7.60{\%} against strong MLLMs baselines with 3B and 11B LLMs, respectively.", }
With the bloom of Large Language Models (LLMs), Multimodal Large Language Models (MLLMs) that combine LLMs with pre-trained vision models have recently demonstrated impressive performance across diverse vision-language tasks. However, they fall short in comprehending context involving multiple images. A primary reason for this shortcoming is that the visual features for each image are encoded individually by frozen encoders before being fed into the LLM backbone, lacking awareness of other images and the multimodal instructions. We term this issue prior-LLM modality isolation and propose a two-phase paradigm, browse-and-concentrate, to enable in-depth multimodal context fusion before the features are fed into LLMs. This paradigm initially “browses” through the inputs for essential insights, and then revisits the inputs to “concentrate” on crucial details, guided by these insights, to achieve a more comprehensive understanding of the multimodal inputs. Additionally, we develop training strategies specifically to enhance the understanding of multi-image inputs. Our method markedly boosts performance in 7 multi-image scenarios, improving average accuracy by 2.13% and 7.60% over strong MLLM baselines with 3B and 11B LLMs, respectively.
[ "Wang, Ziyue", "Chen, Chi", "Zhu, Yiqi", "Luo, Fuwen", "Li, Peng", "Yan, Ming", "Zhang, Ji", "Huang, Fei", "Sun, Maosong", "Liu, Yang" ]
Browse and Concentrate: Comprehending Multimodal Content via Prior-LLM Context Fusion
acl-long.605
Oral
2402.12195
[ "https://github.com/thunlp-mt/brote" ]
https://huggingface.co/papers/2402.12195
2
0
0
10
https://aclanthology.org/2024.acl-long.605/
[ "wangphoebe/Brote-IM-XXL" ]
[]
[]
1
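The two phases in the abstract map directly onto a two-pass pipeline. A sketch under the assumption that `encode` and `fuse` stand in for the paper's vision encoder and fusion module:

```python
# Browse-and-concentrate as two passes: "browse" all images for a condensed
# insight, then re-fuse each image's features conditioned on that insight
# and the instruction. All components are illustrative stand-ins.
import torch

def browse_and_concentrate(encode, fuse, images, instruction):
    browsed = [encode(img) for img in images]            # phase 1: browse
    insight = torch.stack(browsed).mean(dim=0)           # condensed insight
    return [fuse(feats, insight, instruction) for feats in browsed]  # phase 2
```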
https://aclanthology.org/2024.acl-long.606.bib
@inproceedings{chen-etal-2024-model, title = "Model Composition for Multimodal Large Language Models", author = "Chen, Chi and Du, Yiyang and Fang, Zheng and Wang, Ziyue and Luo, Fuwen and Li, Peng and Yan, Ming and Zhang, Ji and Huang, Fei and Sun, Maosong and Liu, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.606", pages = "11246--11262", abstract = "Recent developments in Multimodal Large Language Models (MLLMs) have shown rapid progress, moving towards the goal of creating versatile MLLMs that understand inputs from various modalities. However, existing methods typically rely on joint training with paired multimodal instruction data, which is resource-intensive and challenging to extend to new modalities. In this paper, we propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model. Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters. Furthermore, we introduce DAMC to address parameter interference and mismatch issues during the merging process, thereby enhancing the model performance. To facilitate research in this area, we propose MCUB, a benchmark for assessing ability of MLLMs to understand inputs from diverse modalities. Experiments on this benchmark and four other multimodal understanding tasks show significant improvements over baselines, proving that model composition can create a versatile model capable of processing inputs from multiple modalities.", }
Recent developments in Multimodal Large Language Models (MLLMs) have shown rapid progress, moving towards the goal of creating versatile MLLMs that understand inputs from various modalities. However, existing methods typically rely on joint training with paired multimodal instruction data, which is resource-intensive and challenging to extend to new modalities. In this paper, we propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model. Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters. Furthermore, we introduce DAMC to address parameter interference and mismatch issues during the merging process, thereby enhancing model performance. To facilitate research in this area, we propose MCUB, a benchmark for assessing the ability of MLLMs to understand inputs from diverse modalities. Experiments on this benchmark and four other multimodal understanding tasks show significant improvements over baselines, proving that model composition can create a versatile model capable of processing inputs from multiple modalities.
[ "Chen, Chi", "Du, Yiyang", "Fang, Zheng", "Wang, Ziyue", "Luo, Fuwen", "Li, Peng", "Yan, Ming", "Zhang, Ji", "Huang, Fei", "Sun, Maosong", "Liu, Yang" ]
Model Composition for Multimodal Large Language Models
acl-long.606
Poster
2402.12750
[ "https://github.com/thunlp-mt/modelcompose" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.606/
[]
[]
[]
0
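The merging half of NaiveMC, as described in the abstract, amounts to combining LLM backbones parameter-wise while keeping each model's modality encoder. A common merging baseline as a sketch (DAMC's interference handling is not shown):

```python
# Weighted parameter averaging over state dicts, the usual merging baseline
# matching the abstract's "merging LLM parameters". Names are illustrative.
import torch

def merge_state_dicts(state_dicts, weights=None):
    weights = weights or [1.0 / len(state_dicts)] * len(state_dicts)
    return {k: sum(w * sd[k] for w, sd in zip(weights, state_dicts))
            for k in state_dicts[0]}
```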
https://aclanthology.org/2024.acl-long.607.bib
@inproceedings{zhang-etal-2024-draft, title = "Draft{\&} Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding", author = "Zhang, Jun and Wang, Jue and Li, Huan and Shou, Lidan and Chen, Ke and Chen, Gang and Mehrotra, Sharad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.607", pages = "11263--11282", abstract = "We present a novel inference scheme, self-speculative decoding, for accelerating Large Language Models (LLMs) without the need for an auxiliary model. This approach is characterized by a two-stage process: drafting and verification. The drafting stage generates draft tokens at a slightly lower quality but more quickly, which is achieved by selectively skipping certain intermediate layers during drafting. Subsequently, the verification stage employs the original LLM to validate those draft output tokens in one forward pass. This process ensures the final output remains identical to that produced by the unaltered LLM. Moreover, the proposed method requires no additional neural network training and no extra memory footprint, making it a plug-and-play and cost-effective solution for inference acceleration. Benchmarks with LLaMA-2 and its variants demonstrated a speedup up to 1.99$\times$.", }
We present a novel inference scheme, self-speculative decoding, for accelerating Large Language Models (LLMs) without the need for an auxiliary model. This approach is characterized by a two-stage process: drafting and verification. The drafting stage generates draft tokens at a slightly lower quality but more quickly, which is achieved by selectively skipping certain intermediate layers during drafting. Subsequently, the verification stage employs the original LLM to validate those draft output tokens in one forward pass. This process ensures the final output remains identical to that produced by the unaltered LLM. Moreover, the proposed method requires no additional neural network training and no extra memory footprint, making it a plug-and-play and cost-effective solution for inference acceleration. Benchmarks with LLaMA-2 and its variants demonstrated a speedup of up to 1.99$\times$.
[ "Zhang, Jun", "Wang, Jue", "Li, Huan", "Shou, Lidan", "Chen, Ke", "Chen, Gang", "Mehrotra, Sharad" ]
Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding
acl-long.607
Poster
[ "https://github.com/dilab-zju/self-speculative-decoding" ]
https://huggingface.co/papers/2309.08168
1
0
0
7
https://aclanthology.org/2024.acl-long.607/
[]
[]
[]
1
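The draft-then-verify loop described above is easy to sketch. Here `draft_fn` (the layer-skipped pass) and `full_fn` (the full model) are assumed callbacks returning per-position logits; with greedy decoding on both sides, the output matches the full model exactly:

```python
# Self-speculative decoding skeleton: draft k tokens cheaply, verify them
# all in one full forward pass, and keep the longest agreeing prefix (the
# first mismatch is replaced by the full model's token).
def self_speculative_step(full_fn, draft_fn, prefix, k=5):
    draft = list(prefix)
    for _ in range(k):                        # cheap drafting, layers skipped
        draft.append(int(draft_fn(draft)[-1].argmax()))
    logits = full_fn(draft)                   # one full verification pass
    out = list(prefix)
    for i in range(len(prefix), len(draft)):
        tok = int(logits[i - 1].argmax())     # full model's choice at slot i
        out.append(tok)
        if tok != draft[i]:                   # first disagreement: stop here
            break
    return out
```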
https://aclanthology.org/2024.acl-long.608.bib
@inproceedings{cheng-etal-2024-soul, title = "Soul-Mix: Enhancing Multimodal Machine Translation with Manifold Mixup", author = "Cheng, Xuxin and Yao, Ziyu and Xin, Yifei and An, Hao and Li, Hongxiang and Li, Yaowei and Zou, Yuexian", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.608", pages = "11283--11294", abstract = "Multimodal machine translation (MMT) aims to improve the performance of machine translation with the help of visual information, which has received widespread attention recently. It has been verified that visual information brings greater performance gains when the textual information is limited. However, most previous works ignore to take advantage of the complete textual inputs and the limited textual inputs at the same time, which limits the overall performance. To solve this issue, we propose a mixup method termed Soul-Mix to enhance MMT by using visual information more effectively. We mix the predicted translations of complete textual input and the limited textual inputs. Experimental results on the Multi30K dataset of three translation directions show that our Soul-Mix significantly outperforms existing approaches and achieves new state-of-the-art performance with fewer parameters than some previous models. Besides, the strength of Soul-Mix is more obvious on more challenging MSCOCO dataset which includes more out-of-domain instances with lots of ambiguous verbs.", }
Multimodal machine translation (MMT) aims to improve machine translation with the help of visual information and has received widespread attention recently. It has been verified that visual information brings greater performance gains when the textual information is limited. However, most previous works fail to take advantage of complete textual inputs and limited textual inputs at the same time, which limits overall performance. To solve this issue, we propose a mixup method termed Soul-Mix to enhance MMT by using visual information more effectively. We mix the predicted translations of the complete textual input and the limited textual inputs. Experimental results on the Multi30K dataset across three translation directions show that our Soul-Mix significantly outperforms existing approaches and achieves new state-of-the-art performance with fewer parameters than some previous models. Besides, the strength of Soul-Mix is more obvious on the more challenging MSCOCO dataset, which includes more out-of-domain instances with many ambiguous verbs.
[ "Cheng, Xuxin", "Yao, Ziyu", "Xin, Yifei", "An, Hao", "Li, Hongxiang", "Li, Yaowei", "Zou, Yuexian" ]
Soul-Mix: Enhancing Multimodal Machine Translation with Manifold Mixup
acl-long.608
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.608/
[]
[]
[]
0
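The mixing operation itself is the standard manifold-mixup interpolation; the abstract applies it to the predictions from complete and limited textual inputs. A sketch where the Beta-sampled coefficient and the mixing site are assumptions:

```python
# Manifold-mixup-style interpolation between the representations obtained
# from the complete and the limited textual inputs. Beta(alpha, alpha) is
# the usual mixup choice, not necessarily the paper's.
import torch

def soul_mix(h_full: torch.Tensor, h_limited: torch.Tensor, alpha: float = 0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * h_full + (1.0 - lam) * h_limited
```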
https://aclanthology.org/2024.acl-long.609.bib
@inproceedings{gao-etal-2024-measuring, title = "Measuring Meaning Composition in the Human Brain with Composition Scores from Large Language Models", author = "Gao, Changjiang and Li, Jixing and Chen, Jiajun and Huang, Shujian", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.609", pages = "11295--11308", abstract = "The process of meaning composition, wherein smaller units like morphemes or words combine to form the meaning of phrases and sentences, is essential for human sentence comprehension. Despite extensive neurolinguistic research into the brain regions involved in meaning composition, a computational metric to quantify the extent of composition is still lacking. Drawing on the key-value memory interpretation of transformer feed-forward network blocks, we introduce the Composition Score, a novel model-based metric designed to quantify the degree of meaning composition during sentence comprehension. Experimental findings show that this metric correlates with brain clusters associated with word frequency, structural processing, and general sensitivity to words, suggesting the multifaceted nature of meaning composition during human sentence comprehension.", }
The process of meaning composition, wherein smaller units like morphemes or words combine to form the meaning of phrases and sentences, is essential for human sentence comprehension. Despite extensive neurolinguistic research into the brain regions involved in meaning composition, a computational metric to quantify the extent of composition is still lacking. Drawing on the key-value memory interpretation of transformer feed-forward network blocks, we introduce the Composition Score, a novel model-based metric designed to quantify the degree of meaning composition during sentence comprehension. Experimental findings show that this metric correlates with brain clusters associated with word frequency, structural processing, and general sensitivity to words, suggesting the multifaceted nature of meaning composition during human sentence comprehension.
[ "Gao, Changjiang", "Li, Jixing", "Chen, Jiajun", "Huang, Shujian" ]
Measuring Meaning Composition in the Human Brain with Composition Scores from Large Language Models
acl-long.609
Poster
2403.04325
[ "https://github.com/rivergao/ffn_composition_analysis" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.609/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.610.bib
@inproceedings{kamthawee-etal-2024-mist, title = "{MIST}: Mutual Information Maximization for Short Text Clustering", author = "Kamthawee, Krissanee and Udomcharoenchaikit, Can and Nutanong, Sarana", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.610", pages = "11309--11324", abstract = "Short text clustering poses substantial challenges due to the limited amount of information provided by each text sample. Previous efforts based on dense representations are still inadequate as texts are not sufficiently segregated in the embedding space before clustering. Even though the state-of-the-art method utilizes contrastive learning to boost performance, the process of summarizing all local tokens to form a sequence representation for the whole text includes noise that may obscure limited key information. We propose Mutual Information Maximization Framework for Short Text Clustering (MIST), which overcomes the information drown-out by including a mechanism to maximize the mutual information between representations on both sequence and token levels. Experimental results across eight standard short text datasets show that MIST outperforms the state-of-the-art method in terms of Accuracy or Normalized Mutual Information in most cases.", }
Short text clustering poses substantial challenges due to the limited amount of information provided by each text sample. Previous efforts based on dense representations are still inadequate as texts are not sufficiently segregated in the embedding space before clustering. Even though the state-of-the-art method utilizes contrastive learning to boost performance, the process of summarizing all local tokens to form a sequence representation for the whole text includes noise that may obscure limited key information. We propose Mutual Information Maximization Framework for Short Text Clustering (MIST), which overcomes the information drown-out by including a mechanism to maximize the mutual information between representations on both sequence and token levels. Experimental results across eight standard short text datasets show that MIST outperforms the state-of-the-art method in terms of Accuracy or Normalized Mutual Information in most cases.
[ "Kamthawee, Krissanee", "Udomcharoenchaikit, Can", "Nutanong, Sarana" ]
MIST: Mutual Information Maximization for Short Text Clustering
acl-long.610
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.610/
[]
[]
[]
0
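The mutual-information maximization in MIST is, at both sequence and token levels, a contrastive lower bound; InfoNCE is the standard estimator for it. A minimal symmetric InfoNCE over two views of a batch, with shapes and temperature as assumptions:

```python
# Symmetric InfoNCE over two embedding views of the same batch of short
# texts; matching rows are positives, everything else negatives.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(z1.size(0))        # diagonal = positive pairs
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```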
https://aclanthology.org/2024.acl-long.611.bib
@inproceedings{zheng-etal-2024-self, title = "Self-chats from Large Language Models Make Small Emotional Support Chatbot Better", author = "Zheng, Zhonghua and Liao, Lizi and Deng, Yang and Qin, Libo and Nie, Liqiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.611", pages = "11325--11345", abstract = "Large Language Models (LLMs) have shown strong generalization abilities to excel in various tasks, including emotion support conversations. However, deploying such LLMs like GPT-3 (175B parameters) is resource-intensive and challenging at scale. In this study, we utilize LLMs as {``}Counseling Teacher{''} to enhance smaller models{'} emotion support response abilities, significantly reducing the necessity of scaling up model size. To this end, we first introduce an iterative expansion framework, aiming to prompt the large teacher model to curate an expansive emotion support dialogue dataset. This curated dataset, termed ExTES, encompasses a broad spectrum of scenarios and is crafted with meticulous strategies to ensure its quality and comprehensiveness. Based on this, we then devise a Diverse Response Inpainting (DRI) mechanism to harness the teacher model to produce multiple diverse responses by filling in the masked conversation context. This richness and variety serve as instructive examples, providing a robust foundation for fine-tuning smaller student models. Experiments across varied scenarios reveal that the teacher-student scheme with DRI notably improves the response abilities of smaller models, even outperforming the teacher model in some cases. The dataset and codes are available in https://github.com/pandazzh2020/ExTES.", }
Large Language Models (LLMs) have shown strong generalization abilities to excel in various tasks, including emotion support conversations. However, deploying such LLMs like GPT-3 (175B parameters) is resource-intensive and challenging at scale. In this study, we utilize LLMs as {``}Counseling Teacher{''} to enhance smaller models{'} emotion support response abilities, significantly reducing the necessity of scaling up model size. To this end, we first introduce an iterative expansion framework, aiming to prompt the large teacher model to curate an expansive emotion support dialogue dataset. This curated dataset, termed ExTES, encompasses a broad spectrum of scenarios and is crafted with meticulous strategies to ensure its quality and comprehensiveness. Based on this, we then devise a Diverse Response Inpainting (DRI) mechanism to harness the teacher model to produce multiple diverse responses by filling in the masked conversation context. This richness and variety serve as instructive examples, providing a robust foundation for fine-tuning smaller student models. Experiments across varied scenarios reveal that the teacher-student scheme with DRI notably improves the response abilities of smaller models, even outperforming the teacher model in some cases. The dataset and codes are available in https://github.com/pandazzh2020/ExTES.
[ "Zheng, Zhonghua", "Liao, Lizi", "Deng, Yang", "Qin, Libo", "Nie, Liqiang" ]
Self-chats from Large Language Models Make Small Emotional Support Chatbot Better
acl-long.611
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.611/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.612.bib
@inproceedings{lee-etal-2024-improving-conversational, title = "Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment", author = "Lee, Janghwan and Park, Seongmin and Hong, Sukjin and Kim, Minsoo and Chang, Du-Seong and Choi, Jungwook", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.612", pages = "11346--11364", abstract = "The rapid advancement of large language models (LLMs) has facilitated their transformation into conversational chatbots that can grasp contextual nuances and generate pertinent sentences, closely mirroring human values through advanced techniques such as instruction tuning and reinforcement learning from human feedback (RLHF). However, the computational efficiency required for LLMs, achieved through techniques like post-training quantization (PTQ), presents challenges such as token-flipping that can impair chatbot performance. In response, we propose a novel preference alignment approach, quantization-aware direct preference optimization (QDPO), that aligns quantized LLMs with their full-precision counterparts, improving conversational abilities. Evaluated on two instruction-tuned LLMs in various languages, QDPO demonstrated superior performance in improving conversational abilities compared to established PTQ and knowledge-distillation fine-tuning techniques, marking a significant step forward in the development of efficient and effective conversational LLMs.", }
The rapid advancement of large language models (LLMs) has facilitated their transformation into conversational chatbots that can grasp contextual nuances and generate pertinent sentences, closely mirroring human values through advanced techniques such as instruction tuning and reinforcement learning from human feedback (RLHF). However, the computational efficiency required for LLMs, achieved through techniques like post-training quantization (PTQ), presents challenges such as token-flipping that can impair chatbot performance. In response, we propose a novel preference alignment approach, quantization-aware direct preference optimization (QDPO), that aligns quantized LLMs with their full-precision counterparts, improving conversational abilities. Evaluated on two instruction-tuned LLMs in various languages, QDPO demonstrated superior performance in improving conversational abilities compared to established PTQ and knowledge-distillation fine-tuning techniques, marking a significant step forward in the development of efficient and effective conversational LLMs.
[ "Lee, Janghwan", "Park, Seongmin", "Hong, Sukjin", "Kim, Minsoo", "Chang, Du-Seong", "Choi, Jungwook" ]
Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment
acl-long.612
Oral
2407.03051
[ "" ]
https://huggingface.co/papers/2407.03051
0
0
0
6
https://aclanthology.org/2024.acl-long.612/
[]
[]
[]
1
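The abstract pins down the roles in QDPO: the quantized model serves as the policy and its full-precision counterpart as the reference inside a DPO-style objective. The standard DPO loss with those roles filled in, as a sketch rather than the authors' exact recipe:

```python
# DPO loss where the quantized model is the policy and the full-precision
# model is the reference, per the QDPO abstract. Inputs are summed sequence
# log-probabilities for chosen/rejected responses.
import torch.nn.functional as F

def qdpo_loss(logp_q_chosen, logp_q_rejected,     # quantized policy log-probs
              logp_fp_chosen, logp_fp_rejected,   # full-precision reference
              beta: float = 0.1):
    margin = ((logp_q_chosen - logp_fp_chosen)
              - (logp_q_rejected - logp_fp_rejected))
    return -F.logsigmoid(beta * margin).mean()
```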
https://aclanthology.org/2024.acl-long.613.bib
@inproceedings{fang-etal-2024-complex, title = "Complex Reasoning over Logical Queries on Commonsense Knowledge Graphs", author = "Fang, Tianqing and Chen, Zeming and Song, Yangqiu and Bosselut, Antoine", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.613", pages = "11365--11384", abstract = "Event commonsense reasoning requires the ability to reason about the relationship between events, as well as to infer implicit context underlying that relationship. However, data scarcity makes it challenging for language models to learn to generate commonsense inferences for contexts and questions involving interactions between complex events. To address this demand, we present COM2 (COMplex COMmonsense), a new dataset created by sampling multi-hop logical queries (e.g., the joint effect or cause of both event A and B, or the effect of the effect of event C) from an existing commonsense knowledge graph (CSKG), and verbalizing them using handcrafted rules and large language models into multiple-choice and text generation questions. Our experiments show that language models trained on COM2 exhibit significant improvements in complex reasoning ability, resulting in enhanced zero-shot performance in both in-domain and out-of-domain tasks for question answering and generative commonsense reasoning, without expensive human annotations.", }
Event commonsense reasoning requires the ability to reason about the relationship between events, as well as to infer implicit context underlying that relationship. However, data scarcity makes it challenging for language models to learn to generate commonsense inferences for contexts and questions involving interactions between complex events. To address this demand, we present COM2 (COMplex COMmonsense), a new dataset created by sampling multi-hop logical queries (e.g., the joint effect or cause of both event A and B, or the effect of the effect of event C) from an existing commonsense knowledge graph (CSKG), and verbalizing them using handcrafted rules and large language models into multiple-choice and text generation questions. Our experiments show that language models trained on COM2 exhibit significant improvements in complex reasoning ability, resulting in enhanced zero-shot performance in both in-domain and out-of-domain tasks for question answering and generative commonsense reasoning, without expensive human annotations.
[ "Fang, Tianqing", "Chen, Zeming", "Song, Yangqiu", "Bosselut, Antoine" ]
Complex Reasoning over Logical Queries on Commonsense Knowledge Graphs
acl-long.613
Poster
2403.07398
[ "https://github.com/tqfang/complex-commonsense-reasoning" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.613/
[]
[]
[]
0
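The query-sampling step COM2 describes can be illustrated on a toy edge list: pick a head, follow the same relation twice, and verbalize the two-hop chain. The graph and relation names below are assumptions, not ATOMIC/CSKG data:

```python
# Sampling a two-hop logical query ("the effect of the effect of C") from
# an edge list, the kind of query COM2 verbalizes into questions.
import random

EDGES = {("PersonX goes hiking", "xEffect"): ["PersonX gets tired"],
         ("PersonX gets tired", "xEffect"): ["PersonX takes a nap"]}

def sample_two_hop(edges, relation="xEffect"):
    h0 = random.choice([h for (h, r) in edges if r == relation])
    mid = random.choice(edges[(h0, relation)])
    if (mid, relation) not in edges:
        return None                            # dead end: resample in practice
    answer = random.choice(edges[(mid, relation)])
    return f"What is the {relation} of the {relation} of '{h0}'?", answer
```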
https://aclanthology.org/2024.acl-long.614.bib
@inproceedings{chai-etal-2024-expert, title = "An Expert is Worth One Token: Synergizing Multiple Expert {LLM}s as Generalist via Expert Token Routing", author = "Chai, Ziwei and Wang, Guoyin and Su, Jing and Zhang, Tianjie and Huang, Xuanwen and Wang, Xuwu and Xu, Jingjing and Yuan, Jianbo and Yang, Hongxia and Wu, Fei and Yang, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.614", pages = "11385--11396", abstract = "We present Expert-Token-Routing, a unified generalist framework that facilitates seamless integration of multiple expert LLMs. Our framework represents expert LLMs as special expert tokens within the vocabulary of a meta LLM. The meta LLM can route to an expert LLM like generating new tokens. Expert-Token-Routing not only supports learning the implicit expertise of expert LLMs from existing instruction dataset but also allows for dynamic extension of new expert LLMs in a plug-and-play manner. It also conceals the detailed collaboration process from the user{'}s perspective, facilitating interaction as though it were a singular LLM. Our framework outperforms various existing multi-LLM collaboration paradigms across benchmarks that incorporate six diverse expert domains, demonstrating effectiveness and robustness in building generalist LLM system via synergizing multiple expert LLMs.", }
We present Expert-Token-Routing, a unified generalist framework that facilitates seamless integration of multiple expert LLMs. Our framework represents expert LLMs as special expert tokens within the vocabulary of a meta LLM; the meta LLM routes to an expert LLM the way it generates any new token. Expert-Token-Routing not only supports learning the implicit expertise of expert LLMs from existing instruction datasets but also allows for dynamic extension of new expert LLMs in a plug-and-play manner. It also conceals the detailed collaboration process from the user's perspective, facilitating interaction as though it were a singular LLM. Our framework outperforms various existing multi-LLM collaboration paradigms across benchmarks that incorporate six diverse expert domains, demonstrating its effectiveness and robustness in building a generalist LLM system by synergizing multiple expert LLMs.
[ "Chai, Ziwei", "Wang, Guoyin", "Su, Jing", "Zhang, Tianjie", "Huang, Xuanwen", "Wang, Xuwu", "Xu, Jingjing", "Yuan, Jianbo", "Yang, Hongxia", "Wu, Fei", "Yang, Yang" ]
An Expert is Worth One Token: Synergizing Multiple Expert LLMs as Generalist via Expert Token Routing
acl-long.614
Poster
2403.16854
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.614/
[]
[]
[]
0
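The routing step itself is tiny: experts live in the meta model's vocabulary as reserved tokens, and decoding one of them hands the context over. A toy version where the registry and `meta_generate` callback are assumptions:

```python
# Expert-Token-Routing in miniature: if the meta LLM emits a reserved
# expert token, delegate the prompt to that expert LLM.
def route(meta_generate, experts, prompt):
    out = meta_generate(prompt)               # meta LLM decodes normally
    if out in experts:                        # emitted a special expert token
        return experts[out](prompt)           # delegate to that expert LLM
    return out

# Toy stand-ins for registered expert LLMs.
experts = {"<expert:math>": lambda p: "42",
           "<expert:code>": lambda p: "print('hello')"}
```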
https://aclanthology.org/2024.acl-long.615.bib
@inproceedings{fierro-etal-2024-learning, title = "Learning to Plan and Generate Text with Citations", author = "Fierro, Constanza and Amplayo, Reinald Kim and Huot, Fantine and De Cao, Nicola and Maynez, Joshua and Narayan, Shashi and Lapata, Mirella", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.615", pages = "11397--11417", abstract = "The increasing demand for the deployment of LLMs in information-seeking scenarios has spurred efforts in creating verifiable systems, which generate responses to queries along with supporting evidence. In this paper, we explore the attribution capabilities of plan-based models which have been recently shown to improve the faithfulness, grounding, and controllability of generated text. We conceptualize plans as a sequence of questions which serve as blueprints of the generated content and its organization. We propose two attribution models that utilize different variants of blueprints, an abstractive model where questions are generated from scratch, and an extractive model where questions are copied from the input. Experiments on long-form question-answering show that planning consistently improves attribution quality. Moreover, the citations generated by blueprint models are more accurate compared to those obtained from LLM-based pipelines lacking a planning component.", }
The increasing demand for the deployment of LLMs in information-seeking scenarios has spurred efforts in creating verifiable systems, which generate responses to queries along with supporting evidence. In this paper, we explore the attribution capabilities of plan-based models which have been recently shown to improve the faithfulness, grounding, and controllability of generated text. We conceptualize plans as a sequence of questions which serve as blueprints of the generated content and its organization. We propose two attribution models that utilize different variants of blueprints, an abstractive model where questions are generated from scratch, and an extractive model where questions are copied from the input. Experiments on long-form question-answering show that planning consistently improves attribution quality. Moreover, the citations generated by blueprint models are more accurate compared to those obtained from LLM-based pipelines lacking a planning component.
[ "Fierro, Constanza", "Amplayo, Reinald Kim", "Huot, Fantine", "De Cao, Nicola", "Maynez, Joshua", "Narayan, Shashi", "Lapata, Mirella" ]
Learning to Plan and Generate Text with Citations
acl-long.615
Poster
2404.03381
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.615/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.616.bib
@inproceedings{le-bronnec-etal-2024-exploring, title = "Exploring Precision and Recall to assess the quality and diversity of {LLM}s", author = "Le Bronnec, Florian and Verine, Alexandre and Negrevergne, Benjamin and Chevaleyre, Yann and Allauzen, Alexandre", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.616", pages = "11418--11441", abstract = "We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral, focusing on importing Precision and Recall metrics from image generation to text generation. This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora. By conducting a comprehensive evaluation of state-of-the-art language models, the study reveals new insights into their performance on open-ended generation tasks, which are not adequately captured by traditional benchmarks. The findings highlight a trade-off between the quality and diversity of generated samples, particularly when models are fine-tuned on instruction dataset or with human feedback. This work extends the toolkit for distribution-based NLP evaluation, offering insights into the practical capabilities and challenges that current LLMs face in generating diverse and high-quality text.", }
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral, focusing on importing Precision and Recall metrics from image generation to text generation. This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora. By conducting a comprehensive evaluation of state-of-the-art language models, the study reveals new insights into their performance on open-ended generation tasks, which are not adequately captured by traditional benchmarks. The findings highlight a trade-off between the quality and diversity of generated samples, particularly when models are fine-tuned on instruction datasets or with human feedback. This work extends the toolkit for distribution-based NLP evaluation, offering insights into the practical capabilities and challenges that current LLMs face in generating diverse and high-quality text.
[ "Le Bronnec, Florian", "Verine, Alex", "re", "Negrevergne, Benjamin", "Chevaleyre, Yann", "Allauzen, Alex", "re" ]
Exploring Precision and Recall to assess the quality and diversity of LLMs
acl-long.616
Poster
2402.10693
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.616/
[]
[]
[]
0
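The Precision/Recall imported from image generation is typically the k-NN manifold estimate of Kynkäänniemi et al. (2019); assuming that estimator here, a compact version over two sets of text embeddings:

```python
# k-NN Precision/Recall between real and generated embedding sets:
# precision = fraction of generated points inside the real manifold,
# recall = fraction of real points inside the generated manifold.
import numpy as np

def knn_radii(x: np.ndarray, k: int = 3) -> np.ndarray:
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]            # index 0 is the self-distance

def precision_recall(real: np.ndarray, fake: np.ndarray, k: int = 3):
    d = np.linalg.norm(fake[:, None] - real[None, :], axis=-1)  # (n_fake, n_real)
    precision = (d <= knn_radii(real, k)[None, :]).any(axis=1).mean()
    recall = (d.T <= knn_radii(fake, k)[None, :]).any(axis=1).mean()
    return float(precision), float(recall)
```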
https://aclanthology.org/2024.acl-long.617.bib
@inproceedings{lee-etal-2024-aligning, title = "Aligning Large Language Models by On-Policy Self-Judgment", author = "Lee, Sangkyu and Kim, Sungdong and Yousefpour, Ashkan and Seo, Minjoon and Yoo, Kang Min and Yu, Youngjae", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.617", pages = "11442--11459", abstract = "Existing approaches for aligning large language models with human preferences face a trade-off that requires a separate reward model (RM) for on-policy learning. In this paper, we present a novel alignment framework, SELF-JUDGE that (1) does on-policy learning and 2) is parameter efficient, as it does not require an additional RM for evaluating the samples for on-policy learning. To this end, we propose Judge-augmented Supervised Fine-Tuning (JSFT) to train a single model to act as both a policy and a judge. Specifically, we view the pairwise judgment task, choosing the better response from a response pair, as a special case of the instruction-following task. The resulting model can judge preferences of on-the-fly responses from current policy initialized from itself. Experimental results show the efficacy of SELF-JUDGE, outperforming baselines in preference benchmarks. We also show that the rejecting sampling by itself can improve performance further without an additional evaluator.", }
Existing approaches for aligning large language models with human preferences face a trade-off: they require a separate reward model (RM) for on-policy learning. In this paper, we present a novel alignment framework, SELF-JUDGE, that (1) does on-policy learning and (2) is parameter efficient, as it does not require an additional RM for evaluating the samples for on-policy learning. To this end, we propose Judge-augmented Supervised Fine-Tuning (JSFT) to train a single model to act as both a policy and a judge. Specifically, we view the pairwise judgment task, choosing the better response from a response pair, as a special case of the instruction-following task. The resulting model can judge preferences among on-the-fly responses from the current policy, initialized from itself. Experimental results show the efficacy of SELF-JUDGE, which outperforms baselines on preference benchmarks. We also show that rejection sampling by itself can further improve performance without an additional evaluator.
[ "Lee, Sangkyu", "Kim, Sungdong", "Yousefpour, Ashkan", "Seo, Minjoon", "Yoo, Kang Min", "Yu, Youngjae" ]
Aligning Large Language Models by On-Policy Self-Judgment
acl-long.617
Poster
2402.11253
[ "https://github.com/oddqueue/self-judge" ]
https://huggingface.co/papers/2402.11253
1
2
0
6
https://aclanthology.org/2024.acl-long.617/
[]
[]
[]
1
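Casting the pairwise judgment as instruction following, as JSFT does, needs nothing beyond a prompt template and a parse of the verdict. A sketch where the template wording and `generate` callback are assumptions:

```python
# The judgment task as plain instruction following: the same model that
# serves as the policy picks the better of two responses.
JUDGE_TEMPLATE = ("Instruction: {instruction}\n"
                  "Response A: {a}\n"
                  "Response B: {b}\n"
                  "Which response is better? Answer 'A' or 'B':")

def judge(generate, instruction, a, b):
    verdict = generate(JUDGE_TEMPLATE.format(instruction=instruction, a=a, b=b))
    return a if verdict.strip().upper().startswith("A") else b
```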
https://aclanthology.org/2024.acl-long.618.bib
@inproceedings{joshi-etal-2024-il, title = "{IL}-{TUR}: Benchmark for {I}ndian Legal Text Understanding and Reasoning", author = "Joshi, Abhinav and Paul, Shounak and Sharma, Akshat and Goyal, Pawan and Ghosh, Saptarshi and Modi, Ashutosh", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.618", pages = "11460--11499", abstract = "Legal systems worldwide are inundated with exponential growth in cases and documents. There is an imminent need to develop NLP and ML techniques for automatically processing and understanding legal documents to streamline the legal system. However, evaluating and comparing various NLP models designed specifically for the legal domain is challenging. This paper addresses this challenge by proposing IL-TUR: Benchmark for Indian Legal Text Understanding and Reasoning. IL-TUR contains monolingual (English, Hindi) and multi-lingual (9 Indian languages) domain-specific tasks that address different aspects of the legal system from the point of view of understanding and reasoning over Indian legal documents. We present baseline models (including LLM-based) for each task, outlining the gap between models and the ground truth. To foster further research in the legal domain, we create a leaderboard (available at: https://exploration-lab.github.io/IL-TUR/ ) where the research community can upload and compare legal text understanding systems.", }
Legal systems worldwide are inundated with exponential growth in cases and documents. There is an imminent need to develop NLP and ML techniques for automatically processing and understanding legal documents to streamline the legal system. However, evaluating and comparing various NLP models designed specifically for the legal domain is challenging. This paper addresses this challenge by proposing IL-TUR: Benchmark for Indian Legal Text Understanding and Reasoning. IL-TUR contains monolingual (English, Hindi) and multilingual (9 Indian languages) domain-specific tasks that address different aspects of the legal system from the point of view of understanding and reasoning over Indian legal documents. We present baseline models (including LLM-based ones) for each task, outlining the gap between models and the ground truth. To foster further research in the legal domain, we create a leaderboard (available at https://exploration-lab.github.io/IL-TUR/) where the research community can upload and compare legal text understanding systems.
[ "Joshi, Abhinav", "Paul, Shounak", "Sharma, Akshat", "Goyal, Pawan", "Ghosh, Saptarshi", "Modi, Ashutosh" ]
IL-TUR: Benchmark for Indian Legal Text Understanding and Reasoning
acl-long.618
Poster
2407.05399
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.618/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.619.bib
@inproceedings{chen-etal-2024-jumpcoder, title = "{J}ump{C}oder: Go Beyond Autoregressive Coder via Online Modification", author = "Chen, Mouxiang and Tian, Hao and Liu, Zhongxin and Ren, Xiaoxue and Sun, Jianling", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.619", pages = "11500--11520", abstract = "While existing code large language models (code LLMs) exhibit impressive capabilities in code generation, their autoregressive sequential generation inherently lacks reversibility. This limitation hinders them from timely correcting previous missing statements during coding as humans do, often leading to error propagation and suboptimal performance. We introduce JumpCoder, a novel model-agnostic framework that enables human-like online modification and non-sequential generation to augment code LLMs. The key idea behind JumpCoder is to insert new code into the currently generated code when necessary during generation, which is achieved through an auxiliary infilling model that works in tandem with the code LLM. Since identifying the best infill position beforehand is intractable, we adopt an infill-first, judge-later strategy, which experiments with filling at the $k$ most critical positions following the generation of each line, and uses an Abstract Syntax Tree (AST) parser alongside the Generation Model Scoring to effectively judge the validity of each potential infill. Extensive experiments using six state-of-the-art code LLMs across multiple and multilingual benchmarks consistently indicate significant improvements over all baselines. Our code is available in the uploaded attachment.", }
While existing code large language models (code LLMs) exhibit impressive capabilities in code generation, their autoregressive sequential generation inherently lacks reversibility. This limitation hinders them from promptly correcting previously missing statements during coding, as humans do, often leading to error propagation and suboptimal performance. We introduce JumpCoder, a novel model-agnostic framework that enables human-like online modification and non-sequential generation to augment code LLMs. The key idea behind JumpCoder is to insert new code into the currently generated code when necessary during generation, which is achieved through an auxiliary infilling model that works in tandem with the code LLM. Since identifying the best infill position beforehand is intractable, we adopt an infill-first, judge-later strategy, which experiments with filling at the $k$ most critical positions following the generation of each line, and uses an Abstract Syntax Tree (AST) parser alongside Generation Model Scoring to judge the validity of each potential infill. Extensive experiments using six state-of-the-art code LLMs across multiple and multilingual benchmarks consistently indicate significant improvements over all baselines. Our code is available in the uploaded attachment. (A simplified sketch of the infill-and-judge loop follows this record.)
[ "Chen, Mouxiang", "Tian, Hao", "Liu, Zhongxin", "Ren, Xiaoxue", "Sun, Jianling" ]
JumpCoder: Go Beyond Autoregressive Coder via Online Modification
acl-long.619
Poster
2401.07870
[ "https://github.com/keytoyze/jumpcoder" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.619/
[]
[]
[]
0
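A simplified, hypothetical sketch of the infill-first, judge-later loop described above. The AST check uses Python's standard `ast` module, matching the abstract's description; the position heuristic and the `propose_infill` stand-in are simplifications of the paper's k-most-critical-positions selection and auxiliary infilling model, and model-based scoring is omitted.

```python
import ast

def judge_infills(code: str, propose_infill, k: int = 3):
    """Try inserting an infilled line at a few candidate positions and
    keep only an insertion that still parses as valid Python."""
    lines = code.splitlines()
    # Try the k positions closest to the end: a crude stand-in for
    # the paper's "k most critical positions".
    for pos in range(len(lines), max(len(lines) - k, 0) - 1, -1):
        candidate_line = propose_infill(lines, pos)  # hypothetical model call
        patched = lines[:pos] + [candidate_line] + lines[pos:]
        try:
            ast.parse("\n".join(patched))            # AST validity check
            return "\n".join(patched)
        except SyntaxError:
            continue
    return code  # no valid infill found; keep original
```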
https://aclanthology.org/2024.acl-long.620.bib
@inproceedings{singh-etal-2024-aya, title = "Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning", author = {Singh, Shivalika and Vargus, Freddie and D{'}souza, Daniel and Karlsson, B{\"o}rje and Mahendiran, Abinaya and Ko, Wei-Yin and Shandilya, Herumb and Patel, Jay and Mataciunas, Deividas and O{'}Mahony, Laura and Zhang, Mike and Hettiarachchi, Ramith and Wilson, Joseph and Machado, Marina and Moura, Luisa and Krzemi{\'n}ski, Dominik and Fadaei, Hakimeh and Ergun, Irem and Okoh, Ifeoma and Alaagib, Aisha and Mudannayake, Oshan and Alyafeai, Zaid and Chien, Vu and Ruder, Sebastian and Guthikonda, Surya and Alghamdi, Emad and Gehrmann, Sebastian and Muennighoff, Niklas and Bartolo, Max and Kreutzer, Julia and {\"U}st{\"u}n, Ahmet and Fadaee, Marzieh and Hooker, Sara}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.620", pages = "11521--11567", abstract = "Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in the space of natural language processing (NLP) can be attributed to the fine-tuning of pre-trained models on a diverse set of tasks that enables a large language model (LLM) to respond to instructions. Instruction fine-tuning (IFT) requires specifically constructed and annotated datasets. However, existing datasets are almost all in the English language. In this work, our primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages. We worked with fluent speakers of languages from around the world to collect natural instances of instructions and completions. Furthermore, we create the most extensive multilingual collection to date, comprising 513 million instances through templating and augmenting existing datasets across 114 languages. In total, we contribute three key resources: we develop and open-source the Aya Dataset, the Aya Collection, and the Aya Evaluation Suite. The Aya initiative also serves as a valuable case study in participatory research, involving collaborators from 119 countries. We see this as an important framework for future research collaborations that aim to bridge gaps in resources.", }
Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in the space of natural language processing (NLP) can be attributed to the fine-tuning of pre-trained models on a diverse set of tasks that enables a large language model (LLM) to respond to instructions. Instruction fine-tuning (IFT) requires specifically constructed and annotated datasets. However, existing datasets are almost all in the English language. In this work, our primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages. We worked with fluent speakers of languages from around the world to collect natural instances of instructions and completions. Furthermore, we create the most extensive multilingual collection to date, comprising 513 million instances through templating and augmenting existing datasets across 114 languages. In total, we contribute three key resources: we develop and open-source the Aya Dataset, the Aya Collection, and the Aya Evaluation Suite. The Aya initiative also serves as a valuable case study in participatory research, involving collaborators from 119 countries. We see this as an important framework for future research collaborations that aim to bridge gaps in resources. (A short example of loading these resources follows this record.)
[ "Singh, Shivalika", "Vargus, Freddie", "D{'}souza, Daniel", "Karlsson, B{\\\"o}rje", "Mahendiran, Abinaya", "Ko, Wei-Yin", "Sh", "ilya, Herumb", "Patel, Jay", "Mataciunas, Deividas", "O{'}Mahony, Laura", "Zhang, Mike", "Hettiarachchi, Ramith", "Wilson, Joseph", "Machado, Marina", "Moura, Luisa", "Krzemi{\\'n}ski, Dominik", "Fadaei, Hakimeh", "Ergun, Irem", "Okoh, Ifeoma", "Alaagib, Aisha", "Mudannayake, Oshan", "Alyafeai, Zaid", "Chien, Vu", "Ruder, Sebastian", "Guthikonda, Surya", "Alghamdi, Emad", "Gehrmann, Sebastian", "Muennighoff, Niklas", "Bartolo, Max", "Kreutzer, Julia", "{\\\"U}st{\\\"u}n, Ahmet", "Fadaee, Marzieh", "Hooker, Sara" ]
Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning
acl-long.620
Poster
2402.06619
[ "" ]
https://huggingface.co/papers/2402.06619
28
51
1
33
https://aclanthology.org/2024.acl-long.620/
[ "AhmadMustafa/MobiLLama-Urdu-Article-Generation", "darkshapes/aya-23-8b-gguf" ]
[ "CohereForAI/aya_dataset", "CohereForAI/aya_collection", "CohereForAI/aya_collection_language_split", "CohereForAI/aya_evaluation_suite", "2A2I/Arabic_Aya", "Heng666/Traditional_Chinese-aya_collection", "Heng666/Traditional_Chinese-aya_evaluation_suite", "Cognitive-Lab/Aya_Gujarati", "Cognitive-Lab/Aya_Tamil", "SEACrowd/aya_evaluation_suite", "SEACrowd/aya_collection_templated", "SEACrowd/aya_dataset", "SEACrowd/aya_collection_translated", "Heng666/Traditional_Chinese-aya_dataset", "Cognitive-Lab/Aya_Kannada", "Cognitive-Lab/Aya_Hindi", "Cognitive-Lab/Aya_Telgu", "Cognitive-Lab/Aya_Malayalam", "Henok/aya_amharic_dataset" ]
[]
1
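The released resources live on the Hugging Face Hub (see the Datasets column in this record). A short loading sketch with `datasets.load_dataset`; the field names ("inputs", "targets", "language") are taken from the dataset card and should be re-checked against it.

```python
from datasets import load_dataset

# Load the human-curated Aya Dataset from the Hub.
aya = load_dataset("CohereForAI/aya_dataset", split="train")

example = aya[0]
# Field names assumed from the dataset card; verify before relying on them.
print(example["language"], "-", example["inputs"][:80])
```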
https://aclanthology.org/2024.acl-long.621.bib
@inproceedings{chatterjee-etal-2024-language, title = "Language Models can Exploit Cross-Task In-context Learning for Data-Scarce Novel Tasks", author = "Chatterjee, Anwoy and Tanwar, Eshaan and Dutta, Subhabrata and Chakraborty, Tanmoy", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.621", pages = "11568--11587", abstract = "Large Language Models (LLMs) have transformed NLP with their remarkable In-context Learning (ICL) capabilities. Automated assistants based on LLMs are gaining popularity; however, adapting them to novel tasks is still challenging. While colossal models excel in zero-shot performance, their computational demands limit widespread use, and smaller language models struggle without context. This paper investigates whether LLMs can generalize from labeled examples of predefined tasks to novel tasks. Drawing inspiration from biological neurons and the mechanistic interpretation of the Transformer architecture, we explore the potential for information sharing across tasks. We design a cross-task prompting setup with three LLMs and show that LLMs achieve significant performance improvements despite no examples from the target task in the context. Cross-task prompting leads to a remarkable performance boost of 107{\%} for LLaMA-2 7B, 18.6{\%} for LLaMA-2 13B, and 3.2{\%} for GPT 3.5 on average over zero-shot prompting, and performs comparable to standard in-context learning. The effectiveness of generating pseudo-labels for in-task examples is demonstrated, and our analyses reveal a strong correlation between the effect of cross-task examples and model activation similarities in source and target input tokens. This paper offers a first-of-its-kind exploration of LLMs{'} ability to solve novel tasks based on contextual signals from different task examples.", }
Large Language Models (LLMs) have transformed NLP with their remarkable In-context Learning (ICL) capabilities. Automated assistants based on LLMs are gaining popularity; however, adapting them to novel tasks is still challenging. While colossal models excel in zero-shot performance, their computational demands limit widespread use, and smaller language models struggle without context. This paper investigates whether LLMs can generalize from labeled examples of predefined tasks to novel tasks. Drawing inspiration from biological neurons and the mechanistic interpretation of the Transformer architecture, we explore the potential for information sharing across tasks. We design a cross-task prompting setup with three LLMs and show that LLMs achieve significant performance improvements despite no examples from the target task in the context. Cross-task prompting leads to a remarkable performance boost of 107{\%} for LLaMA-2 7B, 18.6{\%} for LLaMA-2 13B, and 3.2{\%} for GPT 3.5 on average over zero-shot prompting, and performs comparably to standard in-context learning. The effectiveness of generating pseudo-labels for in-task examples is demonstrated, and our analyses reveal a strong correlation between the effect of cross-task examples and model activation similarities in source and target input tokens. This paper offers a first-of-its-kind exploration of LLMs{'} ability to solve novel tasks based on contextual signals from different task examples. (A hypothetical sketch of the cross-task prompt layout follows this record.)
[ "Chatterjee, Anwoy", "Tanwar, Eshaan", "Dutta, Subhabrata", "Chakraborty, Tanmoy" ]
Language Models can Exploit Cross-Task In-context Learning for Data-Scarce Novel Tasks
acl-long.621
Poster
2405.10548
[ "https://github.com/c-anwoy/cross-task-icl" ]
https://huggingface.co/papers/2405.10548
1
0
0
4
https://aclanthology.org/2024.acl-long.621/
[]
[]
[]
1
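A hypothetical sketch of what a cross-task prompt can look like: demonstrations drawn from a labeled *source* task, followed by a query from the unseen *target* task. The exact templates used in the paper (see the linked repo, github.com/c-anwoy/cross-task-icl) may differ.

```python
def cross_task_prompt(source_examples, target_instruction, target_input):
    """Build a prompt whose demonstrations come from a different task
    than the query. `source_examples` is a list of dicts with
    'instruction', 'input', and 'output' keys (assumed schema)."""
    demos = "\n\n".join(
        f"Task: {ex['instruction']}\nInput: {ex['input']}\nOutput: {ex['output']}"
        for ex in source_examples
    )
    return (f"{demos}\n\n"
            f"Task: {target_instruction}\nInput: {target_input}\nOutput:")
```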
https://aclanthology.org/2024.acl-long.622.bib
@inproceedings{ponce-martinez-etal-2024-split, title = "Split and Rephrase with Large Language Models", author = "Ponce Mart{\'\i}nez, Antonio David and Etchegoyhen, Thierry and Calleja Perez, Jesus Javier and Gete, Harritxu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.622", pages = "11588--11607", abstract = "The Split and Rephrase (SPRP) task, which consists in splitting complex sentences into a sequence of shorter grammatical sentences, while preserving the original meaning, can facilitate the processing of complex texts for humans and machines alike. It is also a valuable testbed to evaluate natural language processing models, as it requires modelling complex grammatical aspects. In this work, we evaluate large language models on the task, showing that they can provide large improvements over the state of the art on the main metrics, although still lagging in terms of splitting compliance. Results from two human evaluations further support the conclusions drawn from automated metric results. We provide a comprehensive study that includes prompting variants, domain shift, fine-tuned pretrained language models of varying parameter size and training data volumes, contrasted with both zero-shot and few-shot approaches on instruction-tuned language models. Although the latter were markedly outperformed by fine-tuned models, they may constitute a reasonable off-the-shelf alternative. Our results provide a fine-grained analysis of the potential and limitations of large language models for SPRP, with significant improvements achievable using relatively small amounts of training data and model parameters overall, and remaining limitations for all models on the task.", }
The Split and Rephrase (SPRP) task, which consists of splitting complex sentences into a sequence of shorter grammatical sentences while preserving the original meaning, can facilitate the processing of complex texts for humans and machines alike. It is also a valuable testbed to evaluate natural language processing models, as it requires modelling complex grammatical aspects. In this work, we evaluate large language models on the task, showing that they can provide large improvements over the state of the art on the main metrics, although still lagging in terms of splitting compliance. Results from two human evaluations further support the conclusions drawn from automated metric results. We provide a comprehensive study that includes prompting variants, domain shift, fine-tuned pretrained language models of varying parameter sizes and training data volumes, contrasted with both zero-shot and few-shot approaches on instruction-tuned language models. Although the latter were markedly outperformed by fine-tuned models, they may constitute a reasonable off-the-shelf alternative. Our results provide a fine-grained analysis of the potential and limitations of large language models for SPRP, with significant improvements achievable using relatively small amounts of training data and model parameters overall, and remaining limitations for all models on the task. (An illustrative input/output pair follows this record.)
[ "Ponce Mart{\\'\\i}nez, Antonio David", "Etchegoyhen, Thierry", "Calleja Perez, Jesus Javier", "Gete, Harritxu" ]
Split and Rephrase with Large Language Models
acl-long.622
Poster
2312.11075
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.622/
[]
[]
[]
0
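An illustrative, hand-constructed SPRP input/output pair (not drawn from the paper's data), showing the expected behaviour of the task:

```python
# Hand-written example of the Split and Rephrase task: one complex
# sentence becomes several shorter sentences with the same meaning.
complex_sentence = (
    "The committee, which met on Tuesday, approved the budget "
    "that the mayor had proposed in March."
)
expected_split = [
    "The committee met on Tuesday.",
    "The committee approved the budget.",
    "The mayor had proposed the budget in March.",
]
```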
https://aclanthology.org/2024.acl-long.623.bib
@inproceedings{ye-etal-2024-chunkattention, title = "{C}hunk{A}ttention: Efficient Self-Attention with Prefix-Aware {KV} Cache and Two-Phase Partition", author = "Ye, Lu and Tao, Ze and Huang, Yong and Li, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.623", pages = "11608--11620", abstract = "Self-attention is an essential component of large language models (LLM) but a significant source of inference latency for long sequences. In multi-tenant LLMs serving scenarios, the compute and memory operation cost of self-attention can be optimized by using the probability that multiple LLM requests have shared system prompts in prefixes. In this paper, we introduce ChunkAttention, a prefix-aware self-attention module that can detect matching prompt prefixes across multiple requests and share their key/value tensors in memory at runtime to improve the memory utilization of KV cache. This is achieved by breaking monolithic key/value tensors into smaller chunks and structuring them into the auxiliary prefix tree. Consequently, on top of the prefix-tree based KV cache, we design an efficient self-attention kernel, where a two-phase partition algorithm is implemented to improve the data locality during self-attention computation in the presence of shared system prompts. Experiments show that ChunkAttention can speed up the self-attention kernel by 3.2-4.8$\times$ compared to the start-of-the-art implementation, with the length of the system prompt ranging from 1024 to 4096.", }
Self-attention is an essential component of large language models (LLMs) but a significant source of inference latency for long sequences. In multi-tenant LLM serving scenarios, the compute and memory operation cost of self-attention can be optimized by exploiting the probability that multiple LLM requests share system prompts in their prefixes. In this paper, we introduce ChunkAttention, a prefix-aware self-attention module that can detect matching prompt prefixes across multiple requests and share their key/value tensors in memory at runtime to improve the memory utilization of the KV cache. This is achieved by breaking monolithic key/value tensors into smaller chunks and structuring them into an auxiliary prefix tree. Consequently, on top of the prefix-tree-based KV cache, we design an efficient self-attention kernel, where a two-phase partition algorithm is implemented to improve data locality during self-attention computation in the presence of shared system prompts. Experiments show that ChunkAttention can speed up the self-attention kernel by 3.2-4.8$\times$ compared to the state-of-the-art implementation, with the length of the system prompt ranging from 1024 to 4096. (A toy prefix-tree sketch follows this record.)
[ "Ye, Lu", "Tao, Ze", "Huang, Yong", "Li, Yang" ]
ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition
acl-long.623
Poster
2402.15220
[ "https://github.com/microsoft/chunk-attention" ]
https://huggingface.co/papers/2402.15220
0
19
3
4
https://aclanthology.org/2024.acl-long.623/
[]
[]
[]
1
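A toy sketch of the prefix-aware KV cache idea: token sequences are chunked and stored in a trie so that requests sharing a system-prompt prefix share the same nodes, to which the cached key/value tensors would be attached. The chunk size and node layout here are invented, and the two-phase kernel is out of scope.

```python
CHUNK = 4  # toy chunk size; the real system chunks token sequences too

class ChunkNode:
    def __init__(self):
        self.children = {}   # chunk tuple -> ChunkNode
        self.kv = None       # placeholder for cached K/V tensors

def insert_sequence(root, token_ids):
    """Walk/extend the trie chunk by chunk; shared prefixes reuse nodes."""
    node = root
    for i in range(0, len(token_ids), CHUNK):
        chunk = tuple(token_ids[i:i + CHUNK])
        node = node.children.setdefault(chunk, ChunkNode())
    return node

root = ChunkNode()
shared_prefix = [1, 2, 3, 4, 5, 6, 7, 8]        # e.g., a shared system prompt
insert_sequence(root, shared_prefix + [9, 10])   # request 1
insert_sequence(root, shared_prefix + [11, 12])  # request 2
assert len(root.children) == 1  # the prefix chunks are stored only once
```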
https://aclanthology.org/2024.acl-long.624.bib
@inproceedings{liu-etal-2024-alignbench, title = "{A}lign{B}ench: Benchmarking {C}hinese Alignment of Large Language Models", author = "Liu, Xiao and Lei, Xuanyu and Wang, Shengyuan and Huang, Yue and Feng, Andrew and Wen, Bosi and Cheng, Jiale and Ke, Pei and Xu, Yifan and Tam, Weng Lam and Zhang, Xiaohan and Sun, Lichao and Gu, Xiaotao and Wang, Hongning and Zhang, Jing and Huang, Minlie and Dong, Yuxiao and Tang, Jie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.624", pages = "11621--11640", abstract = "Alignment has become a critical step for instruction-tuned Large Language Models (LLMs) to become helpful assistants. However, effective evaluation of alignment for emerging Chinese LLMs is still significantly lacking, calling for real-scenario grounded, open-ended, challenging and automatic evaluations tailored for alignment. To fill in this gap, we introduce AlignBench, a comprehensive multi-dimensional benchmark for evaluating LLMs{'} alignment in Chinese. We tailor a human-in-the-loop data curation pipeline, containing 8 main categories, 683 real-scenario rooted queries and corresponding human verified references.To ensure references{'} correctness, each knowledge-intensive query is accompanied with evidences collected from reliable webpages (including the url and quotation) by our annotators.For automatic evaluation, our benchmark employs a rule-calibrated multi-dimensional LLM-as-Judge (CITATION) with Chain-of-Thought to generate explanations and final ratings as evaluations, ensuring high reliability and interpretability.All evaluation codes and data are publicly available at \url{https://github.com/THUDM/AlignBench}", }
Alignment has become a critical step for instruction-tuned Large Language Models (LLMs) to become helpful assistants. However, effective evaluation of alignment for emerging Chinese LLMs is still significantly lacking, calling for real-scenario grounded, open-ended, challenging and automatic evaluations tailored for alignment. To fill in this gap, we introduce AlignBench, a comprehensive multi-dimensional benchmark for evaluating LLMs{'} alignment in Chinese. We tailor a human-in-the-loop data curation pipeline, containing 8 main categories, 683 real-scenario rooted queries and corresponding human-verified references. To ensure the references{'} correctness, each knowledge-intensive query is accompanied by evidence collected from reliable webpages (including the URL and quotation) by our annotators. For automatic evaluation, our benchmark employs a rule-calibrated multi-dimensional LLM-as-Judge (CITATION) with Chain-of-Thought to generate explanations and final ratings as evaluations, ensuring high reliability and interpretability. All evaluation codes and data are publicly available at \url{https://github.com/THUDM/AlignBench}. (A hypothetical judge-prompt sketch follows this record.)
[ "Liu, Xiao", "Lei, Xuanyu", "Wang, Shengyuan", "Huang, Yue", "Feng, Andrew", "Wen, Bosi", "Cheng, Jiale", "Ke, Pei", "Xu, Yifan", "Tam, Weng Lam", "Zhang, Xiaohan", "Sun, Lichao", "Gu, Xiaotao", "Wang, Hongning", "Zhang, Jing", "Huang, Minlie", "Dong, Yuxiao", "Tang, Jie" ]
AlignBench: Benchmarking Chinese Alignment of Large Language Models
acl-long.624
Poster
2311.18743
[ "https://github.com/thudm/alignbench" ]
https://huggingface.co/papers/2311.18743
0
1
0
17
https://aclanthology.org/2024.acl-long.624/
[ "deepseek-ai/DeepSeek-V2-Chat", "deepseek-ai/DeepSeek-V2", "CofeAI/FLM-2-52B-Instruct-2407" ]
[]
[ "allenai/WildBench", "allenai/ZebraLogic", "Justinrune/LLaMA-Factory", "xzuyn/Token-Count-Comparison", "concedo/WebTokenizer", "kenken999/fastapi_django_main_live" ]
1
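A hypothetical judge-prompt sketch in the spirit of the rule-calibrated multi-dimensional LLM-as-Judge described above; the actual templates and rating dimensions are defined in the AlignBench repository and will differ in wording.

```python
import re

# Invented prompt and dimensions, for illustration only.
JUDGE_PROMPT = """You are grading a model answer to a Chinese query.
Query: {query}
Reference answer: {reference}
Model answer: {answer}

First reason step by step, then rate each dimension from 1 to 10.
Finish with exactly one line:
FINAL SCORES: correctness=<n> helpfulness=<n> clarity=<n>"""

def parse_scores(judge_output: str) -> dict:
    """Pull the dimension ratings out of the judge's final line."""
    m = re.search(r"correctness=(\d+)\s+helpfulness=(\d+)\s+clarity=(\d+)",
                  judge_output)
    keys = ("correctness", "helpfulness", "clarity")
    return dict(zip(keys, map(int, m.groups()))) if m else {}
```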
https://aclanthology.org/2024.acl-long.625.bib
@inproceedings{zhao-etal-2024-sapt, title = "{SAPT}: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models", author = "Zhao, Weixiang and Wang, Shilong and Hu, Yulin and Zhao, Yanyan and Qin, Bing and Zhang, Xuanyu and Yang, Qing and Xu, Dongliang and Che, Wanxiang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.625", pages = "11641--11661", abstract = "The continual learning (CL) ability is vital for deploying large language models (LLMs) in the dynamic world. Existing methods devise the learning module to acquire task-specific knowledge with parameter-efficient tuning (PET) block and the selection module to pick out the corresponding one for the testing input, aiming at handling the challenges of catastrophic forgetting and knowledge transfer in CL. However, these methods tend to address only one of the challenges, ignoring the potential of aligning the two modules to effectively address catastrophic forgetting and knowledge transfer simultaneously. To this end, we propose a novel Shared Attention Framework (SAPT), to align the PET learning and selection via the Shared Attentive Learning {\&} Selection module. Extensive Experiments on two CL benchmarks demonstrate the superiority of SAPT. Moreover, SAPT consistently demonstrates its superiority when we scale it to different model sizes (from 770M to 13B), different model architectures (T5 and LLaMA-2) and unseen tasks.", }
The continual learning (CL) ability is vital for deploying large language models (LLMs) in the dynamic world. Existing methods devise a learning module to acquire task-specific knowledge with a parameter-efficient tuning (PET) block and a selection module to pick out the corresponding one for the testing input, aiming at handling the challenges of catastrophic forgetting and knowledge transfer in CL. However, these methods tend to address only one of the challenges, ignoring the potential of aligning the two modules to effectively address catastrophic forgetting and knowledge transfer simultaneously. To this end, we propose a novel Shared Attention Framework (SAPT) to align PET learning and selection via the Shared Attentive Learning {\&} Selection module. Extensive experiments on two CL benchmarks demonstrate the superiority of SAPT. Moreover, SAPT consistently demonstrates its superiority when we scale it to different model sizes (from 770M to 13B), different model architectures (T5 and LLaMA-2) and unseen tasks.
[ "Zhao, Weixiang", "Wang, Shilong", "Hu, Yulin", "Zhao, Yanyan", "Qin, Bing", "Zhang, Xuanyu", "Yang, Qing", "Xu, Dongliang", "Che, Wanxiang" ]
SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models
acl-long.625
Poster
2401.08295
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.625/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.626.bib
@inproceedings{mao-etal-2024-dora, title = "{D}o{RA}: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution", author = "Mao, Yulong and Huang, Kaiyu and Guan, Changhao and Bao, Ganglin and Mo, Fengran and Xu, Jinan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.626", pages = "11662--11675", abstract = "Fine-tuning large-scale pre-trained models is inherently a resource-intensive task. While it can enhance the capabilities of the model, it also incurs substantial computational costs, posing challenges to the practical application of downstream tasks. Existing parameter-efficient fine-tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) rely on a bypass framework that ignores the differential parameter budget requirements across weight matrices, which may lead to suboptimal fine-tuning outcomes. To address this issue, we introduce the Dynamic Low-Rank Adaptation (DoRA) method. DoRA decomposes high-rank LoRA layers into structured single-rank components, allowing for dynamic pruning of parameter budget based on their importance to specific tasks during training, which makes the most of the limited parameter budget. Experimental results demonstrate that DoRA can achieve competitive performance compared with LoRA and full model fine-tuning, and outperform various strong baselines with the same storage parameter budget. Our code is available at [github](https://github.com/MIkumikumi0116/DoRA)", }
Fine-tuning large-scale pre-trained models is inherently a resource-intensive task. While it can enhance the capabilities of the model, it also incurs substantial computational costs, posing challenges to the practical application of downstream tasks. Existing parameter-efficient fine-tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) rely on a bypass framework that ignores the differential parameter budget requirements across weight matrices, which may lead to suboptimal fine-tuning outcomes. To address this issue, we introduce the Dynamic Low-Rank Adaptation (DoRA) method. DoRA decomposes high-rank LoRA layers into structured single-rank components, allowing for dynamic pruning of the parameter budget based on their importance to specific tasks during training, which makes the most of the limited parameter budget. Experimental results demonstrate that DoRA can achieve competitive performance compared with LoRA and full model fine-tuning, and outperform various strong baselines with the same storage parameter budget. Our code is available at [github](https://github.com/MIkumikumi0116/DoRA). (A toy single-rank-component sketch follows this record.)
[ "Mao, Yulong", "Huang, Kaiyu", "Guan, Changhao", "Bao, Ganglin", "Mo, Fengran", "Xu, Jinan" ]
DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution
acl-long.626
Oral
2405.17357
[ "https://github.com/yulongmao1/dora" ]
https://huggingface.co/papers/2405.17357
0
0
0
6
https://aclanthology.org/2024.acl-long.626/
[]
[]
[]
1
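A toy PyTorch sketch of the core decomposition: a rank-r update written as a sum of single-rank components, each behind a scalar gate that can be pruned. The importance scoring and budget schedule from the paper are replaced here by a simple magnitude threshold, so this is intuition, not the paper's method.

```python
import torch
import torch.nn as nn

class SingleRankAdapter(nn.Module):
    """Rank-r LoRA-style update as r gated rank-1 components:
    delta(x) = sum_i gate_i * (x @ A_i) * B_i."""
    def __init__(self, d_in, d_out, r=8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # (r, d_in)
        self.B = nn.Parameter(torch.zeros(d_out, r))        # (d_out, r)
        self.gate = nn.Parameter(torch.ones(r))             # one gate per component

    def forward(self, x):                    # x: (..., d_in)
        # ((x A^T) * gate) B^T == sum over gated rank-1 components
        return (x @ self.A.t()) * self.gate @ self.B.t()

    @torch.no_grad()
    def prune(self, threshold=1e-3):
        """Zero out components whose gate magnitude is below threshold
        (a crude stand-in for the paper's importance-based pruning)."""
        self.gate.mul_((self.gate.abs() > threshold).float())
```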
https://aclanthology.org/2024.acl-long.627.bib
@inproceedings{wang-etal-2024-cross, title = "Cross-Lingual Knowledge Editing in Large Language Models", author = "Wang, Jiaan and Liang, Yunlong and Sun, Zengkui and Cao, Yuxuan and Xu, Jiarong and Meng, Fandong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.627", pages = "11676--11686", abstract = "Knowledge editing aims to change language models{'} performance on several special cases (i.e., editing scope) by infusing the corresponding expected knowledge into them. With the recent advancements in large language models (LLMs), knowledge editing has been shown as a promising technique to adapt LLMs to new knowledge without retraining from scratch. However, most of the previous studies neglect the multi-lingual nature of some main-stream LLMs (e.g., LLaMA, ChatGPT and GPT-4), and typically focus on monolingual scenarios, where LLMs are edited and evaluated in the same language. As a result, it is still unknown the effect of source language editing on a different target language. In this paper, we aim to figure out this cross-lingual effect in knowledge editing. Specifically, we first collect a large-scale cross-lingual synthetic dataset by translating ZsRE from English to Chinese. Then, we conduct English editing on various knowledge editing methods covering different paradigms, and evaluate their performance in Chinese, and vice versa. To give deeper analyses of the cross-lingual effect, the evaluation includes four aspects, i.e., reliability, generality, locality and portability. Furthermore, we analyze the inconsistent behaviors of the edited models and discuss their specific challenges.", }
Knowledge editing aims to change language models{'} performance on several special cases (i.e., the editing scope) by infusing the corresponding expected knowledge into them. With the recent advancements in large language models (LLMs), knowledge editing has been shown to be a promising technique to adapt LLMs to new knowledge without retraining from scratch. However, most previous studies neglect the multi-lingual nature of some main-stream LLMs (e.g., LLaMA, ChatGPT and GPT-4), and typically focus on monolingual scenarios, where LLMs are edited and evaluated in the same language. As a result, the effect of editing in a source language on a different target language is still unknown. In this paper, we aim to figure out this cross-lingual effect in knowledge editing. Specifically, we first collect a large-scale cross-lingual synthetic dataset by translating ZsRE from English to Chinese. Then, we conduct English editing on various knowledge editing methods covering different paradigms, evaluate their performance in Chinese, and vice versa. To give deeper analyses of the cross-lingual effect, the evaluation includes four aspects, i.e., reliability, generality, locality and portability. Furthermore, we analyze the inconsistent behaviors of the edited models and discuss their specific challenges.
[ "Wang, Jiaan", "Liang, Yunlong", "Sun, Zengkui", "Cao, Yuxuan", "Xu, Jiarong", "Meng, F", "ong" ]
Cross-Lingual Knowledge Editing in Large Language Models
acl-long.627
Poster
2309.08952
[ "https://github.com/krystalan/bi_zsre" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.627/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.628.bib
@inproceedings{yeginbergen-etal-2024-argument, title = "Argument Mining in Data Scarce Settings: Cross-lingual Transfer and Few-shot Techniques", author = "Yeginbergen, Anar and Oronoz, Maite and Agerri, Rodrigo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.628", pages = "11687--11699", abstract = "Recent research on sequence labelling has been exploring different strategies to mitigate the lack of manually annotated data for the large majority of the world languages. Among others, the most successful approaches have been based on (i) the crosslingual transfer capabilities of multilingual pre-trained language models (model-transfer), (ii) data translation and label projection (data-transfer) and (iii), prompt-based learning by reusing the mask objective to exploit the few-shot capabilities of pre-trained language models (few-shot). Previous work seems to conclude that model-transfer outperform data-transfer methods and that few-shot techniques based on prompting are superior to updating the model{'}s weights via fine-tuning. In this paper we empirically demonstrate that, for Argument Mining, a sequence labelling task which requires the detection of long and complex discourse structures, previous insights on crosslingual transfer or few-shot learning do not apply. Contrary to previous work, we show that for Argument Mining data-transfer obtains better results than model-transfer and that fine-tuning outperforms few-shot methods. Regarding the former, the domain of the dataset used for data-transfer seems to be a deciding factor, while, for few-shot, the type of task (length and complexity of the sequence spans) and sampling method proves to be crucial.", }
Recent research on sequence labelling has been exploring different strategies to mitigate the lack of manually annotated data for the large majority of the world's languages. Among others, the most successful approaches have been based on (i) the cross-lingual transfer capabilities of multilingual pre-trained language models (model-transfer), (ii) data translation and label projection (data-transfer) and (iii) prompt-based learning by reusing the mask objective to exploit the few-shot capabilities of pre-trained language models (few-shot). Previous work seems to conclude that model-transfer outperforms data-transfer methods and that few-shot techniques based on prompting are superior to updating the model{'}s weights via fine-tuning. In this paper we empirically demonstrate that, for Argument Mining, a sequence labelling task which requires the detection of long and complex discourse structures, previous insights on cross-lingual transfer or few-shot learning do not apply. Contrary to previous work, we show that for Argument Mining data-transfer obtains better results than model-transfer and that fine-tuning outperforms few-shot methods. Regarding the former, the domain of the dataset used for data-transfer seems to be a deciding factor, while, for few-shot, the type of task (length and complexity of the sequence spans) and the sampling method prove to be crucial.
[ "Yeginbergen, Anar", "Oronoz, Maite", "Agerri, Rodrigo" ]
Argument Mining in Data Scarce Settings: Cross-lingual Transfer and Few-shot Techniques
acl-long.628
Poster
2407.03748
[ "https://github.com/anaryegen/few_shot_argument_mining" ]
https://huggingface.co/papers/2407.03748
1
0
0
3
https://aclanthology.org/2024.acl-long.628/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.629.bib
@inproceedings{wen-etal-2024-learning, title = "Learning Task Decomposition to Assist Humans in Competitive Programming", author = "Wen, Jiaxin and Zhong, Ruiqi and Ke, Pei and Shao, Zhihong and Wang, Hongning and Huang, Minlie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.629", pages = "11700--11723", abstract = "When using language models (LMs) to solve complex problems, humans might struggle to understand the LM-generated solutions and repair the flawed ones. To assist humans in repairing them, we propose to automatically decompose complex solutions into multiple simpler pieces that correspond to specific subtasks. We introduce a novel objective for learning task decomposition, termed assistive value (AssistV), which measures the feasibility and speed for humans to repair the decomposed solution. We collect a dataset of human repair experiences on different decomposed solutions. Utilizing the collected data as in-context examples, we then learn to critique, refine, and rank decomposed solutions to improve AssistV. We validate our method under competitive programming problems: under 177 hours of human study, our method enables non-experts to solve 33.3{\%} more problems, speeds them up by 3.3x, and empowers them to match unassisted experts.", }
When using language models (LMs) to solve complex problems, humans might struggle to understand the LM-generated solutions and repair the flawed ones. To assist humans in repairing them, we propose to automatically decompose complex solutions into multiple simpler pieces that correspond to specific subtasks. We introduce a novel objective for learning task decomposition, termed assistive value (AssistV), which measures the feasibility and speed with which humans can repair the decomposed solution. We collect a dataset of human repair experiences on different decomposed solutions. Utilizing the collected data as in-context examples, we then learn to critique, refine, and rank decomposed solutions to improve AssistV. We validate our method on competitive programming problems: in 177 hours of human study, our method enables non-experts to solve 33.3{\%} more problems, speeds them up by 3.3x, and empowers them to match unassisted experts.
[ "Wen, Jiaxin", "Zhong, Ruiqi", "Ke, Pei", "Shao, Zhihong", "Wang, Hongning", "Huang, Minlie" ]
Learning Task Decomposition to Assist Humans in Competitive Programming
acl-long.629
Poster
2406.04604
[ "" ]
https://huggingface.co/papers/2406.04604
5
2
2
6
https://aclanthology.org/2024.acl-long.629/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.630.bib
@inproceedings{lu-etal-2024-entropy, title = "An Entropy-based Text Watermarking Detection Method", author = "Lu, Yijian and Liu, Aiwei and Yu, Dianzhi and Li, Jingjing and King, Irwin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.630", pages = "11724--11735", abstract = "Text watermarking algorithms for large language models (LLMs) can effectively identify machine-generated texts by embedding and detecting hidden features in the text. Although the current text watermarking algorithms perform well in most high-entropy scenarios, its performance in low-entropy scenarios still needs to be improved. In this work, we opine that the influence of token entropy should be fully considered in the watermark detection process, $i.e.$, the weight of each token during watermark detection should be customized according to its entropy, rather than setting the weights of all tokens to the same value as in previous methods. Specifically, we propose \textbf{E}ntropy-based Text \textbf{W}atermarking \textbf{D}etection (\textbf{EWD}) that gives higher-entropy tokens higher influence weights during watermark detection, so as to better reflect the degree of watermarking. Furthermore, the proposed detection process is training-free and fully automated. From the experiments, we demonstrate that our EWD can achieve better detection performance in low-entropy scenarios, and our method is also general and can be applied to texts with different entropy distributions. Our code and data is available. Additionally, our algorithm could be accessed through MarkLLM (CITATION).", }
Text watermarking algorithms for large language models (LLMs) can effectively identify machine-generated texts by embedding and detecting hidden features in the text. Although current text watermarking algorithms perform well in most high-entropy scenarios, their performance in low-entropy scenarios still needs to be improved. In this work, we opine that the influence of token entropy should be fully considered in the watermark detection process, i.e., the weight of each token during watermark detection should be customized according to its entropy, rather than setting the weights of all tokens to the same value as in previous methods. Specifically, we propose \textbf{E}ntropy-based Text \textbf{W}atermarking \textbf{D}etection (\textbf{EWD}), which gives higher-entropy tokens higher influence weights during watermark detection, so as to better reflect the degree of watermarking. Furthermore, the proposed detection process is training-free and fully automated. Our experiments demonstrate that EWD achieves better detection performance in low-entropy scenarios, and that the method is general and can be applied to texts with different entropy distributions. Our code and data are available. Additionally, our algorithm can be accessed through MarkLLM (CITATION). (A toy entropy-weighted detection statistic follows this record.)
[ "Lu, Yijian", "Liu, Aiwei", "Yu, Dianzhi", "Li, Jingjing", "King, Irwin" ]
An Entropy-based Text Watermarking Detection Method
acl-long.630
Poster
2403.13485
[ "https://github.com/luyijian3/ewd" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.630/
[]
[]
[]
0
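A toy version of an entropy-weighted detection statistic consistent with the description above: each token's green-list membership is weighted by its (precomputed) entropy, and a weighted z-like score is formed. Thresholds, the entropy computation itself, and edge cases are left out; `gamma` denotes the assumed green-list fraction from the embedding scheme, so treat this as a sketch rather than the paper's exact detector.

```python
import math

def ewd_score(is_green, entropies, gamma=0.5):
    """Entropy-weighted green-token test statistic.

    is_green:  list of bools, whether each token fell in the green list
    entropies: list of floats, per-token entropy (higher -> more weight)
    """
    observed = sum(w for w, g in zip(entropies, is_green) if g)
    expected = gamma * sum(entropies)
    # Variance of a weighted sum of independent Bernoulli(gamma) draws.
    variance = gamma * (1 - gamma) * sum(w * w for w in entropies)
    return (observed - expected) / math.sqrt(variance)

# A large positive score suggests a watermark; the decision threshold
# must be calibrated, as in the paper and its EWD repository.
```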
https://aclanthology.org/2024.acl-long.631.bib
@inproceedings{zhou-etal-2024-enhancing-explainable, title = "Enhancing Explainable Rating Prediction through Annotated Macro Concepts", author = "Zhou, Huachi and Zhou, Shuang and Chen, Hao and Liu, Ninghao and Yang, Fan and Huang, Xiao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.631", pages = "11736--11748", abstract = "Generating recommendation reasons for recommendation results is a long-standing problem because it is challenging to explain the underlying reasons for recommending an item based on user and item IDs. Existing models usually learn semantic embeddings for each user and item, and generate the reasons according to the embeddings of the user-item pair. However, user and item IDs do not carry inherent semantic meaning, thus the limited number of reviews cannot model users{'} preferences and item characteristics effectively, negatively affecting the model generalization for unseen user-item pairs.To tackle the problem, we propose the Concept Enhanced Explainable Recommendation framework (CEER), which utilizes macro concepts as the intermediary to bridge the gap between the user/item embeddings and the recommendation reasons. Specifically, we maximize the information bottleneck to extract macro concepts from user-item reviews. Then, for recommended user-item pairs, we jointly train the concept embeddings with the user and item embeddings, and generate the explanation according to the concepts. Extensive experiments on three datasets verify the superiority of our CEER model.", }
Generating recommendation reasons for recommendation results is a long-standing problem because it is challenging to explain the underlying reasons for recommending an item based only on user and item IDs. Existing models usually learn semantic embeddings for each user and item, and generate the reasons according to the embeddings of the user-item pair. However, user and item IDs do not carry inherent semantic meaning, so the limited number of reviews cannot model users{'} preferences and item characteristics effectively, negatively affecting model generalization for unseen user-item pairs. To tackle the problem, we propose the Concept Enhanced Explainable Recommendation framework (CEER), which utilizes macro concepts as an intermediary to bridge the gap between the user/item embeddings and the recommendation reasons. Specifically, we maximize the information bottleneck to extract macro concepts from user-item reviews. Then, for recommended user-item pairs, we jointly train the concept embeddings with the user and item embeddings, and generate the explanation according to the concepts. Extensive experiments on three datasets verify the superiority of our CEER model.
[ "Zhou, Huachi", "Zhou, Shuang", "Chen, Hao", "Liu, Ninghao", "Yang, Fan", "Huang, Xiao" ]
Enhancing Explainable Rating Prediction through Annotated Macro Concepts
acl-long.631
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.631/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.632.bib
@inproceedings{cui-etal-2024-engage, title = "How to Engage your Readers? Generating Guiding Questions to Promote Active Reading", author = "Cui, Peng and Zouhar, Vil{\'e}m and Zhang, Xiaoyu and Sachan, Mrinmaya", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.632", pages = "11749--11765", abstract = "Using questions in written text is an effective strategy to enhance readability. However, what makes an active reading question good, what the linguistic role of these questions is, and what is their impact on human reading remains understudied. We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles. By analyzing the dataset, we present a comprehensive understanding of the use, distribution, and linguistic characteristics of these questions. Then, we explore various approaches to generate such questions using language models. Our results highlight the importance of capturing inter-question relationships and the challenge of question position identification in generating these questions. Finally, we conduct a human study to understand the implication of such questions on reading comprehension. We find that the generated questions are of high quality and are almost as effective as human-written questions in terms of improving readers{'} memorization and comprehension.", }
Using questions in written text is an effective strategy to enhance readability. However, what makes an active reading question good, what the linguistic role of these questions is, and what their impact on human reading is remain understudied. We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles. By analyzing the dataset, we present a comprehensive understanding of the use, distribution, and linguistic characteristics of these questions. Then, we explore various approaches to generating such questions using language models. Our results highlight the importance of capturing inter-question relationships and the challenge of question position identification in generating these questions. Finally, we conduct a human study to understand the implications of such questions for reading comprehension. We find that the generated questions are of high quality and are almost as effective as human-written questions in terms of improving readers{'} memorization and comprehension.
[ "Cui, Peng", "Zouhar, Vil{\\'e}m", "Zhang, Xiaoyu", "Sachan, Mrinmaya" ]
How to Engage your Readers? Generating Guiding Questions to Promote Active Reading
acl-long.632
Poster
2407.14309
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.632/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.633.bib
@inproceedings{yue-etal-2024-less, title = "Less is More: Mitigating Multimodal Hallucination from an {EOS} Decision Perspective", author = "Yue, Zihao and Zhang, Liang and Jin, Qin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.633", pages = "11766--11781", abstract = "Large Multimodal Models (LMMs) often suffer from multimodal hallucinations, wherein they may create content that is not present in the visual inputs. In this paper, we explore a new angle of this issue: overly detailed training data hinders the model{'}s ability to timely terminate generation, leading to continued outputs beyond visual perception limits. By investigating how the model decides to terminate generation with EOS, the special end-of-sentence token, we find that the model assesses the completeness of the entire sequence by comparing the generated text with the image. This observation suggests that the model possesses an inherent potential of making proper EOS decisions based on its visual perception to avoid overly lengthy outputs. To take advantage of such potential, we explore two methods to mitigate multimodal hallucinations: a training objective that enables the model to reduce hallucinations by learning from regular instruction data, and a data filtering strategy to prevent harmful training data from exacerbating model hallucinations. Both methods significantly improve the hallucination performance of LMMs, without requiring any additional data or knowledge.", }
Large Multimodal Models (LMMs) often suffer from multimodal hallucinations, wherein they may create content that is not present in the visual inputs. In this paper, we explore a new angle on this issue: overly detailed training data hinders the model{'}s ability to terminate generation in a timely manner, leading to continued outputs beyond its visual perception limits. By investigating how the model decides to terminate generation with EOS, the special end-of-sentence token, we find that the model assesses the completeness of the entire sequence by comparing the generated text with the image. This observation suggests that the model possesses an inherent potential to make proper EOS decisions based on its visual perception and thereby avoid overly lengthy outputs. To take advantage of this potential, we explore two methods to mitigate multimodal hallucinations: a training objective that enables the model to reduce hallucinations by learning from regular instruction data, and a data filtering strategy that prevents harmful training data from exacerbating model hallucinations. Both methods significantly improve the hallucination performance of LMMs, without requiring any additional data or knowledge.
[ "Yue, Zihao", "Zhang, Liang", "Jin, Qin" ]
Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective
acl-long.633
Poster
2402.14545
[ "https://github.com/yuezih/less-is-more" ]
https://huggingface.co/papers/2402.14545
0
0
0
3
https://aclanthology.org/2024.acl-long.633/
[ "yuezih/llava-v1.5-7b-selective-150k-lora", "yuezih/llava-v1.5-7b-selective-23k-lora" ]
[]
[]
1
https://aclanthology.org/2024.acl-long.634.bib
@inproceedings{wang-etal-2024-integrate, title = "Integrate the Essence and Eliminate the Dross: Fine-Grained Self-Consistency for Free-Form Language Generation", author = "Wang, Xinglin and Li, Yiwei and Feng, Shaoxiong and Yuan, Peiwen and Pan, Boyuan and Wang, Heda and Hu, Yao and Li, Kan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.634", pages = "11782--11794", abstract = "Self-consistency (SC), leveraging multiple samples from LLMs, shows significant gains on various reasoning tasks but struggles with free-form generation due to the difficulty of aggregating answers. Its variants, UCS and USC, rely on sample selection or voting mechanisms to improve output quality. These methods, however, face limitations due to their inability to fully utilize the nuanced consensus knowledge present within multiple candidate samples, often resulting in suboptimal outputs. We propose Fine-Grained Self-Consistency (FSC) to addresses these limitations by extracting and integrating segment-level commonalities from candidate samples, enhancing the performance of LLMs both in open-ended and reasoning tasks. Based on this, we present two additional strategies: candidate filtering, which enhances overall quality by identifying highly similar candidate sets, and merging, which reduces input token requirements by combining similar samples. The effectiveness of FSC is demonstrated through extensive experiments on various tasks, including summarization, code generation, and mathematical reasoning, using GPT-3.5-turbo and GPT-4. The results indicate significant improvements over baseline methods, showcasing the potential of FSC to optimize output quality by effectively synthesizing fine-grained consensus knowledge from multiple samples.", }
Self-consistency (SC), leveraging multiple samples from LLMs, shows significant gains on various reasoning tasks but struggles with free-form generation due to the difficulty of aggregating answers. Its variants, UCS and USC, rely on sample selection or voting mechanisms to improve output quality. These methods, however, face limitations due to their inability to fully utilize the nuanced consensus knowledge present within multiple candidate samples, often resulting in suboptimal outputs. We propose Fine-Grained Self-Consistency (FSC) to address these limitations by extracting and integrating segment-level commonalities from candidate samples, enhancing the performance of LLMs in both open-ended and reasoning tasks. Based on this, we present two additional strategies: candidate filtering, which enhances overall quality by identifying highly similar candidate sets, and merging, which reduces input token requirements by combining similar samples. The effectiveness of FSC is demonstrated through extensive experiments on various tasks, including summarization, code generation, and mathematical reasoning, using GPT-3.5-turbo and GPT-4. The results indicate significant improvements over baseline methods, showcasing the potential of FSC to optimize output quality by effectively synthesizing fine-grained consensus knowledge from multiple samples. (A rough, non-LLM approximation of the segment-consensus step follows this record.)
[ "Wang, Xinglin", "Li, Yiwei", "Feng, Shaoxiong", "Yuan, Peiwen", "Pan, Boyuan", "Wang, Heda", "Hu, Yao", "Li, Kan" ]
Integrate the Essence and Eliminate the Dross: Fine-Grained Self-Consistency for Free-Form Language Generation
acl-long.634
Poster
2407.02056
[ "https://github.com/WangXinglin/FSC" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.634/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.635.bib
@inproceedings{tao-etal-2024-frequent, title = "More frequent verbs are associated with more diverse valency frames: Efficient principles at the lexicon-grammar interface", author = "Tao, Siyu and Donatelli, Lucia and Hahn, Michael", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.635", pages = "11795--11810", abstract = "A substantial body of work has provided evidence that the lexicons of natural languages are organized to support efficient communication. However, existing work has largely focused on word-internal properties, such as Zipf{'}s observation that more frequent words are optimized in form to minimize communicative cost. Here, we investigate the hypothesis that efficient lexicon organization is also reflected in valency, or the combinations and orders of additional words and phrases a verb selects for in a sentence. We consider two measures of valency diversity for verbs: valency frame count (VFC), the number of distinct frames associated with a verb, and valency frame entropy (VFE), the average information content of frame selection associated with a verb. Using data from 79 languages, we provide evidence that more frequent verbs are associated with a greater diversity of valency frames, suggesting that the organization of valency is consistent with communicative efficiency principles. We discuss our findings in relation to classical findings such as Zipf{'}s meaning-frequency law and the principle of least effort, as well as implications for theories of valency and communicative efficiency principles.", }
A substantial body of work has provided evidence that the lexicons of natural languages are organized to support efficient communication. However, existing work has largely focused on word-internal properties, such as Zipf{'}s observation that more frequent words are optimized in form to minimize communicative cost. Here, we investigate the hypothesis that efficient lexicon organization is also reflected in valency, or the combinations and orders of additional words and phrases a verb selects for in a sentence. We consider two measures of valency diversity for verbs: valency frame count (VFC), the number of distinct frames associated with a verb, and valency frame entropy (VFE), the average information content of frame selection associated with a verb. Using data from 79 languages, we provide evidence that more frequent verbs are associated with a greater diversity of valency frames, suggesting that the organization of valency is consistent with communicative efficiency principles. We discuss our findings in relation to classical findings such as Zipf{'}s meaning-frequency law and the principle of least effort, as well as implications for theories of valency and communicative efficiency principles.
[ "Tao, Siyu", "Donatelli, Lucia", "Hahn, Michael" ]
More frequent verbs are associated with more diverse valency frames: Efficient principles at the lexicon-grammar interface
acl-long.635
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.635/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.636.bib
@inproceedings{collacciani-etal-2024-quantifying, title = "Quantifying Generalizations: Exploring the Divide Between Human and {LLM}s{'} Sensitivity to Quantification", author = "Collacciani, Claudia and Rambelli, Giulia and Bolognesi, Marianna", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.636", pages = "11811--11822", abstract = "Generics are expressions used to communicate abstractions about categories. While conveying general truths (e.g., {``}Birds fly{''}), generics have the interesting property of admitting exceptions (e.g., penguins do not fly). Statements of this type help us organize our knowledge of the world, and form the basis of how we express it (Hampton, 2012; Leslie, 2014). This study investigates how Large Language Models (LLMs) interpret generics, drawing upon psycholinguistic experimental methodologies. Understanding how LLMs interpret generic statements serves not only as a measure of their ability to abstract but also arguably plays a role in their encoding of stereotypes. Given that the interpretation of generics necessitates a comparison with explicitly quantified sentences, we explored i.) whether LLMs can correctly associate a quantifier with the generic structure, and ii.) whether the presence of a generic sentence as context influences the outcomes of quantifiers. We evaluated LLMs using both Surprisal distributions and prompting techniques. The findings indicate that models do not exhibit a strong sensitivity to quantification. Nevertheless, they seem to encode a meaning linked with the generic structure, which leads them to adjust their answers accordingly when a generalization is provided as context.", }
Generics are expressions used to communicate abstractions about categories. While conveying general truths (e.g., {``}Birds fly{''}), generics have the interesting property of admitting exceptions (e.g., penguins do not fly). Statements of this type help us organize our knowledge of the world, and form the basis of how we express it (Hampton, 2012; Leslie, 2014). This study investigates how Large Language Models (LLMs) interpret generics, drawing upon psycholinguistic experimental methodologies. Understanding how LLMs interpret generic statements serves not only as a measure of their ability to abstract but also arguably plays a role in their encoding of stereotypes. Given that the interpretation of generics necessitates a comparison with explicitly quantified sentences, we explored i.) whether LLMs can correctly associate a quantifier with the generic structure, and ii.) whether the presence of a generic sentence as context influences the outcomes of quantifiers. We evaluated LLMs using both Surprisal distributions and prompting techniques. The findings indicate that models do not exhibit a strong sensitivity to quantification. Nevertheless, they seem to encode a meaning linked with the generic structure, which leads them to adjust their answers accordingly when a generalization is provided as context.
[ "Collacciani, Claudia", "Rambelli, Giulia", "Bolognesi, Marianna" ]
Quantifying Generalizations: Exploring the Divide Between Human and LLMs' Sensitivity to Quantification
acl-long.636
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.636/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.637.bib
@inproceedings{rambelli-etal-2024-large, title = "Can Large Language Models Interpret Noun-Noun Compounds? A Linguistically-Motivated Study on Lexicalized and Novel Compounds", author = "Rambelli, Giulia and Chersoni, Emmanuele and Collacciani, Claudia and Bolognesi, Marianna", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.637", pages = "11823--11835", abstract = "Noun-noun compound interpretation is the task in which a model is given one such construction and asked to provide a paraphrase, making the semantic relation between the nouns explicit, as in carrot cake is {``}a cake made of carrots.{''} Such a task requires the ability to understand the implicit structured representation of the compound meaning. In this paper, we test to what extent recent Large Language Models can interpret the semantic relation between the constituents of lexicalized English compounds and whether they can abstract from such semantic knowledge to predict the semantic relation between the constituents of similar but novel compounds by relying on analogical comparisons (e.g., carrot dessert). We test both Surprisal metrics and prompt-based methods to see whether i.) they can correctly predict the relation between constituents, and ii.) the semantic representation of the relation is robust to paraphrasing. Using a dataset of lexicalized and annotated noun-noun compounds, we find that LLMs can infer some semantic relations better than others (with a preference for compounds involving concrete concepts). When challenged to perform abstractions and transfer their interpretations to semantically similar but novel compounds, LLMs show serious limitations.", }
Noun-noun compound interpretation is the task in which a model is given one such construction and asked to provide a paraphrase, making the semantic relation between the nouns explicit, as in carrot cake is {``}a cake made of carrots.{''} Such a task requires the ability to understand the implicit structured representation of the compound meaning. In this paper, we test to what extent recent Large Language Models can interpret the semantic relation between the constituents of lexicalized English compounds and whether they can abstract from such semantic knowledge to predict the semantic relation between the constituents of similar but novel compounds by relying on analogical comparisons (e.g., carrot dessert). We test both Surprisal metrics and prompt-based methods to see whether i.) they can correctly predict the relation between constituents, and ii.) the semantic representation of the relation is robust to paraphrasing. Using a dataset of lexicalized and annotated noun-noun compounds, we find that LLMs can infer some semantic relations better than others (with a preference for compounds involving concrete concepts). When challenged to perform abstractions and transfer their interpretations to semantically similar but novel compounds, LLMs show serious limitations.
[ "Rambelli, Giulia", "Chersoni, Emmanuele", "Collacciani, Claudia", "Bolognesi, Marianna" ]
Can Large Language Models Interpret Noun-Noun Compounds? A Linguistically-Motivated Study on Lexicalized and Novel Compounds
acl-long.637
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.637/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.638.bib
@inproceedings{tu-etal-2024-charactereval, title = "{C}haracter{E}val: A {C}hinese Benchmark for Role-Playing Conversational Agent Evaluation", author = "Tu, Quan and Fan, Shilong and Tian, Zihang and Shen, Tianhao and Shang, Shuo and Gao, Xin and Yan, Rui", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.638", pages = "11836--11850", abstract = "Recently, the advent of large language models (LLMs) has revolutionized generative agents. Among them, Role-Playing Conversational Agents (RPCAs) attract considerable attention due to their ability to emotionally engage users. However, the absence of a comprehensive benchmark impedes progress in this field. To bridge this gap, we introduce \textit{CharacterEval}, a Chinese benchmark for comprehensive RPCA assessment, complemented by a tailored high-quality dataset. The dataset comprises 1,785 multi-turn role-playing dialogues, encompassing 11,376 examples and featuring 77 characters derived from Chinese novels and scripts. It was carefully constructed, beginning with initial dialogue extraction via GPT-4, followed by rigorous human-led quality control, and enhanced with in-depth character profiles sourced from Baidu Baike. \textit{CharacterEval} employs a multifaceted evaluation approach, encompassing thirteen targeted metrics on four dimensions. To facilitate convenient evaluation of these subjective metrics in \textit{CharacterEval}, we further developed CharacterRM, a role-playing reward model based on human annotations, which correlates better with human judgment than GPT-4. Comprehensive experiments on \textit{CharacterEval} demonstrate that Chinese LLMs exhibit more promising capabilities than GPT-4 in Chinese role-playing conversation.", }
Recently, the advent of large language models (LLMs) has revolutionized generative agents. Among them, Role-Playing Conversational Agents (RPCAs) attract considerable attention due to their ability to emotionally engage users. However, the absence of a comprehensive benchmark impedes progress in this field. To bridge this gap, we introduce \textit{CharacterEval}, a Chinese benchmark for comprehensive RPCA assessment, complemented by a tailored high-quality dataset. The dataset comprises 1,785 multi-turn role-playing dialogues, encompassing 11,376 examples and featuring 77 characters derived from Chinese novels and scripts. It was carefully constructed, beginning with initial dialogue extraction via GPT-4, followed by rigorous human-led quality control, and enhanced with in-depth character profiles sourced from Baidu Baike. \textit{CharacterEval} employs a multifaceted evaluation approach, encompassing thirteen targeted metrics on four dimensions. To facilitate convenient evaluation of these subjective metrics in \textit{CharacterEval}, we further developed CharacterRM, a role-playing reward model based on human annotations, which correlates better with human judgment than GPT-4. Comprehensive experiments on \textit{CharacterEval} demonstrate that Chinese LLMs exhibit more promising capabilities than GPT-4 in Chinese role-playing conversation.
[ "Tu, Quan", "Fan, Shilong", "Tian, Zihang", "Shen, Tianhao", "Shang, Shuo", "Gao, Xin", "Yan, Rui" ]
CharacterEval: A Chinese Benchmark for Role-Playing Conversational Agent Evaluation
acl-long.638
Poster
2401.01275
[ "https://github.com/morecry/charactereval" ]
https://huggingface.co/papers/2401.01275
0
1
0
4
https://aclanthology.org/2024.acl-long.638/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.639.bib
@inproceedings{li-etal-2024-generative, title = "Generative Cross-Modal Retrieval: Memorizing Images in Multimodal Language Models for Retrieval and Beyond", author = "Li, Yongqi and Wang, Wenjie and Qu, Leigang and Nie, Liqiang and Li, Wenjie and Chua, Tat-Seng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.639", pages = "11851--11861", abstract = "The recent advancements in generative language models have demonstrated their ability to memorize knowledge from documents and recall knowledge to respond to user queries effectively. Building upon this capability, we propose to enable multimodal large language models (MLLMs) to memorize and recall images within their parameters. Given a user query for visual content, the MLLM is anticipated to {``}recall{''} the relevant image from its parameters as the response. Achieving this target presents notable challenges, including inbuilt visual memory and visual recall schemes within MLLMs. To address these challenges, we introduce a generative cross-modal retrieval framework, which assigns unique identifier strings to represent images and involves two training steps: learning to memorize and learning to retrieve. The first step focuses on training the MLLM to memorize the association between images and their respective identifiers. The latter step teaches the MLLM to generate the corresponding identifier of the target image, given the textual query input. By memorizing images in MLLMs, we introduce a new paradigm to cross-modal retrieval, distinct from previous discriminative approaches. The experiments demonstrate that the generative paradigm performs effectively and efficiently even with large-scale image candidate sets.", }
The recent advancements in generative language models have demonstrated their ability to memorize knowledge from documents and recall knowledge to respond to user queries effectively. Building upon this capability, we propose to enable multimodal large language models (MLLMs) to memorize and recall images within their parameters. Given a user query for visual content, the MLLM is anticipated to {``}recall{''} the relevant image from its parameters as the response. Achieving this target presents notable challenges, including inbuilt visual memory and visual recall schemes within MLLMs. To address these challenges, we introduce a generative cross-modal retrieval framework, which assigns unique identifier strings to represent images and involves two training steps: learning to memorize and learning to retrieve. The first step focuses on training the MLLM to memorize the association between images and their respective identifiers. The latter step teaches the MLLM to generate the corresponding identifier of the target image, given the textual query input. By memorizing images in MLLMs, we introduce a new paradigm to cross-modal retrieval, distinct from previous discriminative approaches. The experiments demonstrate that the generative paradigm performs effectively and efficiently even with large-scale image candidate sets.
[ "Li, Yongqi", "Wang, Wenjie", "Qu, Leigang", "Nie, Liqiang", "Li, Wenjie", "Chua, Tat-Seng" ]
Generative Cross-Modal Retrieval: Memorizing Images in Multimodal Language Models for Retrieval and Beyond
acl-long.639
Poster
2402.10805
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.639/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.640.bib
@inproceedings{zhang-etal-2024-self-training, title = "Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction", author = "Zhang, Yice and Zeng, Jie and Hu, Weiming and Wang, Ziyi and Chen, Shiwei and Xu, Ruifeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.640", pages = "11862--11875", abstract = "Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review, which is the most representative and challenging task in aspect-based sentiment analysis. A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods. To tackle this issue, we propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels, aiming to filter out mismatches and thereby enhance the effectiveness of self-training. We highlight two critical aspects to ensure the scorer{'}s effectiveness and reliability: the quality of the training dataset and its model architecture. To this end, we create a human-annotated comparison dataset and train a generative model on it using ranking-based objectives. Extensive experiments on public ASQP datasets reveal that using our scorer can greatly and consistently improve the effectiveness of self-training. Moreover, we explore the possibility of replacing humans with large language models for comparison dataset annotation, and experiments demonstrate its feasibility. We will release our code and data via GitHub.", }
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review, which is the most representative and challenging task in aspect-based sentiment analysis. A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods. To tackle this issue, we propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels, aiming to filter out mismatches and thereby enhance the effectiveness of self-training. We highlight two critical aspects to ensure the scorer{'}s effectiveness and reliability: the quality of the training dataset and its model architecture. To this end, we create a human-annotated comparison dataset and train a generative model on it using ranking-based objectives. Extensive experiments on public ASQP datasets reveal that using our scorer can greatly and consistently improve the effectiveness of self-training. Moreover, we explore the possibility of replacing humans with large language models for comparison dataset annotation, and experiments demonstrate its feasibility. We will release our code and data via GitHub.
[ "Zhang, Yice", "Zeng, Jie", "Hu, Weiming", "Wang, Ziyi", "Chen, Shiwei", "Xu, Ruifeng" ]
Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction
acl-long.640
Poster
2406.18078
[ "https://github.com/hitsz-hlt/st-w-scorer-absa" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.640/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.641.bib
@inproceedings{aly-etal-2024-learning, title = "Learning to Generate Answers with Citations via Factual Consistency Models", author = "Aly, Rami and Tang, Zhiqiang and Tan, Samson and Karypis, George", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.641", pages = "11876--11896", abstract = "Large Language Models (LLMs) frequently hallucinate, impeding their reliability in mission-critical situations. One approach to address this issue is to provide citations to relevant sources alongside generated content, enhancing the verifiability of generations. However, citing passages accurately in answers remains a substantial challenge. This paper proposes a weakly-supervised fine-tuning method leveraging factual consistency models (FCMs). Our approach alternates between generating texts with citations and supervised fine-tuning with FCM-filtered citation data. Focused learning is integrated into the objective, directing the fine-tuning process to emphasise the factual unit tokens, as measured by an FCM. Results on the ALCE few-shot citation benchmark with various instruction-tuned LLMs demonstrate superior performance compared to in-context learning, vanilla supervised fine-tuning, and state-of-the-art methods, with an average improvement of 34.1, 15.5, and 10.5 citation F$_1$ points, respectively. Moreover, in a domain transfer setting we show that the obtained citation generation ability robustly transfers to unseen datasets. Notably, our citation improvements contribute to the lowest factual error rate across baselines.", }
Large Language Models (LLMs) frequently hallucinate, impeding their reliability in mission-critical situations. One approach to address this issue is to provide citations to relevant sources alongside generated content, enhancing the verifiability of generations. However, citing passages accurately in answers remains a substantial challenge. This paper proposes a weakly-supervised fine-tuning method leveraging factual consistency models (FCMs). Our approach alternates between generating texts with citations and supervised fine-tuning with FCM-filtered citation data. Focused learning is integrated into the objective, directing the fine-tuning process to emphasise the factual unit tokens, as measured by an FCM. Results on the ALCE few-shot citation benchmark with various instruction-tuned LLMs demonstrate superior performance compared to in-context learning, vanilla supervised fine-tuning, and state-of-the-art methods, with an average improvement of 34.1, 15.5, and 10.5 citation F$_1$ points, respectively. Moreover, in a domain transfer setting we show that the obtained citation generation ability robustly transfers to unseen datasets. Notably, our citation improvements contribute to the lowest factual error rate across baselines.
[ "Aly, Rami", "Tang, Zhiqiang", "Tan, Samson", "Karypis, George" ]
Learning to Generate Answers with Citations via Factual Consistency Models
acl-long.641
Poster
2406.13124
[ "https://github.com/amazon-science/learning-to-generate-answers-with-citations" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.641/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.642.bib
@inproceedings{wang-etal-2024-improving-text, title = "Improving Text Embeddings with Large Language Models", author = "Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.642", pages = "11897--11916", abstract = "In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks.", }
In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks.
[ "Wang, Liang", "Yang, Nan", "Huang, Xiaolong", "Yang, Linjun", "Majumder, Rangan", "Wei, Furu" ]
Improving Text Embeddings with Large Language Models
acl-long.642
Poster
2401.00368
[ "" ]
https://huggingface.co/papers/2401.00368
6
79
15
6
https://aclanthology.org/2024.acl-long.642/
[ "intfloat/e5-mistral-7b-instruct", "Salesforce/SFR-Embedding-Mistral", "intfloat/multilingual-e5-large-instruct", "Linq-AI-Research/Linq-Embed-Mistral", "nvidia/NV-Retriever-v1", "mlx-community/e5-mistral-7b-instruct-mlx", "KennethEnevoldsen/munin-7b-e5", "KennethEnevoldsen/munin-neuralbeagle-7b-e5", "rlsChapters/Chapters-SFR-Embedding-Mistral", "atian-chapters/Chapters-SFR-Embedding-Mistral", "arcdev/SFR-Embedding-Mistral", "arcdev/e5-mistral-7b-instruct", "krilecy/e5-mistral-7b-instruct" ]
[ "intfloat/personalized_passkey_retrieval", "alvarobartt/improving-text-embeddings-with-llms", "SebastianBodza/RAG_Aufgaben" ]
[ "mteb/leaderboard", "mteb/arena", "lfoppiano/document-qa", "Tonic/e5", "LordFarquaad42/Groove-GPT", "HonestAnnie/svghenfpkob", "ujwal09/Salesforce-SFR-Embedding-Mistral", "qfisch/pdf-rag-mistral-7b", "adildhkh/intfloat-e5-mistral-7b-instruct", "kidathome/e5", "Bofandra/quran-finders", "HonestAnnie/sorhwphuo", "maxju/Linq-AI-Research-Linq-Embed-Mistral", "aquulsmurf/Salesforce-SFR-Embedding-Mistral", "jmdu/SFR-Embedding-Mistral", "Luminogics/similarity_score", "rahulkrishna/Salesforce-SFR-Embedding-Mistral-demo", "LiamVDB/SFR-Embedding-Mistral-Test", "Pavankumar0619/model_upload", "ThunderRedStar/intfloat-e5-mistral-7b-instruct", "iaratagram/intfloat-e5-mistral-7b-instruct", "shreyankg/intfloat-e5-mistral-7b-instruct", "Chris4K/api-rag-index-chat", "pngwn/df_scroll_bug_fix-two", "pngwn/df_scroll_bug_repo", "pngwn/df_scroll_bug_fix", "Nymbo/MTEB-Arena", "panuthept/thai_sentence_embedding_benchmark", "leandrocarneiro/BotNews", "Jiddah/intfloat-multilingual-e5-large-instruct", "anubhav-singh/AskAboutMe", "wt3639/Course_rec", "wt3639/Course_recommendation", "Bofandra/moslem-bot", "Bofandra/quran-finder", "Bofandra/hadiths-finder" ]
1
https://aclanthology.org/2024.acl-long.643.bib
@inproceedings{wang-etal-2024-self-training, title = "Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning", author = "Wang, Tianduo and Li, Shichen and Lu, Wei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.643", pages = "11917--11928", abstract = "Teaching small-scale language models to perform math reasoning is a valuable yet challenging task. Besides obtaining labeled data from human experts, one of the most common ways to collect high-quality data is by sampling from a larger and more powerful language model. Although previous works have demonstrated the effectiveness of this method, such a knowledge distillation paradigm can be costly and unstable, especially considering that many large language models, such as GPT-4, are closed-source, proprietary, and their behaviors are unpredictable. In this work, to avoid relying on outputs from large models, we demonstrate that the reasoning abilities of small-scale language models can be enhanced through self-training, which involves training models with their own outputs. We also show that the vanilla self-training can be further augmented by an alignment algorithm, direct preference optimization (DPO). We empirically found that models trained with the DPO objective are capable of making better generations that largely benefit multi-turn self-training. The experiments show our models outperform the state-of-the-art models with comparable sizes on a series of downstream math reasoning tasks with minimal resource requirements.", }
Teaching small-scale language models to perform math reasoning is a valuable yet challenging task. Besides obtaining labeled data from human experts, one of the most common ways to collect high-quality data is by sampling from a larger and more powerful language model. Although previous works have demonstrated the effectiveness of this method, such a knowledge distillation paradigm can be costly and unstable, especially considering that many large language models, such as GPT-4, are closed-source, proprietary, and their behaviors are unpredictable. In this work, to avoid relying on outputs from large models, we demonstrate that the reasoning abilities of small-scale language models can be enhanced through self-training, which involves training models with their own outputs. We also show that the vanilla self-training can be further augmented by an alignment algorithm, direct preference optimization (DPO). We empirically found that models trained with the DPO objective are capable of making better generations that largely benefit multi-turn self-training. The experiments show our models outperform the state-of-the-art models with comparable sizes on a series of downstream math reasoning tasks with minimal resource requirements.
[ "Wang, Ti", "uo", "Li, Shichen", "Lu, Wei" ]
Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning
acl-long.643
Poster
2407.18248
[ "https://github.com/tianduowang/dpo-st" ]
https://huggingface.co/papers/2407.18248
2
30
2
3
https://aclanthology.org/2024.acl-long.643/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.644.bib
@inproceedings{wang-etal-2024-ultralink, title = "{U}ltra{L}ink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset", author = "Wang, Haoyu and Wang, Shuo and Yan, Yukun and Wang, Xujia and Yang, Zhiyu and Xu, Yuzhuang and Liu, Zhenghao and Yang, Liner and Ding, Ning and Han, Xu and Liu, Zhiyuan and Sun, Maosong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.644", pages = "11929--11942", abstract = "Open-source large language models (LLMs) have gained significant strength across diverse fields. Nevertheless, the majority of studies primarily concentrate on English, with only limited exploration into the realm of multilingual abilities. In this work, we therefore construct an open-source multilingual supervised fine-tuning dataset. Different from previous works that simply translate English instructions, we consider both the language-specific and language-agnostic abilities of LLMs. Firstly, we introduce a knowledge-grounded data augmentation approach to elicit more language-specific knowledge of LLMs, improving their ability to serve users from different countries. Moreover, we find modern LLMs possess strong cross-lingual transfer capabilities, thus repeatedly learning identical content in various languages is not necessary. Consequently, we can substantially prune the language-agnostic supervised fine-tuning (SFT) data without any performance degradation, making multilingual SFT more efficient. The resulting UltraLink dataset comprises approximately 1 million samples across five languages (i.e., En, Zh, Ru, Fr, Es), and the proposed data construction method can be easily extended to other languages. UltraLink-LM, which is trained on the UltraLink dataset, outperforms several representative baselines across many tasks.", }
Open-source large language models (LLMs) have gained significant strength across diverse fields. Nevertheless, the majority of studies primarily concentrate on English, with only limited exploration into the realm of multilingual abilities. In this work, we therefore construct an open-source multilingual supervised fine-tuning dataset. Different from previous works that simply translate English instructions, we consider both the language-specific and language-agnostic abilities of LLMs. Firstly, we introduce a knowledge-grounded data augmentation approach to elicit more language-specific knowledge of LLMs, improving their ability to serve users from different countries. Moreover, we find modern LLMs possess strong cross-lingual transfer capabilities, thus repeatedly learning identical content in various languages is not necessary. Consequently, we can substantially prune the language-agnostic supervised fine-tuning (SFT) data without any performance degradation, making multilingual SFT more efficient. The resulting UltraLink dataset comprises approximately 1 million samples across five languages (i.e., En, Zh, Ru, Fr, Es), and the proposed data construction method can be easily extended to other languages. UltraLink-LM, which is trained on the UltraLink dataset, outperforms several representative baselines across many tasks.
[ "Wang, Haoyu", "Wang, Shuo", "Yan, Yukun", "Wang, Xujia", "Yang, Zhiyu", "Xu, Yuzhuang", "Liu, Zhenghao", "Yang, Liner", "Ding, Ning", "Han, Xu", "Liu, Zhiyuan", "Sun, Maosong" ]
UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset
acl-long.644
Poster
2402.04588
[ "https://github.com/openbmb/ultralink" ]
https://huggingface.co/papers/2402.04588
0
2
0
11
https://aclanthology.org/2024.acl-long.644/
[ "R0k1e/UltraLink-LM" ]
[ "R0k1e/UltraLink" ]
[]
1
https://aclanthology.org/2024.acl-long.645.bib
@inproceedings{deng-etal-2024-document, title = "Document-level Claim Extraction and Decontextualisation for Fact-Checking", author = "Deng, Zhenyun and Schlichtkrull, Michael and Vlachos, Andreas", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.645", pages = "11943--11954", abstract = "Selecting which claims to check is a time-consuming task for human fact-checkers, especially from documents consisting of multiple sentences and containing multiple claims. However, existing claim extraction approaches focus more on identifying and extracting claims from individual sentences, e.g., identifying whether a sentence contains a claim or the exact boundaries of the claim within a sentence. In this paper, we propose a method for document-level claim extraction for fact-checking, which aims to extract check-worthy claims from documents and decontextualise them so that they can be understood out of context. Specifically, we first recast claim extraction as extractive summarization in order to identify central sentences from documents, then rewrite them to include necessary context from the originating document through sentence decontextualisation. Evaluation with both automatic metrics and a fact-checking professional shows that our method is able to extract check-worthy claims from documents at a higher rate than previous work, while also improving evidence retrieval.", }
Selecting which claims to check is a time-consuming task for human fact-checkers, especially from documents consisting of multiple sentences and containing multiple claims. However, existing claim extraction approaches focus more on identifying and extracting claims from individual sentences, e.g., identifying whether a sentence contains a claim or the exact boundaries of the claim within a sentence. In this paper, we propose a method for document-level claim extraction for fact-checking, which aims to extract check-worthy claims from documents and decontextualise them so that they can be understood out of context. Specifically, we first recast claim extraction as extractive summarization in order to identify central sentences from documents, then rewrite them to include necessary context from the originating document through sentence decontextualisation. Evaluation with both automatic metrics and a fact-checking professional shows that our method is able to extract check-worthy claims from documents at a higher rate than previous work, while also improving evidence retrieval.
[ "Deng, Zhenyun", "Schlichtkrull, Michael", "Vlachos, Andreas" ]
Document-level Claim Extraction and Decontextualisation for Fact-Checking
acl-long.645
Poster
2406.03239
[ "https://github.com/Tswings/AVeriTeC-DCE" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.645/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.646.bib
@inproceedings{qiu-etal-2024-paircfr, title = "{P}air{CFR}: Enhancing Model Training on Paired Counterfactually Augmented Data through Contrastive Learning", author = "Qiu, Xiaoqi and Wang, Yongjie and Guo, Xu and Zeng, Zhiwei and Yue, Yu and Feng, Yuhong and Miao, Chunyan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.646", pages = "11955--11971", abstract = "Counterfactually Augmented Data (CAD) involves creating new data samples by applying minimal yet sufficient modifications to flip the label of existing data samples to other classes. Training with CAD enhances model robustness against spurious features that happen to correlate with labels by spreading the causal relationships across different classes. Yet, recent research reveals that training with CAD may lead models to overly focus on modified features while ignoring other important contextual information, inadvertently introducing biases that may impair performance on out-of-distribution (OOD) datasets. To mitigate this issue, we employ contrastive learning to promote global feature alignment in addition to learning counterfactual clues. We theoretically prove that contrastive loss can encourage models to leverage a broader range of features beyond those modified ones. Comprehensive experiments on two human-edited CAD datasets demonstrate that our proposed method outperforms the state-of-the-art on OOD datasets.", }
Counterfactually Augmented Data (CAD) involves creating new data samples by applying minimal yet sufficient modifications to flip the label of existing data samples to other classes. Training with CAD enhances model robustness against spurious features that happen to correlate with labels by spreading the causal relationships across different classes. Yet, recent research reveals that training with CAD may lead models to overly focus on modified features while ignoring other important contextual information, inadvertently introducing biases that may impair performance on out-of-distribution (OOD) datasets. To mitigate this issue, we employ contrastive learning to promote global feature alignment in addition to learning counterfactual clues. We theoretically prove that contrastive loss can encourage models to leverage a broader range of features beyond those modified ones. Comprehensive experiments on two human-edited CAD datasets demonstrate that our proposed method outperforms the state-of-the-art on OOD datasets.
[ "Qiu, Xiaoqi", "Wang, Yongjie", "Guo, Xu", "Zeng, Zhiwei", "Yue, Yu", "Feng, Yuhong", "Miao, Chunyan" ]
PairCFR: Enhancing Model Training on Paired Counterfactually Augmented Data through Contrastive Learning
acl-long.646
Poster
2406.06633
[ "https://github.com/Siki-cloud/PairCFR" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.646/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.647.bib
@inproceedings{zhou-etal-2024-llms, title = "{LLM}s Learn Task Heuristics from Demonstrations: A Heuristic-Driven Prompting Strategy for Document-Level Event Argument Extraction", author = "Zhou, Hanzhang and Qian, Junlang and Feng, Zijian and Hui, Lu and Zhu, Zixiao and Mao, Kezhi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.647", pages = "11972--11990", abstract = "In this study, we explore in-context learning (ICL) in document-level event argument extraction (EAE) to alleviate the dependency on large-scale labeled data for this task. We introduce the Heuristic-Driven Link-of-Analogy (HD-LoA) prompting tailored for the EAE task. Specifically, we hypothesize and validate that LLMs learn task-specific heuristics from demonstrations in ICL. Building upon this hypothesis, we introduce an explicit heuristic-driven demonstration construction approach, which transforms the haphazard example selection process into a systematic method that emphasizes task heuristics. Additionally, inspired by the analogical reasoning of humans, we propose the link-of-analogy prompting, which enables LLMs to process new situations by drawing analogies to known situations, enhancing their performance on unseen classes beyond limited ICL examples. Experiments show that our method outperforms existing prompting methods and few-shot supervised learning methods on document-level EAE datasets. Additionally, the HD-LoA prompting shows effectiveness in other tasks like sentiment analysis and natural language inference, demonstrating its broad adaptability.", }
In this study, we explore in-context learning (ICL) in document-level event argument extraction (EAE) to alleviate the dependency on large-scale labeled data for this task. We introduce the Heuristic-Driven Link-of-Analogy (HD-LoA) prompting tailored for the EAE task. Specifically, we hypothesize and validate that LLMs learn task-specific heuristics from demonstrations in ICL. Building upon this hypothesis, we introduce an explicit heuristic-driven demonstration construction approach, which transforms the haphazard example selection process into a systematic method that emphasizes task heuristics. Additionally, inspired by the analogical reasoning of humans, we propose the link-of-analogy prompting, which enables LLMs to process new situations by drawing analogies to known situations, enhancing their performance on unseen classes beyond limited ICL examples. Experiments show that our method outperforms existing prompting methods and few-shot supervised learning methods on document-level EAE datasets. Additionally, the HD-LoA prompting shows effectiveness in other tasks like sentiment analysis and natural language inference, demonstrating its broad adaptability.
[ "Zhou, Hanzhang", "Qian, Junlang", "Feng, Zijian", "Hui, Lu", "Zhu, Zixiao", "Mao, Kezhi" ]
LLMs Learn Task Heuristics from Demonstrations: A Heuristic-Driven Prompting Strategy for Document-Level Event Argument Extraction
acl-long.647
Poster
2311.06555
[ "https://github.com/hzzhou01/hd-loa-prompting" ]
https://huggingface.co/papers/2311.06555
0
0
0
6
https://aclanthology.org/2024.acl-long.647/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.648.bib
@inproceedings{zhong-etal-2024-investigating, title = "Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models", author = "Zhong, Weihong and Feng, Xiaocheng and Zhao, Liang and Li, Qiming and Huang, Lei and Gu, Yuxuan and Ma, Weitao and Xu, Yuan and Qin, Bing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.648", pages = "11991--12011", abstract = "Though advanced in understanding visual information with human languages, Large Vision-Language Models (LVLMs) still suffer from multimodal hallucinations. A natural concern is that during multimodal interaction, the generated hallucinations could influence the LVLMs{'} subsequent generation. Thus, we raise a question: $\textit{When presented with a query relevant to the previously generated hallucination, will LVLMs be misled and respond incorrectly, even though the ground visual information exists?}$ To answer this, we propose a framework called $\textit{MMHalSnowball}$ to evaluate LVLMs{'} behaviors when encountering generated hallucinations, where LVLMs are required to answer specific visual questions within a curated hallucinatory conversation. Crucially, our experiment shows that the performance of open-source LVLMs drops by at least $31\%$, indicating that LVLMs are prone to accept the generated hallucinations and make false claims that they would not have supported without distractions. We term this $\textit{Multimodal Hallucination Snowballing}$. To mitigate this issue, we further propose a training-free method called $\textit{Residual Visual Decoding}$, where we revise the output distribution of LVLMs with the one derived from the residual visual input, providing models with direct access to the visual information. Experiments show that our method can mitigate more than $24\%$ of the snowballed multimodal hallucination while maintaining capabilities.", }
Though advanced in understanding visual information with human languages, Large Vision-Language Models (LVLMs) still suffer from multimodal hallucinations. A natural concern is that during multimodal interaction, the generated hallucinations could influence the LVLMs{'} subsequent generation. Thus, we raise a question: $\textit{When presented with a query relevant to the previously generated hallucination, will LVLMs be misled and respond incorrectly, even though the ground visual information exists?}$ To answer this, we propose a framework called $\textit{MMHalSnowball}$ to evaluate LVLMs{'} behaviors when encountering generated hallucinations, where LVLMs are required to answer specific visual questions within a curated hallucinatory conversation. Crucially, our experiment shows that the performance of open-source LVLMs drops by at least $31\%$, indicating that LVLMs are prone to accept the generated hallucinations and make false claims that they would not have supported without distractions. We term this $\textit{Multimodal Hallucination Snowballing}$. To mitigate this issue, we further propose a training-free method called $\textit{Residual Visual Decoding}$, where we revise the output distribution of LVLMs with the one derived from the residual visual input, providing models with direct access to the visual information. Experiments show that our method can mitigate more than $24\%$ of the snowballed multimodal hallucination while maintaining capabilities.
[ "Zhong, Weihong", "Feng, Xiaocheng", "Zhao, Liang", "Li, Qiming", "Huang, Lei", "Gu, Yuxuan", "Ma, Weitao", "Xu, Yuan", "Qin, Bing" ]
Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models
acl-long.648
Poster
2407.00569
[ "https://github.com/whongzhong/MMHalSnowball" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.648/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.649.bib
@inproceedings{lai-nissim-2024-mcot, title = "m{C}o{T}: Multilingual Instruction Tuning for Reasoning Consistency in Language Models", author = "Lai, Huiyuan and Nissim, Malvina", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.649", pages = "12012--12026", abstract = "Large language models (LLMs) with Chain-of-thought (CoT) have recently emerged as a powerful technique for eliciting reasoning to improve various downstream tasks. As most research mainly focuses on English, with few explorations in a multilingual context, the question of how reliable this reasoning capability is in different languages is still open. To address it directly, we study multilingual reasoning consistency across multiple languages, using popular open-source LLMs. First, we compile the first large-scale multilingual math reasoning dataset, *mCoT-MATH*, covering eleven diverse languages. Then, we introduce multilingual CoT instruction tuning to boost reasoning capability across languages, thereby improving model consistency. While existing LLMs show substantial variation across the languages we consider, and especially low performance for lesser-resourced languages, our 7B parameter model *mCoT* achieves impressive consistency across languages, and superior or comparable performance to closed- and open-source models even of much larger sizes.", }
Large language models (LLMs) with Chain-of-thought (CoT) have recently emerged as a powerful technique for eliciting reasoning to improve various downstream tasks. As most research mainly focuses on English, with few explorations in a multilingual context, the question of how reliable this reasoning capability is in different languages is still open. To address it directly, we study multilingual reasoning consistency across multiple languages, using popular open-source LLMs. First, we compile the first large-scale multilingual math reasoning dataset, *mCoT-MATH*, covering eleven diverse languages. Then, we introduce multilingual CoT instruction tuning to boost reasoning capability across languages, thereby improving model consistency. While existing LLMs show substantial variation across the languages we consider, and especially low performance for lesser-resourced languages, our 7B parameter model *mCoT* achieves impressive consistency across languages, and superior or comparable performance to closed- and open-source models even of much larger sizes.
[ "Lai, Huiyuan", "Nissim, Malvina" ]
mCoT: Multilingual Instruction Tuning for Reasoning Consistency in Language Models
acl-long.649
Poster
2406.02301
[ "https://github.com/laihuiyuan/mcot" ]
https://huggingface.co/papers/2406.02301
0
0
0
2
https://aclanthology.org/2024.acl-long.649/
[ "laihuiyuan/mCoT" ]
[ "laihuiyuan/mCoT-MATH" ]
[]
1
https://aclanthology.org/2024.acl-long.650.bib
@inproceedings{gyawali-etal-2024-gunstance, title = "{G}un{S}tance: Stance Detection for Gun Control and Gun Regulation", author = "Gyawali, Nikesh and Sirbu, Iustin and Sosea, Tiberiu and Khanal, Sarthak and Caragea, Doina and Rebedea, Traian and Caragea, Cornelia", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.650", pages = "12027--12044", abstract = "The debate surrounding gun control and gun regulation in the United States has intensified in the wake of numerous mass shooting events. As perspectives on this matter vary, it becomes increasingly important to comprehend individuals{'} positions. Stance detection, the task of determining an author{'}s position towards a proposition or target, has gained attention for its potential use in understanding public perceptions towards controversial topics and identifying the best strategies to address public concerns. In this paper, we present GunStance, a dataset of tweets pertaining to shooting events, focusing specifically on the controversial topics of {``}banning guns{''} versus {``}regulating guns.{''} The tweets in the dataset are sourced from discussions on Twitter following various shooting incidents in the United States. Amazon Mechanical Turk was used to manually annotate a subset of the tweets relevant to the targets of interest ({``}banning guns{''} and {``}regulating guns{''}) into three classes: In-Favor, Against, and Neutral. The remaining unlabeled tweets are included in the dataset to facilitate studies on semi-supervised learning (SSL) approaches that can help address the scarcity of the labeled data in stance detection tasks. Furthermore, we propose a hybrid approach that combines curriculum-based SSL and Large Language Models (LLM), and show that the proposed approach outperforms supervised, semi-supervised, and LLM-based zero-shot models in most experiments on our assembled dataset.", }
The debate surrounding gun control and gun regulation in the United States has intensified in the wake of numerous mass shooting events. As perspectives on this matter vary, it becomes increasingly important to comprehend individuals{'} positions. Stance detection, the task of determining an author{'}s position towards a proposition or target, has gained attention for its potential use in understanding public perceptions towards controversial topics and identifying the best strategies to address public concerns. In this paper, we present GunStance, a dataset of tweets pertaining to shooting events, focusing specifically on the controversial topics of {``}banning guns{''} versus {``}regulating guns.{''} The tweets in the dataset are sourced from discussions on Twitter following various shooting incidents in the United States. Amazon Mechanical Turk was used to manually annotate a subset of the tweets relevant to the targets of interest ({``}banning guns{''} and {``}regulating guns{''}) into three classes: In-Favor, Against, and Neutral. The remaining unlabeled tweets are included in the dataset to facilitate studies on semi-supervised learning (SSL) approaches that can help address the scarcity of the labeled data in stance detection tasks. Furthermore, we propose a hybrid approach that combines curriculum-based SSL and Large Language Models (LLM), and show that the proposed approach outperforms supervised, semi-supervised, and LLM-based zero-shot models in most experiments on our assembled dataset.
[ "Gyawali, Nikesh", "Sirbu, Iustin", "Sosea, Tiberiu", "Khanal, Sarthak", "Caragea, Doina", "Rebedea, Traian", "Caragea, Cornelia" ]
GunStance: Stance Detection for Gun Control and Gun Regulation
acl-long.650
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.650/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.651.bib
@inproceedings{kasner-dusek-2024-beyond, title = "Beyond Traditional Benchmarks: Analyzing Behaviors of Open {LLM}s on Data-to-Text Generation", author = "Kasner, Zden{\v{e}}k and Dusek, Ondrej", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.651", pages = "12045--12072", abstract = "We analyze the behaviors of open large language models (LLMs) on the task of data-to-text (D2T) generation, i.e., generating coherent and relevant text from structured data. To avoid the issue of LLM training data contamination with standard benchmarks, we design Quintd - a tool for collecting novel structured data records from public APIs. We find that open LLMs (Llama 2, Mistral, and Zephyr) can generate fluent and coherent texts in zero-shot settings from data in common formats collected with Quintd. However, we show that the semantic accuracy of the outputs is a major issue: both according to human annotators and our reference-free metric based on GPT-4, more than 80{\%} of the outputs of open LLMs contain at least one semantic error. We publicly release the code, data, and model outputs.", }
We analyze the behaviors of open large language models (LLMs) on the task of data-to-text (D2T) generation, i.e., generating coherent and relevant text from structured data. To avoid the issue of LLM training data contamination with standard benchmarks, we design Quintd - a tool for collecting novel structured data records from public APIs. We find that open LLMs (Llama 2, Mistral, and Zephyr) can generate fluent and coherent texts in zero-shot settings from data in common formats collected with Quintd. However, we show that the semantic accuracy of the outputs is a major issue: both according to human annotators and our reference-free metric based on GPT-4, more than 80{\%} of the outputs of open LLMs contain at least one semantic error. We publicly release the code, data, and model outputs.
[ "Kasner, Zden{\\v{e}}k", "Dusek, Ondrej" ]
Beyond Traditional Benchmarks: Analyzing Behaviors of Open LLMs on Data-to-Text Generation
acl-long.651
Poster
2401.10186
[ "" ]
https://huggingface.co/papers/2401.10186
1
0
0
2
https://aclanthology.org/2024.acl-long.651/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.652.bib
@inproceedings{zhang-etal-2024-dont-go, title = "Don{'}t Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of {LLM}s in Implicit Hate Speech Detection", author = "Zhang, Min and He, Jianfeng and Ji, Taoran and Lu, Chang-Tien", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.652", pages = "12073--12086", abstract = "The fairness and trustworthiness of Large Language Models (LLMs) are receiving increasing attention. Implicit hate speech, which employs indirect language to convey hateful intentions, occupies a significant portion of practice. However, the extent to which LLMs effectively address this issue remains insufficiently examined. This paper delves into the capability of LLMs to detect implicit hate speech and express confidence in their responses. Our evaluation meticulously considers various prompt patterns and mainstream uncertainty estimation methods. Our findings highlight that LLMs exhibit two extremes: (1) LLMs display excessive sensitivity towards groups or topics that may cause fairness issues, resulting in misclassifying benign statements as hate speech. (2) LLMs{'} confidence scores for each method excessively concentrate on a fixed range, remaining unchanged regardless of the dataset{'}s complexity. Consequently, the calibration performance is heavily reliant on primary classification accuracy. These discoveries unveil new limitations of LLMs, underscoring the need for caution when optimizing models to ensure they do not veer towards extremes. This serves as a reminder to carefully consider sensitivity and confidence in the pursuit of model fairness.", }
The fairness and trustworthiness of Large Language Models (LLMs) are receiving increasing attention. Implicit hate speech, which employs indirect language to convey hateful intentions, accounts for a substantial share of hate speech encountered in practice. However, the extent to which LLMs effectively address this issue remains insufficiently examined. This paper delves into the capability of LLMs to detect implicit hate speech and express confidence in their responses. Our evaluation meticulously considers various prompt patterns and mainstream uncertainty estimation methods. Our findings highlight that LLMs exhibit two extremes: (1) LLMs display excessive sensitivity towards groups or topics that may cause fairness issues, resulting in misclassifying benign statements as hate speech. (2) LLMs{'} confidence scores for each method excessively concentrate on a fixed range, remaining unchanged regardless of the dataset{'}s complexity. Consequently, the calibration performance is heavily reliant on primary classification accuracy. These discoveries unveil new limitations of LLMs, underscoring the need for caution when optimizing models to ensure they do not veer towards extremes. This serves as a reminder to carefully consider sensitivity and confidence in the pursuit of model fairness.
[ "Zhang, Min", "He, Jianfeng", "Ji, Taoran", "Lu, Chang-Tien" ]
Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection
acl-long.652
Poster
2402.11406
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.652/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.653.bib
@inproceedings{vernikos-popescu-belis-2024-dont, title = "Don{'}t Rank, Combine! Combining Machine Translation Hypotheses Using Quality Estimation", author = "Vernikos, Giorgos and Popescu-Belis, Andrei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.653", pages = "12087--12105", abstract = "Neural machine translation systems estimate probabilities of target sentences given source sentences, yet these estimates may not align with human preferences. This work introduces QE-fusion, a method that synthesizes translations using a quality estimation metric (QE), which correlates better with human judgments. QE-fusion leverages a pool of candidates sampled from a model, combining spans from different candidates using a QE metric such as CometKiwi. We compare QE-fusion against beam search and recent reranking techniques, such as Minimum Bayes Risk decoding or QE-reranking. Our method consistently improves translation quality in terms of COMET and BLEURT scores when applied to large language models (LLMs) used for translation (PolyLM, XGLM, Llama2, Mistral, ALMA, and Tower) and to multilingual translation models (NLLB), over five language pairs. Notably, QE-fusion exhibits larger improvements for LLMs due to their ability to generate diverse outputs. We demonstrate that our approach generates novel translations in over half of the cases and consistently outperforms other methods across varying numbers of candidates (5{--}200). Furthermore, we empirically establish that QE-fusion scales linearly with the number of candidates in the pool.", }
Neural machine translation systems estimate probabilities of target sentences given source sentences, yet these estimates may not align with human preferences. This work introduces QE-fusion, a method that synthesizes translations using a quality estimation metric (QE), which correlates better with human judgments. QE-fusion leverages a pool of candidates sampled from a model, combining spans from different candidates using a QE metric such as CometKiwi. We compare QE-fusion against beam search and recent reranking techniques, such as Minimum Bayes Risk decoding or QE-reranking. Our method consistently improves translation quality in terms of COMET and BLEURT scores when applied to large language models (LLMs) used for translation (PolyLM, XGLM, Llama2, Mistral, ALMA, and Tower) and to multilingual translation models (NLLB), over five language pairs. Notably, QE-fusion exhibits larger improvements for LLMs due to their ability to generate diverse outputs. We demonstrate that our approach generates novel translations in over half of the cases and consistently outperforms other methods across varying numbers of candidates (5{--}200). Furthermore, we empirically establish that QE-fusion scales linearly with the number of candidates in the pool.
[ "Vernikos, Giorgos", "Popescu-Belis, Andrei" ]
Don't Rank, Combine! Combining Machine Translation Hypotheses Using Quality Estimation
acl-long.653
Poster
2401.06688
[ "" ]
https://huggingface.co/papers/2401.06688
0
0
0
2
https://aclanthology.org/2024.acl-long.653/
[]
[]
[]
1
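
To make the span-combination idea in the QE-fusion entry above concrete, here is a deliberately simplified sketch: it splices prefixes and suffixes from a candidate pool and keeps whichever hypothesis a reference-free QE scorer prefers. This is not the paper's algorithm (which combines arbitrary spans under a QE metric such as CometKiwi); the `qe_score` stand-in and the single midpoint split are illustrative assumptions.

```python
# Toy span-fusion over a candidate pool, guided by a stand-in QE scorer.
from itertools import product
from typing import Callable, List

def fuse(source: str, candidates: List[str],
         qe_score: Callable[[str, str], float]) -> str:
    # Split every candidate at its midpoint into (prefix, suffix) token lists.
    halves = []
    for cand in candidates:
        toks = cand.split()
        halves.append((toks[: len(toks) // 2], toks[len(toks) // 2 :]))
    # Score every cross-candidate prefix+suffix splice; keep the best one.
    best, best_score = candidates[0], float("-inf")
    for (prefix, _), (_, suffix) in product(halves, halves):
        hypothesis = " ".join(prefix + suffix)
        score = qe_score(source, hypothesis)
        if score > best_score:
            best, best_score = hypothesis, score
    return best

# Dummy scorer for a smoke test; a real setup would call a QE model instead.
print(fuse("ein Test", ["a test run", "one exam trial"],
           lambda src, hyp: -abs(len(hyp.split()) - 2)))
```
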
https://aclanthology.org/2024.acl-long.654.bib
@inproceedings{di-mauro-etal-2024-generating, title = "Generating and Evaluating Plausible Explanations for Knowledge Graph Completion", author = "Di Mauro, Antonio and Xu, Zhao and Ben Rim, Wiem and Sztyler, Timo and Lawrence, Carolin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.654", pages = "12106--12118", abstract = "Explanations for AI should aid human users, yet this ultimate goal remains under-explored. This paper aims to bridge this gap by investigating the specific explanatory needs of human users in the context of Knowledge Graph Completion (KGC) systems. In contrast to the prevailing approaches that primarily focus on mathematical theories, we recognize the potential limitations of explanations that may end up being overly complex or nonsensical for users. Through in-depth user interviews, we gain valuable insights into the types of KGC explanations users seek. Building upon these insights, we introduce GradPath, a novel path-based explanation method designed to meet human-centric explainability constraints and enhance plausibility. Additionally, GradPath harnesses the gradients of the trained KGC model to maintain a certain level of faithfulness. We verify the effectiveness of GradPath through well-designed human-centric evaluations. The results confirm that our method provides explanations that users consider more plausible than previous ones.", }
Explanations for AI should aid human users, yet this ultimate goal remains under-explored. This paper aims to bridge this gap by investigating the specific explanatory needs of human users in the context of Knowledge Graph Completion (KGC) systems. In contrast to the prevailing approaches that primarily focus on mathematical theories, we recognize the potential limitations of explanations that may end up being overly complex or nonsensical for users. Through in-depth user interviews, we gain valuable insights into the types of KGC explanations users seek. Building upon these insights, we introduce GradPath, a novel path-based explanation method designed to meet human-centric explainability constraints and enhance plausibility. Additionally, GradPath harnesses the gradients of the trained KGC model to maintain a certain level of faithfulness. We verify the effectiveness of GradPath through well-designed human-centric evaluations. The results confirm that our method provides explanations that users consider more plausible than previous ones.
[ "Di Mauro, Antonio", "Xu, Zhao", "Ben Rim, Wiem", "Sztyler, Timo", "Lawrence, Carolin" ]
Generating and Evaluating Plausible Explanations for Knowledge Graph Completion
acl-long.654
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.654/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.655.bib
@inproceedings{siledar-etal-2024-one, title = "One Prompt To Rule Them All: {LLM}s for Opinion Summary Evaluation", author = "Siledar, Tejpalsingh and Nath, Swaroop and Muddu, Sankara and Rangaraju, Rupasai and Nath, Swaprava and Bhattacharyya, Pushpak and Banerjee, Suman and Patil, Amey and Singh, Sudhanshu and Chelliah, Muthusamy and Garera, Nikesh", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.655", pages = "12119--12134", abstract = "Evaluation of opinion summaries using conventional reference-based metrics often fails to provide a comprehensive assessment and exhibits limited correlation with human judgments. While Large Language Models (LLMs) have shown promise as reference-free metrics for NLG evaluation, their potential remains unexplored for opinion summary evaluation. Furthermore, the absence of sufficient opinion summary evaluation datasets hinders progress in this area. In response, we introduce the SUMMEVAL-OP dataset, encompassing 7 dimensions crucial to the evaluation of opinion summaries: fluency, coherence, relevance, faithfulness, aspect coverage, sentiment consistency, and specificity. We propose OP-I-PROMPT, a dimension-independent prompt, along with OP-PROMPTS, a dimension-dependent set of prompts for opinion summary evaluation. Our experiments demonstrate that OP-I-PROMPT emerges as a good alternative for evaluating opinion summaries, achieving an average Spearman correlation of 0.70 with human judgments, surpassing prior methodologies. Remarkably, we are the first to explore the efficacy of LLMs as evaluators, both on closed-source and open-source models, in the opinion summary evaluation domain.", }
Evaluation of opinion summaries using conventional reference-based metrics often fails to provide a comprehensive assessment and exhibits limited correlation with human judgments. While Large Language Models (LLMs) have shown promise as reference-free metrics for NLG evaluation, their potential remains unexplored for opinion summary evaluation. Furthermore, the absence of sufficient opinion summary evaluation datasets hinders progress in this area. In response, we introduce the SUMMEVAL-OP dataset, encompassing 7 dimensions crucial to the evaluation of opinion summaries: fluency, coherence, relevance, faithfulness, aspect coverage, sentiment consistency, and specificity. We propose OP-I-PROMPT, a dimension-independent prompt, along with OP-PROMPTS, a dimension-dependent set of prompts for opinion summary evaluation. Our experiments demonstrate that OP-I-PROMPT emerges as a good alternative for evaluating opinion summaries, achieving an average Spearman correlation of 0.70 with human judgments, surpassing prior methodologies. Remarkably, we are the first to explore the efficacy of LLMs as evaluators, both on closed-source and open-source models, in the opinion summary evaluation domain.
[ "Siledar, Tejpalsingh", "Nath, Swaroop", "Muddu, Sankara", "Rangaraju, Rupasai", "Nath, Swaprava", "Bhattacharyya, Pushpak", "Banerjee, Suman", "Patil, Amey", "Singh, Sudhanshu", "Chelliah, Muthusamy", "Garera, Nikesh" ]
One Prompt To Rule Them All: LLMs for Opinion Summary Evaluation
acl-long.655
Poster
2402.11683
[ "https://github.com/tjsiledar/summeval-op" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.655/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.656.bib
@inproceedings{zhu-etal-2024-landermt, title = "{LAND}e{RMT}: Detecting and Routing Language-Aware Neurons for Selectively Finetuning {LLM}s to Machine Translation", author = "Zhu, Shaolin and Pan, Leiyu and Li, Bo and Xiong, Deyi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.656", pages = "12135--12148", abstract = "Recent advancements in large language models (LLMs) have shown promising results in multilingual translation even with limited bilingual supervision. The major challenges are catastrophic forgetting and parameter interference for finetuning LLMs when provided parallel training data. To address these challenges, we propose LANDeRMT, a Language-Aware Neuron Detecting and Routing framework that selectively finetunes LLMs to Machine Translation with diverse translation training data. In LANDeRMT, we evaluate the awareness of neurons to MT tasks and categorize them into language-general and language-specific neurons. This categorization enables selective parameter updates during finetuning, mitigating parameter interference and catastrophic forgetting issues. For the detected neurons, we further propose a conditional awareness-based routing mechanism to dynamically adjust language-general and language-specific capacity within LLMs, guided by translation signals. Experimental results demonstrate that the proposed LANDeRMT is very effective in learning translation knowledge, significantly improving translation quality over various strong baselines for multiple language pairs.", }
Recent advancements in large language models (LLMs) have shown promising results in multilingual translation even with limited bilingual supervision. The major challenges in finetuning LLMs on parallel training data are catastrophic forgetting and parameter interference. To address these challenges, we propose LANDeRMT, a Language-Aware Neuron Detecting and Routing framework that selectively finetunes LLMs to Machine Translation with diverse translation training data. In LANDeRMT, we evaluate the awareness of neurons to MT tasks and categorize them into language-general and language-specific neurons. This categorization enables selective parameter updates during finetuning, mitigating parameter interference and catastrophic forgetting issues. For the detected neurons, we further propose a conditional awareness-based routing mechanism to dynamically adjust language-general and language-specific capacity within LLMs, guided by translation signals. Experimental results demonstrate that the proposed LANDeRMT is very effective in learning translation knowledge, significantly improving translation quality over various strong baselines for multiple language pairs.
[ "Zhu, Shaolin", "Pan, Leiyu", "Li, Bo", "Xiong, Deyi" ]
LANDeRMT: Detecting and Routing Language-Aware Neurons for Selectively Finetuning LLMs to Machine Translation
acl-long.656
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.656/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.657.bib
@inproceedings{cai-etal-2024-joint, title = "A Joint Coreference-Aware Approach to Document-Level Target Sentiment Analysis", author = "Cai, Hongjie and Ma, Heqing and Yu, Jianfei and Xia, Rui", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.657", pages = "12149--12160", abstract = "Most existing work on aspect-based sentiment analysis (ABSA) focuses on the sentence level, while research at the document level has not received enough attention. Compared to sentence-level ABSA, the document-level ABSA is not only more practical but also requires holistic document-level understanding capabilities such as coreference resolution. To investigate the impact of coreference information on document-level ABSA, we conduct a three-stage research for the document-level target sentiment analysis (DTSA) task: 1) exploring the effectiveness of coreference information for the DTSA task; 2) reducing the reliance on manually annotated coreference information; 3) alleviating the evaluation bias caused by missing the coreference information of opinion targets. Specifically, we first manually annotate the coreferential opinion targets and propose a multi-task learning framework to jointly model the DTSA task and the coreference resolution task. Then we annotate the coreference information with ChatGPT for joint training. Finally, to address the issue of missing coreference targets, we modify the metrics from strict matching to a loose matching method based on the clusters of targets. The experimental results not only demonstrate the effectiveness of our framework but also reflect the feasibility of using ChatGPT-annotated coreferential entities and the applicability of the modified metrics. Our source code is publicly released at https://github.com/NUSTM/DTSA-Coref.", }
Most existing work on aspect-based sentiment analysis (ABSA) focuses on the sentence level, while research at the document level has not received enough attention. Compared to sentence-level ABSA, document-level ABSA is not only more practical but also requires holistic document-level understanding capabilities such as coreference resolution. To investigate the impact of coreference information on document-level ABSA, we conduct a three-stage study of the document-level target sentiment analysis (DTSA) task: 1) exploring the effectiveness of coreference information for the DTSA task; 2) reducing the reliance on manually annotated coreference information; 3) alleviating the evaluation bias caused by missing the coreference information of opinion targets. Specifically, we first manually annotate the coreferential opinion targets and propose a multi-task learning framework to jointly model the DTSA task and the coreference resolution task. Then we annotate the coreference information with ChatGPT for joint training. Finally, to address the issue of missing coreference targets, we modify the metrics from strict matching to a loose matching method based on the clusters of targets. The experimental results not only demonstrate the effectiveness of our framework but also reflect the feasibility of using ChatGPT-annotated coreferential entities and the applicability of the modified metrics. Our source code is publicly released at https://github.com/NUSTM/DTSA-Coref.
[ "Cai, Hongjie", "Ma, Heqing", "Yu, Jianfei", "Xia, Rui" ]
A Joint Coreference-Aware Approach to Document-Level Target Sentiment Analysis
acl-long.657
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.657/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.658.bib
@inproceedings{cao-etal-2024-visdiahalbench, title = "{V}is{D}ia{H}al{B}ench: A Visual Dialogue Benchmark For Diagnosing Hallucination in Large Vision-Language Models", author = "Cao, Qingxing and Cheng, Junhao and Liang, Xiaodan and Lin, Liang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.658", pages = "12161--12176", abstract = "Despite the significant success of large vision-language models (LVLMs), some studies have revealed that LVLMs suffer from the hallucination problem, where the LVLMs{'} response contains descriptions of non-existent objects. Although various benchmarks have been proposed to investigate this problem, they mostly focus on single-turn evaluation and overlook the hallucination raised by textual inputs. To investigate the hallucination problem of LVLMs when given long-term misleading textual history, we propose a novel visual dialogue hallucination evaluation benchmark VisDiaHalBench. The benchmark consists of samples with five-turn questions about an edited image and its original version. VisDiaHalBench differs from previous hallucination benchmarks in the following three points: 1) The questions and answers are unambiguously grounded by annotated scene graphs. 2) The images are uncommonly edited to inspect the visual model and common-object hallucination in LLMs. 3) The carefully designed dialogue refers a same object in different turns to assess the image consistency and influence of history for LVLMs. The detailed analysis of several state-of-the-art LVLMs across image consistency, visual understanding, history influence, and other dimensions reveals their substantial performance gap with single-turn VQA tasks. The benchmark is released in: https://github.com/qingxingcao/VisDiaHalBench", }
Despite the significant success of large vision-language models (LVLMs), some studies have revealed that LVLMs suffer from the hallucination problem, where the LVLMs{'} response contains descriptions of non-existent objects. Although various benchmarks have been proposed to investigate this problem, they mostly focus on single-turn evaluation and overlook hallucinations induced by textual inputs. To investigate the hallucination problem of LVLMs when given long-term misleading textual history, we propose a novel visual dialogue hallucination evaluation benchmark VisDiaHalBench. The benchmark consists of samples with five-turn questions about an edited image and its original version. VisDiaHalBench differs from previous hallucination benchmarks in the following three points: 1) The questions and answers are unambiguously grounded by annotated scene graphs. 2) The images are uncommonly edited to inspect the visual model and common-object hallucination in LLMs. 3) The carefully designed dialogue refers to the same object across different turns to assess image consistency and the influence of dialogue history on LVLMs. The detailed analysis of several state-of-the-art LVLMs across image consistency, visual understanding, history influence, and other dimensions reveals their substantial performance gap with single-turn VQA tasks. The benchmark is released at: https://github.com/qingxingcao/VisDiaHalBench
[ "Cao, Qingxing", "Cheng, Junhao", "Liang, Xiaodan", "Lin, Liang" ]
VisDiaHalBench: A Visual Dialogue Benchmark For Diagnosing Hallucination in Large Vision-Language Models
acl-long.658
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.658/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.659.bib
@inproceedings{shi-etal-2024-autodsl, title = "{A}uto{DSL}: Automated domain-specific language design for structural representation of procedures with constraints", author = "Shi, Yu-Zhe and Hou, Haofei and Bi, Zhangqian and Meng, Fanxu and Wei, Xiang and Ruan, Lecheng and Wang, Qining", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.659", pages = "12177--12214", abstract = "Accurate representation of procedures in restricted scenarios, such as non-standardized scientific experiments, requires precise depiction of constraints. Unfortunately, Domain-specific Language (DSL), as an effective tool to express constraints structurally, often requires case-by-case hand-crafting, necessitating customized, labor-intensive efforts. To overcome this challenge, we introduce the AutoDSL framework to automate DSL-based constraint design across various domains. Utilizing domain specified experimental protocol corpora, AutoDSL optimizes syntactic constraints and abstracts semantic constraints. Quantitative and qualitative analyses of the DSLs designed by AutoDSL across five distinct domains highlight its potential as an auxiliary module for language models, aiming to improve procedural planning and execution.", }
Accurate representation of procedures in restricted scenarios, such as non-standardized scientific experiments, requires precise depiction of constraints. Unfortunately, domain-specific languages (DSLs), an effective tool for expressing constraints structurally, often require case-by-case hand-crafting, necessitating customized, labor-intensive efforts. To overcome this challenge, we introduce the AutoDSL framework to automate DSL-based constraint design across various domains. Utilizing domain-specific experimental protocol corpora, AutoDSL optimizes syntactic constraints and abstracts semantic constraints. Quantitative and qualitative analyses of the DSLs designed by AutoDSL across five distinct domains highlight its potential as an auxiliary module for language models, aiming to improve procedural planning and execution.
[ "Shi, Yu-Zhe", "Hou, Haofei", "Bi, Zhangqian", "Meng, Fanxu", "Wei, Xiang", "Ruan, Lecheng", "Wang, Qining" ]
AutoDSL: Automated domain-specific language design for structural representation of procedures with constraints
acl-long.659
Poster
2406.12324
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.659/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.660.bib
@inproceedings{franzluebbers-etal-2024-multipath, title = "Multipath parsing in the brain", author = "Franzluebbers, Berta and Dunagan, Donald and Stanojevi{\'c}, Milo{\v{s}} and Buys, Jan and Hale, John", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.660", pages = "12215--12229", abstract = "Humans understand sentences word-by-word, in the order that they hear them. This incrementality entails resolving temporary ambiguities about syntactic relationships. We investigate how humans process these syntactic ambiguities by correlating predictions from incremental generative dependency parsers with timecourse data from people undergoing functional neuroimaging while listening to an audiobook. In particular, we compare competing hypotheses regarding the number of developing syntactic analyses in play during word-by-word comprehension: one vs more than one. This comparison involves evaluating syntactic surprisal from a state-of-the-art dependency parser with LLM-adapted encodings against an existing fMRI dataset. In both English and Chinese data, we find evidence for multipath parsing. Brain regions associated with this multipath effect include bilateral superior temporal gyrus.", }
Humans understand sentences word-by-word, in the order that they hear them. This incrementality entails resolving temporary ambiguities about syntactic relationships. We investigate how humans process these syntactic ambiguities by correlating predictions from incremental generative dependency parsers with timecourse data from people undergoing functional neuroimaging while listening to an audiobook. In particular, we compare competing hypotheses regarding the number of developing syntactic analyses in play during word-by-word comprehension: one vs more than one. This comparison involves evaluating syntactic surprisal from a state-of-the-art dependency parser with LLM-adapted encodings against an existing fMRI dataset. In both English and Chinese data, we find evidence for multipath parsing. Brain regions associated with this multipath effect include bilateral superior temporal gyrus.
[ "Franzluebbers, Berta", "Dunagan, Donald", "Stanojevi{\\'c}, Milo{\\v{s}}", "Buys, Jan", "Hale, John" ]
Multipath parsing in the brain
acl-long.660
Poster
2401.18046
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.660/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.661.bib
@inproceedings{yoon-etal-2024-search, title = "Search-Adaptor: Embedding Customization for Information Retrieval", author = "Yoon, Jinsung and Chen, Yanfei and Arik, Sercan and Pfister, Tomas", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.661", pages = "12230--12247", abstract = "Embeddings extracted by pre-trained Large Language Models (LLMs) have significant potential to improve information retrieval and search. Beyond the zero-shot setup in which they are being conventionally used, being able to take advantage of the information from the relevant query-corpus paired data can further boost the LLM capabilities. In this paper, we propose a novel method, Search-Adaptor, for customizing LLMs for information retrieval in an efficient and robust way. Search-Adaptor modifies the embeddings generated by pre-trained LLMs, and can be integrated with any LLM, including those only available via prediction APIs. On multiple English, multilingual, and multimodal retrieval datasets, we show consistent and significant performance benefits for Search-Adaptor {--} e.g., more than 5{\%} improvements for Google Embedding APIs in nDCG@10 averaged over 14 BEIR datasets.", }
Embeddings extracted by pre-trained Large Language Models (LLMs) have significant potential to improve information retrieval and search. Beyond the zero-shot setup in which they are conventionally used, leveraging information from relevant query-corpus paired data can further boost LLM capabilities. In this paper, we propose a novel method, Search-Adaptor, for customizing LLMs for information retrieval in an efficient and robust way. Search-Adaptor modifies the embeddings generated by pre-trained LLMs, and can be integrated with any LLM, including those only available via prediction APIs. On multiple English, multilingual, and multimodal retrieval datasets, we show consistent and significant performance benefits for Search-Adaptor {--} e.g., more than 5{\%} improvements for Google Embedding APIs in nDCG@10 averaged over 14 BEIR datasets.
[ "Yoon, Jinsung", "Chen, Yanfei", "Arik, Sercan", "Pfister, Tomas" ]
Search-Adaptor: Embedding Customization for Information Retrieval
acl-long.661
Poster
2310.08750
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.661/
[]
[]
[]
0
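
The Search-Adaptor entry above reports its gains in nDCG@10 on BEIR. For readers unfamiliar with the metric, a minimal sketch of the standard definition follows; the toy relevance labels are illustrative, not from the paper.

```python
# Standard nDCG@k over a ranked list of graded relevance labels.
import math
from typing import List

def ndcg_at_k(relevances: List[float], k: int = 10) -> float:
    """relevances: graded relevance of retrieved docs, in ranked order."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([3, 2, 0, 1]))  # toy ranking with graded relevance labels
```
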
https://aclanthology.org/2024.acl-long.662.bib
@inproceedings{ahmadian-etal-2024-back, title = "Back to Basics: Revisiting {REINFORCE}-Style Optimization for Learning from Human Feedback in {LLM}s", author = {Ahmadian, Arash and Cremer, Chris and Gall{\'e}, Matthias and Fadaee, Marzieh and Kreutzer, Julia and Pietquin, Olivier and {\"U}st{\"u}n, Ahmet and Hooker, Sara}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.662", pages = "12248--12267", abstract = "AI alignment in the shape of Reinforcement Learning from Human Feedback (RLHF) is increasingly treated as a crucial ingredient for high performance large language models. Proximal Policy Optimization (PPO) has been installed by the seminal literature as the standard method for the RL part of RLHF. However, it involves both high computational cost and sensitive hyperparameter tuning. We posit that most of the motivational principles that led to the development of PPO are less of a practical concern in RLHF and advocate for a less computationally expensive method that preserves and even increases performance. We revisit how alignment from human preferences is formulated in the context of RL. Keeping simplicity as a guiding principle, we show that many components of PPO are unnecessary in an RLHF context and that far simpler REINFORCE-style optimization variants outperform both PPO and newly proposed {``}RL-free{''} methods such as DPO and RAFT. Our work suggests that careful adaptation to LLMs alignment characteristics allows benefiting from online RL optimization at low cost.", }
AI alignment in the shape of Reinforcement Learning from Human Feedback (RLHF) is increasingly treated as a crucial ingredient for high-performance large language models. Proximal Policy Optimization (PPO) has been established in the seminal literature as the standard method for the RL part of RLHF. However, it involves both high computational cost and sensitive hyperparameter tuning. We posit that most of the motivational principles that led to the development of PPO are less of a practical concern in RLHF and advocate for a less computationally expensive method that preserves and even increases performance. We revisit how alignment from human preferences is formulated in the context of RL. Keeping simplicity as a guiding principle, we show that many components of PPO are unnecessary in an RLHF context and that far simpler REINFORCE-style optimization variants outperform both PPO and newly proposed {``}RL-free{''} methods such as DPO and RAFT. Our work suggests that careful adaptation to LLMs{'} alignment characteristics allows benefiting from online RL optimization at low cost.
[ "Ahmadian, Arash", "Cremer, Chris", "Gall{\\'e}, Matthias", "Fadaee, Marzieh", "Kreutzer, Julia", "Pietquin, Olivier", "{\\\"U}st{\\\"u}n, Ahmet", "Hooker, Sara" ]
Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs
acl-long.662
Poster
[ "" ]
https://huggingface.co/papers/2402.14740
6
6
0
7
https://aclanthology.org/2024.acl-long.662/
[]
[]
[]
1
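
The abstract above argues that vanilla REINFORCE-style updates suffice for RLHF. As a reference point, here is a minimal sketch of the classic REINFORCE sequence loss with a reward baseline; it is not the paper's code, and the tensors in the smoke test are random stand-ins for model outputs and reward-model scores.

```python
# Minimal REINFORCE-style sequence loss with a baseline (illustrative sketch).
import torch

def reinforce_loss(logprobs: torch.Tensor, rewards: torch.Tensor,
                   baseline: float) -> torch.Tensor:
    """logprobs: (batch, seq_len) log-probs of sampled response tokens.
    rewards:  (batch,) scalar sequence-level rewards from a reward model.
    baseline: e.g. a moving average of past rewards, to reduce variance."""
    seq_logprob = logprobs.sum(dim=-1)        # log p(y | x) per sequence
    advantage = rewards - baseline            # centered reward
    # Gradient flows only through the log-probs, as in vanilla REINFORCE.
    return -(advantage.detach() * seq_logprob).mean()

# Smoke test with random tensors standing in for model outputs.
logprobs = torch.randn(4, 16, requires_grad=True)
rewards = torch.tensor([1.0, 0.2, -0.5, 0.7])
loss = reinforce_loss(logprobs, rewards, baseline=rewards.mean().item())
loss.backward()                               # gradients reach `logprobs`
```
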
https://aclanthology.org/2024.acl-long.663.bib
@inproceedings{ku-etal-2024-viescore, title = "{VIES}core: Towards Explainable Metrics for Conditional Image Synthesis Evaluation", author = "Ku, Max and Jiang, Dongfu and Wei, Cong and Yue, Xiang and Chen, Wenhu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.663", pages = "12268--12290", abstract = "In the rapidly advancing field of conditional image generation research, challenges such as limited explainability lie in effectively evaluating the performance and capabilities of various models. This paper introduces VIEScore, a Visual Instruction-guided Explainable metric for evaluating any conditional image generation tasks. VIEScore leverages general knowledge from Multimodal Large Language Models (MLLMs) as the backbone and does not require training or fine-tuning. We evaluate VIEScore on seven prominent tasks in conditional image tasks and found: (1) VIEScore (GPT4-o) achieves a high Spearman correlation of 0.4 with human evaluations, while the human-to-human correlation is 0.45. (2) VIEScore (with open-source MLLM) is significantly weaker than GPT-4o and GPT-4v in evaluating synthetic images. (3) VIEScore achieves a correlation on par with human ratings in the generation tasks but struggles in editing tasks. With these results, we believe VIEScore shows its great potential to replace human judges in evaluating image synthesis tasks.", }
In the rapidly advancing field of conditional image generation research, challenges such as limited explainability lie in effectively evaluating the performance and capabilities of various models. This paper introduces VIEScore, a Visual Instruction-guided Explainable metric for evaluating any conditional image generation task. VIEScore leverages general knowledge from Multimodal Large Language Models (MLLMs) as the backbone and does not require training or fine-tuning. We evaluate VIEScore on seven prominent conditional image generation tasks and find: (1) VIEScore (GPT-4o) achieves a high Spearman correlation of 0.4 with human evaluations, while the human-to-human correlation is 0.45. (2) VIEScore (with open-source MLLM) is significantly weaker than GPT-4o and GPT-4v in evaluating synthetic images. (3) VIEScore achieves a correlation on par with human ratings in the generation tasks but struggles in editing tasks. With these results, we believe VIEScore shows its great potential to replace human judges in evaluating image synthesis tasks.
[ "Ku, Max", "Jiang, Dongfu", "Wei, Cong", "Yue, Xiang", "Chen, Wenhu" ]
VIEScore: Towards Explainable Metrics for Conditional Image Synthesis Evaluation
acl-long.663
Poster
2312.14867
[ "" ]
https://huggingface.co/papers/2312.14867
3
1
0
5
https://aclanthology.org/2024.acl-long.663/
[]
[]
[]
1
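
The VIEScore entry reports metric quality as a Spearman correlation with human judgments; computing that statistic is a one-liner with SciPy. The numbers below are toy values, not the paper's data.

```python
# Spearman correlation between metric scores and human ratings.
from scipy.stats import spearmanr

metric_scores = [0.81, 0.64, 0.92, 0.33, 0.55]
human_ratings = [4, 3, 5, 1, 2]
rho, pvalue = spearmanr(metric_scores, human_ratings)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.3f})")
```
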
https://aclanthology.org/2024.acl-long.664.bib
@inproceedings{zhou-etal-2024-tree, title = "Tree Transformer{'}s Disambiguation Ability of Prepositional Phrase Attachment and Garden Path Effects", author = "Zhou, Lingling and Verberne, Suzan and Wijnholds, Gijs", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.664", pages = "12291--12301", abstract = "This work studies two types of ambiguity in natural language: prepositional phrase (PP) attachment ambiguity, and garden path constructions. Due to the different nature of these ambiguities {--} one being structural, the other incremental in nature {--} we pretrain and evaluate the Tree Transformer of Wang et al. (2019), an unsupervised Transformer model that induces tree representations internally. To assess PP attachment ambiguity we inspect the model{'}s induced parse trees against a newly prepared dataset derived from the PP attachment corpus (Ratnaparkhi et al., 1994). Measuring garden path effects is done by considering surprisal rates of the underlying language model on a number of dedicated test suites, following Futrell et al. (2019). For comparison we evaluate a pretrained supervised BiLSTM-based model trained on constituency parsing as sequence labelling (G{\'o}mez-Rodr{\'\i}guez and Vilares, 2018). Results show that the unsupervised Tree Transformer does exhibit garden path effects, but its parsing ability is far inferior to the supervised BiLSTM, and it is not as sensitive to lexical cues as other large LSTM models, suggesting that supervised parsers based on a pre-Transformer architecture may be the better choice in the presence of ambiguity.", }
This work studies two types of ambiguity in natural language: prepositional phrase (PP) attachment ambiguity, and garden path constructions. Due to the different nature of these ambiguities {--} one structural, the other incremental {--} we pretrain and evaluate the Tree Transformer of Wang et al. (2019), an unsupervised Transformer model that induces tree representations internally. To assess PP attachment ambiguity we inspect the model{'}s induced parse trees against a newly prepared dataset derived from the PP attachment corpus (Ratnaparkhi et al., 1994). Measuring garden path effects is done by considering surprisal rates of the underlying language model on a number of dedicated test suites, following Futrell et al. (2019). For comparison we evaluate a pretrained supervised BiLSTM-based model trained on constituency parsing as sequence labelling (G{\'o}mez-Rodr{\'\i}guez and Vilares, 2018). Results show that the unsupervised Tree Transformer does exhibit garden path effects, but its parsing ability is far inferior to that of the supervised BiLSTM, and it is not as sensitive to lexical cues as other large LSTM models, suggesting that supervised parsers based on a pre-Transformer architecture may be the better choice in the presence of ambiguity.
[ "Zhou, Lingling", "Verberne, Suzan", "Wijnholds, Gijs" ]
Tree Transformer's Disambiguation Ability of Prepositional Phrase Attachment and Garden Path Effects
acl-long.664
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.664/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.665.bib
@inproceedings{markowitz-etal-2024-tree, title = "Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs", author = "Markowitz, Elan and Ramakrishna, Anil and Dhamala, Jwala and Mehrabi, Ninareh and Peris, Charith and Gupta, Rahul and Chang, Kai-Wei and Galstyan, Aram", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.665", pages = "12302--12319", abstract = "Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge. However, KGs and LLMs are often developed separately and must be integrated after training. We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs. The algorithm equips a LLM with actions for interfacing a KG and enables the LLM to perform tree search over possible thoughts and actions to find high confidence reasoning paths. Tree-of-Traversals significantly improves performance on question answering and KG question answering tasks. Code is available at https://github.com/amazon-science/tree-of-traversals", }
Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge. However, KGs and LLMs are often developed separately and must be integrated after training. We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs. The algorithm equips an LLM with actions for interfacing with a KG and enables the LLM to perform tree search over possible thoughts and actions to find high-confidence reasoning paths. Tree-of-Traversals significantly improves performance on question answering and KG question answering tasks. Code is available at https://github.com/amazon-science/tree-of-traversals
[ "Markowitz, Elan", "Ramakrishna, Anil", "Dhamala, Jwala", "Mehrabi, Ninareh", "Peris, Charith", "Gupta, Rahul", "Chang, Kai-Wei", "Galstyan, Aram" ]
Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs
acl-long.665
Poster
2407.21358
[ "https://github.com/amazon-science/tree-of-traversals" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.665/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.666.bib
@inproceedings{shi-etal-2024-structured, title = "Structured Tree Alignment for Evaluation of (Speech) Constituency Parsing", author = "Shi, Freda and Gimpel, Kevin and Livescu, Karen", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.666", pages = "12320--12332", abstract = "We present the structured average intersection-over-union ratio (STRUCT-IOU), an evaluation metric that compares a constituency parse tree over automatically recognized spoken word boundaries with the ground-truth parse tree over written words. To compute the metric, we (1) project the ground-truth parse tree to the speech domain by forced alignment, (2) align the projected ground-truth constituents with the predicted ones under certain structured constraints, and (3) calculate the average IOU score across all aligned constituent pairs. STRUCT-IOU takes word boundaries into account and overcomes the challenge that the predicted words and ground truth may not have perfect one-to-one correspondence. Extending to the evaluation of text constituency parsing, we demonstrate that STRUCT-IOU shows higher tolerance to syntactically plausible parses than PARSEVAL (Black et al., 1991).", }
We present the structured average intersection-over-union ratio (STRUCT-IOU), an evaluation metric that compares a constituency parse tree over automatically recognized spoken word boundaries with the ground-truth parse tree over written words. To compute the metric, we (1) project the ground-truth parse tree to the speech domain by forced alignment, (2) align the projected ground-truth constituents with the predicted ones under certain structured constraints, and (3) calculate the average IOU score across all aligned constituent pairs. STRUCT-IOU takes word boundaries into account and overcomes the challenge that the predicted words and ground truth may not have perfect one-to-one correspondence. Extending to the evaluation of text constituency parsing, we demonstrate that STRUCT-IOU shows higher tolerance to syntactically plausible parses than PARSEVAL (Black et al., 1991).
[ "Shi, Freda", "Gimpel, Kevin", "Livescu, Karen" ]
Structured Tree Alignment for Evaluation of (Speech) Constituency Parsing
acl-long.666
Poster
2402.13433
[ "https://github.com/explorerfreda/struct-iou" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.666/
[]
[]
[]
0
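
The STRUCT-IOU entry above describes a three-step computation: project the gold tree to time spans via forced alignment, align constituents under structured constraints, and average the IoU of aligned pairs. The sketch below is only a toy approximation of steps (2)-(3): it uses a greedy best-overlap matcher instead of the paper's constrained structured alignment.

```python
# Simplified interval IoU over constituents projected to time spans.
from typing import List, Tuple

Span = Tuple[float, float]  # (start_time, end_time) from forced alignment

def interval_iou(a: Span, b: Span) -> float:
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def mean_span_iou(gold: List[Span], pred: List[Span]) -> float:
    # Greedily pair each gold constituent with its best-overlapping prediction.
    scores = [max((interval_iou(g, p) for p in pred), default=0.0) for g in gold]
    return sum(scores) / len(scores) if scores else 0.0

print(mean_span_iou([(0.0, 1.2), (1.2, 2.0)], [(0.1, 1.1), (1.3, 2.1)]))
```
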
https://aclanthology.org/2024.acl-long.667.bib
@inproceedings{jha-etal-2024-visage, title = "{V}i{SAG}e: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation", author = "Jha, Akshita and Prabhakaran, Vinodkumar and Denton, Remi and Laszlo, Sarah and Dave, Shachi and Qadri, Rida and Reddy, Chandan and Dev, Sunipa", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.667", pages = "12333--12347", abstract = "Recent studies have shown that Text-to-Image (T2I) model generations can reflect social stereotypes present in the real world. However, existing approaches for evaluating stereotypes have a noticeable lack of coverage of global identity groups and their associated stereotypes. To address this gap, we introduce the ViSAGe (Visual Stereotypes Around the Globe) dataset to enable the evaluation of known nationality-based stereotypes in T2I models, across 135 nationalities. We enrich an existing textual stereotype resource by distinguishing between stereotypical associations that are more likely to have visual depictions, such as {`}sombrero{'}, from those that are less visually concrete, such as {`}attractive{'}. We demonstrate ViSAGe{'}s utility through a multi-faceted evaluation of T2I generations. First, we show that stereotypical attributes in ViSAGe are thrice as likely to be present in generated images of corresponding identities as compared to other attributes, and that the offensiveness of these depictions is especially higher for identities from Africa, South America, and South East Asia. Second, we assess the {`}stereotypical pull{'} of visual depictions of identity groups, which reveals how the {`}default{'} representations of all identity groups in ViSAGe have a pull towards stereotypical depictions, and that this pull is even more prominent for identity groups from the Global South. CONTENT WARNING: Some examples contain offensive stereotypes.", }
Recent studies have shown that Text-to-Image (T2I) model generations can reflect social stereotypes present in the real world. However, existing approaches for evaluating stereotypes have a noticeable lack of coverage of global identity groups and their associated stereotypes. To address this gap, we introduce the ViSAGe (Visual Stereotypes Around the Globe) dataset to enable the evaluation of known nationality-based stereotypes in T2I models, across 135 nationalities. We enrich an existing textual stereotype resource by distinguishing stereotypical associations that are more likely to have visual depictions, such as {`}sombrero{'}, from those that are less visually concrete, such as {`}attractive{'}. We demonstrate ViSAGe{'}s utility through a multi-faceted evaluation of T2I generations. First, we show that stereotypical attributes in ViSAGe are thrice as likely to be present in generated images of corresponding identities as compared to other attributes, and that the offensiveness of these depictions is especially high for identities from Africa, South America, and South East Asia. Second, we assess the {`}stereotypical pull{'} of visual depictions of identity groups, which reveals how the {`}default{'} representations of all identity groups in ViSAGe have a pull towards stereotypical depictions, and that this pull is even more prominent for identity groups from the Global South. CONTENT WARNING: Some examples contain offensive stereotypes.
[ "Jha, Akshita", "Prabhakaran, Vinodkumar", "Denton, Remi", "Laszlo, Sarah", "Dave, Shachi", "Qadri, Rida", "Reddy, Ch", "an", "Dev, Sunipa" ]
ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation
acl-long.667
Poster
2401.06310
[ "https://github.com/google-research-datasets/visage" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.667/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.668.bib
@inproceedings{zhang-etal-2024-transferable, title = "Transferable and Efficient Non-Factual Content Detection via Probe Training with Offline Consistency Checking", author = "Zhang, Xiaokang and Yao, Zijun and Zhang, Jing and Yun, Kaifeng and Yu, Jifan and Li, Juanzi and Tang, Jie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.668", pages = "12348--12364", abstract = "This paper proposes PiNose, which trains a probing model on offline self-consistency checking results, thereby circumventing the need for human-annotated data and achieving transferability across diverse data distributions. As the consistency check process is offline, PiNose reduces the computational burden of generating multiple responses by online consistency verification. Additionally, it examines various aspects of internal states prior to response decoding, contributing to more effective detection of factual inaccuracies. Experiment results on both factuality detection and question answering benchmarks show that PiNose achieves surpassing results than existing factuality detection methods.", }
This paper proposes PiNose, which trains a probing model on offline self-consistency checking results, thereby circumventing the need for human-annotated data and achieving transferability across diverse data distributions. Because the consistency check process is offline, PiNose avoids the computational burden of generating multiple responses for online consistency verification. Additionally, it examines various aspects of internal states prior to response decoding, contributing to more effective detection of factual inaccuracies. Experiment results on both factuality detection and question answering benchmarks show that PiNose outperforms existing factuality detection methods.
[ "Zhang, Xiaokang", "Yao, Zijun", "Zhang, Jing", "Yun, Kaifeng", "Yu, Jifan", "Li, Juanzi", "Tang, Jie" ]
Transferable and Efficient Non-Factual Content Detection via Probe Training with Offline Consistency Checking
acl-long.668
Poster
2404.06742
[ "https://github.com/pinocchio42/pinose" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.668/
[]
[]
[]
0
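The PiNose recipe lends itself to a compact illustration: responses are sampled offline, a consistency label is derived from their agreement, and a lightweight probe is trained on the model's internal states so that inference needs only a single forward pass. The sketch below is a minimal, hypothetical version; the synthetic hidden states, the agreement threshold, and the exact-match voting are stand-ins for the paper's actual pipeline.

```python
# Minimal sketch of PiNose-style probe training (assumption: we already have
# final-token hidden states for each statement, plus an offline consistency
# label computed by sampling several responses and measuring agreement).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def offline_consistency_label(samples: list[str]) -> int:
    """Label a statement factual (1) if sampled responses mostly agree.
    Naive exact-match voting; the paper's consistency check is more careful."""
    majority = max(set(samples), key=samples.count)
    return int(samples.count(majority) / len(samples) >= 0.7)

# Synthetic stand-ins: 512-dim hidden states and consistency labels
# (real labels would come from offline_consistency_label over samples).
hidden = rng.normal(size=(1000, 512))
labels = rng.integers(0, 2, size=1000)

probe = LogisticRegression(max_iter=1000).fit(hidden, labels)
# At inference a single forward pass suffices: no repeated sampling needed.
print("probe accuracy on training data:", probe.score(hidden, labels))
```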
https://aclanthology.org/2024.acl-long.669.bib
@inproceedings{li-etal-2024-language, title = "What Do Language Models Learn in Context? The Structured Task Hypothesis.", author = "Li, Jiaoda and Hou, Yifan and Sachan, Mrinmaya and Cotterell, Ryan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.669", pages = "12365--12379", abstract = "Large language models (LLMs) exhibit an intriguing ability to learn a novel task from in-context examples presented in a demonstration, termed in-context learning (ICL). Understandably, a swath of research has been dedicated to uncovering the theories underpinning ICL. One popular hypothesis explains ICL by task selection. LLMs identify the task based on the demonstration and generalize it to the prompt. Another popular hypothesis is that ICL is a form of meta-learning, i.e., the models learn a learning algorithm at pre-training time and apply it to the demonstration. Finally, a third hypothesis argues that LLMs use the demonstration to select a composition of tasks learned during pre-training to perform ICL. In this paper, we empirically explore these three hypotheses that explain LLMs{'} ability to learn in context with a suite of experiments derived from common text classification tasks. We invalidate the first two hypotheses with counterexamples and provide evidence in support of the last hypothesis. Our results suggest an LLM could learn a novel task in context via composing tasks learned during pre-training.", }
Large language models (LLMs) exhibit an intriguing ability to learn a novel task from in-context examples presented in a demonstration, termed in-context learning (ICL). Understandably, a swath of research has been dedicated to uncovering the theories underpinning ICL. One popular hypothesis explains ICL by task selection. LLMs identify the task based on the demonstration and generalize it to the prompt. Another popular hypothesis is that ICL is a form of meta-learning, i.e., the models learn a learning algorithm at pre-training time and apply it to the demonstration. Finally, a third hypothesis argues that LLMs use the demonstration to select a composition of tasks learned during pre-training to perform ICL. In this paper, we empirically explore these three hypotheses that explain LLMs{'} ability to learn in context with a suite of experiments derived from common text classification tasks. We invalidate the first two hypotheses with counterexamples and provide evidence in support of the last hypothesis. Our results suggest an LLM could learn a novel task in context via composing tasks learned during pre-training.
[ "Li, Jiaoda", "Hou, Yifan", "Sachan, Mrinmaya", "Cotterell, Ryan" ]
What Do Language Models Learn in Context? The Structured Task Hypothesis.
acl-long.669
Oral
[ "https://github.com/eth-lre/llm_icl" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.669/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.670.bib
@inproceedings{yin-etal-2024-agent, title = "Agent Lumos: Unified and Modular Training for Open-Source Language Agents", author = "Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.670", pages = "12380--12403", abstract = "Closed-source agents suffer from several issues such as a lack of affordability, transparency, and reproducibility, particularly on complex interactive tasks. This motivates the development of open-source alternatives. We introduce Lumos, one of the first frameworks for training open-source LLM-based agents. Lumos features a learnable, unified and modular architecture with a planning module that learns high-level subgoal generation, and a grounding module trained to translate these into the actions using various tools in the execution module. The design allows for modular upgrades and wider applicability to diverse interactive tasks. To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks. On 9 datasets, Lumos exhibits several key advantages: (1) Lumos excels multiple larger open-source agents on the held-out datasets (unused for training) for each task type. Lumos even surpasses GPT agents on QA and web tasks; (2) Lumos outperforms open-source agents produced by chain-of-thoughts and unmodularized integrated training; and (3) Lumos effectively generalizes to unseen tasks, outperforming 33B-scale agents and domain-specific agents. Code and data will be released.", }
Closed-source agents suffer from several issues such as a lack of affordability, transparency, and reproducibility, particularly on complex interactive tasks. This motivates the development of open-source alternatives. We introduce Lumos, one of the first frameworks for training open-source LLM-based agents. Lumos features a learnable, unified and modular architecture with a planning module that learns high-level subgoal generation, and a grounding module trained to translate these into actions using various tools in the execution module. The design allows for modular upgrades and wider applicability to diverse interactive tasks. To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks. On 9 datasets, Lumos exhibits several key advantages: (1) Lumos outperforms multiple larger open-source agents on the held-out datasets (unused for training) for each task type. Lumos even surpasses GPT agents on QA and web tasks; (2) Lumos outperforms open-source agents produced by chain-of-thought and unmodularized integrated training; and (3) Lumos effectively generalizes to unseen tasks, outperforming 33B-scale agents and domain-specific agents. Code and data will be released.
[ "Yin, Da", "Brahman, Faeze", "Ravich", "er, Abhilasha", "Ch", "u, Khyathi", "Chang, Kai-Wei", "Choi, Yejin", "Lin, Bill Yuchen" ]
Agent Lumos: Unified and Modular Training for Open-Source Language Agents
acl-long.670
Poster
2311.05657
[ "https://github.com/allenai/lumos" ]
https://huggingface.co/papers/2311.05657
6
27
2
7
https://aclanthology.org/2024.acl-long.670/
[ "ai2lumos/lumos_unified_plan_iterative", "ai2lumos/lumos_web_agent_plan_iterative", "ai2lumos/lumos_unified_ground_iterative-13B", "ai2lumos/lumos_complex_qa_ground_iterative-13B", "ai2lumos/lumos_unified_ground_iterative", "ai2lumos/lumos_unified_plan_iterative-13B", "ai2lumos/lumos_multimodal_plan_iterative", "ai2lumos/lumos_multimodal_ground_iterative", "ai2lumos/lumos_multimodal_ground_iterative-13B", "ai2lumos/lumos_maths_plan_onetime-13B", "ai2lumos/lumos_complex_qa_plan_iterative-13B", "ai2lumos/lumos_maths_plan_iterative-13B", "ai2lumos/lumos_web_agent_ground_iterative", "ai2lumos/lumos_maths_ground_onetime", "ai2lumos/lumos_complex_qa_ground_iterative", "ai2lumos/lumos_complex_qa_plan_iterative", "ai2lumos/lumos_maths_plan_onetime", "ai2lumos/lumos_complex_qa_plan_onetime", "ai2lumos/lumos_complex_qa_ground_onetime", "ai2lumos/lumos_maths_ground_iterative", "ai2lumos/lumos_maths_plan_iterative", "ai2lumos/lumos_multimodal_plan_iterative-13B" ]
[ "ai2lumos/lumos_web_agent_plan_iterative", "ai2lumos/lumos_complex_qa_plan_iterative", "ai2lumos/lumos_complex_qa_ground_onetime", "ai2lumos/lumos_complex_qa_plan_onetime", "ai2lumos/lumos_maths_plan_onetime", "ai2lumos/lumos_web_agent_ground_iterative", "ai2lumos/lumos_multimodal_plan_iterative", "ai2lumos/lumos_unified_plan_iterative", "ai2lumos/lumos_complex_qa_ground_iterative", "ai2lumos/lumos_unified_ground_iterative", "ai2lumos/lumos_maths_ground_iterative", "ai2lumos/lumos_maths_ground_onetime", "ai2lumos/lumos_multimodal_ground_iterative", "ai2lumos/lumos_maths_plan_iterative" ]
[]
1
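Lumos's plan/ground/execute split can be summarized in a few lines of control flow. The sketch below is a hedged illustration with stubbed modules: in the paper both the planning and grounding modules are finetuned language models, and the tool registry here is invented for demonstration.

```python
# Hypothetical sketch of a Lumos-style modular agent loop: a planning module
# emits high-level subgoals, a grounding module translates each subgoal into
# a tool call, and an execution module runs it. Module internals are stubs.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"<results for {q!r}>",
    "calculator": lambda expr: str(eval(expr)),  # illustration only
}

def plan(task: str) -> list[str]:
    # Stub: a trained planning LM would generate subgoals conditioned on task.
    return [f"find facts about: {task}", "compute: 2+2"]

def ground(subgoal: str) -> tuple[str, str]:
    # Stub: a trained grounding LM maps a subgoal to (tool, argument).
    if subgoal.startswith("compute:"):
        return "calculator", subgoal.split(":", 1)[1].strip()
    return "search", subgoal.split(":", 1)[1].strip()

def run_agent(task: str) -> list[str]:
    observations = []
    for subgoal in plan(task):
        tool, arg = ground(subgoal)
        observations.append(TOOLS[tool](arg))
    return observations

print(run_agent("population of Thailand"))
```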
https://aclanthology.org/2024.acl-long.671.bib
@inproceedings{alkhamissi-etal-2024-investigating, title = "Investigating Cultural Alignment of Large Language Models", author = "AlKhamissi, Badr and ElNokrashy, Muhammad and Alkhamissi, Mai and Diab, Mona", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.671", pages = "12404--12422", abstract = "The intricate relationship between language and culture has long been a subject of exploration within the realm of linguistic anthropology. Large Language Models (LLMs), promoted as repositories of collective human knowledge, raise a pivotal question: do these models genuinely encapsulate the diverse knowledge adopted by different cultures? Our study reveals that these models demonstrate greater cultural alignment along two dimensions{---}firstly, when prompted with the dominant language of a specific culture, and secondly, when pretrained with a refined mixture of languages employed by that culture. We quantify cultural alignment by simulating sociological surveys, comparing model responses to those of actual survey participants as references. Specifically, we replicate a survey conducted in various regions of Egypt and the United States through prompting LLMs with different pretraining data mixtures in both Arabic and English with the personas of the real respondents and the survey questions. Further analysis reveals that misalignment becomes more pronounced for underrepresented personas and for culturally sensitive topics, such as those probing social values. Finally, we introduce Anthropological Prompting, a novel method leveraging anthropological reasoning to enhance cultural alignment. Our study emphasizes the necessity for a more balanced multilingual pretraining dataset to better represent the diversity of human experience and the plurality of different cultures with many implications on the topic of cross-lingual transfer.", }
The intricate relationship between language and culture has long been a subject of exploration within the realm of linguistic anthropology. Large Language Models (LLMs), promoted as repositories of collective human knowledge, raise a pivotal question: do these models genuinely encapsulate the diverse knowledge adopted by different cultures? Our study reveals that these models demonstrate greater cultural alignment along two dimensions{---}firstly, when prompted with the dominant language of a specific culture, and secondly, when pretrained with a refined mixture of languages employed by that culture. We quantify cultural alignment by simulating sociological surveys, comparing model responses to those of actual survey participants as references. Specifically, we replicate a survey conducted in various regions of Egypt and the United States by prompting LLMs with different pretraining data mixtures in both Arabic and English with the personas of the real respondents and the survey questions. Further analysis reveals that misalignment becomes more pronounced for underrepresented personas and for culturally sensitive topics, such as those probing social values. Finally, we introduce Anthropological Prompting, a novel method leveraging anthropological reasoning to enhance cultural alignment. Our study emphasizes the necessity for a more balanced multilingual pretraining dataset to better represent the diversity of human experience and the plurality of different cultures, with many implications for cross-lingual transfer.
[ "AlKhamissi, Badr", "ElNokrashy, Muhammad", "Alkhamissi, Mai", "Diab, Mona" ]
Investigating Cultural Alignment of Large Language Models
acl-long.671
Poster
2402.13231
[ "https://github.com/bkhmsi/cultural-trends" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.671/
[]
[]
[]
0
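The survey-simulation setup suggests a simple alignment metric: prompt the model with each respondent's persona and question, then compare its answers against the real responses. The snippet below sketches one plausible scoring function under that framing; the exact metric used in the paper may differ, and the example answers are fabricated.

```python
# Sketch of one plausible cultural-alignment score: the fraction of survey
# questions where a persona-prompted model matches the real respondent.
# This is an assumption for illustration, not necessarily the paper's metric.
import numpy as np

def alignment_score(model_answers: list[int], survey_answers: list[int]) -> float:
    """Exact-match rate between model answers and a respondent's answers."""
    model = np.asarray(model_answers)
    survey = np.asarray(survey_answers)
    return float((model == survey).mean())

# Toy example: five Likert-scale questions answered by a persona-prompted model.
print(alignment_score([1, 3, 2, 5, 4], [1, 3, 3, 5, 4]))  # -> 0.8
```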
https://aclanthology.org/2024.acl-long.672.bib
@inproceedings{wongkamjan-etal-2024-victories, title = "More Victories, Less Cooperation: Assessing Cicero{'}s Diplomacy Play", author = "Wongkamjan, Wichayaporn and Gu, Feng and Wang, Yanze and Hermjakob, Ulf and May, Jonathan and Stewart, Brandon and Kummerfeld, Jonathan and Peskoff, Denis and Boyd-Graber, Jordan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.672", pages = "12423--12441", abstract = "The boardgame Diplomacy is a challenging setting for communicative and cooperative artificial intelligence. The most prominent communicative Diplomacy AI, Cicero, has excellent strategic abilities, exceeding human players. However, the best Diplomacy players master communication, not just tactics, which is why the game has received attention as an AI challenge. This work seeks to understand the degree to which Cicero succeeds at communication. First, we annotate in-game communication with abstract meaning representation to separate in-game tactics from general language. Second, we run two dozen games with humans and Cicero, totaling over 200 human-player hours of competition. While AI can consistently outplay human players, AI-Human communication is still limited because of AI{'}s difficulty with deception and persuasion. This shows that Cicero relies on strategy and has not yet reached the full promise of communicative and cooperative AI.", }
The boardgame Diplomacy is a challenging setting for communicative and cooperative artificial intelligence. The most prominent communicative Diplomacy AI, Cicero, has excellent strategic abilities, exceeding human players. However, the best Diplomacy players master communication, not just tactics, which is why the game has received attention as an AI challenge. This work seeks to understand the degree to which Cicero succeeds at communication. First, we annotate in-game communication with abstract meaning representation to separate in-game tactics from general language. Second, we run two dozen games with humans and Cicero, totaling over 200 human-player hours of competition. While AI can consistently outplay human players, AI-Human communication is still limited because of AI{'}s difficulty with deception and persuasion. This shows that Cicero relies on strategy and has not yet reached the full promise of communicative and cooperative AI.
[ "Wongkamjan, Wichayaporn", "Gu, Feng", "Wang, Yanze", "Hermjakob, Ulf", "May, Jonathan", "Stewart, Br", "on", "Kummerfeld, Jonathan", "Peskoff, Denis", "Boyd-Graber, Jordan" ]
More Victories, Less Cooperation: Assessing Cicero's Diplomacy Play
acl-long.672
Poster
2406.04643
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.672/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.673.bib
@inproceedings{peng-etal-2024-voicecraft, title = "{V}oice{C}raft: Zero-Shot Speech Editing and Text-to-Speech in the Wild", author = "Peng, Puyuan and Huang, Po-Yao and Li, Shang-Wen and Mohamed, Abdelrahman and Harwath, David", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.673", pages = "12442--12462", abstract = "We introduce VoiceCraft, a token infilling neural codec language model, that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech (TTS) on audiobooks, internet videos, and podcasts. VoiceCraft employs a Transformer decoder architecture and introduces a token rearrangement procedure that combines causal masking and delayed stacking to enable generation within an existing sequence. On speech editing tasks, VoiceCraft produces edited speech that is nearly indistinguishable from unedited recordings in terms of naturalness, as evaluated by humans; for zero-shot TTS, our model outperforms prior SotA models including VALL-E and the popular commercial model XTTS v2. Crucially, the models are evaluated on challenging and realistic datasets, that consist of diverse accents, speaking styles, recording conditions, and background noise and music, and our model performs consistently well compared to other models and real recordings. In particular, for speech editing evaluation, we introduce a high quality, challenging, and realistic dataset named . We encourage readers to listen to the demos at https://jasonppy.github.io/VoiceCraft{\_}web. Data, code, and model weights are available at https://github.com/jasonppy/VoiceCraft", }
We introduce VoiceCraft, a token infilling neural codec language model that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech (TTS) on audiobooks, internet videos, and podcasts. VoiceCraft employs a Transformer decoder architecture and introduces a token rearrangement procedure that combines causal masking and delayed stacking to enable generation within an existing sequence. On speech editing tasks, VoiceCraft produces edited speech that is nearly indistinguishable from unedited recordings in terms of naturalness, as evaluated by humans; for zero-shot TTS, our model outperforms prior SotA models including VALL-E and the popular commercial model XTTS v2. Crucially, the models are evaluated on challenging and realistic datasets that consist of diverse accents, speaking styles, recording conditions, and background noise and music, and our model performs consistently well compared to other models and real recordings. In particular, for speech editing evaluation, we introduce a high-quality, challenging, and realistic dataset. We encourage readers to listen to the demos at https://jasonppy.github.io/VoiceCraft{\_}web. Data, code, and model weights are available at https://github.com/jasonppy/VoiceCraft
[ "Peng, Puyuan", "Huang, Po-Yao", "Li, Shang-Wen", "Mohamed, Abdelrahman", "Harwath, David" ]
VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild
acl-long.673
Oral
2403.16973
[ "https://github.com/jasonppy/voicecraft" ]
https://huggingface.co/papers/2403.16973
0
2
0
5
https://aclanthology.org/2024.acl-long.673/
[]
[]
[]
1
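The delayed stacking component mentioned in the abstract can be illustrated concretely: with K codec codebooks, codebook k is shifted right by k positions so that a causal decoder predicts a frame's codebooks across successive steps. The function below is a minimal sketch under that reading; the padding token and exact offset convention are assumptions.

```python
# Sketch of "delayed stacking": each codec codebook k is delayed by k steps,
# giving the decoder a causal offset between codebooks of the same frame.
import torch

def delay_stack(codes: torch.Tensor, pad: int = 0) -> torch.Tensor:
    """codes: (K, T) codec tokens -> (K, T + K - 1) delayed layout."""
    K, T = codes.shape
    out = torch.full((K, T + K - 1), pad, dtype=codes.dtype)
    for k in range(K):
        out[k, k : k + T] = codes[k]
    return out

codes = torch.arange(1, 9).reshape(2, 4)  # 2 codebooks, 4 frames
print(delay_stack(codes))
# tensor([[1, 2, 3, 4, 0],
#         [0, 5, 6, 7, 8]])
```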
https://aclanthology.org/2024.acl-long.674.bib
@inproceedings{dugan-etal-2024-raid, title = "{RAID}: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors", author = "Dugan, Liam and Hwang, Alyssa and Trhl{\'\i}k, Filip and Zhu, Andrew and Ludan, Josh Magnus and Xu, Hainiu and Ippolito, Daphne and Callison-Burch, Chris", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.674", pages = "12463--12492", abstract = "Many commercial and open-source models claim to detect machine-generated text with extremely high accuracy (99{\%} or more). However, very few of these detectors are evaluated on shared benchmark datasets and even when they are, the datasets used for evaluation are insufficiently challenging{---}lacking variations in sampling strategy, adversarial attacks, and open-source generative models. In this work we present RAID: the largest and most challenging benchmark dataset for machine-generated text detection. RAID includes over 6 million generations spanning 11 models, 8 domains, 11 adversarial attacks and 4 decoding strategies. Using RAID, we evaluate the out-of-domain and adversarial robustness of 8 open- and 4 closed-source detectors and find that current detectors are easily fooled by adversarial attacks, variations in sampling strategies, repetition penalties, and unseen generative models. We release our data along with a leaderboard to encourage future research.", }
Many commercial and open-source models claim to detect machine-generated text with extremely high accuracy (99{\%} or more). However, very few of these detectors are evaluated on shared benchmark datasets and even when they are, the datasets used for evaluation are insufficiently challenging{---}lacking variations in sampling strategy, adversarial attacks, and open-source generative models. In this work we present RAID: the largest and most challenging benchmark dataset for machine-generated text detection. RAID includes over 6 million generations spanning 11 models, 8 domains, 11 adversarial attacks and 4 decoding strategies. Using RAID, we evaluate the out-of-domain and adversarial robustness of 8 open- and 4 closed-source detectors and find that current detectors are easily fooled by adversarial attacks, variations in sampling strategies, repetition penalties, and unseen generative models. We release our data along with a leaderboard to encourage future research.
[ "Dugan, Liam", "Hwang, Alyssa", "Trhl{\\'\\i}k, Filip", "Zhu, Andrew", "Ludan, Josh Magnus", "Xu, Hainiu", "Ippolito, Daphne", "Callison-Burch, Chris" ]
RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors
acl-long.674
Poster
2405.07940
[ "https://github.com/liamdugan/raid" ]
https://huggingface.co/papers/2405.07940
1
0
0
8
https://aclanthology.org/2024.acl-long.674/
[]
[]
[]
1
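Evaluating a detector the way RAID prescribes amounts to sweeping it over generations grouped by adversarial attack and decoding strategy and reporting per-cell accuracy. The loop below sketches that bookkeeping with a placeholder detector and two fabricated records; real use would load the released benchmark instead.

```python
# Schematic RAID-style robustness sweep: accuracy per (attack, decoding) cell.
# The detector and the records are hypothetical stand-ins for illustration.
from collections import defaultdict

def detector(text: str) -> bool:
    """Stub detector: True means 'predicted machine-generated'."""
    return len(text) % 2 == 0  # placeholder heuristic

records = [
    {"text": "abcd", "attack": "homoglyph", "decoding": "greedy", "machine": True},
    {"text": "abcde", "attack": "none", "decoding": "sampling", "machine": True},
]

cells = defaultdict(list)
for r in records:
    cells[(r["attack"], r["decoding"])].append(detector(r["text"]) == r["machine"])

for cell, hits in sorted(cells.items()):
    print(cell, sum(hits) / len(hits))
```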
https://aclanthology.org/2024.acl-long.675.bib
@inproceedings{kruk-etal-2024-silent, title = "Silent Signals, Loud Impact: {LLM}s for Word-Sense Disambiguation of Coded Dog Whistles", author = "Kruk, Julia and Marchini, Michela and Magu, Rijul and Ziems, Caleb and Muchlinski, David and Yang, Diyi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.675", pages = "12493--12509", abstract = "A dog whistle is a form of coded communication that carries a secondary meaning to specific audiences and is often weaponized for racial and socioeconomic discrimination. Dog whistling historically originated from United States politics, but in recent years has taken root in social media as a means of evading hate speech detection systems and maintaining plausible deniability. In this paper, we present an approach for word-sense disambiguation of dog whistles from standard speech using Large Language Models (LLMs), and leverage this technique to create a dataset of 16,550 high-confidence coded examples of dog whistles used in formal and informal communication. Silent Signals is the largest dataset of disambiguated dog whistle usage, created for applications in hate speech detection, neology, and political science.", }
A dog whistle is a form of coded communication that carries a secondary meaning to specific audiences and is often weaponized for racial and socioeconomic discrimination. Dog whistling historically originated from United States politics, but in recent years has taken root in social media as a means of evading hate speech detection systems and maintaining plausible deniability. In this paper, we present an approach for word-sense disambiguation of dog whistles from standard speech using Large Language Models (LLMs), and leverage this technique to create a dataset of 16,550 high-confidence coded examples of dog whistles used in formal and informal communication. Silent Signals is the largest dataset of disambiguated dog whistle usage, created for applications in hate speech detection, neology, and political science.
[ "Kruk, Julia", "Marchini, Michela", "Magu, Rijul", "Ziems, Caleb", "Muchlinski, David", "Yang, Diyi" ]
Silent Signals, Loud Impact: LLMs for Word-Sense Disambiguation of Coded Dog Whistles
acl-long.675
Poster
2406.06840
[ "" ]
https://huggingface.co/papers/2406.06840
0
0
0
6
https://aclanthology.org/2024.acl-long.675/
[]
[ "SALT-NLP/silent_signals" ]
[]
1
https://aclanthology.org/2024.acl-long.676.bib
@inproceedings{nowak-etal-2024-representational, title = "On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning", author = "Nowak, Franz and Svete, Anej and Butoi, Alexandra and Cotterell, Ryan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.676", pages = "12510--12548", abstract = "The performance of modern language models (LMs) has been improved by chain-of-thought (CoT) reasoning, i.e., the process of generating intermediate results that guide the model towards a final answer. A possible explanation for this improvement is that CoT reasoning extends an LM{'}s computational power, as RNNs and transformers with additional scratch space are known to be Turing complete. Comparing LMs to Turing machines, however, introduces a category error{---}Turing machines decide language membership, whereas LMs define distributions over strings. To bridge this gap, we formalize CoT reasoning in a probabilistic setting. We present several results on the representational capacity of recurrent and transformer LMs with CoT reasoning, showing that they can represent the same family of distributions over strings as probabilistic Turing machines.", }
The performance of modern language models (LMs) has been improved by chain-of-thought (CoT) reasoning, i.e., the process of generating intermediate results that guide the model towards a final answer. A possible explanation for this improvement is that CoT reasoning extends an LM{'}s computational power, as RNNs and transformers with additional scratch space are known to be Turing complete. Comparing LMs to Turing machines, however, introduces a category error{---}Turing machines decide language membership, whereas LMs define distributions over strings. To bridge this gap, we formalize CoT reasoning in a probabilistic setting. We present several results on the representational capacity of recurrent and transformer LMs with CoT reasoning, showing that they can represent the same family of distributions over strings as probabilistic Turing machines.
[ "Nowak, Franz", "Svete, Anej", "Butoi, Alex", "ra", "Cotterell, Ryan" ]
On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning
acl-long.676
Poster
2406.14197
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.676/
[]
[]
[]
0
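The category-error point is easiest to see in the probabilistic framing the abstract alludes to: a CoT-equipped LM defines a distribution over answer strings by marginalizing over intermediate reasoning traces, rather than deciding membership in a language. The formulation below is our own hedged rendering of that idea; the paper's notation may differ.

```latex
% Sketch of the probabilistic CoT framing (our notation, not necessarily the
% paper's): the LM places probability on a reasoning trace c and an answer
% string y, and the induced answer distribution marginalizes the trace out.
p(\boldsymbol{y}) \;=\; \sum_{\boldsymbol{c} \in \Sigma^{*}} p(\boldsymbol{c}, \boldsymbol{y})
\;=\; \sum_{\boldsymbol{c} \in \Sigma^{*}} p(\boldsymbol{c})\, p(\boldsymbol{y} \mid \boldsymbol{c}).
```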
https://aclanthology.org/2024.acl-long.677.bib
@inproceedings{ramprasad-etal-2024-analyzing, title = "Analyzing {LLM} Behavior in Dialogue Summarization: Unveiling Circumstantial Hallucination Trends", author = "Ramprasad, Sanjana and Ferracane, Elisa and Lipton, Zachary", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.677", pages = "12549--12561", abstract = "Recent advancements in large language models (LLMs) have significantly advanced the capabilities of summarization systems.However, they continue to face a persistent challenge: hallucination. While prior work has extensively examined LLMs in news domains, evaluation of dialogue summarization has primarily focused on BART-based models, resulting in a notable gap in understanding LLM effectiveness.Our work seeks to address this gap by benchmarking LLMs for dialogue summarization faithfulness using human annotations,focusing on identifying and categorizing span-level inconsistencies.Specifically, we evaluate two prominent LLMs: GPT-4 and Alpaca-13B.Our evaluation reveals that LLMs often generate plausible, but not fully supported inferences based on conversation contextual cues, a trait absent in older models. As a result, we propose a refined taxonomy of errors, introducing a novel category termed {``}Contextual Inference{''} to address this aspect of LLM behavior. Using our taxonomy, we compare the behavioral differences between LLMs and older fine-tuned models. Additionally, we systematically assess the efficacy of automatic error detection methods on LLM summaries and find that they struggle to detect these nuanced errors effectively. To address this, we introduce two prompt-based approaches for fine-grained error detection. Our methods outperform existing metrics, particularly in identifying the novel {``}Contextual Inference{''} error type.", }
Recent advancements in large language models (LLMs) have significantly advanced the capabilities of summarization systems. However, they continue to face a persistent challenge: hallucination. While prior work has extensively examined LLMs in news domains, evaluation of dialogue summarization has primarily focused on BART-based models, resulting in a notable gap in understanding LLM effectiveness. Our work seeks to address this gap by benchmarking LLMs for dialogue summarization faithfulness using human annotations, focusing on identifying and categorizing span-level inconsistencies. Specifically, we evaluate two prominent LLMs: GPT-4 and Alpaca-13B. Our evaluation reveals that LLMs often generate plausible, but not fully supported inferences based on conversation contextual cues, a trait absent in older models. As a result, we propose a refined taxonomy of errors, introducing a novel category termed {``}Contextual Inference{''} to address this aspect of LLM behavior. Using our taxonomy, we compare the behavioral differences between LLMs and older fine-tuned models. Additionally, we systematically assess the efficacy of automatic error detection methods on LLM summaries and find that they struggle to detect these nuanced errors effectively. To address this, we introduce two prompt-based approaches for fine-grained error detection. Our methods outperform existing metrics, particularly in identifying the novel {``}Contextual Inference{''} error type.
[ "Ramprasad, Sanjana", "Ferracane, Elisa", "Lipton, Zachary" ]
Analyzing LLM Behavior in Dialogue Summarization: Unveiling Circumstantial Hallucination Trends
acl-long.677
Poster
2406.03487
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.677/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.678.bib
@inproceedings{alizadeh-etal-2024-llm, title = "{LLM} in a flash: Efficient Large Language Model Inference with Limited Memory", author = "Alizadeh, Keivan and Mirzadeh, Seyed Iman and Belenko, Dmitry and Khatamifard, S. and Cho, Minsik and Del Mundo, Carlo C and Rastegari, Mohammad and Farajtabar, Mehrdad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.678", pages = "12562--12584", abstract = "Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their substantial computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, {``}windowing{''} strategically reduces data transfer by reusing previously activated neurons, and second, {``}row-column bundling{''}, tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.", }
Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their substantial computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, {``}windowing{''} strategically reduces data transfer by reusing previously activated neurons, and second, {``}row-column bundling{''}, tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.
[ "Alizadeh, Keivan", "Mirzadeh, Seyed Iman", "Belenko, Dmitry", "Khatamifard, S.", "Cho, Minsik", "Del Mundo, Carlo C", "Rastegari, Mohammad", "Farajtabar, Mehrdad" ]
LLM in a flash: Efficient Large Language Model Inference with Limited Memory
acl-long.678
Oral
2312.11514
[ "" ]
https://huggingface.co/papers/2312.11514
3
255
8
8
https://aclanthology.org/2024.acl-long.678/
[]
[]
[ "austinsilveria/tricksy" ]
1
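The {``}windowing{''} technique is essentially a sliding cache over recently activated feed-forward neurons: only neurons that newly become active need to be read from flash, and neurons unused for the last W tokens can be evicted from DRAM. The class below is a schematic of that bookkeeping; the neuron IDs, window size, and loader interface are illustrative assumptions.

```python
# Sketch of the windowing idea: keep neurons active in the last W tokens
# resident in DRAM and fetch only the set difference from flash each step.
from collections import deque

class NeuronWindowCache:
    def __init__(self, window: int):
        self.window = window
        self.history: deque[set[int]] = deque(maxlen=window)
        self.resident: set[int] = set()

    def step(self, active: set[int]) -> set[int]:
        """Return neuron IDs that must be loaded from flash for this token."""
        to_load = active - self.resident
        self.history.append(active)
        self.resident = set().union(*self.history)  # evict neurons older than W
        return to_load

cache = NeuronWindowCache(window=2)
print(sorted(cache.step({1, 2, 3})))  # [1, 2, 3] cold start: load everything
print(sorted(cache.step({2, 3, 4})))  # [4] only the newly active neuron
```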
https://aclanthology.org/2024.acl-long.679.bib
@inproceedings{maaz-etal-2024-video, title = "Video-{C}hat{GPT}: Towards Detailed Video Understanding via Large Vision and Language Models", author = "Maaz, Muhammad and Rasheed, Hanoona and Khan, Salman and Khan, Fahad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.679", pages = "12585--12602", abstract = "Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the under-explored field of \textit{video-based conversation} by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with an LLM. The resulting model is capable of understanding and generating detailed conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantitative evaluation framework for video-based dialogue models to objectively analyze the strengths and weaknesses of video-based dialogue models. Code: https://github.com/mbzuai-oryx/Video-ChatGPT.", }
Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the under-explored field of \textit{video-based conversation} by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with an LLM. The resulting model is capable of understanding and generating detailed conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs, acquired via a manual and semi-automated pipeline that is easily scalable and robust to label noise, which we use to train Video-ChatGPT. We also develop a quantitative evaluation framework for video-based dialogue models to objectively analyze the strengths and weaknesses of video-based dialogue models. Code: https://github.com/mbzuai-oryx/Video-ChatGPT.
[ "Maaz, Muhammad", "Rasheed, Hanoona", "Khan, Salman", "Khan, Fahad" ]
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
acl-long.679
Poster
2306.05424
[ "https://github.com/mbzuai-oryx/video-chatgpt" ]
https://huggingface.co/papers/2306.05424
2
7
1
4
https://aclanthology.org/2024.acl-long.679/
[]
[]
[]
1
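The video-adapted encoder described above boils down to pooling per-frame patch features along time and along space before projecting them into the LLM's token space. The sketch below follows that description with illustrative shapes; the encoder features are random stand-ins for CLIP-style activations.

```python
# Sketch of Video-ChatGPT-style video feature aggregation: pool per-frame
# patch features over time (one vector per patch) and over space (one vector
# per frame), concatenate, and project into the LLM embedding space.
import torch
import torch.nn as nn

T, N, D, D_llm = 8, 256, 1024, 4096      # frames, patches, feat dim, LLM dim
frames = torch.randn(T, N, D)            # CLIP-like per-frame patch features

temporal = frames.mean(dim=0)            # (N, D): pooled over time
spatial = frames.mean(dim=1)             # (T, D): pooled over patches
video_tokens = torch.cat([temporal, spatial], dim=0)  # (N + T, D)

project = nn.Linear(D, D_llm)            # learned adapter into the LLM
print(project(video_tokens).shape)       # torch.Size([264, 4096])
```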
https://aclanthology.org/2024.acl-long.680.bib
@inproceedings{waheed-etal-2024-distill, title = "To Distill or Not to Distill? On the Robustness of Robust Knowledge Distillation", author = "Waheed, Abdul and Kadaoui, Karima and Abdul-Mageed, Muhammad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.680", pages = "12603--12621", abstract = "Arabic is known to present unique challengesfor Automatic Speech Recognition (ASR). Onone hand, its rich linguistic diversity andwide range of dialects complicate the de-velopment of robust, inclusive models. Onthe other, current multilingual ASR modelsare compute-intensive and lack proper com-prehensive evaluations. In light of thesechallenges, we distill knowledge from largeteacher models into smaller student variantsthat more efficient. We also introduce a novelhuman-annotated dataset covering five under-represented Arabic dialects for evaluation. Wefurther evaluate both our models and existingSoTA multilingual models on both standardavailable benchmarks and our new dialectaldata. Our best-distilled model{'}s overall perfor-mance (45.0{\%} WER) surpasses that of a SoTAmodel twice its size (SeamlessM4T-large-v2,WER=47.0{\%}) and its teacher model (Whisper-large-v2, WER=55.1{\%}), and its average perfor-mance on our new dialectal data (56.9{\%} WER)outperforms all other models. To gain more in-sight into the poor performance of these modelson dialectal data, we conduct an error analysisand report the main types of errors the differentmodels tend to make. The GitHub repositoryfor the project is available at https://github.com/UBC-NLP/distill-whisper-ar.", }
Arabic is known to present unique challenges for Automatic Speech Recognition (ASR). On one hand, its rich linguistic diversity and wide range of dialects complicate the development of robust, inclusive models. On the other, current multilingual ASR models are compute-intensive and lack proper comprehensive evaluations. In light of these challenges, we distill knowledge from large teacher models into smaller student variants that are more efficient. We also introduce a novel human-annotated dataset covering five under-represented Arabic dialects for evaluation. We further evaluate both our models and existing SoTA multilingual models on both standard available benchmarks and our new dialectal data. Our best-distilled model{'}s overall performance (45.0{\%} WER) surpasses that of a SoTA model twice its size (SeamlessM4T-large-v2, WER=47.0{\%}) and its teacher model (Whisper-large-v2, WER=55.1{\%}), and its average performance on our new dialectal data (56.9{\%} WER) outperforms all other models. To gain more insight into the poor performance of these models on dialectal data, we conduct an error analysis and report the main types of errors the different models tend to make. The GitHub repository for the project is available at https://github.com/UBC-NLP/distill-whisper-ar.
[ "Waheed, Abdul", "Kadaoui, Karima", "Abdul-Mageed, Muhammad" ]
To Distill or Not to Distill? On the Robustness of Robust Knowledge Distillation
acl-long.680
Poster
2406.04512
[ "" ]
https://huggingface.co/papers/2406.04512
0
0
0
3
https://aclanthology.org/2024.acl-long.680/
[]
[]
[]
1
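While the exact distillation recipe is not spelled out in the abstract, a common pattern for shrinking a large ASR teacher combines cross-entropy on teacher pseudo-labels with a temperature-scaled KL term on the token distributions. The loss below is a generic sketch of that pattern, with random tensors standing in for real model outputs; the mixing weight and temperature are assumptions.

```python
# Generic sequence distillation loss sketch (not the paper's exact recipe):
# cross-entropy on teacher pseudo-labels plus temperature-scaled KL between
# teacher and student per-token distributions.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, pseudo_labels, alpha=0.8, T=2.0):
    ce = F.cross_entropy(student_logits.transpose(1, 2), pseudo_labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * kl + (1 - alpha) * ce

B, L, V = 2, 10, 100  # batch, sequence length, vocabulary size
loss = distill_loss(torch.randn(B, L, V), torch.randn(B, L, V),
                    torch.randint(0, V, (B, L)))
print(loss.item())
```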
https://aclanthology.org/2024.acl-long.681.bib
@inproceedings{elhoushi-etal-2024-layerskip, title = "{L}ayer{S}kip: Enabling Early Exit Inference and Self-Speculative Decoding", author = "Elhoushi, Mostafa and Shrivastava, Akshat and Liskovich, Diana and Hosmer, Basil and Wasti, Bram and Lai, Liangzhen and Mahmoud, Anas and Acun, Bilge and Agarwal, Saurabh and Roman, Ahmed and Aly, Ahmed and Chen, Beidi and Wu, Carole-Jean", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.681", pages = "12622--12642", abstract = "We present LayerSkip, an end-to-end solution to speed-up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for earlier layers and higher dropout rates for later layers, and an early exit loss where all transformer layers share the same exit. Second, during inference, we show that this training recipe increases the accuracy of early exit at earlier layers, without adding any auxiliary layers or modules to the model. Third, we present a novel self-speculative decoding solution where we exit at early layers and verify and correct with remaining layers of the model. Our proposed self-speculative decoding approach has less memory footprint than other speculative decoding approaches and benefits from shared compute and activations of the draft and verification stages. We run experiments on different Llama model sizes on different types of training: pretraining from scratch, continual pretraining, finetuning on specific data domain, and finetuning on specific task. We implement our inference solution and show speedups of up to 2.16x on summarization for CNN/DM documents, 1.82x on coding, and 2.0x on TOPv2 semantic parsing task. We open source our code at https://github.com/facebookresearch/LayerSkip.", }
We present LayerSkip, an end-to-end solution to speed up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for earlier layers and higher dropout rates for later layers, and an early exit loss where all transformer layers share the same exit. Second, during inference, we show that this training recipe increases the accuracy of early exit at earlier layers, without adding any auxiliary layers or modules to the model. Third, we present a novel self-speculative decoding solution where we exit at early layers and verify and correct with the remaining layers of the model. Our proposed self-speculative decoding approach has a smaller memory footprint than other speculative decoding approaches and benefits from shared compute and activations of the draft and verification stages. We run experiments on different Llama model sizes and different types of training: pretraining from scratch, continual pretraining, finetuning on a specific data domain, and finetuning on a specific task. We implement our inference solution and show speedups of up to 2.16x on summarization for CNN/DM documents, 1.82x on coding, and 2.0x on the TOPv2 semantic parsing task. We open source our code at https://github.com/facebookresearch/LayerSkip.
[ "Elhoushi, Mostafa", "Shrivastava, Akshat", "Liskovich, Diana", "Hosmer, Basil", "Wasti, Bram", "Lai, Liangzhen", "Mahmoud, Anas", "Acun, Bilge", "Agarwal, Saurabh", "Roman, Ahmed", "Aly, Ahmed", "Chen, Beidi", "Wu, Carole-Jean" ]
LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding
acl-long.681
Poster
2404.16710
[ "" ]
https://huggingface.co/papers/2404.16710
8
57
6
13
https://aclanthology.org/2024.acl-long.681/
[]
[]
[]
1
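The training recipe described above, layer dropout that grows with depth plus a shared early exit, can be sketched in a few lines. The toy module below uses linear layers in place of transformer blocks and a linear dropout schedule; both are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of LayerSkip-style training: layer l is skipped with a probability
# that grows with depth, keeping early layers reliable for early exit.
import torch
import torch.nn as nn

class DropPathStack(nn.Module):
    def __init__(self, num_layers: int, d: int, max_rate: float = 0.2):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(d, d) for _ in range(num_layers))
        # Linearly increasing skip probability: 0 at layer 0, max_rate at last.
        self.rates = [max_rate * l / max(num_layers - 1, 1) for l in range(num_layers)]

    def forward(self, x: torch.Tensor, exit_at: int | None = None) -> torch.Tensor:
        for l, layer in enumerate(self.layers):
            if exit_at is not None and l == exit_at:
                break  # early exit: all layers share the same head downstream
            if self.training and torch.rand(()) < self.rates[l]:
                continue  # skip this layer for this training step
            x = x + layer(x)  # residual keeps the stream usable when skipping
        return x

model = DropPathStack(num_layers=8, d=16)
print(model(torch.randn(1, 16), exit_at=4).shape)
```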
https://aclanthology.org/2024.acl-long.682.bib
@inproceedings{curry-etal-2024-classist, title = "Classist Tools: Social Class Correlates with Performance in {NLP}", author = "Curry, Amanda and Attanasio, Giuseppe and Talat, Zeerak and Hovy, Dirk", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.682", pages = "12643--12655", abstract = "The field of sociolinguistics has studied factors affecting language use for the last century. Labov (1964) and Bernstein (1960) showed that socioeconomic class strongly influences our accents, syntax and lexicon. However, despite growing concerns surrounding fairness and bias in Natural Language Processing (NLP), there is a dearth of studies delving into the effects it may have on NLP systems. We show empirically that NLP systems{'} performance is affected by speakers{'} SES, potentially disadvantaging less-privileged socioeconomic groups. We annotate a corpus of 95K utterances from movies with social class, ethnicity and geographical language variety and measure the performance of NLP systems on three tasks: language modelling, automatic speech recognition, and grammar error correction. We find significant performance disparities that can be attributed to socioeconomic status as well as ethnicity and geographical differences. With NLP technologies becoming ever more ubiquitous and quotidian, they must accommodate all language varieties to avoid disadvantaging already marginalised groups. We argue for the inclusion of socioeconomic class in future language technologies.", }
The field of sociolinguistics has studied factors affecting language use for the last century. Labov (1964) and Bernstein (1960) showed that socioeconomic class strongly influences our accents, syntax and lexicon. However, despite growing concerns surrounding fairness and bias in Natural Language Processing (NLP), there is a dearth of studies delving into the effects it may have on NLP systems. We show empirically that NLP systems{'} performance is affected by speakers{'} SES, potentially disadvantaging less-privileged socioeconomic groups. We annotate a corpus of 95K utterances from movies with social class, ethnicity and geographical language variety and measure the performance of NLP systems on three tasks: language modelling, automatic speech recognition, and grammar error correction. We find significant performance disparities that can be attributed to socioeconomic status as well as ethnicity and geographical differences. With NLP technologies becoming ever more ubiquitous and quotidian, they must accommodate all language varieties to avoid disadvantaging already marginalised groups. We argue for the inclusion of socioeconomic class in future language technologies.
[ "Curry, Am", "a", "Attanasio, Giuseppe", "Talat, Zeerak", "Hovy, Dirk" ]
Classist Tools: Social Class Correlates with Performance in NLP
acl-long.682
Poster
2403.04445
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.682/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.683.bib
@inproceedings{zhong-etal-2024-actionie, title = "{A}ction{IE}: Action Extraction from Scientific Literature with Programming Languages", author = "Zhong, Xianrui and Du, Yufeng and Ouyang, Siru and Zhong, Ming and Luo, Tingfeng and Ho, Qirong and Peng, Hao and Ji, Heng and Han, Jiawei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.683", pages = "12656--12671", abstract = "Extraction of experimental procedures from human language in scientific literature and patents into actionable sequences in robotics language holds immense significance in scientific domains. Such an action extraction task is particularly challenging given the intricate details and context-dependent nature of the instructions, especially in fields like chemistry where reproducibility is paramount. In this paper, we introduce ActionIE, a method that leverages Large Language Models (LLMs) to bridge this divide by converting actions written in natural language into executable Python code. This enables us to capture the entities of interest, and the relationship between each action, given the features of Programming Languages. Utilizing linguistic cues identified by frequent patterns, ActionIE provides an improved mechanism to discern entities of interest. While our method is broadly applicable, we exemplify its power in the domain of chemical literature, wherein we focus on extracting experimental procedures for chemical synthesis. The code generated by our method can be easily transformed into robotics language which is in high demand in scientific fields. Comprehensive experiments demonstrate the superiority of our method. In addition, we propose a graph-based metric to more accurately reflect the precision of extraction. We also develop a dataset to address the scarcity of scientific literature occurred in existing datasets.", }
Extraction of experimental procedures from human language in scientific literature and patents into actionable sequences in robotics language holds immense significance in scientific domains. Such an action extraction task is particularly challenging given the intricate details and context-dependent nature of the instructions, especially in fields like chemistry where reproducibility is paramount. In this paper, we introduce ActionIE, a method that leverages Large Language Models (LLMs) to bridge this divide by converting actions written in natural language into executable Python code. This enables us to capture the entities of interest, and the relationship between each action, given the features of programming languages. Utilizing linguistic cues identified by frequent patterns, ActionIE provides an improved mechanism to discern entities of interest. While our method is broadly applicable, we exemplify its power in the domain of chemical literature, wherein we focus on extracting experimental procedures for chemical synthesis. The code generated by our method can be easily transformed into a robotics language, which is in high demand in scientific fields. Comprehensive experiments demonstrate the superiority of our method. In addition, we propose a graph-based metric to more accurately reflect the precision of extraction. We also develop a dataset to address the scarcity of scientific literature in existing datasets.
[ "Zhong, Xianrui", "Du, Yufeng", "Ouyang, Siru", "Zhong, Ming", "Luo, Tingfeng", "Ho, Qirong", "Peng, Hao", "Ji, Heng", "Han, Jiawei" ]
ActionIE: Action Extraction from Scientific Literature with Programming Languages
acl-long.683
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.683/
[]
[]
[]
0
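The core move in ActionIE is to target a small, typed action API so that extracted procedures are directly executable. The sketch below invents such an API and shows what an LLM-emitted program for a one-sentence chemistry instruction could look like; the function names and the example mapping are ours, not the paper's.

```python
# Illustrative sketch of the ActionIE idea: map experimental sentences to
# calls against a small, typed action API that downstream robotics code can
# consume. In the paper an LLM emits this Python directly.
from dataclasses import dataclass

@dataclass
class Action:
    verb: str
    target: str
    params: dict

def add(substance: str, amount: str) -> Action:
    return Action("add", substance, {"amount": amount})

def stir(duration: str) -> Action:
    return Action("stir", "mixture", {"duration": duration})

# A plausible extracted program for "Add 5 mL of HCl and stir for 2 minutes":
procedure = [add("HCl", "5 mL"), stir("2 min")]
for step in procedure:
    print(step)
```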
https://aclanthology.org/2024.acl-long.684.bib
@inproceedings{verma-etal-2024-community, title = "A Community-Centric Perspective for Characterizing and Detecting Anti-{A}sian Violence-Provoking Speech", author = "Verma, Gaurav and Grover, Rynaa and Zhou, Jiawei and Mathew, Binny and Kraemer, Jordan and Choudhury, Munmun and Kumar, Srijan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.684", pages = "12672--12684", abstract = "Violence-provoking speech {--} speech that implicitly or explicitly promotes violence against the members of the targeted community, contributed to a massive surge in anti-Asian crimes during the COVID-19 pandemic. While previous works have characterized and built tools for detecting other forms of harmful speech, like fear speech and hate speech, our work takes a community-centric approach to studying anti-Asian violence-provoking speech. Using data from {\textasciitilde}420k Twitter posts spanning a 3-year duration (January 1, 2020 to February 1, 2023), we develop a codebook to characterize anti-Asian violence-provoking speech and collect a community-crowdsourced dataset to facilitate its large-scale detection using state-of-the-art classifiers. We contrast the capabilities of natural language processing classifiers, ranging from BERT-based to LLM-based classifiers, in detecting violence-provoking speech with their capabilities to detect anti-Asian hateful speech. In contrast to prior work that has demonstrated the effectiveness of such classifiers in detecting hateful speech ($F_1$ = 0.89), our work shows that accurate and reliable detection of violence-provoking speech is a challenging task ($F_1$ = 0.69). We discuss the implications of our findings, particularly the need for proactive interventions to support Asian communities during public health crises.", }
Violence-provoking speech {--} speech that implicitly or explicitly promotes violence against the members of the targeted community {--} contributed to a massive surge in anti-Asian crimes during the COVID-19 pandemic. While previous works have characterized and built tools for detecting other forms of harmful speech, like fear speech and hate speech, our work takes a community-centric approach to studying anti-Asian violence-provoking speech. Using data from {\textasciitilde}420k Twitter posts spanning a 3-year duration (January 1, 2020 to February 1, 2023), we develop a codebook to characterize anti-Asian violence-provoking speech and collect a community-crowdsourced dataset to facilitate its large-scale detection using state-of-the-art classifiers. We contrast the capabilities of natural language processing classifiers, ranging from BERT-based to LLM-based classifiers, in detecting violence-provoking speech with their capabilities to detect anti-Asian hateful speech. In contrast to prior work that has demonstrated the effectiveness of such classifiers in detecting hateful speech ($F_1$ = 0.89), our work shows that accurate and reliable detection of violence-provoking speech is a challenging task ($F_1$ = 0.69). We discuss the implications of our findings, particularly the need for proactive interventions to support Asian communities during public health crises.
[ "Verma, Gaurav", "Grover, Rynaa", "Zhou, Jiawei", "Mathew, Binny", "Kraemer, Jordan", "Choudhury, Munmun", "Kumar, Srijan" ]
A Community-Centric Perspective for Characterizing and Detecting Anti-Asian Violence-Provoking Speech
acl-long.684
Poster
2407.15227
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.684/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.685.bib
@inproceedings{cao-etal-2024-retaining, title = "Retaining Key Information under High Compression Ratios: Query-Guided Compressor for {LLM}s", author = "Cao, Zhiwei and Cao, Qian and Lu, Yu and Peng, Ningxin and Huang, Luyang and Cheng, Shanbo and Su, Jinsong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.685", pages = "12685--12695", abstract = "The growing popularity of Large Language Models has sparked interest in context compression for Large Language Models (LLMs). However, the performance of previous methods degrades dramatically as compression ratios increase, sometimes even falling to the closed-book level. This decline can be attributed to the loss of key information during the compression process. Our preliminary study supports this hypothesis, emphasizing the significance of retaining key information to maintain model performance under high compression ratios. As a result, we introduce Query-Guided Compressor (QGC), which leverages queries to guide the context compression process, effectively preserving key information within the compressed context. Additionally, we employ a dynamic compression strategy. We validate the effectiveness of our proposed QGC on the Question Answering task, including NaturalQuestions, TriviaQA, and HotpotQA datasets. Experimental results show that QGC can consistently perform well even at high compression ratios, which also offers significant benefits in terms of inference cost and throughput.", }
The growing popularity of Large Language Models (LLMs) has sparked interest in context compression for LLMs. However, the performance of previous methods degrades dramatically as compression ratios increase, sometimes even falling to the closed-book level. This decline can be attributed to the loss of key information during the compression process. Our preliminary study supports this hypothesis, emphasizing the significance of retaining key information to maintain model performance under high compression ratios. As a result, we introduce Query-Guided Compressor (QGC), which leverages queries to guide the context compression process, effectively preserving key information within the compressed context. Additionally, we employ a dynamic compression strategy. We validate the effectiveness of our proposed QGC on question answering tasks, including the NaturalQuestions, TriviaQA, and HotpotQA datasets. Experimental results show that QGC can consistently perform well even at high compression ratios, which also offers significant benefits in terms of inference cost and throughput.
[ "Cao, Zhiwei", "Cao, Qian", "Lu, Yu", "Peng, Ningxin", "Huang, Luyang", "Cheng, Shanbo", "Su, Jinsong" ]
Retaining Key Information under High Compression Ratios: Query-Guided Compressor for LLMs
acl-long.685
Poster
2406.02376
[ "https://github.com/DeepLearnXMU/QGC" ]
https://huggingface.co/papers/2406.02376
1
1
1
7
https://aclanthology.org/2024.acl-long.685/
[]
[]
[]
1
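The QGC abstract above describes the mechanism only at a high level: queries steer which parts of the context survive compression. Below is a minimal sketch of one way such query guidance could look in code; it is an illustration, not the authors' implementation, and all module, parameter, and dimension names (`QueryGuidedCompressor`, `n_slots`, etc.) are our own inventions.

```python
# Illustrative sketch of query-guided context compression (assumed design,
# not the released QGC code): learnable slots, biased by a query summary,
# cross-attend to the context and become the compressed representation.
import torch
import torch.nn as nn

class QueryGuidedCompressor(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 8, n_slots: int = 32):
        super().__init__()
        # Learnable "slots" that will hold the compressed context.
        self.slots = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.query_proj = nn.Linear(d_model, d_model)

    def forward(self, context: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # context: (B, L_ctx, d), query: (B, L_q, d)
        B = context.size(0)
        # Bias the slots with a pooled query representation so compression
        # retains query-relevant information (the core idea of query guidance).
        q_summary = self.query_proj(query.mean(dim=1, keepdim=True))   # (B, 1, d)
        slots = self.slots.unsqueeze(0).expand(B, -1, -1) + q_summary
        compressed, _ = self.attn(slots, context, context)             # (B, n_slots, d)
        return compressed
```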
https://aclanthology.org/2024.acl-long.686.bib
@inproceedings{darrin-etal-2024-cosmic, title = "{COSMIC}: Mutual Information for Task-Agnostic Summarization Evaluation", author = "Darrin, Maxime and Formont, Philippe and Cheung, Jackie and Piantanida, Pablo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.686", pages = "12696--12717", abstract = "Assessing the quality of summarizers poses significant challenges{---}gold summaries are hard to obtain and their suitability depends on the use context of the summarization system. Who is the user of the system, and what do they intend to do with the summary? In response, we propose a novel task-oriented evaluation approach that assesses summarizers based on their capacity to produce summaries while preserving task outcomes. We theoretically establish both a lower and upper bound on the expected error rate of these tasks, which depends on the mutual information between source texts and generated summaries. We introduce COSMIC, a practical implementation of this metric, and demonstrate its strong correlation with human judgment-based metrics, as well as its effectiveness in predicting downstream task performance. Comparative analyses against established metrics like BERTScore and ROUGE highlight the competitive performance of COSMIC.", }
Assessing the quality of summarizers poses significant challenges{---}gold summaries are hard to obtain and their suitability depends on the use context of the summarization system. Who is the user of the system, and what do they intend to do with the summary? In response, we propose a novel task-oriented evaluation approach that assesses summarizers based on their capacity to produce summaries while preserving task outcomes. We theoretically establish both a lower and upper bound on the expected error rate of these tasks, which depends on the mutual information between source texts and generated summaries. We introduce COSMIC, a practical implementation of this metric, and demonstrate its strong correlation with human judgment-based metrics, as well as its effectiveness in predicting downstream task performance. Comparative analyses against established metrics like BERTScore and ROUGE highlight the competitive performance of COSMIC.
[ "Darrin, Maxime", "Formont, Philippe", "Cheung, Jackie", "Piantanida, Pablo" ]
COSMIC: Mutual Information for Task-Agnostic Summarization Evaluation
acl-long.686
Oral
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.686/
[]
[]
[]
0
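The COSMIC abstract ties task error rates to the mutual information between sources and summaries without restating the bounds. As a generic illustration of how such a link can arise, the standard Fano inequality is shown below; this is not COSMIC's exact statement, and the symbols ($T$ for a task label, $S$ for the summary, $\mathcal{T}$ for the label space) are ours.

```latex
% Fano's inequality for predicting a task label T from a summary S
% (generic illustration, not COSMIC's exact bound; symbols are ours):
\[
  P_e \;\ge\; \frac{H(T \mid S) - 1}{\log |\mathcal{T}|}
        \;=\; \frac{H(T) - I(T; S) - 1}{\log |\mathcal{T}|}.
\]
% Reading: if the summary S shares little mutual information with the
% task-relevant variable T, the expected error rate P_e is forced upward.
```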
https://aclanthology.org/2024.acl-long.687.bib
@inproceedings{salaun-etal-2024-europa, title = "{EUROPA}: A Legal Multilingual Keyphrase Generation Dataset", author = {Sala{\"u}n, Olivier and Piedboeuf, Fr{\'e}d{\'e}ric and Le Berre, Guillaume and Alfonso-Hermelo, David and Langlais, Philippe}, editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.687", pages = "12718--12736", abstract = "Keyphrase generation has primarily been explored within the context of academic research articles, with a particular focus on scientific domains and the English language. In this work, we present EUROPA, a novel dataset for multilingual keyphrase generation in the legal domain. It is derived from legal judgments from the Court of Justice of the European Union (EU), and contains instances in all 24 EU official languages. We run multilingual models on our corpus and analyze the results, showing room for improvement on a domain-specific multilingual corpus such as the one we present.", }
Keyphrase generation has primarily been explored within the context of academic research articles, with a particular focus on scientific domains and the English language. In this work, we present EUROPA, a novel dataset for multilingual keyphrase generation in the legal domain. It is derived from legal judgments from the Court of Justice of the European Union (EU), and contains instances in all 24 EU official languages. We run multilingual models on our corpus and analyze the results, showing room for improvement on a domain-specific multilingual corpus such as the one we present.
[ "Sala{\\\"u}n, Olivier", "Piedboeuf, Fr{\\'e}d{\\'e}ric", "Le Berre, Guillaume", "Alfonso-Hermelo, David", "Langlais, Philippe" ]
EUROPA: A Legal Multilingual Keyphrase Generation Dataset
acl-long.687
Poster
2403.00252
[ "https://github.com/rali-udem/europa" ]
https://huggingface.co/papers/2403.00252
0
0
0
5
https://aclanthology.org/2024.acl-long.687/
[]
[ "NCube/europa", "NCube/europa-random-split" ]
[]
1
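This record lists two Hugging Face datasets ("NCube/europa", "NCube/europa-random-split"). A quick-look loading sketch is below; it makes no assumptions about split or column names, printing them instead so you can inspect the actual schema.

```python
# Quick look at the EUROPA dataset as hosted on the Hugging Face Hub.
# The dataset id is taken from this record; splits and columns are printed
# rather than assumed.
from datasets import load_dataset

ds = load_dataset("NCube/europa")
print(ds)                            # shows the actual splits and column names
first_split = next(iter(ds.values()))
print(first_split[0])                # inspect one judgment / keyphrase example
```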
https://aclanthology.org/2024.acl-long.688.bib
@inproceedings{darrin-etal-2024-glimpse, title = "{GLIMPSE}: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews", author = "Darrin, Maxime and Arous, Ines and Piantanida, Pablo and Cheung, Jackie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.688", pages = "12737--12752", abstract = "Scientific peer review is essential for the quality of academic publications. However, the increasing number of paper submissions to conferences has strained the reviewing process. This surge poses a burden on area chairs who have to carefully read an ever-growing volume of reviews and discern each reviewer{'}s main arguments as part of their decision process. In this paper, we introduce GLIMPSE, a summarization method designed to offer a concise yet comprehensive overview of scholarly reviews. Unlike traditional consensus-based methods, GLIMPSE extracts both common and unique opinions from the reviews. We introduce novel uniqueness scores based on the Rational Speech Act framework to identify relevant sentences in the reviews. Our method aims to provide a pragmatic glimpse into all reviews, offering a balanced perspective on their opinions. Our experimental results with both automatic metrics and human evaluation show that GLIMPSE generates more discriminative summaries than baseline methods in terms of human evaluation while achieving comparable performance with these methods in terms of automatic metrics.", }
Scientific peer review is essential for the quality of academic publications. However, the increasing number of paper submissions to conferences has strained the reviewing process. This surge poses a burden on area chairs who have to carefully read an ever-growing volume of reviews and discern each reviewer{'}s main arguments as part of their decision process. In this paper, we introduce GLIMPSE, a summarization method designed to offer a concise yet comprehensive overview of scholarly reviews. Unlike traditional consensus-based methods, GLIMPSE extracts both common and unique opinions from the reviews. We introduce novel uniqueness scores based on the Rational Speech Act framework to identify relevant sentences in the reviews. Our method aims to provide a pragmatic glimpse into all reviews, offering a balanced perspective on their opinions. Our experimental results with both automatic metrics and human evaluation show that GLIMPSE generates more discriminative summaries than baseline methods in terms of human evaluation while achieving comparable performance with these methods in terms of automatic metrics.
[ "Darrin, Maxime", "Arous, Ines", "Piantanida, Pablo", "Cheung, Jackie" ]
GLIMPSE: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews
acl-long.688
Poster
2406.07359
[ "https://github.com/icannos/glimpse-mds" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.688/
[]
[]
[]
0
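The GLIMPSE abstract mentions RSA-based uniqueness scores without defining them. The sketch below captures one plausible reading of that idea: a sentence is "unique" to a review if a literal listener who reads it would recover that review rather than the others. It is not the authors' code; `likelihood` is a hypothetical scorer (e.g., an embedding similarity or LM probability, assumed non-negative).

```python
# Illustrative RSA-style uniqueness score, in the spirit of GLIMPSE but not
# taken from the repository above. `likelihood(review, sentence)` is a
# hypothetical non-negative scorer supplied by the caller.
def uniqueness_scores(reviews, sentences, likelihood):
    """Score each sentence by how sharply it points at one specific review."""
    scores = {}
    for s in sentences:
        lik = [likelihood(r, s) for r in reviews]   # unnormalised P(review | s)
        total = sum(lik) or 1e-12                   # guard against all-zero scores
        posterior = [l / total for l in lik]        # normalise over reviews
        # High max-posterior => unique to one review; uniform => consensus.
        scores[s] = max(posterior) - 1.0 / len(reviews)
    return scores
```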
https://aclanthology.org/2024.acl-long.689.bib
@inproceedings{alwajih-etal-2024-peacock, title = "Peacock: A Family of {A}rabic Multimodal Large Language Models and Benchmarks", author = "Alwajih, Fakhraddin and Nagoudi, El Moatez Billah and Bhatia, Gagan and Mohamed, Abdelrahman and Abdul-Mageed, Muhammad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.689", pages = "12753--12776", abstract = "Multimodal large language models (MLLMs) have proven effective in a wide range of tasks that require complex reasoning and linguistic comprehension. However, due to a lack of high-quality multimodal resources in languages other than English, the success of MLLMs remains relatively limited to English-based settings. This poses significant challenges in developing comparable models for other languages, even those with large speaker populations, such as Arabic. To alleviate this challenge, we introduce a comprehensive family of Arabic MLLMs, dubbed *Peacock*, with strong vision and language capabilities. Through comprehensive qualitative and quantitative analysis, we demonstrate the solid performance of our models on various visual reasoning tasks and further show their emerging dialectal potential. Additionally, we introduce *Henna*, a new benchmark specifically designed for assessing MLLMs on aspects related to Arabic culture, setting the first stone for culturally-aware Arabic MLLMs. The GitHub repository for the *Peacock* project is available at [https://github.com/UBC-NLP/peacock](https://github.com/UBC-NLP/peacock).", }
Multimodal large language models (MLLMs) have proven effective in a wide range of tasks that require complex reasoning and linguistic comprehension. However, due to a lack of high-quality multimodal resources in languages other than English, the success of MLLMs remains relatively limited to English-based settings. This poses significant challenges in developing comparable models for other languages, even those with large speaker populations, such as Arabic. To alleviate this challenge, we introduce a comprehensive family of Arabic MLLMs, dubbed *Peacock*, with strong vision and language capabilities. Through comprehensive qualitative and quantitative analysis, we demonstrate the solid performance of our models on various visual reasoning tasks and further show their emerging dialectal potential. Additionally, we introduce *Henna*, a new benchmark specifically designed for assessing MLLMs on aspects related to Arabic culture, laying the first stone for culturally-aware Arabic MLLMs. The GitHub repository for the *Peacock* project is available at [https://github.com/UBC-NLP/peacock](https://github.com/UBC-NLP/peacock).
[ "Alwajih, Fakhraddin", "Nagoudi, El Moatez Billah", "Bhatia, Gagan", "Mohamed, Abdelrahman", "Abdul-Mageed, Muhammad" ]
Peacock: A Family of Arabic Multimodal Large Language Models and Benchmarks
acl-long.689
Poster
2403.01031
[ "https://github.com/ubc-nlp/peacock" ]
https://huggingface.co/papers/2403.01031
2
1
0
5
https://aclanthology.org/2024.acl-long.689/
[]
[]
[]
1
https://aclanthology.org/2024.acl-long.690.bib
@inproceedings{bordalo-etal-2024-generating, title = "Generating Coherent Sequences of Visual Illustrations for Real-World Manual Tasks", author = "Bordalo, Jo{\~a}o and Ramos, Vasco and Val{\'e}rio, Rodrigo and Gl{\'o}ria-Silva, Diogo and Bitton, Yonatan and Yarom, Michal and Szpektor, Idan and Magalhaes, Joao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.690", pages = "12777--12797", abstract = "Multistep instructions, such as recipes and how-to guides, greatly benefit from visual aids, such as a series of images that accompany the instruction steps. While Large Language Models (LLMs) have become adept at generating coherent textual steps, Large Vision/Language Models (LVLMs) are less capable of generating accompanying image sequences. The most challenging aspect is that each generated image needs to adhere to the relevant textual step instruction, as well as be visually consistent with earlier images in the sequence. To address this problem, we propose an approach for generating consistent image sequences, which integrates a Latent Diffusion Model (LDM) with an LLM to transform the sequence into a caption to maintain the semantic coherence of the sequence. In addition, to maintain the visual coherence of the image sequence, we introduce a copy mechanism to initialise reverse diffusion processes with a latent vector iteration from a previously generated image from a relevant step. Both strategies will condition the reverse diffusion process on the sequence of instruction steps and tie the contents of the current image to previous instruction steps and corresponding images. Experiments show that the proposed approach is preferred by humans in 46.6{\%} of the cases against 26.6{\%} for the second best method. In addition, automatic metrics showed that the proposed method maintains semantic coherence and visual consistency across steps in both domains.", }
Multistep instructions, such as recipes and how-to guides, greatly benefit from visual aids, such as a series of images that accompany the instruction steps. While Large Language Models (LLMs) have become adept at generating coherent textual steps, Large Vision/Language Models (LVLMs) are less capable of generating accompanying image sequences. The most challenging aspect is that each generated image needs to adhere to the relevant textual step instruction, as well as be visually consistent with earlier images in the sequence. To address this problem, we propose an approach for generating consistent image sequences, which integrates a Latent Diffusion Model (LDM) with an LLM to transform the sequence into a caption to maintain the semantic coherence of the sequence. In addition, to maintain the visual coherence of the image sequence, we introduce a copy mechanism to initialise reverse diffusion processes with a latent vector iteration from a previously generated image at a relevant step. Both strategies condition the reverse diffusion process on the sequence of instruction steps and tie the contents of the current image to previous instruction steps and corresponding images. Experiments show that the proposed approach is preferred by humans in 46.6{\%} of the cases against 26.6{\%} for the second-best method. In addition, automatic metrics showed that the proposed method maintains semantic coherence and visual consistency across steps in both domains.
[ "Bordalo, Jo{\\~a}o", "Ramos, Vasco", "Val{\\'e}rio, Rodrigo", "Gl{\\'o}ria-Silva, Diogo", "Bitton, Yonatan", "Yarom, Michal", "Szpektor, Idan", "Magalhaes, Joao" ]
Generating Coherent Sequences of Visual Illustrations for Real-World Manual Tasks
acl-long.690
Poster
2405.10122
[ "" ]
https://huggingface.co/papers/2405.10122
1
0
0
8
https://aclanthology.org/2024.acl-long.690/
[]
[]
[]
1
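The copy mechanism in the abstract above amounts to seeding the reverse diffusion for one step with the latent of a previous step's image rather than pure noise. A tiny sketch of that generic img2img-style initialisation is below; it is our reading of the idea, not the paper's code, and the function name and default strength are invented.

```python
# Generic sketch of latent initialisation from a previous illustration
# (our reading of the copy mechanism, not the authors' implementation).
import torch

def init_latent_from_previous(prev_latent: torch.Tensor,
                              noise_strength: float = 0.6) -> torch.Tensor:
    """Blend the previous step's latent with fresh noise; a lower strength
    copies more appearance from the previous illustration, improving
    visual consistency across the sequence."""
    noise = torch.randn_like(prev_latent)
    return (1.0 - noise_strength) * prev_latent + noise_strength * noise
```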
https://aclanthology.org/2024.acl-long.691.bib
@inproceedings{adebara-etal-2024-cheetah, title = "Cheetah: Natural Language Generation for 517 {A}frican Languages", author = "Adebara, Ife and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.691", pages = "12798--12823", abstract = "Low-resource African languages pose unique challenges for natural language processing (NLP) tasks, including natural language generation (NLG). In this paper, we develop Cheetah, a massively multilingual NLG language model for African languages. Cheetah supports 517 African languages and language varieties, allowing us to address the scarcity of NLG resources and provide a solution to foster linguistic diversity. We demonstrate the effectiveness of Cheetah through comprehensive evaluations across six generation downstream tasks. In five of the six tasks, Cheetah significantly outperforms other models, showcasing its remarkable performance for generating coherent and contextually appropriate text in a wide range of African languages. We additionally conduct a detailed human evaluation to delve deeper into the linguistic capabilities of Cheetah. The findings of this study contribute to advancing NLP research in low-resource settings, enabling greater accessibility and inclusion for African languages in a rapidly expanding digital landscape. We will publicly release our models for research.", }
Low-resource African languages pose unique challenges for natural language processing (NLP) tasks, including natural language generation (NLG). In this paper, we develop Cheetah, a massively multilingual NLG language model for African languages. Cheetah supports 517 African languages and language varieties, allowing us to address the scarcity of NLG resources and provide a solution to foster linguistic diversity. We demonstrate the effectiveness of Cheetah through comprehensive evaluations across six generation downstream tasks. In five of the six tasks, Cheetah significantly outperforms other models, showcasing its remarkable performance for generating coherent and contextually appropriate text in a wide range of African languages. We additionally conduct a detailed human evaluation to delve deeper into the linguistic capabilities of Cheetah. The findings of this study contribute to advancing NLP research in low-resource settings, enabling greater accessibility and inclusion for African languages in a rapidly expanding digital landscape. We will publicly release our models for research.
[ "Adebara, Ife", "Elmadany, AbdelRahim", "Abdul-Mageed, Muhammad" ]
Cheetah: Natural Language Generation for 517 African Languages
acl-long.691
Poster
2401.01053
[ "" ]
https://huggingface.co/papers/2401.01053
2
1
0
3
https://aclanthology.org/2024.acl-long.691/
[ "UBC-NLP/cheetah-base" ]
[]
[]
1
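This record links a released checkpoint ("UBC-NLP/cheetah-base"). A minimal usage sketch follows; it assumes a T5-style seq2seq checkpoint and that no special task prefix is required, so verify both on the model card before relying on it. The Yoruba input is just an example string of ours.

```python
# Usage sketch for the checkpoint listed in this record. Assumes a T5-style
# seq2seq model; check the model card for required task prefixes.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "UBC-NLP/cheetah-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tok("Ẹ káàárọ̀", return_tensors="pt")   # example input of ours
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```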
https://aclanthology.org/2024.acl-long.692.bib
@inproceedings{zhao-etal-2024-tapera, title = "{T}a{PERA}: Enhancing Faithfulness and Interpretability in Long-Form Table {QA} by Content Planning and Execution-based Reasoning", author = "Zhao, Yilun and Chen, Lyuhao and Cohan, Arman and Zhao, Chen", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.692", pages = "12824--12840", abstract = "Long-form Table Question Answering (LFTQA) requires systems to generate paragraph long and complex answers to questions over tabular data. While Large language models based systems have made significant progress, it often hallucinates, especially when the task involves complex reasoning over tables. To tackle this issue, we propose a new LLM-based framework, TaPERA, for LFTQA tasks. Our framework uses a modular approach that decomposes the whole process into three sub-modules: 1) QA-based Content Planner that iteratively decomposes the input question into sub-questions; 2) Execution-based Table Reasoner that produces executable Python program for each sub-question; and 3) Answer Generator that generates long-form answer grounded on the program output. Human evaluation results on the FeTaQA and QTSumm datasets indicate that our framework significantly improves strong baselines on both accuracy and truthfulness, as our modular framework is better at table reasoning, and the long-form answer is always consistent with the program output. Our modular design further provides transparency as users are able to interact with our framework by manually changing the content plans.", }
Long-form Table Question Answering (LFTQA) requires systems to generate paragraph long and complex answers to questions over tabular data. While large language model-based systems have made significant progress, they often hallucinate, especially when the task involves complex reasoning over tables. To tackle this issue, we propose a new LLM-based framework, TaPERA, for LFTQA tasks. Our framework uses a modular approach that decomposes the whole process into three sub-modules: 1) QA-based Content Planner that iteratively decomposes the input question into sub-questions; 2) Execution-based Table Reasoner that produces executable Python program for each sub-question; and 3) Answer Generator that generates a long-form answer grounded in the program output. Human evaluation results on the FeTaQA and QTSumm datasets indicate that our framework significantly improves strong baselines on both accuracy and truthfulness, as our modular framework is better at table reasoning, and the long-form answer is always consistent with the program output. Our modular design further provides transparency as users are able to interact with our framework by manually changing the content plans.
[ "Zhao, Yilun", "Chen, Lyuhao", "Cohan, Arman", "Zhao, Chen" ]
TaPERA: Enhancing Faithfulness and Interpretability in Long-Form Table QA by Content Planning and Execution-based Reasoning
acl-long.692
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.692/
[]
[]
[]
0
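The TaPERA abstract enumerates a three-module pipeline; the skeleton below makes the control flow concrete. It is a sketch of our reading only: `llm` and `run_program` are hypothetical callables (a text-completion model and a sandboxed Python executor), and the prompts are placeholders, not the authors' actual prompts.

```python
# Skeleton of the plan -> program -> answer pipeline described above
# (illustrative; callables and prompts are hypothetical stand-ins).
def tapera_answer(question: str, table: str, llm, run_program) -> str:
    # 1) QA-based Content Planner: decompose the question into sub-questions.
    sub_questions = llm(f"Decompose into sub-questions:\n{question}").splitlines()
    findings = []
    for sub_q in sub_questions:
        # 2) Execution-based Table Reasoner: generate and execute a program.
        program = llm(f"Write Python over this table to answer: {sub_q}\n{table}")
        findings.append((sub_q, run_program(program)))
    # 3) Answer Generator: long-form answer grounded in the program outputs.
    return llm(f"Write a long-form answer to {question!r} using: {findings}")
```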
https://aclanthology.org/2024.acl-long.693.bib
@inproceedings{zhao-etal-2024-knowledgefmath, title = "{K}nowledge{FM}ath: A Knowledge-Intensive Math Reasoning Dataset in Finance Domains", author = "Zhao, Yilun and Liu, Hongjun and Long, Yitao and Zhang, Rui and Zhao, Chen and Cohan, Arman", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.693", pages = "12841--12858", abstract = "We introduce KnowledgeFMath, a novel benchmark designed to evaluate LLMs{'} capabilities in solving knowledge-intensive math reasoning problems. Compared to prior works, this study features three core advancements. First, KnowledgeFMath includes 1,259 problems with a hybrid of textual and tabular content. These problems require college-level knowledge in the finance domain for effective resolution. Second, we provide expert-annotated, detailed solution references in Python program format, ensuring a high-quality benchmark for LLM assessment. We also construct a finance-domain knowledge bank and investigate various knowledge integration strategies. Finally, we evaluate a wide spectrum of 26 LLMs with different prompting strategies like Chain-of-Thought and Program-of-Thought. Our experimental results reveal that the current best-performing system (i.e., GPT-4 with CoT prompting) achieves only 56.6{\%} accuracy, leaving substantial room for improvement. Moreover, while augmenting LLMs with external knowledge can improve their performance (e.g., from 33.5{\%} to 47.1{\%} for GPT-3.5), their accuracy remains significantly lower than the estimated human expert performance of 92{\%}. We believe that KnowledgeFMath can advance future research in the area of domain-specific knowledge retrieval and integration, particularly within the context of solving math reasoning problems.", }
We introduce KnowledgeFMath, a novel benchmark designed to evaluate LLMs{'} capabilities in solving knowledge-intensive math reasoning problems. Compared to prior works, this study features three core advancements. First, KnowledgeFMath includes 1,259 problems with a hybrid of textual and tabular content. These problems require college-level knowledge in the finance domain for effective resolution. Second, we provide expert-annotated, detailed solution references in Python program format, ensuring a high-quality benchmark for LLM assessment. We also construct a finance-domain knowledge bank and investigate various knowledge integration strategies. Finally, we evaluate a wide spectrum of 26 LLMs with different prompting strategies like Chain-of-Thought and Program-of-Thought. Our experimental results reveal that the current best-performing system (i.e., GPT-4 with CoT prompting) achieves only 56.6{\%} accuracy, leaving substantial room for improvement. Moreover, while augmenting LLMs with external knowledge can improve their performance (e.g., from 33.5{\%} to 47.1{\%} for GPT-3.5), their accuracy remains significantly lower than the estimated human expert performance of 92{\%}. We believe that KnowledgeFMath can advance future research in the area of domain-specific knowledge retrieval and integration, particularly within the context of solving math reasoning problems.
[ "Zhao, Yilun", "Liu, Hongjun", "Long, Yitao", "Zhang, Rui", "Zhao, Chen", "Cohan, Arman" ]
KnowledgeFMath: A Knowledge-Intensive Math Reasoning Dataset in Finance Domains
acl-long.693
Oral
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.693/
[]
[]
[]
0
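The KnowledgeFMath abstract evaluates Program-of-Thought (PoT) prompting among other strategies. For readers unfamiliar with PoT, the sketch below shows the general pattern: the model is asked for runnable Python rather than a prose chain of thought, and the program's printed output is taken as the answer. The example problem and prompt wording are ours, not drawn from the dataset.

```python
# Program-of-Thought prompting pattern (general illustration; the problem
# below is our own example, not a KnowledgeFMath item).
POT_PROMPT = """You are a financial analyst. Solve the problem by writing a
Python program; print only the final numeric answer.

Problem: A bond pays a 5% annual coupon on a $1,000 face value for 3 years.
With a 4% discount rate, what is its present value?

# Python solution:
"""

# A correct generated solution would look like this, and its printed output
# is scored as the model's answer:
coupon, face, r, n = 50.0, 1000.0, 0.04, 3
pv = sum(coupon / (1 + r) ** t for t in range(1, n + 1)) + face / (1 + r) ** n
print(round(pv, 2))   # 1027.75
```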
https://aclanthology.org/2024.acl-long.694.bib
@inproceedings{basu-etal-2024-api, title = "{API}-{BLEND}: A Comprehensive Corpora for Training and Benchmarking {API} {LLM}s", author = "Basu, Kinjal and Abdelaziz, Ibrahim and Chaudhury, Subhajit and Dan, Soham and Crouse, Maxwell and Munawar, Asim and Austel, Vernon and Kumaravel, Sadhana and Muthusamy, Vinod and Kapanipathi, Pavan and Lastras, Luis", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.694", pages = "12859--12870", abstract = "There is a growing need for Large Language Models (LLMs) to effectively use tools and external Application Programming Interfaces (APIs) to plan and complete tasks. As such, there is tremendous interest in methods that can acquire sufficient quantities of train and test data that involve calls to tools / APIs. Two lines of research have emerged as the predominant strategies for addressing this challenge. The first has focused on synthetic data generation techniques, while the second has involved curating task-adjacent datasets which can be transformed into API / Tool-based tasks. In this paper, we focus on the task of identifying, curating, and transforming existing datasets and, in turn, introduce API-BLEND, a large corpora for training and systematic testing of tool-augmented LLMs. The datasets mimic real-world scenarios involving API-tasks such as API / tool detection, slot filling, and sequencing of the detected APIs. We demonstrate the utility of the API-BLEND dataset for both training and benchmarking purposes.", }
There is a growing need for Large Language Models (LLMs) to effectively use tools and external Application Programming Interfaces (APIs) to plan and complete tasks. As such, there is tremendous interest in methods that can acquire sufficient quantities of train and test data that involve calls to tools / APIs. Two lines of research have emerged as the predominant strategies for addressing this challenge. The first has focused on synthetic data generation techniques, while the second has involved curating task-adjacent datasets which can be transformed into API / Tool-based tasks. In this paper, we focus on the task of identifying, curating, and transforming existing datasets and, in turn, introduce API-BLEND, a large corpus for training and systematic testing of tool-augmented LLMs. The datasets mimic real-world scenarios involving API-tasks such as API / tool detection, slot filling, and sequencing of the detected APIs. We demonstrate the utility of the API-BLEND dataset for both training and benchmarking purposes.
[ "Basu, Kinjal", "Abdelaziz, Ibrahim", "Chaudhury, Subhajit", "Dan, Soham", "Crouse, Maxwell", "Munawar, Asim", "Austel, Vernon", "Kumaravel, Sadhana", "Muthusamy, Vinod", "Kapanipathi, Pavan", "Lastras, Luis" ]
API-BLEND: A Comprehensive Corpora for Training and Benchmarking API LLMs
acl-long.694
Poster
2402.15491
[ "https://github.com/ibm/api-blend" ]
https://huggingface.co/papers/2402.15491
6
13
3
10
https://aclanthology.org/2024.acl-long.694/
[]
[]
[]
1
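To make the three API-tasks named in the abstract (detection, slot filling, sequencing) concrete, here is a hypothetical record layout of the kind such a corpus could contain. Every field name and value below is our invention for illustration; consult the GitHub repository linked in this record for the real schema.

```python
# Hypothetical example layout for an API detection / slot filling /
# sequencing task (field names invented; see the repo for the real schema).
example = {
    "input": "Book a table for two at Nori at 7pm, then text Sam the address.",
    "apis": [
        {"name": "restaurant.reserve",                      # API detection
         "slots": {"restaurant": "Nori", "party_size": "2", "time": "7pm"}},
        {"name": "sms.send",
         "slots": {"recipient": "Sam", "content": "address of Nori"}},
    ],  # the list order encodes the required API sequence
}
```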
https://aclanthology.org/2024.acl-long.695.bib
@inproceedings{wang-etal-2024-lora-flow, title = "{L}o{RA}-Flow: Dynamic {L}o{RA} Fusion for Large Language Models in Generative Tasks", author = "Wang, Hanqing and Ping, Bowen and Wang, Shuo and Han, Xu and Chen, Yun and Liu, Zhiyuan and Sun, Maosong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.695", pages = "12871--12882", abstract = "LoRA employs lightweight modules to customize large language models (LLMs) for each downstream task or domain, where different learned additional modules represent diverse skills. Combining existing LoRAs to address new tasks can enhance the reusability of learned LoRAs, particularly beneficial for tasks with limited annotated data. Most prior works on LoRA combination primarily rely on task-level weights for each involved LoRA, making different examples and tokens share the same LoRA weights. However, in generative tasks, different tokens may necessitate diverse skills to manage. Taking the Chinese math task as an example, understanding the problem description may depend more on the Chinese LoRA, while the calculation part may rely more on the math LoRA. To this end, we propose LoRA-Flow, which utilizes dynamic weights to adjust the impact of different LoRAs. The weights at each step are determined by a fusion gate with extremely few parameters, which can be learned with only 200 training examples. Experiments across six generative tasks demonstrate that our method consistently outperforms baselines with task-level fusion weights. This underscores the necessity of introducing dynamic fusion weights for LoRA combination.", }
LoRA employs lightweight modules to customize large language models (LLMs) for each downstream task or domain, where different learned additional modules represent diverse skills. Combining existing LoRAs to address new tasks can enhance the reusability of learned LoRAs, particularly beneficial for tasks with limited annotated data. Most prior works on LoRA combination primarily rely on task-level weights for each involved LoRA, making different examples and tokens share the same LoRA weights. However, in generative tasks, different tokens may require diverse skills to handle. Taking the Chinese math task as an example, understanding the problem description may depend more on the Chinese LoRA, while the calculation part may rely more on the math LoRA. To this end, we propose LoRA-Flow, which utilizes dynamic weights to adjust the impact of different LoRAs. The weights at each step are determined by a fusion gate with extremely few parameters, which can be learned with only 200 training examples. Experiments across six generative tasks demonstrate that our method consistently outperforms baselines with task-level fusion weights. This underscores the necessity of introducing dynamic fusion weights for LoRA combination.
[ "Wang, Hanqing", "Ping, Bowen", "Wang, Shuo", "Han, Xu", "Chen, Yun", "Liu, Zhiyuan", "Sun, Maosong" ]
LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks
acl-long.695
Poster
2402.11455
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.695/
[]
[]
[]
0
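The LoRA-Flow abstract describes a small fusion gate producing per-step weights over LoRAs. The sketch below shows one minimal form such a gate could take; it is illustrative rather than the authors' code, and the class name and tensor layout are our assumptions.

```python
# Minimal token-level fusion gate in the spirit of LoRA-Flow (illustrative,
# not the released implementation).
import torch
import torch.nn as nn

class LoRAFusionGate(nn.Module):
    def __init__(self, d_model: int, num_loras: int):
        super().__init__()
        # A single linear layer: the "extremely few parameters" of the gate.
        self.gate = nn.Linear(d_model, num_loras)

    def forward(self, hidden: torch.Tensor,
                lora_outs: list[torch.Tensor]) -> torch.Tensor:
        # hidden: (B, T, d); each LoRA delta in lora_outs: (B, T, d)
        w = torch.softmax(self.gate(hidden), dim=-1)      # (B, T, K) per-token weights
        stacked = torch.stack(lora_outs, dim=-1)          # (B, T, d, K)
        return (stacked * w.unsqueeze(2)).sum(dim=-1)     # fused delta: (B, T, d)
```

Because the weights depend on each token's hidden state, the gate can lean on, say, a language LoRA while reading a problem description and a math LoRA during calculation, which is the motivating example in the abstract.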
https://aclanthology.org/2024.acl-long.696.bib
@inproceedings{huang-etal-2024-harder, title = "Harder Task Needs More Experts: Dynamic Routing in {M}o{E} Models", author = "Huang, Quzhe and An, Zhenwei and Zhuang, Nan and Tao, Mingxu and Zhang, Chen and Jin, Yang and Xu, Kun and Xu, Kun and Chen, Liwei and Huang, Songfang and Feng, Yansong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.696", pages = "12883--12895", abstract = "In this paper, we introduce a novel dynamic expert selection framework for Mixture of Experts (MoE) models, aiming to enhance computational efficiency and model performance by adjusting the number of activated experts based on input difficulty. Unlike existing MoE approaches that rely on fixed TopK Routing, which activates a predetermined number of experts regardless of the input{'}s complexity, our method dynamically allocates experts based on the confidence level in expert selection for each input. This allows for more efficient utilization of computational resources, activating more experts for complex tasks requiring advanced reasoning and fewer for simpler tasks. Through extensive evaluations, our dynamic routing method demonstrates substantial improvements over Top2 Routing across various benchmarks, achieving an average improvement of 0.7{\%} with less than 90{\%} activated parameters. Further analysis shows our model dispatches more experts to tasks requiring complex reasoning skills, like BBH, confirming its ability to dynamically allocate computational resources in alignment with the input{'}s complexity.Our findings also highlight a variation in the number of experts needed across different layers of the transformer model, offering insights into the potential for designing heterogeneous MoE frameworks. The code and models are available at https://github.com/ZhenweiAn/Dynamic{\_}MoE.", }
In this paper, we introduce a novel dynamic expert selection framework for Mixture of Experts (MoE) models, aiming to enhance computational efficiency and model performance by adjusting the number of activated experts based on input difficulty. Unlike existing MoE approaches that rely on fixed TopK Routing, which activates a predetermined number of experts regardless of the input{'}s complexity, our method dynamically allocates experts based on the confidence level in expert selection for each input. This allows for more efficient utilization of computational resources, activating more experts for complex tasks requiring advanced reasoning and fewer for simpler tasks. Through extensive evaluations, our dynamic routing method demonstrates substantial improvements over Top2 Routing across various benchmarks, achieving an average improvement of 0.7{\%} with fewer than 90{\%} of the activated parameters. Further analysis shows our model dispatches more experts to tasks requiring complex reasoning skills, like BBH, confirming its ability to dynamically allocate computational resources in alignment with the input{'}s complexity. Our findings also highlight a variation in the number of experts needed across different layers of the transformer model, offering insights into the potential for designing heterogeneous MoE frameworks. The code and models are available at https://github.com/ZhenweiAn/Dynamic{\_}MoE.
[ "Huang, Quzhe", "An, Zhenwei", "Zhuang, Nan", "Tao, Mingxu", "Zhang, Chen", "Jin, Yang", "Xu, Kun", "Xu, Kun", "Chen, Liwei", "Huang, Songfang", "Feng, Yansong" ]
Harder Task Needs More Experts: Dynamic Routing in MoE Models
acl-long.696
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.696/
[]
[]
[]
0
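The abstract above describes routing by "confidence level in expert selection" without spelling out the rule. One natural instantiation, shown below, keeps adding experts in order of router probability until their cumulative mass clears a threshold; this is our reading, not the released code, and the function and threshold value are assumptions.

```python
# Sketch of confidence-based dynamic expert routing (our reading of the
# abstract, not the code at the repository linked above).
import torch

def dynamic_route(router_logits: torch.Tensor, threshold: float = 0.5):
    """router_logits: (num_experts,) for one token. Returns (indices, weights)."""
    probs = torch.softmax(router_logits, dim=-1)
    sorted_p, idx = torch.sort(probs, descending=True)
    cum = torch.cumsum(sorted_p, dim=-1)
    # Number of experts needed for cumulative probability to exceed the
    # threshold: confident (easy) tokens stop at 1, hard tokens activate more.
    k = int(torch.searchsorted(cum, torch.tensor(threshold)).item()) + 1
    chosen = idx[:k]
    weights = probs[chosen] / probs[chosen].sum()   # renormalise over chosen experts
    return chosen, weights
```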
https://aclanthology.org/2024.acl-long.697.bib
@inproceedings{han-etal-2024-xlavs, title = "{XLAVS}-{R}: Cross-Lingual Audio-Visual Speech Representation Learning for Noise-Robust Speech Perception", author = "Han, HyoJung and Anwar, Mohamed and Pino, Juan and Hsu, Wei-Ning and Carpuat, Marine and Shi, Bowen and Wang, Changhan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.697", pages = "12896--12911", abstract = "Speech recognition and translation systems perform poorly on noisy inputs, which are frequent in realistic environments. Augmenting these systems with visual signals has the potential to improve robustness to noise. However, audio-visual (AV) data is only available in limited amounts and for fewer languages than audio-only resources.To address this gap, we present XLAVS-R, a cross-lingual audio-visual speech representation model for noise-robust speech recognition and translation in over 100 languages. It is designed to maximize the benefits of limited multilingual AV pre-training data, by building on top of audio-only multilingual pre-training and simplifying existing pre-training schemes. Extensive evaluation on the MuAViC benchmark shows the strength of XLAVS-R on downstream audio-visual speech recognition and translation tasks, where it outperforms the previous state of the art by up to 18.5{\%} WER and 4.7 BLEU given noisy AV inputs, and enables strong zero-shot audio-visual ability with audio-only fine-tuning.", }
Speech recognition and translation systems perform poorly on noisy inputs, which are frequent in realistic environments. Augmenting these systems with visual signals has the potential to improve robustness to noise. However, audio-visual (AV) data is only available in limited amounts and for fewer languages than audio-only resources. To address this gap, we present XLAVS-R, a cross-lingual audio-visual speech representation model for noise-robust speech recognition and translation in over 100 languages. It is designed to maximize the benefits of limited multilingual AV pre-training data, by building on top of audio-only multilingual pre-training and simplifying existing pre-training schemes. Extensive evaluation on the MuAViC benchmark shows the strength of XLAVS-R on downstream audio-visual speech recognition and translation tasks, where it outperforms the previous state of the art by up to 18.5{\%} WER and 4.7 BLEU given noisy AV inputs, and enables strong zero-shot audio-visual ability with audio-only fine-tuning.
[ "Han, HyoJung", "Anwar, Mohamed", "Pino, Juan", "Hsu, Wei-Ning", "Carpuat, Marine", "Shi, Bowen", "Wang, Changhan" ]
XLAVS-R: Cross-Lingual Audio-Visual Speech Representation Learning for Noise-Robust Speech Perception
acl-long.697
Poster
2403.14402
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.697/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.698.bib
@inproceedings{wang-etal-2024-sotopia, title = "{SOTOPIA}-{\mbox{$\pi$}}: Interactive Learning of Socially Intelligent Language Agents", author = "Wang, Ruiyi and Yu, Haofei and Zhang, Wenxin and Qi, Zhengyang and Sap, Maarten and Bisk, Yonatan and Neubig, Graham and Zhu, Hao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.698", pages = "12912--12940", abstract = "Humans learn social skills through both imitation and social interaction. This social learning process is largely understudied by existing research on building language agents. Motivated by this gap, we propose an interactive learning method, SOTOPIA-{\mbox{$\pi$}}, that improves the social intelligence of language agents. This method leverages behavior cloning and self-reinforcement based training on filtered social interaction data according to large language model (LLM) rating. We show that our training method allows a 7B LLM to reach the social goal completion ability of an expert model (GPT-4-based agent) without the loss of more generic abilities, such as the ability to answer knowledge-based questions. We also demonstrate that this training paradigm uncovers some weaknesses in standard evaluation and safety training paradigms that (1) LLM-based evaluation of social intelligence overestimates the abilities of the language agents trained specifically for social interaction, and that (2) despite not training for better safety or question answering (QA) ability, our methods improve the safety of language agents and maintain general QA ability on the MMLU benchmark.", }
Humans learn social skills through both imitation and social interaction. This social learning process is largely understudied by existing research on building language agents. Motivated by this gap, we propose an interactive learning method, SOTOPIA-{\mbox{$\pi$}}, that improves the social intelligence of language agents. This method leverages behavior cloning and self-reinforcement-based training on social interaction data filtered according to large language model (LLM) ratings. We show that our training method allows a 7B LLM to reach the social goal completion ability of an expert model (GPT-4-based agent) without the loss of more generic abilities, such as the ability to answer knowledge-based questions. We also demonstrate that this training paradigm uncovers some weaknesses in standard evaluation and safety training paradigms: (1) LLM-based evaluation of social intelligence overestimates the abilities of the language agents trained specifically for social interaction, and (2) despite not training for better safety or question answering (QA) ability, our methods improve the safety of language agents and maintain general QA ability on the MMLU benchmark.
[ "Wang, Ruiyi", "Yu, Haofei", "Zhang, Wenxin", "Qi, Zhengyang", "Sap, Maarten", "Bisk, Yonatan", "Neubig, Graham", "Zhu, Hao" ]
SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents
acl-long.698
Poster
[ "" ]
https://huggingface.co/papers/2403.08715
7
20
1
8
https://aclanthology.org/2024.acl-long.698/
[ "cmu-lti/sotopia-pi-mistral-7b-BC_SR" ]
[ "cmu-lti/sotopia-pi" ]
[ "cmu-lti/sotopia-space", "talha1503/hemm_space" ]
1
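The self-reinforcement step in the SOTOPIA-π abstract amounts to a rating-based data filter: keep only the agent's own episodes that an LLM judge scores highly, then fine-tune on them. The snippet below sketches that filter under our own assumptions; `rate_episode` and the score scale are hypothetical stand-ins.

```python
# Sketch of rating-filtered self-reinforcement data selection (our reading
# of the abstract; `rate_episode` and the 0-10 scale are assumptions).
def build_training_set(episodes, rate_episode, min_score: float = 8.0):
    kept = []
    for ep in episodes:
        score = rate_episode(ep)    # e.g. an LLM's rating of goal completion
        if score >= min_score:
            kept.append(ep)         # high-rated behaviour becomes training data
    return kept
```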
https://aclanthology.org/2024.acl-long.699.bib
@inproceedings{ding-etal-2024-mathcal, title = "${\mathcal X}${FT}: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts", author = "Ding, Yifeng and Liu, Jiawei and Wei, Yuxiang and Zhang, Lingming", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.699", pages = "12941--12955", abstract = "", }
[ "Ding, Yifeng", "Liu, Jiawei", "Wei, Yuxiang", "Zhang, Lingming" ]
𝒳FT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts
acl-long.699
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.699/
[]
[]
[]
0
https://aclanthology.org/2024.acl-long.700.bib
@inproceedings{nguyen-le-2024-generalizability, title = "Generalizability of Mixture of Domain-Specific Adapters from the Lens of Signed Weight Directions and its Application to Effective Model Pruning", author = "Nguyen, Tuc and Le, Thai", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.700", pages = "12956--12973", abstract = "Several parameter-efficient fine-tuning methods based on adapters have been proposed as a streamlined approach to incorporate not only a single specialized knowledge into existing Pre-Trained Language Models (PLMs) but also multiple of them at once. Recent works such as AdapterSoup propose to mix not all but only a selective sub-set of domain-specific adapters during inference via model weight averaging to optimize performance on novel, unseen domains with excellent computational efficiency. However, the essential generalizability of this emerging weight-space adapter mixing mechanism on \textit{unseen, in-domain examples} remains unexplored. Thus, in this study, we conduct a comprehensive analysis to elucidate the generalizability of domain-specific adapter mixtures in in-domain evaluation. We also provide investigations into the inner workings of the mixture of domain-specific adapters by analyzing their weight signs, yielding critical analysis on the negative correlation between their fraction of weight sign difference and their mixtures{'} generalizability. The code is available at Github.", }
Several parameter-efficient fine-tuning methods based on adapters have been proposed as a streamlined approach to incorporate not only a single type of specialized knowledge into existing Pre-Trained Language Models (PLMs) but also multiple types at once. Recent works such as AdapterSoup propose to mix not all but only a selective subset of domain-specific adapters during inference via model weight averaging to optimize performance on novel, unseen domains with excellent computational efficiency. However, the essential generalizability of this emerging weight-space adapter mixing mechanism on \textit{unseen, in-domain examples} remains unexplored. Thus, in this study, we conduct a comprehensive analysis to elucidate the generalizability of domain-specific adapter mixtures in in-domain evaluation. We also provide investigations into the inner workings of the mixture of domain-specific adapters by analyzing their weight signs, yielding a critical analysis of the negative correlation between their fraction of weight sign difference and their mixtures{'} generalizability. The code is available at Github.
[ "Nguyen, Tuc", "Le, Thai" ]
Generalizability of Mixture of Domain-Specific Adapters from the Lens of Signed Weight Directions and its Application to Effective Model Pruning
acl-long.700
Poster
2402.10639
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.acl-long.700/
[]
[]
[]
0
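The quantity this abstract correlates with generalizability, the fraction of weight sign difference between two adapters, is simple to compute. A minimal sketch follows; the paper's exact definition may differ in details such as which matrices are compared.

```python
# Fraction of positions where two adapters' weights disagree in sign
# (illustrative; the paper's exact definition may differ).
import torch

def sign_difference_fraction(a: torch.Tensor, b: torch.Tensor) -> float:
    """a, b: flattened weight tensors of two domain-specific adapters."""
    disagree = torch.sign(a) != torch.sign(b)
    return disagree.float().mean().item()
```

Per the abstract, the higher this fraction is for a pair of adapters, the worse their weight-averaged mixture tends to generalize, since opposite-signed weights cancel under averaging.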