Datasets:

Column schema (name: dtype, observed range):

bibtex_url: string, length 41–53
proceedings: string, length 38–50
bibtext: string, length 566–3.75k
abstract: string, length 4–3.1k
authors: sequence, length 1–66
title: string, length 12–172
id: string, length 7–19
type: string, 2 distinct values
arxiv_id: string, length 0–10
GitHub: sequence, length 1–1
paper_page: string, length 0–40
n_linked_authors: int64, -1 to 21
upvotes: int64, -1 to 116
num_comments: int64, -1 to 11
n_authors: int64, -1 to 61
Models: sequence, length 0–100
Datasets: sequence, length 0–100
Spaces: sequence, length 0–100
old_Models: sequence, length 0–100
old_Datasets: sequence, length 0–100
old_Spaces: sequence, length 0–100
paper_page_exists_pre_conf: int64, 0–1
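For reference, a minimal sketch of how a dump with this schema could be loaded and filtered with the Hugging Face `datasets` library; the repository id below is a placeholder, not the actual dataset name, and the filter criteria are only illustrative.

```python
# Minimal sketch; "user/emnlp-2024-papers" is a placeholder repo id, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("user/emnlp-2024-papers", split="train")

# Column names should mirror the schema listed above.
print(ds.column_names)

# Illustrative query: rows whose paper page existed before the conference
# and that link at least one Hugging Face model.
linked = ds.filter(
    lambda row: row["paper_page_exists_pre_conf"] == 1 and len(row["Models"]) > 0
)
for row in linked.select(range(min(3, len(linked)))):
    print(row["id"], row["title"], row["upvotes"])
```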
https://aclanthology.org/2024.emnlp-main.201.bib
https://aclanthology.org/2024.emnlp-main.201/
@inproceedings{bonaldi-etal-2024-safer, title = "Is Safer Better? The Impact of Guardrails on the Argumentative Strength of {LLM}s in Hate Speech Countering", author = "Bonaldi, Helena and Damo, Greta and Ocampo, Nicol{\'a}s Benjam{\'\i}n and Cabrio, Elena and Villata, Serena and Guerini, Marco", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.201", pages = "3446--3463", abstract = "The potential effectiveness of counterspeech as a hate speech mitigation strategy is attracting increasing interest in the NLG research community, particularly towards the task of automatically producing it. However, automatically generated responses often lack the argumentative richness which characterises expert-produced counterspeech. In this work, we focus on two aspects of counterspeech generation to produce more cogent responses. First, by investigating the tension between helpfulness and harmlessness of LLMs, we test whether the presence of safety guardrails hinders the quality of the generations. Secondly, we assess whether attacking a specific component of the hate speech results in a more effective argumentative strategy to fight online hate. By conducting an extensive human and automatic evaluation, we show how the presence of safety guardrails can be detrimental also to a task that inherently aims at fostering positive social interactions. Moreover, our results show that attacking a specific component of the hate speech, and in particular its implicit negative stereotype and its hateful parts, leads to higher-quality generations.", }
The potential effectiveness of counterspeech as a hate speech mitigation strategy is attracting increasing interest in the NLG research community, particularly towards the task of automatically producing it. However, automatically generated responses often lack the argumentative richness which characterises expert-produced counterspeech. In this work, we focus on two aspects of counterspeech generation to produce more cogent responses. First, by investigating the tension between helpfulness and harmlessness of LLMs, we test whether the presence of safety guardrails hinders the quality of the generations. Secondly, we assess whether attacking a specific component of the hate speech results in a more effective argumentative strategy to fight online hate. By conducting an extensive human and automatic evaluation, we show how the presence of safety guardrails can be detrimental also to a task that inherently aims at fostering positive social interactions. Moreover, our results show that attacking a specific component of the hate speech, and in particular its implicit negative stereotype and its hateful parts, leads to higher-quality generations.
[ "Bonaldi, Helena", "Damo, Greta", "Ocampo, Nicol{\\'a}s Benjam{\\'\\i}n", "Cabrio, Elena", "Villata, Serena", "Guerini, Marco" ]
Is Safer Better? The Impact of Guardrails on the Argumentative Strength of LLMs in Hate Speech Countering
emnlp-main.201
Poster
2410.03466
[ "https://github.com/LanD-FBK/wsf_argumentation_structure" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
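To make the flattened listing easier to follow, here is the first record (emnlp-main.201) re-assembled against the schema as a Python dict. Values are taken from the row above; long string fields are elided and accented author names are shown resolved for readability.

```python
# First record (emnlp-main.201) mapped onto the schema; long fields elided with "...".
record = {
    "bibtex_url": "https://aclanthology.org/2024.emnlp-main.201.bib",
    "proceedings": "https://aclanthology.org/2024.emnlp-main.201/",
    "bibtext": "@inproceedings{bonaldi-etal-2024-safer, ...}",
    "abstract": "The potential effectiveness of counterspeech ...",
    "authors": ["Bonaldi, Helena", "Damo, Greta", "Ocampo, Nicolás Benjamín",
                "Cabrio, Elena", "Villata, Serena", "Guerini, Marco"],
    "title": "Is Safer Better? The Impact of Guardrails on the Argumentative "
             "Strength of LLMs in Hate Speech Countering",
    "id": "emnlp-main.201",
    "type": "Poster",
    "arxiv_id": "2410.03466",
    "GitHub": ["https://github.com/LanD-FBK/wsf_argumentation_structure"],
    "paper_page": "",  # empty: no paper page linked
    "n_linked_authors": -1,
    "upvotes": -1,
    "num_comments": -1,
    "n_authors": -1,
    "Models": [], "Datasets": [], "Spaces": [],
    "old_Models": [], "old_Datasets": [], "old_Spaces": [],
    "paper_page_exists_pre_conf": 0,
}
```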
https://aclanthology.org/2024.emnlp-main.202.bib
https://aclanthology.org/2024.emnlp-main.202/
@inproceedings{oh-schuler-2024-leading, title = "Leading Whitespaces of Language Models{'} Subword Vocabulary Pose a Confound for Calculating Word Probabilities", author = "Oh, Byung-Doh and Schuler, William", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.202", pages = "3464--3472", }
No abstract found
[ "Oh, Byung-Doh", "Schuler, William" ]
Leading Whitespaces of Language Models' Subword Vocabulary Pose a Confound for Calculating Word Probabilities
emnlp-main.202
Poster
2406.10851
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.203.bib
https://aclanthology.org/2024.emnlp-main.203/
@inproceedings{tan-etal-2024-llm4decompile, title = "{LLM}4{D}ecompile: Decompiling Binary Code with Large Language Models", author = "Tan, Hanzhuo and Luo, Qi and Li, Jing and Zhang, Yuqun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.203", pages = "3473--3487", abstract = "Decompilation aims to convert binary code to high-level source code, but traditional tools like Ghidra often produce results that are difficult to read and execute. Motivated by the advancements in Large Language Models (LLMs), we propose LLM4Decompile, the first and largest open-source LLM series (1.3B to 33B) trained to decompile binary code. We optimize the LLM training process and introduce the LLM4Decompile-End models to decompile binary directly. The resulting models significantly outperform GPT-4o and Ghidra on the HumanEval and ExeBench benchmarks by over 100{\%} in terms of re-executability rate. Additionally, we improve the standard refinement approach to fine-tune the LLM4Decompile-Ref models, enabling them to effectively refine the decompiled code from Ghidra and achieve a further 16.2{\%} improvement over the LLM4Decompile-End. LLM4Decompile demonstrates the potential of LLMs to revolutionize binary code decompilation, delivering remarkable improvements in readability and executability while complementing conventional tools for optimal results.", }
Decompilation aims to convert binary code to high-level source code, but traditional tools like Ghidra often produce results that are difficult to read and execute. Motivated by the advancements in Large Language Models (LLMs), we propose LLM4Decompile, the first and largest open-source LLM series (1.3B to 33B) trained to decompile binary code. We optimize the LLM training process and introduce the LLM4Decompile-End models to decompile binary directly. The resulting models significantly outperform GPT-4o and Ghidra on the HumanEval and ExeBench benchmarks by over 100% in terms of re-executability rate. Additionally, we improve the standard refinement approach to fine-tune the LLM4Decompile-Ref models, enabling them to effectively refine the decompiled code from Ghidra and achieve a further 16.2% improvement over the LLM4Decompile-End. LLM4Decompile demonstrates the potential of LLMs to revolutionize binary code decompilation, delivering remarkable improvements in readability and executability while complementing conventional tools for optimal results.
[ "Tan, Hanzhuo", "Luo, Qi", "Li, Jing", "Zhang, Yuqun" ]
LLM4Decompile: Decompiling Binary Code with Large Language Models
emnlp-main.203
Poster
2403.05286
[ "https://github.com/albertan017/LLM4Decompile" ]
https://huggingface.co/papers/2403.05286
0
0
0
4
[ "arise-sustech/llm4decompile-6.7b-uo", "arise-sustech/llm4decompile-1.3b", "arise-sustech/llm4decompile-33b", "arise-sustech/llm4decompile-6.7b", "arise-sustech/llm4decompile-6.7b-nsp", "RichardErkhov/arise-sustech_-_llm4decompile-1.3b-gguf" ]
[]
[]
[ "arise-sustech/llm4decompile-6.7b-uo", "arise-sustech/llm4decompile-1.3b", "arise-sustech/llm4decompile-33b", "arise-sustech/llm4decompile-6.7b", "arise-sustech/llm4decompile-6.7b-nsp", "RichardErkhov/arise-sustech_-_llm4decompile-1.3b-gguf" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.204.bib
https://aclanthology.org/2024.emnlp-main.204/
@inproceedings{gu-etal-2024-bottom, title = "From Bottom to Top: Extending the Potential of Parameter Efficient Fine-Tuning", author = "Gu, Jihao and Wang, Zelin and Zhang, Yibo and Zhang, Ziji and Gong, Ping", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.204", pages = "3488--3500", abstract = "With the proliferation of large language models, Parameter Efficient Fine-Tuning (PEFT) method, which freeze pre-trained parameters and only fine-tune a few task-specific parameters, are playing an increasingly important role. However, previous work primarily applied uniform operations across all layers of the model, overlooking the fact that different layers in a transformer store different information. In the process of exploration, We find that there is a significant differences in fine-tuning strategies between different layers, and fine-tuning only a subset of layers can even achieve comparable performance. Based on this, we propose the Hybrid LoRA-Prefix Tuning(HLPT) method, which uses enhanced LoRA and Prefix-tuning methods with learnable adaptive mechanism separately for the bottom and top layers, and the Half Hybrid LoRA-Prefix Tuning($H^2$LPT) method, which goes a step further, reducing the parameter count to nearly half by omitting fine-tuning in the middle layers. Extensive experiments with large language models on various downstream tasks provide strong evidence for the potential of PEFT focusing on different layers{'} interactions and the effectiveness of our methods. Furthermore, we validate the robustness of these methods and their advantages in speeding up training convergence, reducing inference time requirements.", }
With the proliferation of large language models, Parameter Efficient Fine-Tuning (PEFT) method, which freeze pre-trained parameters and only fine-tune a few task-specific parameters, are playing an increasingly important role. However, previous work primarily applied uniform operations across all layers of the model, overlooking the fact that different layers in a transformer store different information. In the process of exploration, We find that there is a significant differences in fine-tuning strategies between different layers, and fine-tuning only a subset of layers can even achieve comparable performance. Based on this, we propose the Hybrid LoRA-Prefix Tuning(HLPT) method, which uses enhanced LoRA and Prefix-tuning methods with learnable adaptive mechanism separately for the bottom and top layers, and the Half Hybrid LoRA-Prefix Tuning($H^2$LPT) method, which goes a step further, reducing the parameter count to nearly half by omitting fine-tuning in the middle layers. Extensive experiments with large language models on various downstream tasks provide strong evidence for the potential of PEFT focusing on different layers' interactions and the effectiveness of our methods. Furthermore, we validate the robustness of these methods and their advantages in speeding up training convergence, reducing inference time requirements.
[ "Gu, Jihao", "Wang, Zelin", "Zhang, Yibo", "Zhang, Ziji", "Gong, Ping" ]
From Bottom to Top: Extending the Potential of Parameter Efficient Fine-Tuning
emnlp-main.204
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.205.bib
https://aclanthology.org/2024.emnlp-main.205/
@inproceedings{wu-etal-2024-cotkr, title = "{C}o{TKR}: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering", author = "Wu, Yike and Huang, Yi and Hu, Nan and Hua, Yuncheng and Qi, Guilin and Chen, Jiaoyan and Pan, Jeff Z.", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.205", pages = "3501--3520", abstract = "Recent studies have explored the use of Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA). They typically require rewriting retrieved subgraphs into natural language formats comprehensible to LLMs. However, when tackling complex questions, the knowledge rewritten by existing methods may include irrelevant information, omit crucial details, or fail to align with the question{'}s semantics. To address them, we propose a novel rewriting method CoTKR, Chain- of-Thought Enhanced Knowledge Rewriting, for generating reasoning traces and corresponding knowledge in an interleaved manner, thereby mitigating the limitations of single-step knowledge rewriting. Additionally, to bridge the preference gap between the knowledge rewriter and the question answering (QA) model, we propose a training strategy PAQAF, Preference Alignment from Question Answering Feedback, for leveraging feedback from the QA model to further optimize the knowledge rewriter. We conduct experiments using various LLMs across several KGQA benchmarks. Experimental results demonstrate that, compared with previous knowledge rewriting methods, CoTKR generates the most beneficial knowledge representation for QA models, which significantly improves the performance of LLMs in KGQA.", }
Recent studies have explored the use of Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA). They typically require rewriting retrieved subgraphs into natural language formats comprehensible to LLMs. However, when tackling complex questions, the knowledge rewritten by existing methods may include irrelevant information, omit crucial details, or fail to align with the question's semantics. To address them, we propose a novel rewriting method CoTKR, Chain-of-Thought Enhanced Knowledge Rewriting, for generating reasoning traces and corresponding knowledge in an interleaved manner, thereby mitigating the limitations of single-step knowledge rewriting. Additionally, to bridge the preference gap between the knowledge rewriter and the question answering (QA) model, we propose a training strategy PAQAF, Preference Alignment from Question Answering Feedback, for leveraging feedback from the QA model to further optimize the knowledge rewriter. We conduct experiments using various LLMs across several KGQA benchmarks. Experimental results demonstrate that, compared with previous knowledge rewriting methods, CoTKR generates the most beneficial knowledge representation for QA models, which significantly improves the performance of LLMs in KGQA.
[ "Wu, Yike", "Huang, Yi", "Hu, Nan", "Hua, Yuncheng", "Qi, Guilin", "Chen, Jiaoyan", "Pan, Jeff Z." ]
CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering
emnlp-main.205
Poster
2409.19753
[ "https://github.com/wuyike2000/CoTKR" ]
https://huggingface.co/papers/2409.19753
0
0
0
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.206.bib
https://aclanthology.org/2024.emnlp-main.206/
@inproceedings{fei-etal-2024-mtls, title = "{MTLS}: Making Texts into Linguistic Symbols", author = "Fei, Wenlong and Wang, Xiaohua and Hu, Min and Zhang, Qingyu and Li, Hongbo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.206", pages = "3521--3535", abstract = "In linguistics, all languages can be considered as symbolic systems, with each language relying on symbolic processes to associate specific symbols with meanings. In the same language, there is a fixed correspondence between linguistic symbol and meaning. In different languages, universal meanings follow varying rules of symbolization in one-to-one correspondence with symbols. Most work overlooks the properties of languages as symbol systems. In this paper, we shift the focus to the symbolic properties and introduce MTLS: a pre-training method to improve the multilingual capability of models by Making Texts into Linguistic Symbols. Initially, we replace the vocabulary in pre-trained language models by mapping relations between linguistic symbols and semantics. Subsequently, universal semantics within the symbolic system serve as bridges, linking symbols from different languages to the embedding space of the model, thereby enabling the model to process linguistic symbols. To evaluate the effectiveness of MTLS, we conducted experiments on multilingual tasks using BERT and RoBERTa, respectively, as the backbone. The results indicate that despite having just over 12,000 pieces of English data in pre-training, the improvement that MTLS brings to multilingual capabilities is remarkably significant.", }
In linguistics, all languages can be considered as symbolic systems, with each language relying on symbolic processes to associate specific symbols with meanings. In the same language, there is a fixed correspondence between linguistic symbol and meaning. In different languages, universal meanings follow varying rules of symbolization in one-to-one correspondence with symbols. Most work overlooks the properties of languages as symbol systems. In this paper, we shift the focus to the symbolic properties and introduce MTLS: a pre-training method to improve the multilingual capability of models by Making Texts into Linguistic Symbols. Initially, we replace the vocabulary in pre-trained language models by mapping relations between linguistic symbols and semantics. Subsequently, universal semantics within the symbolic system serve as bridges, linking symbols from different languages to the embedding space of the model, thereby enabling the model to process linguistic symbols. To evaluate the effectiveness of MTLS, we conducted experiments on multilingual tasks using BERT and RoBERTa, respectively, as the backbone. The results indicate that despite having just over 12,000 pieces of English data in pre-training, the improvement that MTLS brings to multilingual capabilities is remarkably significant.
[ "Fei, Wenlong", "Wang, Xiaohua", "Hu, Min", "Zhang, Qingyu", "Li, Hongbo" ]
MTLS: Making Texts into Linguistic Symbols
emnlp-main.206
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.207.bib
https://aclanthology.org/2024.emnlp-main.207/
@inproceedings{chen-etal-2024-d2r, title = "{D}2{R}: Dual-Branch Dynamic Routing Network for Multimodal Sentiment Detection", author = "Chen, Yifan and Li, Kuntao and Mai, Weixing and Wu, Qiaofeng and Xue, Yun and Li, Fenghuan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.207", pages = "3536--3547", }
No abstract found
[ "Chen, Yifan", "Li, Kuntao", "Mai, Weixing", "Wu, Qiaofeng", "Xue, Yun", "Li, Fenghuan" ]
D2R: Dual-Branch Dynamic Routing Network for Multimodal Sentiment Detection
emnlp-main.207
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.208.bib
https://aclanthology.org/2024.emnlp-main.208/
@inproceedings{tian-etal-2024-generic, title = "A Generic Method for Fine-grained Category Discovery in Natural Language Texts", author = "Tian, Chang and Blaschko, Matthew B. and Yin, Wenpeng and Xing, Mingzhe and Yue, Yinliang and Moens, Marie-Francine", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.208", pages = "3548--3566", abstract = "Fine-grained category discovery using only coarse-grained supervision is a cost-effective yet challenging task. Previous training methods focus on aligning query samples with positive samples and distancing them from negatives. They often neglect intra-category and inter-category semantic similarities of fine-grained categories when navigating sample distributions in the embedding space. Furthermore, some evaluation techniques that rely on pre-collected test samples are inadequate for real-time applications. To address these shortcomings, we introduce a method that successfully detects fine-grained clusters of semantically similar texts guided by a novel objective function. The method uses semantic similarities in a logarithmic space to guide sample distributions in the Euclidean space and to form distinct clusters that represent fine-grained categories. We also propose a centroid inference mechanism to support real-time applications. The efficacy of the method is both theoretically justified and empirically confirmed on three benchmark tasks. The proposed objective function is integrated in multiple contrastive learning based neural models. Its results surpass existing state-of-the-art approaches in terms of Accuracy, Adjusted Rand Index and Normalized Mutual Information of the detected fine-grained categories. Code and data are publicly available at https://github.com/changtianluckyforever/F-grained-STAR.", }
Fine-grained category discovery using only coarse-grained supervision is a cost-effective yet challenging task. Previous training methods focus on aligning query samples with positive samples and distancing them from negatives. They often neglect intra-category and inter-category semantic similarities of fine-grained categories when navigating sample distributions in the embedding space. Furthermore, some evaluation techniques that rely on pre-collected test samples are inadequate for real-time applications. To address these shortcomings, we introduce a method that successfully detects fine-grained clusters of semantically similar texts guided by a novel objective function. The method uses semantic similarities in a logarithmic space to guide sample distributions in the Euclidean space and to form distinct clusters that represent fine-grained categories. We also propose a centroid inference mechanism to support real-time applications. The efficacy of the method is both theoretically justified and empirically confirmed on three benchmark tasks. The proposed objective function is integrated in multiple contrastive learning based neural models. Its results surpass existing state-of-the-art approaches in terms of Accuracy, Adjusted Rand Index and Normalized Mutual Information of the detected fine-grained categories. Code and data are publicly available at https://github.com/changtianluckyforever/F-grained-STAR.
[ "Tian, Chang", "Blaschko, Matthew B.", "Yin, Wenpeng", "Xing, Mingzhe", "Yue, Yinliang", "Moens, Marie-Francine" ]
A Generic Method for Fine-grained Category Discovery in Natural Language Texts
emnlp-main.208
Poster
2406.13103
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.209.bib
https://aclanthology.org/2024.emnlp-main.209/
@inproceedings{cao-etal-2024-toxicity, title = "Toxicity Detection is {NOT} all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators through a User-Centric Method", author = "Cao, Yang Trista and Domingo, Lovely-Frances and Gilbert, Sarah and Mazurek, Michelle L. and Shilton, Katie and Daum{\'e} Iii, Hal", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.209", pages = "3567--3587", abstract = "Extensive efforts in automated approaches for content moderation have been focused on developing models to identify toxic, offensive, and hateful content with the aim of lightening the load for moderators. Yet, it remains uncertain whether improvements on those tasks have truly addressed moderators{'} needs in accomplishing their work. In this paper, we surface gaps between past research efforts that have aimed to provide automation for aspects of content moderation and the needs of volunteer content moderators, regarding identifying violations of various moderation rules. To do so, we conduct a model review on Hugging Face to reveal the availability of models to cover various moderation rules and guidelines from three exemplar forums. We further put state-of-the-art LLMs to the test, evaluating how well these models perform in flagging violations of platform rules from one particular forum. Finally, we conduct a user survey study with volunteer moderators to gain insight into their perspectives on useful moderation models. Overall, we observe a non trivial gap, as missing developed models and LLMs exhibit moderate to low performance on a significant portion of the rules. Moderators{'} reports provide guides for future work on developing moderation assistant models.", }
Extensive efforts in automated approaches for content moderation have been focused on developing models to identify toxic, offensive, and hateful content with the aim of lightening the load for moderators. Yet, it remains uncertain whether improvements on those tasks have truly addressed moderators' needs in accomplishing their work. In this paper, we surface gaps between past research efforts that have aimed to provide automation for aspects of content moderation and the needs of volunteer content moderators, regarding identifying violations of various moderation rules. To do so, we conduct a model review on Hugging Face to reveal the availability of models to cover various moderation rules and guidelines from three exemplar forums. We further put state-of-the-art LLMs to the test, evaluating how well these models perform in flagging violations of platform rules from one particular forum. Finally, we conduct a user survey study with volunteer moderators to gain insight into their perspectives on useful moderation models. Overall, we observe a non trivial gap, as missing developed models and LLMs exhibit moderate to low performance on a significant portion of the rules. Moderators' reports provide guides for future work on developing moderation assistant models.
[ "Cao, Yang Trista", "Domingo, Lovely-Frances", "Gilbert, Sarah", "Mazurek, Michelle L.", "Shilton, Katie", "Daum{\\'e} Iii, Hal" ]
Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators through a User-Centric Method
emnlp-main.209
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.210.bib
https://aclanthology.org/2024.emnlp-main.210/
@inproceedings{wang-etal-2024-user, title = "A User-Centric Multi-Intent Benchmark for Evaluating Large Language Models", author = "Wang, Jiayin and Mo, Fengran and Ma, Weizhi and Sun, Peijie and Zhang, Min and Nie, Jian-Yun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.210", pages = "3588--3612", abstract = "Large language models (LLMs) are essential tools that users employ across various scenarios, so evaluating their performance and guiding users in selecting the suitable service is important. Although many benchmarks exist, they mainly focus on specific predefined model abilities, such as world knowledge, reasoning, etc. Based on these ability scores, it is hard for users to determine which LLM best suits their particular needs. To address these issues, we propose to evaluate LLMs from a user-centric perspective and design this benchmark to measure their efficacy in satisfying user needs under distinct intents. Firstly, we collect 1,846 real-world use cases from a user study with 712 participants from 23 countries. This first-hand data helps us understand actual user intents and needs in LLM interactions, forming the User Reported Scenarios (URS) dataset, which is categorized with six types of user intents. Secondly, based on this authentic dataset, we benchmark 10 LLM services with GPT-4-as-Judge. Thirdly, we show that benchmark scores align well with human preference in both real-world experience and pair-wise annotations, achieving Pearson correlations of 0.95 and 0.94, respectively. This alignment confirms that the URS dataset and our evaluation method establish an effective user-centric benchmark. The dataset, code, and process data are publicly available at https://github.com/Alice1998/URS.", }
Large language models (LLMs) are essential tools that users employ across various scenarios, so evaluating their performance and guiding users in selecting the suitable service is important. Although many benchmarks exist, they mainly focus on specific predefined model abilities, such as world knowledge, reasoning, etc. Based on these ability scores, it is hard for users to determine which LLM best suits their particular needs. To address these issues, we propose to evaluate LLMs from a user-centric perspective and design this benchmark to measure their efficacy in satisfying user needs under distinct intents. Firstly, we collect 1,846 real-world use cases from a user study with 712 participants from 23 countries. This first-hand data helps us understand actual user intents and needs in LLM interactions, forming the User Reported Scenarios (URS) dataset, which is categorized with six types of user intents. Secondly, based on this authentic dataset, we benchmark 10 LLM services with GPT-4-as-Judge. Thirdly, we show that benchmark scores align well with human preference in both real-world experience and pair-wise annotations, achieving Pearson correlations of 0.95 and 0.94, respectively. This alignment confirms that the URS dataset and our evaluation method establish an effective user-centric benchmark. The dataset, code, and process data are publicly available at https://github.com/Alice1998/URS.
[ "Wang, Jiayin", "Mo, Fengran", "Ma, Weizhi", "Sun, Peijie", "Zhang, Min", "Nie, Jian-Yun" ]
A User-Centric Multi-Intent Benchmark for Evaluating Large Language Models
emnlp-main.210
Oral
2404.13940
[ "https://github.com/alice1998/urs" ]
https://huggingface.co/papers/2404.13940
0
0
0
6
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.211.bib
https://aclanthology.org/2024.emnlp-main.211/
@inproceedings{yang-etal-2024-decompose, title = "Decompose and Compare Consistency: Measuring {VLM}s{'} Answer Reliability via Task-Decomposition Consistency Comparison", author = "Yang, Qian and Yan, Weixiang and Agrawal, Aishwarya", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.211", pages = "3613--3627", abstract = "Despite tremendous advancements, current state-of-the-art Vision-Language Models (VLMs) are still far from perfect. They tend to hallucinate and may generate biased responses. In such circumstances, having a way to assess the reliability of a given response generated by a VLM is quite useful. Existing methods, such as estimating uncertainty using answer likelihoods or prompt-based confidence generation, often suffer from overconfidence. Other methods use self-consistency comparison but are affected by confirmation biases. To alleviate these, we propose Decompose and Compare Consistency (DeCC) for reliability measurement. By comparing the consistency between the direct answer generated using the VLM{'}s internal reasoning process, and the indirect answers obtained by decomposing the question into sub-questions and reasoning over the sub-answers produced by the VLM, DeCC measures the reliability of VLM{'}s direct answer. Experiments across six vision-language tasks with three VLMs show DeCC{'}s reliability estimation achieves better correlation with task accuracy compared to the existing methods.", }
Despite tremendous advancements, current state-of-the-art Vision-Language Models (VLMs) are still far from perfect. They tend to hallucinate and may generate biased responses. In such circumstances, having a way to assess the reliability of a given response generated by a VLM is quite useful. Existing methods, such as estimating uncertainty using answer likelihoods or prompt-based confidence generation, often suffer from overconfidence. Other methods use self-consistency comparison but are affected by confirmation biases. To alleviate these, we propose Decompose and Compare Consistency (DeCC) for reliability measurement. By comparing the consistency between the direct answer generated using the VLM's internal reasoning process, and the indirect answers obtained by decomposing the question into sub-questions and reasoning over the sub-answers produced by the VLM, DeCC measures the reliability of VLM's direct answer. Experiments across six vision-language tasks with three VLMs show DeCC's reliability estimation achieves better correlation with task accuracy compared to the existing methods.
[ "Yang, Qian", "Yan, Weixiang", "Agrawal, Aishwarya" ]
Decompose and Compare Consistency: Measuring VLMs' Answer Reliability via Task-Decomposition Consistency Comparison
emnlp-main.211
Poster
2407.07840
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.212.bib
https://aclanthology.org/2024.emnlp-main.212/
@inproceedings{cao-2024-learn, title = "Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism", author = "Cao, Lang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.212", pages = "3628--3646", abstract = "Large language models (LLMs) have demonstrated impressive language understanding and generation capabilities, enabling them to answer a wide range of questions across various domains. However, these models are not flawless and often produce responses that contain errors or misinformation. These inaccuracies, commonly referred to as hallucinations, render LLMs unreliable and even unusable in many scenarios. In this paper, our focus is on mitigating the issue of hallucination in LLMs, particularly in the context of question-answering. Instead of attempting to answer all questions, we explore a refusal mechanism that instructs LLMs to refuse to answer challenging questions in order to avoid errors. We then propose a simple yet effective solution called Learn to Refuse (L2R), which incorporates the refusal mechanism to enable LLMs to recognize and refuse to answer questions that they find difficult to address. To achieve this, we utilize a structured knowledge base to represent all the LLM{'}s understanding of the world, enabling it to provide traceable gold knowledge. This knowledge base is separate from the LLM and initially empty. It can be filled with validated knowledge and progressively expanded. When an LLM encounters questions outside its domain, the system recognizes its knowledge scope and determines whether it can answer the question independently. Additionally, we introduce a method for automatically and efficiently expanding the knowledge base of LLMs. Through qualitative and quantitative analysis, we demonstrate that our approach enhances the controllability and reliability of LLMs.", }
Large language models (LLMs) have demonstrated impressive language understanding and generation capabilities, enabling them to answer a wide range of questions across various domains. However, these models are not flawless and often produce responses that contain errors or misinformation. These inaccuracies, commonly referred to as hallucinations, render LLMs unreliable and even unusable in many scenarios. In this paper, our focus is on mitigating the issue of hallucination in LLMs, particularly in the context of question-answering. Instead of attempting to answer all questions, we explore a refusal mechanism that instructs LLMs to refuse to answer challenging questions in order to avoid errors. We then propose a simple yet effective solution called Learn to Refuse (L2R), which incorporates the refusal mechanism to enable LLMs to recognize and refuse to answer questions that they find difficult to address. To achieve this, we utilize a structured knowledge base to represent all the LLM's understanding of the world, enabling it to provide traceable gold knowledge. This knowledge base is separate from the LLM and initially empty. It can be filled with validated knowledge and progressively expanded. When an LLM encounters questions outside its domain, the system recognizes its knowledge scope and determines whether it can answer the question independently. Additionally, we introduce a method for automatically and efficiently expanding the knowledge base of LLMs. Through qualitative and quantitative analysis, we demonstrate that our approach enhances the controllability and reliability of LLMs.
[ "Cao, Lang" ]
Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism
emnlp-main.212
Poster
2311.01041
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.213.bib
https://aclanthology.org/2024.emnlp-main.213/
@inproceedings{zou-etal-2024-vgbench, title = "{VGB}ench: Evaluating Large Language Models on Vector Graphics Understanding and Generation", author = "Zou, Bocheng and Cai, Mu and Zhang, Jianrui and Lee, Yong Jae", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.213", pages = "3647--3659", abstract = "In the realm of vision models, the primary mode of representation is using pixels to rasterize the visual world. Yet this is not always the best or unique way to represent visual content, especially for designers and artists who depict the world using geometry primitives such as polygons. Vector graphics (VG), on the other hand, offer a textual representation of visual content, which can be more concise and powerful for content like cartoons, sketches and scientific figures. Recent studies have shown promising results on processing vector graphics with capable Large Language Models (LLMs). However, such works focus solely on qualitative results, understanding, or a specific type of vector graphics. We propose VGBench, a comprehensive benchmark for LLMs on handling vector graphics through diverse aspects, including (a) both visual understanding and generation, (b) evaluation of various vector graphics formats, (c) diverse question types, (d) wide range of prompting techniques, (e) under multiple LLMs and (f) comparison with VLMs on rasterized representations. Evaluating on our collected 4279 understanding and 5845 generation samples, we find that LLMs show strong capability on both aspects while exhibiting less desirable performance on low-level formats (SVG). Both data and evaluation pipeline will be open-sourced.", }
In the realm of vision models, the primary mode of representation is using pixels to rasterize the visual world. Yet this is not always the best or unique way to represent visual content, especially for designers and artists who depict the world using geometry primitives such as polygons. Vector graphics (VG), on the other hand, offer a textual representation of visual content, which can be more concise and powerful for content like cartoons, sketches and scientific figures. Recent studies have shown promising results on processing vector graphics with capable Large Language Models (LLMs). However, such works focus solely on qualitative results, understanding, or a specific type of vector graphics. We propose VGBench, a comprehensive benchmark for LLMs on handling vector graphics through diverse aspects, including (a) both visual understanding and generation, (b) evaluation of various vector graphics formats, (c) diverse question types, (d) wide range of prompting techniques, (e) under multiple LLMs and (f) comparison with VLMs on rasterized representations. Evaluating on our collected 4279 understanding and 5845 generation samples, we find that LLMs show strong capability on both aspects while exhibiting less desirable performance on low-level formats (SVG). Both data and evaluation pipeline will be open-sourced.
[ "Zou, Bocheng", "Cai, Mu", "Zhang, Jianrui", "Lee, Yong Jae" ]
VGBench: Evaluating Large Language Models on Vector Graphics Understanding and Generation
emnlp-main.213
Poster
2407.10972
[ "https://github.com/vgbench/VGBench" ]
https://huggingface.co/papers/2407.10972
2
1
0
4
[]
[ "vgbench/VGQA" ]
[]
[]
[ "vgbench/VGQA" ]
[]
1
https://aclanthology.org/2024.emnlp-main.214.bib
https://aclanthology.org/2024.emnlp-main.214/
@inproceedings{qian-etal-2024-large, title = "What do Large Language Models Need for Machine Translation Evaluation?", author = "Qian, Shenbin and Sindhujan, Archchana and Kabra, Minnie and Kanojia, Diptesh and Orasan, Constantin and Ranasinghe, Tharindu and Blain, Fred", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.214", pages = "3660--3674", abstract = "Leveraging large language models (LLMs) for various natural language processing tasks has led to superlative claims about their performance. For the evaluation of machine translation (MT), existing research shows that LLMs are able to achieve results comparable to fine-tuned multilingual pre-trained language models. In this paper, we explore what translation information, such as the source, reference, translation errors and annotation guidelines, is needed for LLMs to evaluate MT quality. In addition, we investigate prompting techniques such as zero-shot, Chain of Thought (CoT) and few-shot prompting for eight language pairs covering high-, medium- and low-resource languages, leveraging varying LLM variants. Our findings indicate the importance of reference translations for an LLM-based evaluation. While larger models do not necessarily fare better, they tend to benefit more from CoT prompting, than smaller models. We also observe that LLMs do not always provide a numerical score when generating evaluations, which poses a question on their reliability for the task. Our work presents a comprehensive analysis for resource-constrained and training-less LLM-based evaluation of machine translation. We release the accrued prompt templates, code and data publicly for reproducibility.", }
Leveraging large language models (LLMs) for various natural language processing tasks has led to superlative claims about their performance. For the evaluation of machine translation (MT), existing research shows that LLMs are able to achieve results comparable to fine-tuned multilingual pre-trained language models. In this paper, we explore what translation information, such as the source, reference, translation errors and annotation guidelines, is needed for LLMs to evaluate MT quality. In addition, we investigate prompting techniques such as zero-shot, Chain of Thought (CoT) and few-shot prompting for eight language pairs covering high-, medium- and low-resource languages, leveraging varying LLM variants. Our findings indicate the importance of reference translations for an LLM-based evaluation. While larger models do not necessarily fare better, they tend to benefit more from CoT prompting, than smaller models. We also observe that LLMs do not always provide a numerical score when generating evaluations, which poses a question on their reliability for the task. Our work presents a comprehensive analysis for resource-constrained and training-less LLM-based evaluation of machine translation. We release the accrued prompt templates, code and data publicly for reproducibility.
[ "Qian, Shenbin", "Sindhujan, Archchana", "Kabra, Minnie", "Kanojia, Diptesh", "Orasan, Constantin", "Ranasinghe, Tharindu", "Blain, Fred" ]
What do Large Language Models Need for Machine Translation Evaluation?
emnlp-main.214
Poster
2410.03278
[ "https://github.com/surrey-nlp/LLM4MT_eval" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.215.bib
https://aclanthology.org/2024.emnlp-main.215/
@inproceedings{palo-etal-2024-performance, title = "Performance-Guided {LLM} Knowledge Distillation for Efficient Text Classification at Scale", author = "Palo, Flavio Di and Singhi, Prateek and Fadlallah, Bilal H", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.215", pages = "3675--3687", abstract = "Large Language Models (LLMs) face significant challenges at inference time due to their high computational demands. To address this, we present Performance-Guided Knowledge Distillation (PGKD), a cost-effective and high-throughput solution for production text classification applications. PGKD utilizes teacher-student Knowledge Distillation to distill the knowledge of LLMs into smaller, task-specific models. PGKD establishes an active learning routine between the student model and the LLM; the LLM continuously generates new training data leveraging hard-negative mining, student model validation performance, and early-stopping protocols to inform the data generation. By employing a cyclical, performance-aware approach tailored for highly multi-class, sparsely annotated datasets prevalent in industrial text classification, PGKD effectively addresses training challenges and outperforms traditional BERT-base models and other knowledge distillation methods on several multi-class classification datasets. Additionally, cost and latency benchmarking reveals that models fine-tuned with PGKD are up to 130X faster and 25X less expensive than LLMs for inference on the same classification task. While PGKD is showcased for text classification tasks, its versatile framework can be extended to any LLM distillation task, including language generation, making it a powerful tool for optimizing performance across a wide range of AI applications.", }
Large Language Models (LLMs) face significant challenges at inference time due to their high computational demands. To address this, we present Performance-Guided Knowledge Distillation (PGKD), a cost-effective and high-throughput solution for production text classification applications. PGKD utilizes teacher-student Knowledge Distillation to distill the knowledge of LLMs into smaller, task-specific models. PGKD establishes an active learning routine between the student model and the LLM; the LLM continuously generates new training data leveraging hard-negative mining, student model validation performance, and early-stopping protocols to inform the data generation. By employing a cyclical, performance-aware approach tailored for highly multi-class, sparsely annotated datasets prevalent in industrial text classification, PGKD effectively addresses training challenges and outperforms traditional BERT-base models and other knowledge distillation methods on several multi-class classification datasets. Additionally, cost and latency benchmarking reveals that models fine-tuned with PGKD are up to 130X faster and 25X less expensive than LLMs for inference on the same classification task. While PGKD is showcased for text classification tasks, its versatile framework can be extended to any LLM distillation task, including language generation, making it a powerful tool for optimizing performance across a wide range of AI applications.
[ "Palo, Flavio Di", "Singhi, Prateek", "Fadlallah, Bilal H" ]
Performance-Guided LLM Knowledge Distillation for Efficient Text Classification at Scale
emnlp-main.215
Poster
2411.05045
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.216.bib
https://aclanthology.org/2024.emnlp-main.216/
@inproceedings{gemechu-reed-2024-external, title = "External Knowledge-Driven Argument Mining: Leveraging Attention-Enhanced Multi-Network Models", author = "Gemechu, Debela and Reed, Chris", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.216", pages = "3688--3709", abstract = "Argument mining (AM) involves the identification of argument relations (AR) between Argumentative Discourse Units (ADUs). The essence of ARs among ADUs is context-dependent and lies in maintaining a coherent flow of ideas, often centered around the relations between discussed entities, topics, themes or concepts. However, these relations are not always explicitly stated; rather, inferred from implicit chains of reasoning connecting the concepts addressed in the ADUs. While humans can infer such background knowledge, machines face challenges when the contextual cues are not explicitly provided. This paper leverages external resources, including WordNet, ConceptNet, and Wikipedia to identify semantic paths (knowledge paths) connecting the concepts discussed in the ADUs to obtain the implicit chains of reasoning. To effectively leverage these paths for AR prediction, we propose attention-based Multi-Network architectures. Various architecture are evaluated on the external resources, and the Wikipedia based configuration attains F-scores of 0.85, 0.84, 0.70, and 0.87, respectively, on four diverse datasets, showing strong performance over the baselines.", }
Argument mining (AM) involves the identification of argument relations (AR) between Argumentative Discourse Units (ADUs). The essence of ARs among ADUs is context-dependent and lies in maintaining a coherent flow of ideas, often centered around the relations between discussed entities, topics, themes or concepts. However, these relations are not always explicitly stated; rather, inferred from implicit chains of reasoning connecting the concepts addressed in the ADUs. While humans can infer such background knowledge, machines face challenges when the contextual cues are not explicitly provided. This paper leverages external resources, including WordNet, ConceptNet, and Wikipedia to identify semantic paths (knowledge paths) connecting the concepts discussed in the ADUs to obtain the implicit chains of reasoning. To effectively leverage these paths for AR prediction, we propose attention-based Multi-Network architectures. Various architecture are evaluated on the external resources, and the Wikipedia based configuration attains F-scores of 0.85, 0.84, 0.70, and 0.87, respectively, on four diverse datasets, showing strong performance over the baselines.
[ "Gemechu, Debela", "Reed, Chris" ]
External Knowledge-Driven Argument Mining: Leveraging Attention-Enhanced Multi-Network Models
emnlp-main.216
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.217.bib
https://aclanthology.org/2024.emnlp-main.217/
@inproceedings{musa-etal-2024-c3pa, title = "{C}3{PA}: An Open Dataset of Expert-Annotated and Regulation-Aware Privacy Policies to Enable Scalable Regulatory Compliance Audits", author = "Musa, Maaz Bin and Winston, Steven M. and Allen, Garrison and Schiller, Jacob and Moore, Kevin and Quick, Sean and Melvin, Johnathan and Srinivasan, Padmini and Diamantis, Mihailis E. and Nithyanand, Rishab", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.217", pages = "3710--3722", abstract = "The development of tools and techniques to analyze and extract organizations{'} data habits from privacy policies are critical for scalable regulatory compliance audits. Unfortunately, these tools are becoming increasingly limited in their ability to identify compliance issues and fixes. After all, most were developed using regulation-agnostic datasets of annotated privacy policies obtained from a time before the introduction of landmark privacy regulations such as EU{'}s GDPR and California{'}s CCPA. In this paper, we describe the first open regulation-aware dataset of expert-annotated privacy policies, C3PA (CCPA Privacy Policy Provision Annotations), aimed to address this challenge. C3PA contains over 48K expert-labeled privacy policy text segments associated with responses to CCPA-specific disclosure mandates from 411 unique organizations. We demonstrate that the C3PA dataset is uniquely suited for aiding automated audits of compliance with CCPA-related disclosure mandates.", }
The development of tools and techniques to analyze and extract organizations' data habits from privacy policies are critical for scalable regulatory compliance audits. Unfortunately, these tools are becoming increasingly limited in their ability to identify compliance issues and fixes. After all, most were developed using regulation-agnostic datasets of annotated privacy policies obtained from a time before the introduction of landmark privacy regulations such as EU's GDPR and California's CCPA. In this paper, we describe the first open regulation-aware dataset of expert-annotated privacy policies, C3PA (CCPA Privacy Policy Provision Annotations), aimed to address this challenge. C3PA contains over 48K expert-labeled privacy policy text segments associated with responses to CCPA-specific disclosure mandates from 411 unique organizations. We demonstrate that the C3PA dataset is uniquely suited for aiding automated audits of compliance with CCPA-related disclosure mandates.
[ "Musa, Maaz Bin", "Winston, Steven M.", "Allen, Garrison", "Schiller, Jacob", "Moore, Kevin", "Quick, Sean", "Melvin, Johnathan", "Srinivasan, Padmini", "Diamantis, Mihailis E.", "Nithyan", ", Rishab" ]
C3PA: An Open Dataset of Expert-Annotated and Regulation-Aware Privacy Policies to Enable Scalable Regulatory Compliance Audits
emnlp-main.217
Poster
2410.03925
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.218.bib
https://aclanthology.org/2024.emnlp-main.218/
@inproceedings{wang-etal-2024-m2pt, title = "{M}$^2${PT}: Multimodal Prompt Tuning for Zero-shot Instruction Learning", author = "Wang, Taowen and Liu, Yiyang and Liang, James Chenhao and Zhao, Junhan and Cui, Yiming and Mao, Yuning and Nie, Shaoliang and Liu, Jiahao and Feng, Fuli and Xu, Zenglin and Han, Cheng and Huang, Lifu and Wang, Qifan and Liu, Dongfang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.218", pages = "3723--3740", abstract = "Multimodal Large Language Models (MLLMs) demonstrate remarkable performance across a wide range of domains, with increasing emphasis on enhancing their zero-shot generalization capabilities for unseen tasks across various modalities. Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks. As the scale of MLLMs continues to grow, parameter-efficient finetuning becomes increasingly critical. However, most existing parameter-efficient approaches focus only on single modalities and often overlook the multimodal characteristics during finetuning. In this work, we introduce a novel Multimodal Prompt Tuning (M$^2$PT) approach for efficient instruction tuning of MLLMs. M$^2$PT effectively integrates visual and textual prompts into the vision encoder and language processor respectively during finetuning, facilitating the extraction and alignment of features across modalities. Empirical results on various multimodal evaluation datasets demonstrate the superior performance of our approach compared to several state-of-the-art baselines. A comprehensive set of ablation studies validates the effectiveness of our prompt design and the efficiency of our approach.", }
Multimodal Large Language Models (MLLMs) demonstrate remarkable performance across a wide range of domains, with increasing emphasis on enhancing their zero-shot generalization capabilities for unseen tasks across various modalities. Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks. As the scale of MLLMs continues to grow, parameter-efficient finetuning becomes increasingly critical. However, most existing parameter-efficient approaches focus only on single modalities and often overlook the multimodal characteristics during finetuning. In this work, we introduce a novel Multimodal Prompt Tuning (M$^2$PT) approach for efficient instruction tuning of MLLMs. M$^2$PT effectively integrates visual and textual prompts into the vision encoder and language processor respectively during finetuning, facilitating the extraction and alignment of features across modalities. Empirical results on various multimodal evaluation datasets demonstrate the superior performance of our approach compared to several state-of-the-art baselines. A comprehensive set of ablation studies validates the effectiveness of our prompt design and the efficiency of our approach.
[ "Wang, Taowen", "Liu, Yiyang", "Liang, James Chenhao", "Zhao, Junhan", "Cui, Yiming", "Mao, Yuning", "Nie, Shaoliang", "Liu, Jiahao", "Feng, Fuli", "Xu, Zenglin", "Han, Cheng", "Huang, Lifu", "Wang, Qifan", "Liu, Dongfang" ]
M^2PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning
emnlp-main.218
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.219.bib
https://aclanthology.org/2024.emnlp-main.219/
@inproceedings{peng-etal-2024-text, title = "Text Grafting: Near-Distribution Weak Supervision for Minority Classes in Text Classification", author = "Peng, Letian and Gu, Yi and Dong, Chengyu and Wang, Zihan and Shang, Jingbo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.219", pages = "3741--3752", abstract = "For extremely weak-supervised text classification, pioneer research generates pseudo labels by mining texts similar to the class names from the raw corpus, which may end up with very limited or even no samples for the minority classes. Recent works have started to generate the relevant texts by prompting LLMs using the class names or definitions; however, there is a high risk that LLMs cannot generate in-distribution (i.e., similar to the corpus where the text classifier will be applied) data, leading to ungeneralizable classifiers. In this paper, we combine the advantages of these two approaches and propose to bridge the gap via a novel framework, \textit{text grafting}, which aims to obtain clean and near-distribution weak supervision for minority classes. Specifically, we first use LLM-based logits to mine masked templates from the raw corpus, which have a high potential for data synthesis into the target minority class. Then, the templates are filled by state-of-the-art LLMs to synthesize near-distribution texts falling into minority classes. Text grafting shows significant improvement over direct mining or synthesis on minority classes. We also use analysis and case studies to comprehend the property of text grafting.", }
For extremely weak-supervised text classification, pioneer research generates pseudo labels by mining texts similar to the class names from the raw corpus, which may end up with very limited or even no samples for the minority classes. Recent works have started to generate the relevant texts by prompting LLMs using the class names or definitions; however, there is a high risk that LLMs cannot generate in-distribution (i.e., similar to the corpus where the text classifier will be applied) data, leading to ungeneralizable classifiers. In this paper, we combine the advantages of these two approaches and propose to bridge the gap via a novel framework, \textit{text grafting}, which aims to obtain clean and near-distribution weak supervision for minority classes. Specifically, we first use LLM-based logits to mine masked templates from the raw corpus, which have a high potential for data synthesis into the target minority class. Then, the templates are filled by state-of-the-art LLMs to synthesize near-distribution texts falling into minority classes. Text grafting shows significant improvement over direct mining or synthesis on minority classes. We also use analysis and case studies to comprehend the property of text grafting.
[ "Peng, Letian", "Gu, Yi", "Dong, Chengyu", "Wang, Zihan", "Shang, Jingbo" ]
Text Grafting: Near-Distribution Weak Supervision for Minority Classes in Text Classification
emnlp-main.219
Poster
2406.11115
[ "https://github.com/KomeijiForce/TextGrafting" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.220.bib
https://aclanthology.org/2024.emnlp-main.220/
@inproceedings{peng-etal-2024-incubating, title = "Incubating Text Classifiers Following User Instruction with Nothing but {LLM}", author = "Peng, Letian and Wang, Zilong and Shang, Jingbo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.220", pages = "3753--3766", abstract = "In this paper, we aim to generate text classification data given arbitrary class definitions (i.e., user instruction), so one can train a text classifier without any human annotation or raw corpus. Recent advances in large language models (LLMs) lead to pioneer attempts to individually generate texts for each class via prompting. In this paper, we propose Incubator, the first framework that can handle complicated and even mutually dependent classes (e.g., ''\textit{TED Talk given by Educator}'' and ''\textit{Other}''). Specifically, our Incubator is a fine-tuned LLM that takes the instruction of all class definitions as input, and in each inference, it can jointly generate one sample for every class. First, we tune Incubator on the instruction-to-data mappings that we obtained from classification datasets and descriptions on Hugging Face together with in-context augmentation by GPT-4. To emphasize the uniformity and diversity in generations, we refine Incubator by fine-tuning with the cluster centers of semantic textual embeddings of the generated samples. We compare Incubator on various classification tasks with strong baselines such as direct LLM-based inference and training data generation by prompt engineering. Experiments show Incubator is able to (1) outperform previous methods on traditional benchmarks, (2) take label interdependency and user preference into consideration, and (3) enable logical text mining by incubating multiple classifiers", }
In this paper, we aim to generate text classification data given arbitrary class definitions (i.e., user instruction), so one can train a text classifier without any human annotation or raw corpus. Recent advances in large language models (LLMs) lead to pioneer attempts to individually generate texts for each class via prompting. In this paper, we propose Incubator, the first framework that can handle complicated and even mutually dependent classes (e.g., ''\textit{TED Talk given by Educator}'' and ''\textit{Other}''). Specifically, our Incubator is a fine-tuned LLM that takes the instruction of all class definitions as input, and in each inference, it can jointly generate one sample for every class. First, we tune Incubator on the instruction-to-data mappings that we obtained from classification datasets and descriptions on Hugging Face together with in-context augmentation by GPT-4. To emphasize the uniformity and diversity in generations, we refine Incubator by fine-tuning with the cluster centers of semantic textual embeddings of the generated samples. We compare Incubator on various classification tasks with strong baselines such as direct LLM-based inference and training data generation by prompt engineering. Experiments show Incubator is able to (1) outperform previous methods on traditional benchmarks, (2) take label interdependency and user preference into consideration, and (3) enable logical text mining by incubating multiple classifiers
[ "Peng, Letian", "Wang, Zilong", "Shang, Jingbo" ]
Incubating Text Classifiers Following User Instruction with Nothing but LLM
emnlp-main.220
Poster
2404.10877
[ "https://github.com/komeijiforce/incubator" ]
https://huggingface.co/papers/2404.10877
0
0
0
2
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.221.bib
https://aclanthology.org/2024.emnlp-main.221/
@inproceedings{luo-etal-2024-ptd, title = "{PTD}-{SQL}: Partitioning and Targeted Drilling with {LLM}s in Text-to-{SQL}", author = "Luo, Ruilin and Wang, Liyuan and Lin, Binghuai and Lin, Zicheng and Yang, Yujiu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.221", pages = "3767--3799", abstract = "Large Language Models (LLMs) have emerged as powerful tools for Text-to-SQL tasks, exhibiting remarkable reasoning capabilities. Different from tasks such as math word problem and commonsense reasoning, SQL solutions have a relatively fixed pattern. This facilitates the investigation of whether LLMs can benefit from categorical thinking, mirroring how humans acquire knowledge through inductive reasoning based on comparable examples. In this study, we propose that employing query group partitioning allows LLMs to focus on learning the thought processes specific to a single problem type, consequently enhancing their reasoning abilities across diverse difficulty levels and problem categories. Our experiments reveal that multiple advanced LLMs, when equipped with PTD-SQL, can either surpass or match previous state-of-the-art (SOTA) methods on the Spider and BIRD datasets. Intriguingly, models with varying initial performances have exhibited significant improvements mainly at the boundary of their capabilities after targeted drilling, suggesting a parallel with human progress. Code is available at https://github.com/lrlbbzl/PTD-SQL.", }
Large Language Models (LLMs) have emerged as powerful tools for Text-to-SQL tasks, exhibiting remarkable reasoning capabilities. Different from tasks such as math word problem and commonsense reasoning, SQL solutions have a relatively fixed pattern. This facilitates the investigation of whether LLMs can benefit from categorical thinking, mirroring how humans acquire knowledge through inductive reasoning based on comparable examples. In this study, we propose that employing query group partitioning allows LLMs to focus on learning the thought processes specific to a single problem type, consequently enhancing their reasoning abilities across diverse difficulty levels and problem categories. Our experiments reveal that multiple advanced LLMs, when equipped with PTD-SQL, can either surpass or match previous state-of-the-art (SOTA) methods on the Spider and BIRD datasets. Intriguingly, models with varying initial performances have exhibited significant improvements mainly at the boundary of their capabilities after targeted drilling, suggesting a parallel with human progress. Code is available at https://github.com/lrlbbzl/PTD-SQL.
[ "Luo, Ruilin", "Wang, Liyuan", "Lin, Binghuai", "Lin, Zicheng", "Yang, Yujiu" ]
PTD-SQL: Partitioning and Targeted Drilling with LLMs in Text-to-SQL
emnlp-main.221
Poster
2409.14082
[ "https://github.com/lrlbbzl/ptd-sql" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.222.bib
https://aclanthology.org/2024.emnlp-main.222/
@inproceedings{holliday-etal-2024-conditional, title = "Conditional and Modal Reasoning in Large Language Models", author = "Holliday, Wesley H. and Mandelkern, Matthew and Zhang, Cedegao E.", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.222", pages = "3800--3821", abstract = "The reasoning abilities of large language models (LLMs) are the topic of a growing body of research in AI and cognitive science. In this paper, we probe the extent to which twenty-nine LLMs are able to distinguish logically correct inferences from logically fallacious ones. We focus on inference patterns involving conditionals (e.g., '*If* Ann has a queen, *then* Bob has a jack{'}) and epistemic modals (e.g., {`}Ann *might* have an ace{'}, {`}Bob *must* have a king{'}). These inferences have been of special interest to logicians, philosophers, and linguists, since they play a central role in the fundamental human ability to reason about distal possibilities. Assessing LLMs on these inferences is thus highly relevant to the question of how much the reasoning abilities of LLMs match those of humans. All the LLMs we tested make some basic mistakes with conditionals or modals, though zero-shot chain-of-thought prompting helps them make fewer mistakes. Even the best performing LLMs make basic errors in modal reasoning, display logically inconsistent judgments across inference patterns involving epistemic modals and conditionals, and give answers about complex conditional inferences that do not match reported human judgments. These results highlight gaps in basic logical reasoning in today{'}s LLMs.", }
The reasoning abilities of large language models (LLMs) are the topic of a growing body of research in AI and cognitive science. In this paper, we probe the extent to which twenty-nine LLMs are able to distinguish logically correct inferences from logically fallacious ones. We focus on inference patterns involving conditionals (e.g., '*If* Ann has a queen, *then* Bob has a jack{'}) and epistemic modals (e.g., {`}Ann *might* have an ace{'}, {`}Bob *must* have a king{'}). These inferences have been of special interest to logicians, philosophers, and linguists, since they play a central role in the fundamental human ability to reason about distal possibilities. Assessing LLMs on these inferences is thus highly relevant to the question of how much the reasoning abilities of LLMs match those of humans. All the LLMs we tested make some basic mistakes with conditionals or modals, though zero-shot chain-of-thought prompting helps them make fewer mistakes. Even the best performing LLMs make basic errors in modal reasoning, display logically inconsistent judgments across inference patterns involving epistemic modals and conditionals, and give answers about complex conditional inferences that do not match reported human judgments. These results highlight gaps in basic logical reasoning in today{'}s LLMs.
[ "Holliday, Wesley H.", "M", "elkern, Matthew", "Zhang, Cedegao E." ]
Conditional and Modal Reasoning in Large Language Models
emnlp-main.222
Oral
2401.17169
[ "https://github.com/wesholliday/llm-logic" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.223.bib
https://aclanthology.org/2024.emnlp-main.223/
@inproceedings{huang-etal-2024-advancing, title = "Advancing Large Language Model Attribution through Self-Improving", author = "Huang, Lei and Feng, Xiaocheng and Ma, Weitao and Zhao, Liang and Fan, Yuchun and Zhong, Weihong and Xu, Dongliang and Yang, Qing and Liu, Hongtao and Qin, Bing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.223", pages = "3822--3836", abstract = "Teaching large language models (LLMs) to generate text with citations to evidence sources can mitigate hallucinations and enhance verifiability in information-seeking systems. However, improving this capability requires high-quality attribution data, which is costly and labor-intensive. Inspired by recent advances in self-improvement that enhance LLMs without manual annotation, we present START, a Self-Taught AttRibuTion framework for iteratively improving the attribution capability of LLMs. First, to prevent models from stagnating due to initially insufficient supervision signals, START leverages the model to self-construct synthetic training data for warming up. To further self-improve the model{'}s attribution ability, START iteratively utilizes fine-grained preference supervision signals constructed from its sampled responses to encourage robust, comprehensive, and attributable generation. Experiments on three open-domain question-answering datasets, covering long-form QA and multi-step reasoning, demonstrate significant performance gains of 25.13{\%} on average without relying on human annotations and more advanced models. Further analysis reveals that START excels in aggregating information across multiple sources.", }
Teaching large language models (LLMs) to generate text with citations to evidence sources can mitigate hallucinations and enhance verifiability in information-seeking systems. However, improving this capability requires high-quality attribution data, which is costly and labor-intensive. Inspired by recent advances in self-improvement that enhance LLMs without manual annotation, we present START, a Self-Taught AttRibuTion framework for iteratively improving the attribution capability of LLMs. First, to prevent models from stagnating due to initially insufficient supervision signals, START leverages the model to self-construct synthetic training data for warming up. To further self-improve the model{'}s attribution ability, START iteratively utilizes fine-grained preference supervision signals constructed from its sampled responses to encourage robust, comprehensive, and attributable generation. Experiments on three open-domain question-answering datasets, covering long-form QA and multi-step reasoning, demonstrate significant performance gains of 25.13{\%} on average without relying on human annotations and more advanced models. Further analysis reveals that START excels in aggregating information across multiple sources.
[ "Huang, Lei", "Feng, Xiaocheng", "Ma, Weitao", "Zhao, Liang", "Fan, Yuchun", "Zhong, Weihong", "Xu, Dongliang", "Yang, Qing", "Liu, Hongtao", "Qin, Bing" ]
Advancing Large Language Model Attribution through Self-Improving
emnlp-main.223
Poster
2410.13298
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.224.bib
https://aclanthology.org/2024.emnlp-main.224/
@inproceedings{liang-etal-2024-aligncap, title = "{A}lign{C}ap: Aligning Speech Emotion Captioning to Human Preferences", author = "Liang, Ziqi and Shi, Haoxiang and Chen, Hanhui", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.224", pages = "3837--3846", abstract = "Speech Emotion Captioning (SEC) has gradually become an active research task. The emotional content conveyed through human speech are often complex, and classifying them into fixed categories may not be enough to fully capture speech emotions. Describing speech emotions through natural language may be a more effective approach. However, existing SEC methods often produce hallucinations and lose generalization on unseen speech. To overcome these problems, we propose AlignCap, which Aligning Speech Emotion Captioning to Human Preferences based on large language model (LLM) with two properties: 1) Speech-Text Alignment, which minimizing the divergence between the LLM{'}s response prediction distributions for speech and text inputs using knowledge distillation (KD) Regularization. 2) Human Preference Alignment, where we design Preference Optimization (PO) Regularization to eliminate factuality and faithfulness hallucinations. We also extract emotional clues as a prompt for enriching fine-grained information under KD-Regularization. Experiments demonstrate that AlignCap presents stronger performance to other state-of-the-art methods on Zero-shot SEC task.", }
Speech Emotion Captioning (SEC) has gradually become an active research task. The emotional content conveyed through human speech is often complex, and classifying it into fixed categories may not be enough to fully capture speech emotions. Describing speech emotions through natural language may be a more effective approach. However, existing SEC methods often produce hallucinations and lose generalization on unseen speech. To overcome these problems, we propose AlignCap, which aligns Speech Emotion Captioning to human preferences based on a large language model (LLM) with two properties: 1) Speech-Text Alignment, which minimizes the divergence between the LLM{'}s response prediction distributions for speech and text inputs using knowledge distillation (KD) Regularization; 2) Human Preference Alignment, where we design Preference Optimization (PO) Regularization to eliminate factuality and faithfulness hallucinations. We also extract emotional clues as a prompt for enriching fine-grained information under KD-Regularization. Experiments demonstrate that AlignCap delivers stronger performance than other state-of-the-art methods on the zero-shot SEC task.
[ "Liang, Ziqi", "Shi, Haoxiang", "Chen, Hanhui" ]
AlignCap: Aligning Speech Emotion Captioning to Human Preferences
emnlp-main.224
Poster
2410.19134
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.225.bib
https://aclanthology.org/2024.emnlp-main.225/
@inproceedings{hong-lipani-2024-interpretability, title = "Interpretability-based Tailored Knowledge Editing in Transformers", author = "Hong, Yihuai and Lipani, Aldo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.225", pages = "3847--3858", abstract = "Language models recognized as a new form of knowledge bases, face challenges of outdated, erroneous, and privacy-sensitive information, necessitating knowledge editing to rectify errors without costly retraining. Existing methods, spanning model{'}s parameters modification, external knowledge integration, and in-context learning, lack in-depth analysis from a model interpretability perspective. Our work explores the instability in in-context learning outcomes, providing insights into its reasons and distinctions from other methods. Leveraging findings on the critical role of feed-forward MLPs in decoder-only models, we propose a tailored knowledge editing method, TailoredKE, that considers the unique information flow of each sample. Model interpretability reveals diverse attribute recall across transformer layers, guiding edits to specific features at different depths and mitigating over-editing issues.", }
Language models, recognized as a new form of knowledge base, face challenges of outdated, erroneous, and privacy-sensitive information, necessitating knowledge editing to rectify errors without costly retraining. Existing methods, spanning modification of model parameters, external knowledge integration, and in-context learning, lack in-depth analysis from a model interpretability perspective. Our work explores the instability in in-context learning outcomes, providing insights into its reasons and distinctions from other methods. Leveraging findings on the critical role of feed-forward MLPs in decoder-only models, we propose a tailored knowledge editing method, TailoredKE, that considers the unique information flow of each sample. Model interpretability reveals diverse attribute recall across transformer layers, guiding edits to specific features at different depths and mitigating over-editing issues.
[ "Hong, Yihuai", "Lipani, Aldo" ]
Interpretability-based Tailored Knowledge Editing in Transformers
emnlp-main.225
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.226.bib
https://aclanthology.org/2024.emnlp-main.226/
@inproceedings{chen-etal-2024-prompt, title = "{PR}ompt Optimization in Multi-Step Tasks ({PROMST}): Integrating Human Feedback and Heuristic-based Sampling", author = "Chen, Yongchao and Arkin, Jacob and Hao, Yilun and Zhang, Yang and Roy, Nicholas and Fan, Chuchu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.226", pages = "3859--3920", abstract = "Prompt optimization aims to find the best prompt to a large language model (LLM) for a given task. LLMs have been successfully used to help find and improve prompt candidates for single-step tasks. However, realistic tasks for agents are multi-step and introduce new challenges: (1) Prompt content is likely to be more extensive and complex, making it more difficult for LLMs to analyze errors, (2) the impact of an individual step is difficult to evaluate, and (3) different people may have varied preferences about task execution. While humans struggle to optimize prompts, they are good at providing feedback about LLM outputs; we therefore introduce a new LLM-driven discrete prompt optimization framework PROMST that incorporates human-designed feedback rules to automatically offer direct suggestions for improvement. We also use an extra learned heuristic model that predicts prompt performance to efficiently sample from prompt candidates. This approach significantly outperforms both human-engineered prompts and several other prompt optimization methods across 11 representative multi-step tasks (an average 10.6{\%}-29.3{\%} improvement to current best methods on five LLMs respectively). We believe our work can serve as a benchmark for automatic prompt optimization for LLM-driven multi-step tasks.", }
Prompt optimization aims to find the best prompt to a large language model (LLM) for a given task. LLMs have been successfully used to help find and improve prompt candidates for single-step tasks. However, realistic tasks for agents are multi-step and introduce new challenges: (1) Prompt content is likely to be more extensive and complex, making it more difficult for LLMs to analyze errors, (2) the impact of an individual step is difficult to evaluate, and (3) different people may have varied preferences about task execution. While humans struggle to optimize prompts, they are good at providing feedback about LLM outputs; we therefore introduce a new LLM-driven discrete prompt optimization framework PROMST that incorporates human-designed feedback rules to automatically offer direct suggestions for improvement. We also use an extra learned heuristic model that predicts prompt performance to efficiently sample from prompt candidates. This approach significantly outperforms both human-engineered prompts and several other prompt optimization methods across 11 representative multi-step tasks (an average 10.6{\%}-29.3{\%} improvement to current best methods on five LLMs respectively). We believe our work can serve as a benchmark for automatic prompt optimization for LLM-driven multi-step tasks.
[ "Chen, Yongchao", "Arkin, Jacob", "Hao, Yilun", "Zhang, Yang", "Roy, Nicholas", "Fan, Chuchu" ]
PRompt Optimization in Multi-Step Tasks (PROMST): Integrating Human Feedback and Heuristic-based Sampling
emnlp-main.226
Poster
2402.08702
[ "https://github.com/yongchao98/promst" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.227.bib
https://aclanthology.org/2024.emnlp-main.227/
@inproceedings{cai-etal-2024-empowering, title = "Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting", author = "Cai, Chen and Wang, Zheng and Gao, Jianjun and Liu, Wenyang and Lu, Ye and Zhang, Runzhong and Yap, Kim-Hui", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.227", pages = "3921--3932", abstract = "In recent years, the rapid increase in online video content has underscored the limitations of static Video Question Answering (VideoQA) models trained on fixed datasets, as they struggle to adapt to new questions or tasks posed by newly available content. In this paper, we explore the novel challenge of VideoQA within a continual learning framework, and empirically identify a critical issue: fine-tuning a large language model (LLM) for a sequence of tasks often results in catastrophic forgetting. To address this, we propose Collaborative Prompting (ColPro), which integrates specific question constraint prompting, knowledge acquisition prompting, and visual temporal awareness prompting. These prompts aim to capture textual question context, visual content, and video temporal dynamics in VideoQA, a perspective underexplored in prior research. Experimental results on the NExT-QA and DramaQA datasets show that ColPro achieves superior performance compared to existing approaches, achieving 55.14{\%} accuracy on NExT-QA and 71.24{\%} accuracy on DramaQA, highlighting its practical relevance and effectiveness.", }
In recent years, the rapid increase in online video content has underscored the limitations of static Video Question Answering (VideoQA) models trained on fixed datasets, as they struggle to adapt to new questions or tasks posed by newly available content. In this paper, we explore the novel challenge of VideoQA within a continual learning framework, and empirically identify a critical issue: fine-tuning a large language model (LLM) for a sequence of tasks often results in catastrophic forgetting. To address this, we propose Collaborative Prompting (ColPro), which integrates specific question constraint prompting, knowledge acquisition prompting, and visual temporal awareness prompting. These prompts aim to capture textual question context, visual content, and video temporal dynamics in VideoQA, a perspective underexplored in prior research. Experimental results on the NExT-QA and DramaQA datasets show that ColPro achieves superior performance compared to existing approaches, achieving 55.14{\%} accuracy on NExT-QA and 71.24{\%} accuracy on DramaQA, highlighting its practical relevance and effectiveness.
[ "Cai, Chen", "Wang, Zheng", "Gao, Jianjun", "Liu, Wenyang", "Lu, Ye", "Zhang, Runzhong", "Yap, Kim-Hui" ]
Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting
emnlp-main.227
Poster
2410.00771
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.228.bib
https://aclanthology.org/2024.emnlp-main.228/
@inproceedings{hong-etal-2024-dissecting, title = "Dissecting Fine-Tuning Unlearning in Large Language Models", author = "Hong, Yihuai and Zou, Yuelin and Hu, Lijie and Zeng, Ziqian and Wang, Di and Yang, Haiqin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.228", pages = "3933--3941", abstract = "Fine-tuning-based unlearning methods prevail for erasing targeted harmful, sensitive, or copyrighted information within large language models while preserving overall capabilities. However, the true effectiveness of the methods is unclear. In this paper, we delve into the limitations of fine-tuning-based unlearning through activation patching and parameter restoration experiments. Our findings reveal that these methods alter the model{'}s knowledge retrieval process, rather than genuinely erasing the problematic knowledge embedded in the model parameters. Furthermore, behavioral tests demonstrate that the unlearning mechanisms inevitably impact the global behavior of the models, affecting unrelated knowledge or capabilities. Our work advocates the development of more resilient unlearning techniques for truly erasing knowledge.", }
Fine-tuning-based unlearning methods prevail for erasing targeted harmful, sensitive, or copyrighted information within large language models while preserving overall capabilities. However, the true effectiveness of the methods is unclear. In this paper, we delve into the limitations of fine-tuning-based unlearning through activation patching and parameter restoration experiments. Our findings reveal that these methods alter the model{'}s knowledge retrieval process, rather than genuinely erasing the problematic knowledge embedded in the model parameters. Furthermore, behavioral tests demonstrate that the unlearning mechanisms inevitably impact the global behavior of the models, affecting unrelated knowledge or capabilities. Our work advocates the development of more resilient unlearning techniques for truly erasing knowledge.
[ "Hong, Yihuai", "Zou, Yuelin", "Hu, Lijie", "Zeng, Ziqian", "Wang, Di", "Yang, Haiqin" ]
Dissecting Fine-Tuning Unlearning in Large Language Models
emnlp-main.228
Oral
2410.06606
[ "https://github.com/yihuaihong/dissecting-ft-unlearning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.229.bib
https://aclanthology.org/2024.emnlp-main.229/
@inproceedings{wu-etal-2024-dancing, title = "Dancing in Chains: Reconciling Instruction Following and Faithfulness in Language Models", author = "Wu, Zhengxuan and Zhang, Yuhao and Qi, Peng and Xu, Yumo and Han, Rujun and Zhang, Yian and Chen, Jifan and Min, Bonan and Huang, Zhiheng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.229", pages = "3942--3965", abstract = "Modern language models (LMs) need to follow human instructions while being faithful; yet, they often fail to achieve both. Here, we provide concrete evidence of a trade-off between instruction following (i.e., follow open-ended instructions) and faithfulness (i.e., ground responses in given context) when training LMs with these objectives. For instance, fine-tuning LLaMA-7B on instruction following datasets renders it less faithful. Conversely, instruction-tuned Vicuna-7B shows degraded performance at following instructions when further optimized on tasks that require contextual grounding. One common remedy is multi-task learning (MTL) with data mixing, yet it remains far from achieving a synergic outcome. We propose a simple yet effective method that relies on Reject-sampling by Self-instruct with Continued Fine-tuning (ReSet), which significantly outperforms vanilla MTL. Surprisingly, we find that less is more, as training ReSet with high-quality, yet substantially smaller data (three-fold less) yields superior results. Our findings offer a better understanding of objective discrepancies in alignment training of LMs.", }
Modern language models (LMs) need to follow human instructions while being faithful; yet, they often fail to achieve both. Here, we provide concrete evidence of a trade-off between instruction following (i.e., follow open-ended instructions) and faithfulness (i.e., ground responses in given context) when training LMs with these objectives. For instance, fine-tuning LLaMA-7B on instruction following datasets renders it less faithful. Conversely, instruction-tuned Vicuna-7B shows degraded performance at following instructions when further optimized on tasks that require contextual grounding. One common remedy is multi-task learning (MTL) with data mixing, yet it remains far from achieving a synergic outcome. We propose a simple yet effective method that relies on Reject-sampling by Self-instruct with Continued Fine-tuning (ReSet), which significantly outperforms vanilla MTL. Surprisingly, we find that less is more, as training ReSet with high-quality, yet substantially smaller data (three-fold less) yields superior results. Our findings offer a better understanding of objective discrepancies in alignment training of LMs.
[ "Wu, Zhengxuan", "Zhang, Yuhao", "Qi, Peng", "Xu, Yumo", "Han, Rujun", "Zhang, Yian", "Chen, Jifan", "Min, Bonan", "Huang, Zhiheng" ]
Dancing in Chains: Reconciling Instruction Following and Faithfulness in Language Models
emnlp-main.229
Oral
2407.21417
[ "https://github.com/frankaging/dancing-in-chains" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.230.bib
https://aclanthology.org/2024.emnlp-main.230/
@inproceedings{geh-etal-2024-signal, title = "Where is the signal in tokenization space?", author = "Geh, Renato and Zhang, Honghua and Ahmed, Kareem and Wang, Benjie and Van Den Broeck, Guy", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.230", pages = "3966--3979", abstract = "Large Language Models (LLMs) are typically shipped with tokenizers that *deterministically* encode text into so-called *canonical* token sequences, to which the LLMs assign probability values.One common assumption is that the probability of a piece of text is the probability of its canonical token sequence.However, the tokenization of a string is not unique: e.g., the Llama2 tokenizer encodes {`}Tokens{`} as {`}[Tok,ens]{`}, but {`}[Tok,en,s]{`} also represents the same text.In this paper, we study non-canonical tokenizations.We prove that, given a string, it is computationally hard to find the most likely tokenization for an autoregressive LLM, as well as to compute the marginal probability over all possible tokenizations.We then show how the marginal is, in most cases, indistinguishable from the canonical probability.Surprisingly, we then empirically demonstrate the existence of a significant amount of signal hidden within tokenization space.Notably, by simply aggregating the probabilities of non-canonical tokenizations, we achieve improvements across a range of LLM evaluation benchmarks for a variety of architectures, including transformers and state space models.", }
Large Language Models (LLMs) are typically shipped with tokenizers that *deterministically* encode text into so-called *canonical* token sequences, to which the LLMs assign probability values. One common assumption is that the probability of a piece of text is the probability of its canonical token sequence. However, the tokenization of a string is not unique: e.g., the Llama2 tokenizer encodes {`}Tokens{`} as {`}[Tok,ens]{`}, but {`}[Tok,en,s]{`} also represents the same text. In this paper, we study non-canonical tokenizations. We prove that, given a string, it is computationally hard to find the most likely tokenization for an autoregressive LLM, as well as to compute the marginal probability over all possible tokenizations. We then show how the marginal is, in most cases, indistinguishable from the canonical probability. Surprisingly, we then empirically demonstrate the existence of a significant amount of signal hidden within tokenization space. Notably, by simply aggregating the probabilities of non-canonical tokenizations, we achieve improvements across a range of LLM evaluation benchmarks for a variety of architectures, including transformers and state space models.
[ "Geh, Renato", "Zhang, Honghua", "Ahmed, Kareem", "Wang, Benjie", "Van Den Broeck, Guy" ]
Where is the signal in tokenization space?
emnlp-main.230
Poster
2408.08541
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.231.bib
https://aclanthology.org/2024.emnlp-main.231/
@inproceedings{huang-etal-2024-private, title = "Private Language Models via Truncated Laplacian Mechanism", author = "Huang, Tianhao and Yang, Tao and Habernal, Ivan and Hu, Lijie and Wang, Di", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.231", pages = "3980--3993", abstract = "Recently it has been shown that deep learning models for NLP tasks are prone to attacks that can even reconstruct the verbatim training texts. To prevent privacy leakage, researchers have investigated word-level perturbations, relying on the formal guarantees of differential privacy (DP) in the embedding space. However, many existing approaches either achieve unsatisfactory performance in the high privacy regime when using the Laplacian or Gaussian mechanism, or resort to weaker relaxations of DP that are inferior to the canonical DP in terms of privacy strength. This raises the question of whether a new method for private word embedding can be designed to overcome these limitations. In this paper, we propose a novel private embedding method called the high dimensional truncated Laplacian mechanism. Specifically, we introduce a non-trivial extension of the truncated Laplacian mechanism, which was previously only investigated in one-dimensional space cases. Theoretically, we show that our method has a lower variance compared to the previous private word embedding methods. To further validate its effectiveness, we conduct comprehensive experiments on private embedding and downstream tasks using three datasets. Remarkably, even in the high privacy regime, our approach only incurs a slight decrease in utility compared to the non-private scenario.", }
Recently it has been shown that deep learning models for NLP tasks are prone to attacks that can even reconstruct the verbatim training texts. To prevent privacy leakage, researchers have investigated word-level perturbations, relying on the formal guarantees of differential privacy (DP) in the embedding space. However, many existing approaches either achieve unsatisfactory performance in the high privacy regime when using the Laplacian or Gaussian mechanism, or resort to weaker relaxations of DP that are inferior to the canonical DP in terms of privacy strength. This raises the question of whether a new method for private word embedding can be designed to overcome these limitations. In this paper, we propose a novel private embedding method called the high dimensional truncated Laplacian mechanism. Specifically, we introduce a non-trivial extension of the truncated Laplacian mechanism, which was previously only investigated in one-dimensional space cases. Theoretically, we show that our method has a lower variance compared to the previous private word embedding methods. To further validate its effectiveness, we conduct comprehensive experiments on private embedding and downstream tasks using three datasets. Remarkably, even in the high privacy regime, our approach only incurs a slight decrease in utility compared to the non-private scenario.
[ "Huang, Tianhao", "Yang, Tao", "Habernal, Ivan", "Hu, Lijie", "Wang, Di" ]
Private Language Models via Truncated Laplacian Mechanism
emnlp-main.231
Poster
2410.08027
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.232.bib
https://aclanthology.org/2024.emnlp-main.232/
@inproceedings{gottesman-geva-2024-estimating, title = "Estimating Knowledge in Large Language Models Without Generating a Single Token", author = "Gottesman, Daniela and Geva, Mor", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.232", pages = "3994--4019", }
No abstract found
[ "Gottesman, Daniela", "Geva, Mor" ]
Estimating Knowledge in Large Language Models Without Generating a Single Token
emnlp-main.232
Poster
2406.12673
[ "" ]
https://huggingface.co/papers/2406.12673
1
7
1
2
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.233.bib
https://aclanthology.org/2024.emnlp-main.233/
@inproceedings{zhang-etal-2024-consistent, title = "Consistent Autoformalization for Constructing Mathematical Libraries", author = "Zhang, Lan and Quan, Xin and Freitas, Andre", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.233", pages = "4020--4033", abstract = "Autoformalization is the task of automatically translating mathematical content written in natural language to a formal language expression. The growing language interpretation capabilities of Large Language Models (LLMs), including in formal languages, are lowering the barriers for autoformalization. However, LLMs alone are not capable of consistently and reliably delivering autoformalization, in particular as the complexity and specialization of the target domain grows. As the field evolves into the direction of systematically applying autoformalization towards large mathematical libraries, the need to improve syntactic, terminological and semantic control increases. This paper proposes the coordinated use of three mechanisms, most-similar retrieval augmented generation (MS-RAG), denoising steps, and auto-correction with syntax error feedback (Auto-SEF) to improve autoformalization quality. The empirical analysis, across different models, demonstrates that these mechanisms can deliver autoformalizaton results which are syntactically, terminologically and semantically more consistent. These mechanisms can be applied across different LLMs and have shown to deliver improve results across different model types.", }
Autoformalization is the task of automatically translating mathematical content written in natural language to a formal language expression. The growing language interpretation capabilities of Large Language Models (LLMs), including in formal languages, are lowering the barriers for autoformalization. However, LLMs alone are not capable of consistently and reliably delivering autoformalization, in particular as the complexity and specialization of the target domain grows. As the field evolves in the direction of systematically applying autoformalization to large mathematical libraries, the need to improve syntactic, terminological and semantic control increases. This paper proposes the coordinated use of three mechanisms, most-similar retrieval augmented generation (MS-RAG), denoising steps, and auto-correction with syntax error feedback (Auto-SEF), to improve autoformalization quality. The empirical analysis, across different models, demonstrates that these mechanisms can deliver autoformalization results which are syntactically, terminologically and semantically more consistent. These mechanisms can be applied across different LLMs and have been shown to deliver improved results across different model types.
[ "Zhang, Lan", "Quan, Xin", "Freitas, Andre" ]
Consistent Autoformalization for Constructing Mathematical Libraries
emnlp-main.233
Oral
2410.04194
[ "https://github.com/lanzhang128/retrieval_augmented_autoformalization" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.234.bib
https://aclanthology.org/2024.emnlp-main.234/
@inproceedings{tao-etal-2024-context, title = "When Context Leads but Parametric Memory Follows in Large Language Models", author = "Tao, Yufei and Hiatt, Adam and Haake, Erik and Jetter, Antonie J. and Agrawal, Ameeta", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.234", pages = "4034--4058", abstract = "Large language models (LLMs) have demonstrated remarkable progress in leveraging diverse knowledge sources. This study investigates how nine widely used LLMs allocate knowledge between local context and global parameters when answering open-ended questions in knowledge-consistent scenarios. We introduce a novel dataset, WikiAtomic, and systematically vary context sizes to analyze how LLMs prioritize and utilize the provided information and their parametric knowledge in knowledge-consistent scenarios. Additionally, we also study their tendency to hallucinate under varying context sizes. Our findings reveal consistent patterns across models, including a consistent reliance on both contextual (around 70{\%}) and parametric (around 30{\%}) knowledge, and a decrease in hallucinations with increasing context. These insights highlight the importance of more effective context organization and developing models that use input more deterministically for robust performance.", }
Large language models (LLMs) have demonstrated remarkable progress in leveraging diverse knowledge sources. This study investigates how nine widely used LLMs allocate knowledge between local context and global parameters when answering open-ended questions in knowledge-consistent scenarios. We introduce a novel dataset, WikiAtomic, and systematically vary context sizes to analyze how LLMs prioritize and utilize the provided information and their parametric knowledge in knowledge-consistent scenarios. Additionally, we also study their tendency to hallucinate under varying context sizes. Our findings reveal consistent patterns across models, including a consistent reliance on both contextual (around 70{\%}) and parametric (around 30{\%}) knowledge, and a decrease in hallucinations with increasing context. These insights highlight the importance of more effective context organization and developing models that use input more deterministically for robust performance.
[ "Tao, Yufei", "Hiatt, Adam", "Haake, Erik", "Jetter, Antonie J.", "Agrawal, Ameeta" ]
When Context Leads but Parametric Memory Follows in Large Language Models
emnlp-main.234
Poster
2409.08435
[ "https://github.com/PortNLP/WikiAtomic" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.235.bib
https://aclanthology.org/2024.emnlp-main.235/
@inproceedings{yedetore-kim-2024-semantic, title = "Semantic Training Signals Promote Hierarchical Syntactic Generalization in Transformers", author = "Yedetore, Aditya and Kim, Najoung", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.235", pages = "4059--4073", abstract = "Neural networks without hierarchical biases often struggle to learn linguistic rules that come naturally to humans. However, neural networks are trained primarily on form alone, while children acquiring language additionally receive data about meaning. Would neural networks generalize more like humans when trained on both form and meaning? We investigate this by examining if Transformers{---}neural networks without a hierarchical bias{---}better achieve hierarchical generalization when trained on both form and meaning compared to when trained on form alone. Our results show that Transformers trained on form and meaning do favor the hierarchical generalization more than those trained on form alone, suggesting that statistical learners without hierarchical biases can leverage semantic training signals to bootstrap hierarchical syntactic generalization.", }
Neural networks without hierarchical biases often struggle to learn linguistic rules that come naturally to humans. However, neural networks are trained primarily on form alone, while children acquiring language additionally receive data about meaning. Would neural networks generalize more like humans when trained on both form and meaning? We investigate this by examining if Transformers{---}neural networks without a hierarchical bias{---}better achieve hierarchical generalization when trained on both form and meaning compared to when trained on form alone. Our results show that Transformers trained on form and meaning do favor the hierarchical generalization more than those trained on form alone, suggesting that statistical learners without hierarchical biases can leverage semantic training signals to bootstrap hierarchical syntactic generalization.
[ "Yedetore, Aditya", "Kim, Najoung" ]
Semantic Training Signals Promote Hierarchical Syntactic Generalization in Transformers
emnlp-main.235
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.236.bib
https://aclanthology.org/2024.emnlp-main.236/
@inproceedings{chang-etal-2024-multilinguality, title = "When Is Multilinguality a Curse? Language Modeling for 250 High- and Low-Resource Languages", author = "Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Ben", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.236", pages = "4074--4096", abstract = "Multilingual language models are widely used to extend NLP systems to low-resource languages. However, concrete evidence for the effects of multilinguality on language modeling performance in individual languages remains scarce. Here, we pre-train over 10,000 monolingual and multilingual language models for over 250 languages, including multiple language families that are under-studied in NLP. We assess how language modeling performance in each language varies as a function of (1) monolingual dataset size, (2) added multilingual dataset size, (3) linguistic similarity of the added languages, and (4) model size (up to 45M parameters). We find that in moderation, adding multilingual data improves low-resource language modeling performance, similar to increasing low-resource dataset sizes by up to 33{\%}. Improvements depend on the syntactic similarity of the added multilingual data, with marginal additional effects of vocabulary overlap. However, high-resource languages consistently perform worse in multilingual pre-training scenarios. As dataset sizes increase, adding multilingual data begins to hurt performance for both low-resource and high-resource languages, likely due to limited model capacity (the {``}curse of multilinguality{''}). These results suggest that massively multilingual pre-training may not be optimal for any languages involved, but that more targeted models can significantly improve performance.", }
Multilingual language models are widely used to extend NLP systems to low-resource languages. However, concrete evidence for the effects of multilinguality on language modeling performance in individual languages remains scarce. Here, we pre-train over 10,000 monolingual and multilingual language models for over 250 languages, including multiple language families that are under-studied in NLP. We assess how language modeling performance in each language varies as a function of (1) monolingual dataset size, (2) added multilingual dataset size, (3) linguistic similarity of the added languages, and (4) model size (up to 45M parameters). We find that in moderation, adding multilingual data improves low-resource language modeling performance, similar to increasing low-resource dataset sizes by up to 33{\%}. Improvements depend on the syntactic similarity of the added multilingual data, with marginal additional effects of vocabulary overlap. However, high-resource languages consistently perform worse in multilingual pre-training scenarios. As dataset sizes increase, adding multilingual data begins to hurt performance for both low-resource and high-resource languages, likely due to limited model capacity (the {``}curse of multilinguality{''}). These results suggest that massively multilingual pre-training may not be optimal for any languages involved, but that more targeted models can significantly improve performance.
[ "Chang, Tyler A.", "Arnett, Catherine", "Tu, Zhuowen", "Bergen, Ben" ]
When Is Multilinguality a Curse? Language Modeling for 250 High- and Low-Resource Languages
emnlp-main.236
Poster
2311.09205
[ "https://github.com/tylerachang/curse-of-multilinguality" ]
https://huggingface.co/papers/2311.09205
2
0
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.237.bib
https://aclanthology.org/2024.emnlp-main.237/
@inproceedings{xi-etal-2024-teaching, title = "Teaching Embodied Reinforcement Learning Agents: Informativeness and Diversity of Language Use", author = "Xi, Jiajun and He, Yinong and Yang, Jianing and Dai, Yinpei and Chai, Joyce", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.237", pages = "4097--4114", abstract = "In real-world scenarios, it is desirable for embodied agents to have the ability to leverage human language to gain explicit or implicit knowledge for learning tasks. Despite recent progress, most previous approaches adopt simple low-level instructions as language inputs, which may not reflect natural human communication. We expect human language to be informative (i.e., providing feedback on agents{'} past behaviors and offering guidance on achieving their future goals) and diverse (i.e., encompassing a wide range of expressions and style nuances). To enable flexibility of language use in teaching agents tasks, this paper studies different types of language inputs in facilitating reinforcement learning (RL) embodied agents. More specifically, we examine how different levels of language informativeness and diversity impact agent learning and inference. Our empirical results based on four RL benchmarks demonstrate that agents trained with diverse and informative language feedback can achieve enhanced generalization and fast adaptation to new tasks. These findings highlight the pivotal role of language use in teaching embodied agents new tasks in an open world.", }
In real-world scenarios, it is desirable for embodied agents to have the ability to leverage human language to gain explicit or implicit knowledge for learning tasks. Despite recent progress, most previous approaches adopt simple low-level instructions as language inputs, which may not reflect natural human communication. We expect human language to be informative (i.e., providing feedback on agents{'} past behaviors and offering guidance on achieving their future goals) and diverse (i.e., encompassing a wide range of expressions and style nuances). To enable flexibility of language use in teaching agents tasks, this paper studies different types of language inputs in facilitating reinforcement learning (RL) embodied agents. More specifically, we examine how different levels of language informativeness and diversity impact agent learning and inference. Our empirical results based on four RL benchmarks demonstrate that agents trained with diverse and informative language feedback can achieve enhanced generalization and fast adaptation to new tasks. These findings highlight the pivotal role of language use in teaching embodied agents new tasks in an open world.
[ "Xi, Jiajun", "He, Yinong", "Yang, Jianing", "Dai, Yinpei", "Chai, Joyce" ]
Teaching Embodied Reinforcement Learning Agents: Informativeness and Diversity of Language Use
emnlp-main.237
Poster
2410.24218
[ "https://github.com/sled-group/teachable_rl" ]
https://huggingface.co/papers/2410.24218
1
4
2
5
[]
[ "sled-umich/Teachable_RL" ]
[]
[]
[ "sled-umich/Teachable_RL" ]
[]
1
https://aclanthology.org/2024.emnlp-main.238.bib
https://aclanthology.org/2024.emnlp-main.238/
@inproceedings{robinson-etal-2024-mittens, title = "{M}i{TT}en{S}: A Dataset for Evaluating Gender Mistranslation", author = "Robinson, Kevin and Kudugunta, Sneha and Stella, Romina and Dev, Sunipa and Bastings, Jasmijn", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.238", pages = "4115--4124", abstract = "Translation systems, including foundation models capable of translation, can produce errors that result in gender mistranslation, and such errors can be especially harmful. To measure the extent of such potential harms when translating into and out of English, we introduce a dataset, MiTTenS, covering 26 languages from a variety of language families and scripts, including several traditionally under-represented in digital resources. The dataset is constructed with handcrafted passages that target known failure patterns, longer synthetically generated passages, and natural passages sourced from multiple domains. We demonstrate the usefulness of the dataset by evaluating both neural machine translation systems and foundation models, and show that all systems exhibit gender mistranslation and potential harm, even in high resource languages.", }
Translation systems, including foundation models capable of translation, can produce errors that result in gender mistranslation, and such errors can be especially harmful. To measure the extent of such potential harms when translating into and out of English, we introduce a dataset, MiTTenS, covering 26 languages from a variety of language families and scripts, including several traditionally under-represented in digital resources. The dataset is constructed with handcrafted passages that target known failure patterns, longer synthetically generated passages, and natural passages sourced from multiple domains. We demonstrate the usefulness of the dataset by evaluating both neural machine translation systems and foundation models, and show that all systems exhibit gender mistranslation and potential harm, even in high resource languages.
[ "Robinson, Kevin", "Kudugunta, Sneha", "Stella, Romina", "Dev, Sunipa", "Bastings, Jasmijn" ]
MiTTenS: A Dataset for Evaluating Gender Mistranslation
emnlp-main.238
Poster
2401.06935
[ "https://github.com/google-research-datasets/mittens" ]
https://huggingface.co/papers/2401.06935
0
0
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.239.bib
https://aclanthology.org/2024.emnlp-main.239/
@inproceedings{feng-etal-2024-teaching, title = "Teaching {LLM}s to Abstain across Languages via Multilingual Feedback", author = "Feng, Shangbin and Shi, Weijia and Wang, Yike and Ding, Wenxuan and Ahia, Orevaoghene and Li, Shuyue Stella and Balachandran, Vidhisha and Sitaram, Sunayana and Tsvetkov, Yulia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.239", pages = "4125--4150", abstract = "Multilingual LLMs often have knowledge disparities across languages, with larger gaps in under-resourced languages. Teaching LLMs to abstain in the face of knowledge gaps is thus a promising strategy to mitigate hallucinations in multilingual settings. However, previous studies on LLM abstention primarily focus on English; we find that directly applying existing solutions beyond English results in up to 20.5{\%} performance gaps between high and low-resource languages, potentially due to LLMs{'} drop in calibration and reasoning beyond a few resource-rich languages. To this end, we propose strategies to enhance LLM abstention by learning from multilingual feedback, where LLMs self-reflect on proposed answers in one language by generating multiple feedback items in related languages: we show that this helps identifying the knowledge gaps across diverse languages, cultures, and communities. Extensive experiments demonstrate that our multilingual feedback approach outperforms various strong baselines, achieving up to 9.2{\%} improvement for low-resource languages across three black-box and open models on three datasets, featuring open-book, closed-book, and commonsense QA. Further analysis reveals that multilingual feedback is both an effective and a more equitable abstain strategy to serve diverse language speakers, and cultural factors have great impact on language selection and LLM abstention behavior, highlighting future directions for multilingual and multi-cultural reliable language modeling.", }
Multilingual LLMs often have knowledge disparities across languages, with larger gaps in under-resourced languages. Teaching LLMs to abstain in the face of knowledge gaps is thus a promising strategy to mitigate hallucinations in multilingual settings. However, previous studies on LLM abstention primarily focus on English; we find that directly applying existing solutions beyond English results in up to 20.5{\%} performance gaps between high and low-resource languages, potentially due to LLMs{'} drop in calibration and reasoning beyond a few resource-rich languages. To this end, we propose strategies to enhance LLM abstention by learning from multilingual feedback, where LLMs self-reflect on proposed answers in one language by generating multiple feedback items in related languages: we show that this helps identify the knowledge gaps across diverse languages, cultures, and communities. Extensive experiments demonstrate that our multilingual feedback approach outperforms various strong baselines, achieving up to 9.2{\%} improvement for low-resource languages across three black-box and open models on three datasets, featuring open-book, closed-book, and commonsense QA. Further analysis reveals that multilingual feedback is both an effective and a more equitable abstain strategy to serve diverse language speakers, and that cultural factors have a great impact on language selection and LLM abstention behavior, highlighting future directions for multilingual and multi-cultural reliable language modeling.
[ "Feng, Shangbin", "Shi, Weijia", "Wang, Yike", "Ding, Wenxuan", "Ahia, Orevaoghene", "Li, Shuyue Stella", "Balach", "ran, Vidhisha", "Sitaram, Sunayana", "Tsvetkov, Yulia" ]
Teaching LLMs to Abstain across Languages via Multilingual Feedback
emnlp-main.239
Poster
2406.15948
[ "https://github.com/BunsenFeng/M-AbstainQA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.240.bib
https://aclanthology.org/2024.emnlp-main.240/
@inproceedings{feng-etal-2024-modular, title = "Modular Pluralism: Pluralistic Alignment via Multi-{LLM} Collaboration", author = "Feng, Shangbin and Sorensen, Taylor and Liu, Yuhan and Fisher, Jillian and Park, Chan Young and Choi, Yejin and Tsvetkov, Yulia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.240", pages = "4151--4171", abstract = "While existing alignment paradigms have been integral in developing large language models (LLMs), LLMs often learn an averaged human preference and struggle to model diverse preferences across cultures, demographics, and communities. We propose Modular Pluralism, a modular framework based on multi-LLM collaboration for pluralistic alignment: it {``}plugs into{''} a base LLM a pool of smaller but specialized community LMs, where models collaborate in distinct modes to flexibility support three modes of pluralism: Overton, steerable, and distributional. Modular Pluralism is uniquely compatible with black-box LLMs and offers the modular control of adding new community LMs for previously underrepresented communities. We evaluate Modular Pluralism with six tasks and four datasets featuring questions/instructions with value-laden and perspective-informed responses. Extensive experiments demonstrate that Modular Pluralism advances the three pluralism objectives across six black-box and open-source LLMs. Further analysis reveals that LLMs are generally faithful to the inputs from smaller community LLMs, allowing seamless patching by adding a new community LM to better cover previously underrepresented communities.", }
While existing alignment paradigms have been integral in developing large language models (LLMs), LLMs often learn an averaged human preference and struggle to model diverse preferences across cultures, demographics, and communities. We propose Modular Pluralism, a modular framework based on multi-LLM collaboration for pluralistic alignment: it {``}plugs into{''} a base LLM a pool of smaller but specialized community LMs, where models collaborate in distinct modes to flexibly support three modes of pluralism: Overton, steerable, and distributional. Modular Pluralism is uniquely compatible with black-box LLMs and offers the modular control of adding new community LMs for previously underrepresented communities. We evaluate Modular Pluralism with six tasks and four datasets featuring questions/instructions with value-laden and perspective-informed responses. Extensive experiments demonstrate that Modular Pluralism advances the three pluralism objectives across six black-box and open-source LLMs. Further analysis reveals that LLMs are generally faithful to the inputs from smaller community LLMs, allowing seamless patching by adding a new community LM to better cover previously underrepresented communities.
[ "Feng, Shangbin", "Sorensen, Taylor", "Liu, Yuhan", "Fisher, Jillian", "Park, Chan Young", "Choi, Yejin", "Tsvetkov, Yulia" ]
Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration
emnlp-main.240
Poster
2406.15951
[ "https://github.com/BunsenFeng/modular_pluralism" ]
https://huggingface.co/papers/2406.15951
0
0
0
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.241.bib
https://aclanthology.org/2024.emnlp-main.241/
@inproceedings{fisher-etal-2024-styleremix, title = "{S}tyle{R}emix: Interpretable Authorship Obfuscation via Distillation and Perturbation of Style Elements", author = "Fisher, Jillian and Hallinan, Skyler and Lu, Ximing and Gordon, Mitchell L and Harchaoui, Zaid and Choi, Yejin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.241", pages = "4172--4206", abstract = "Authorship obfuscation, rewriting a text to intentionally obscure the identity of the author, is important yet challenging. Current methods using large language models (LLMs) lack interpretability and controllability, often ignoring author-specific stylistic features, resulting in less robust performance overall.To address this, we develop StyleRemix, an adaptive and interpretable obfuscation method that perturbs specific, fine-grained style elements of the original input text. StyleRemix uses pre-trained Low Rank Adaptation (LoRA) modules to rewrite inputs along various stylistic axes (e.g., formality, length) while maintaining low computational costs. StyleRemix outperforms state-of-the-art baselines and much larger LLMs on an array of domains on both automatic and human evaluation.Additionally, we release AuthorMix, a large set of 30K high-quality, long-form texts from a diverse set of 14 authors and 4 domains, and DiSC, a parallel corpus of 1,500 texts spanning seven style axes in 16 unique directions.", }
Authorship obfuscation, rewriting a text to intentionally obscure the identity of the author, is important yet challenging. Current methods using large language models (LLMs) lack interpretability and controllability, often ignoring author-specific stylistic features, resulting in less robust performance overall. To address this, we develop StyleRemix, an adaptive and interpretable obfuscation method that perturbs specific, fine-grained style elements of the original input text. StyleRemix uses pre-trained Low Rank Adaptation (LoRA) modules to rewrite inputs along various stylistic axes (e.g., formality, length) while maintaining low computational costs. StyleRemix outperforms state-of-the-art baselines and much larger LLMs on an array of domains on both automatic and human evaluation. Additionally, we release AuthorMix, a large set of 30K high-quality, long-form texts from a diverse set of 14 authors and 4 domains, and DiSC, a parallel corpus of 1,500 texts spanning seven style axes in 16 unique directions.
[ "Fisher, Jillian", "Hallinan, Skyler", "Lu, Ximing", "Gordon, Mitchell L", "Harchaoui, Zaid", "Choi, Yejin" ]
StyleRemix: Interpretable Authorship Obfuscation via Distillation and Perturbation of Style Elements
emnlp-main.241
Poster
2408.15666
[ "https://github.com/jfisher52/StyleRemix" ]
https://huggingface.co/papers/2408.15666
4
9
2
6
[ "hallisky/lora-sarcasm-more-llama-3-8b", "hallisky/lora-function-more-llama-3-8b", "hallisky/lora-type-persuasive-llama-3-8b", "hallisky/lora-length-long-llama-3-8b", "hallisky/lora-formality-informal-llama-3-8b", "hallisky/lora-voice-active-llama-3-8b", "hallisky/lora-formality-formal-llama-3-8b", "hallisky/lora-type-expository-llama-3-8b", "hallisky/lora-grade-highschool-llama-3-8b", "hallisky/lora-type-descriptive-llama-3-8b", "hallisky/lora-sarcasm-less-llama-3-8b", "hallisky/lora-voice-passive-llama-3-8b", "hallisky/lora-length-short-llama-3-8b", "hallisky/lora-type-narrative-llama-3-8b", "hallisky/lora-function-less-llama-3-8b" ]
[ "hallisky/AuthorMix", "hallisky/DiSC-subset-new-prompts", "hallisky/DiSC" ]
[ "hallisky/StyleRemix" ]
[ "hallisky/lora-sarcasm-more-llama-3-8b", "hallisky/lora-function-more-llama-3-8b", "hallisky/lora-type-persuasive-llama-3-8b", "hallisky/lora-length-long-llama-3-8b", "hallisky/lora-formality-informal-llama-3-8b", "hallisky/lora-voice-active-llama-3-8b", "hallisky/lora-formality-formal-llama-3-8b", "hallisky/lora-type-expository-llama-3-8b", "hallisky/lora-grade-highschool-llama-3-8b", "hallisky/lora-type-descriptive-llama-3-8b", "hallisky/lora-sarcasm-less-llama-3-8b", "hallisky/lora-voice-passive-llama-3-8b", "hallisky/lora-length-short-llama-3-8b", "hallisky/lora-type-narrative-llama-3-8b", "hallisky/lora-function-less-llama-3-8b" ]
[ "hallisky/AuthorMix", "hallisky/DiSC-subset-new-prompts", "hallisky/DiSC" ]
[ "hallisky/StyleRemix" ]
1
https://aclanthology.org/2024.emnlp-main.242.bib
https://aclanthology.org/2024.emnlp-main.242/
@inproceedings{zhao-etal-2024-couldve, title = "{I} Could{'}ve Asked That: Reformulating Unanswerable Questions", author = "Zhao, Wenting and Gao, Ge and Cardie, Claire and Rush, Alexander M", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.242", pages = "4207--4220", abstract = "When seeking information from unfamiliar documents, users frequently pose questions that cannot be answered by the documents. While existing large language models (LLMs) identify these unanswerable questions, they do not assist users in reformulating their questions, thereby reducing their overall utility. We curate CouldAsk, an evaluation benchmark composed of existing and new datasets for document-grounded question answering, specifically designed to study reformulating unanswerable questions. We evaluate state-of-the-art open-source and proprietary LLMs on CouldAsk. The results demonstrate the limited capabilities of these models in reformulating questions. Specifically, GPT-4 and Llama2-7B successfully reformulate questions only 26{\%} and 12{\%} of the time, respectively. Error analysis shows that 62{\%} of the unsuccessful reformulations stem from the models merely rephrasing the questions or even generating identical questions. We publicly release the benchmark and the code to reproduce the experiments.", }
When seeking information from unfamiliar documents, users frequently pose questions that cannot be answered by the documents. While existing large language models (LLMs) identify these unanswerable questions, they do not assist users in reformulating their questions, thereby reducing their overall utility. We curate CouldAsk, an evaluation benchmark composed of existing and new datasets for document-grounded question answering, specifically designed to study reformulating unanswerable questions. We evaluate state-of-the-art open-source and proprietary LLMs on CouldAsk. The results demonstrate the limited capabilities of these models in reformulating questions. Specifically, GPT-4 and Llama2-7B successfully reformulate questions only 26{\%} and 12{\%} of the time, respectively. Error analysis shows that 62{\%} of the unsuccessful reformulations stem from the models merely rephrasing the questions or even generating identical questions. We publicly release the benchmark and the code to reproduce the experiments.
[ "Zhao, Wenting", "Gao, Ge", "Cardie, Claire", "Rush, Alex", "er M" ]
I Could've Asked That: Reformulating Unanswerable Questions
emnlp-main.242
Poster
2407.17469
[ "https://github.com/wenting-zhao/couldask" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.243.bib
https://aclanthology.org/2024.emnlp-main.243/
@inproceedings{morabito-etal-2024-stop, title = "{STOP}! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions", author = "Morabito, Robert and Madhusudan, Sangmitra and McDonald, Tyler and Emami, Ali", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.243", pages = "4221--4243", abstract = "Mitigating explicit and implicit biases in Large Language Models (LLMs) has become a critical focus in the field of natural language processing. However, many current methodologies evaluate scenarios in isolation, without considering the broader context or the spectrum of potential biases within each situation. To address this, we introduce the Sensitivity Testing on Offensive Progressions (STOP) dataset, which includes 450 offensive progressions containing 2,700 unique sentences of varying severity that progressively escalate from less to more explicitly offensive. Covering a broad spectrum of 9 demographics and 46 sub-demographics, STOP ensures inclusivity and comprehensive coverage. We evaluate several leading closed- and open-source models, including GPT-4, Mixtral, and Llama 3. Our findings reveal that even the best-performing models detect bias inconsistently, with success rates ranging from 19.3{\%} to 69.8{\%}. Furthermore, we demonstrate how aligning models with human judgments on STOP can improve model answer rates on sensitive tasks such as BBQ, StereoSet, and CrowS-Pairs by up to 191{\%}, while maintaining or even improving performance. STOP presents a novel framework for assessing the complex nature of biases in LLMs, which will enable more effective bias mitigation strategies and facilitates the creation of fairer language models.", }
Mitigating explicit and implicit biases in Large Language Models (LLMs) has become a critical focus in the field of natural language processing. However, many current methodologies evaluate scenarios in isolation, without considering the broader context or the spectrum of potential biases within each situation. To address this, we introduce the Sensitivity Testing on Offensive Progressions (STOP) dataset, which includes 450 offensive progressions containing 2,700 unique sentences of varying severity that progressively escalate from less to more explicitly offensive. Covering a broad spectrum of 9 demographics and 46 sub-demographics, STOP ensures inclusivity and comprehensive coverage. We evaluate several leading closed- and open-source models, including GPT-4, Mixtral, and Llama 3. Our findings reveal that even the best-performing models detect bias inconsistently, with success rates ranging from 19.3{\%} to 69.8{\%}. Furthermore, we demonstrate how aligning models with human judgments on STOP can improve model answer rates on sensitive tasks such as BBQ, StereoSet, and CrowS-Pairs by up to 191{\%}, while maintaining or even improving performance. STOP presents a novel framework for assessing the complex nature of biases in LLMs, which will enable more effective bias mitigation strategies and facilitate the creation of fairer language models.
[ "Morabito, Robert", "Madhusudan, Sangmitra", "McDonald, Tyler", "Emami, Ali" ]
STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions
emnlp-main.243
Poster
2409.13843
[ "https://github.com/Robert-Morabito/STOP" ]
https://huggingface.co/papers/2409.13843
0
0
0
4
[]
[ "Robert-Morabito/STOP" ]
[]
[]
[ "Robert-Morabito/STOP" ]
[]
1
https://aclanthology.org/2024.emnlp-main.244.bib
https://aclanthology.org/2024.emnlp-main.244/
@inproceedings{potter-etal-2024-hidden, title = "Hidden Persuaders: {LLM}s{'} Political Leaning and Their Influence on Voters", author = "Potter, Yujin and Lai, Shiyang and Kim, Junsol and Evans, James and Song, Dawn", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.244", pages = "4244--4275", abstract = "Do LLMs have political leanings and are LLMs able to shift our political views? This paper explores these questions in the context of the 2024 U.S. presidential election. Through a voting simulation, we demonstrate 18 open-weight and closed-source LLMs{'} political preference for Biden over Trump. We show how Biden-leaning becomes more pronounced in instruction-tuned and reinforced models compared to their base versions by analyzing their responses to political questions related to the two nominees. We further explore the potential impact of LLMs on voter choice by recruiting 935 U.S. registered voters. Participants interacted with LLMs (Claude-3, Llama-3, and GPT-4) over five exchanges. Intriguingly, although LLMs were not asked to persuade users to support Biden, about 20{\%} of Trump supporters reduced their support for Trump after LLM interaction. This result is noteworthy given that many studies on the persuasiveness of political campaigns have shown minimal effects in presidential elections. Many users also expressed a desire for further interaction with LLMs on political subjects. Further research on how LLMs affect users{'} political views is required, as their use becomes more widespread.", }
Do LLMs have political leanings and are LLMs able to shift our political views? This paper explores these questions in the context of the 2024 U.S. presidential election. Through a voting simulation, we demonstrate 18 open-weight and closed-source LLMs{'} political preference for Biden over Trump. We show how Biden-leaning becomes more pronounced in instruction-tuned and reinforced models compared to their base versions by analyzing their responses to political questions related to the two nominees. We further explore the potential impact of LLMs on voter choice by recruiting 935 U.S. registered voters. Participants interacted with LLMs (Claude-3, Llama-3, and GPT-4) over five exchanges. Intriguingly, although LLMs were not asked to persuade users to support Biden, about 20{\%} of Trump supporters reduced their support for Trump after LLM interaction. This result is noteworthy given that many studies on the persuasiveness of political campaigns have shown minimal effects in presidential elections. Many users also expressed a desire for further interaction with LLMs on political subjects. Further research on how LLMs affect users{'} political views is required, as their use becomes more widespread.
[ "Potter, Yujin", "Lai, Shiyang", "Kim, Junsol", "Evans, James", "Song, Dawn" ]
Hidden Persuaders: LLMs' Political Leaning and Their Influence on Voters
emnlp-main.244
Poster
2410.24190
[ "https://github.com/sunblaze-ucb/political_leaning_RepE" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.245.bib
https://aclanthology.org/2024.emnlp-main.245/
@inproceedings{jia-etal-2024-soul, title = "{SOUL}: Unlocking the Power of Second-Order Optimization for {LLM} Unlearning", author = "Jia, Jinghan and Zhang, Yihua and Zhang, Yimeng and Liu, Jiancheng and Runwal, Bharat and Diffenderfer, James and Kailkhura, Bhavya and Liu, Sijia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.245", pages = "4276--4292", abstract = "Large Language Models (LLMs) have highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims at removing undesired data influences and associated model capabilities without compromising utility beyond the scope of unlearning. While interest in studying LLM unlearning is growing, the impact of the optimizer choice for LLM unlearning remains unexplored. In this work, we shed light on the significance of optimizer selection in LLM unlearning for the first time, establishing a clear connection between second-order optimization and influence unlearning (a classical approach using influence functions to update the model for data influence removal). This insight propels us to develop a second-order optimization-based LLM unlearning framework, termed Second-Order UnLearning (SOUL), which extends the static, one-shot model update using influence unlearning to a dynamic, iterative unlearning process. Our extensive experiments show that SOUL consistently outperforms conventional first-order methods across various unlearning tasks, models, and metrics, indicating that second-order optimization offers an effective and broadly applicable solution for LLM unlearning.", }
Large Language Models (LLMs) have highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims at removing undesired data influences and associated model capabilities without compromising utility beyond the scope of unlearning. While interest in studying LLM unlearning is growing, the impact of the optimizer choice for LLM unlearning remains unexplored. In this work, we shed light on the significance of optimizer selection in LLM unlearning for the first time, establishing a clear connection between second-order optimization and influence unlearning (a classical approach using influence functions to update the model for data influence removal). This insight propels us to develop a second-order optimization-based LLM unlearning framework, termed Second-Order UnLearning (SOUL), which extends the static, one-shot model update using influence unlearning to a dynamic, iterative unlearning process. Our extensive experiments show that SOUL consistently outperforms conventional first-order methods across various unlearning tasks, models, and metrics, indicating that second-order optimization offers an effective and broadly applicable solution for LLM unlearning.
[ "Jia, Jinghan", "Zhang, Yihua", "Zhang, Yimeng", "Liu, Jiancheng", "Runwal, Bharat", "Diffenderfer, James", "Kailkhura, Bhavya", "Liu, Sijia" ]
SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning
emnlp-main.245
Poster
2404.18239
[ "https://github.com/optml-group/soul" ]
https://huggingface.co/papers/2404.18239
2
0
0
8
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.246.bib
https://aclanthology.org/2024.emnlp-main.246/
@inproceedings{hu-etal-2024-reasoning, title = "When Reasoning Meets Information Aggregation: A Case Study with Sports Narratives", author = "Hu, Yebowen and Song, Kaiqiang and Cho, Sangwoo and Wang, Xiaoyang and Yao, Wenlin and Foroosh, Hassan and Yu, Dong and Liu, Fei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.246", pages = "4293--4308", abstract = "Reasoning is most powerful when an LLM accurately aggregates relevant information. We examine the critical role of information aggregation in reasoning by requiring the LLM to analyze sports narratives. To succeed at this task, an LLM must infer points from actions, identify related entities, attribute points accurately to players and teams, and compile key statistics to draw conclusions. We conduct comprehensive experiments with real NBA basketball data and present SportsGen, a new method to synthesize game narratives. By synthesizing data, we can rigorously evaluate LLMs{'} reasoning capabilities under complex scenarios with varying narrative lengths and density of information. Our findings show that most models, including GPT-4o, often fail to accurately aggregate basketball scores due to frequent scoring patterns. Open-source models like Llama-3 further suffer from significant score hallucinations. Finally, the effectiveness of reasoning is influenced by narrative complexity, information density, and domain-specific terms, highlighting the challenges in analytical reasoning tasks.", }
Reasoning is most powerful when an LLM accurately aggregates relevant information. We examine the critical role of information aggregation in reasoning by requiring the LLM to analyze sports narratives. To succeed at this task, an LLM must infer points from actions, identify related entities, attribute points accurately to players and teams, and compile key statistics to draw conclusions. We conduct comprehensive experiments with real NBA basketball data and present SportsGen, a new method to synthesize game narratives. By synthesizing data, we can rigorously evaluate LLMs{'} reasoning capabilities under complex scenarios with varying narrative lengths and density of information. Our findings show that most models, including GPT-4o, often fail to accurately aggregate basketball scores due to frequent scoring patterns. Open-source models like Llama-3 further suffer from significant score hallucinations. Finally, the effectiveness of reasoning is influenced by narrative complexity, information density, and domain-specific terms, highlighting the challenges in analytical reasoning tasks.
[ "Hu, Yebowen", "Song, Kaiqiang", "Cho, Sangwoo", "Wang, Xiaoyang", "Yao, Wenlin", "Foroosh, Hassan", "Yu, Dong", "Liu, Fei" ]
When Reasoning Meets Information Aggregation: A Case Study with Sports Narratives
emnlp-main.246
Poster
2406.12084
[ "https://github.com/yebowenhu/sportsgen" ]
https://huggingface.co/papers/2406.12084
1
0
0
8
[]
[ "huuuyeah/SportsGen" ]
[]
[]
[ "huuuyeah/SportsGen" ]
[]
1
https://aclanthology.org/2024.emnlp-main.247.bib
https://aclanthology.org/2024.emnlp-main.247/
@inproceedings{kim-etal-2024-analysis, title = "An Analysis of Multilingual {FA}ct{S}core", author = "Kim, Vu Trong and Krumdick, Michael and Reddy, Varshini and Dernoncourt, Franck and Lai, Viet Dac", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.247", pages = "4309--4333", abstract = "FActScore has gained popularity as a metric to estimate the factuality of long-form texts generated by Large Language Models (LLMs) in English. However, there has not been any work in studying the behavior of FActScore in other languages. This paper studies the limitations of each component in the four-component pipeline of FActScore in the multilingual setting. We introduce a new dataset for FActScore on texts generated by strong multilingual LLMs. Our evaluation shows that LLMs exhibit distinct behaviors in both fact extraction and fact scoring tasks. No LLM produces consistent and reliable FActScore across languages of varying levels of resources. We also find that the knowledge source plays an important role in the quality of the estimated FActScore. Using Wikipedia as the knowledge source may hinder the true FActScore of long-form text due to its limited coverage in medium- and low-resource languages. We also incorporate 3 mitigations to our knowledge source that ultimately improve FActScore estimation across all languages.", }
FActScore has gained popularity as a metric to estimate the factuality of long-form texts generated by Large Language Models (LLMs) in English. However, there has not been any work in studying the behavior of FActScore in other languages. This paper studies the limitations of each component in the four-component pipeline of FActScore in the multilingual setting. We introduce a new dataset for FActScore on texts generated by strong multilingual LLMs. Our evaluation shows that LLMs exhibit distinct behaviors in both fact extraction and fact scoring tasks. No LLM produces consistent and reliable FActScore across languages of varying levels of resources. We also find that the knowledge source plays an important role in the quality of the estimated FActScore. Using Wikipedia as the knowledge source may hinder the true FActScore of long-form text due to its limited coverage in medium- and low-resource languages. We also incorporate 3 mitigations to our knowledge source that ultimately improve FActScore estimation across all languages.
[ "Kim, Vu Trong", "Krumdick, Michael", "Reddy, Varshini", "Dernoncourt, Franck", "Lai, Viet Dac" ]
An Analysis of Multilingual FActScore
emnlp-main.247
Poster
2406.19415
[ "" ]
https://huggingface.co/papers/2406.19415
2
1
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.248.bib
https://aclanthology.org/2024.emnlp-main.248/
@inproceedings{kim-etal-2024-prometheus, title = "Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models", author = "Kim, Seungone and Suk, Juyoung and Longpre, Shayne and Lin, Bill Yuchen and Shin, Jamin and Welleck, Sean and Neubig, Graham and Lee, Moontae and Lee, Kyungjae and Seo, Minjoon", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.248", pages = "4334--4353", abstract = "Proprietary LMs such as GPT-4 are often employed to assess the quality of responses from various LMs. However, concerns including transparency, controllability, and affordability strongly motivate the development of open-source LMs specialized in evaluations. On the other hand, existing open evaluator LMs exhibit critical shortcomings: 1) they issue scores that significantly diverge from those assigned by humans, and 2) they lack the flexibility to perform both direct assessment and pairwise ranking, the two most prevalent forms of assessment. Additionally, they do not possess the ability to evaluate based on custom evaluation criteria, focusing instead on general attributes like helpfulness and harmlessness. To address these issues, we introduce Prometheus 2, a more powerful evaluator LM than its predecessor that closely mirrors human and GPT-4 judgements. Moreover, it is capable of processing both direct assessment and pair-wise ranking formats grouped with a user-defined evaluation criteria. On four direct assessment benchmarks and four pairwise ranking benchmarks, Prometheus 2 scores the highest correlation and agreement with humans and proprietary LM judges among all tested open evaluator LMs. Our models, code, and data are all publicly available.", }
Proprietary LMs such as GPT-4 are often employed to assess the quality of responses from various LMs. However, concerns including transparency, controllability, and affordability strongly motivate the development of open-source LMs specialized in evaluations. On the other hand, existing open evaluator LMs exhibit critical shortcomings: 1) they issue scores that significantly diverge from those assigned by humans, and 2) they lack the flexibility to perform both direct assessment and pairwise ranking, the two most prevalent forms of assessment. Additionally, they do not possess the ability to evaluate based on custom evaluation criteria, focusing instead on general attributes like helpfulness and harmlessness. To address these issues, we introduce Prometheus 2, a more powerful evaluator LM than its predecessor that closely mirrors human and GPT-4 judgements. Moreover, it is capable of processing both direct assessment and pairwise ranking formats grouped with user-defined evaluation criteria. On four direct assessment benchmarks and four pairwise ranking benchmarks, Prometheus 2 scores the highest correlation and agreement with humans and proprietary LM judges among all tested open evaluator LMs. Our models, code, and data are all publicly available.
[ "Kim, Seungone", "Suk, Juyoung", "Longpre, Shayne", "Lin, Bill Yuchen", "Shin, Jamin", "Welleck, Sean", "Neubig, Graham", "Lee, Moontae", "Lee, Kyungjae", "Seo, Minjoon" ]
Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
emnlp-main.248
Poster
2405.01535
[ "https://github.com/prometheus-eval/prometheus-eval" ]
https://huggingface.co/papers/2405.01535
7
116
4
10
[ "prometheus-eval/prometheus-7b-v2.0", "prometheus-eval/prometheus-8x7b-v2.0", "vsevolodl/prometheus-7b-v2.0-GGUF", "vsevolodl/prometheus-8x7b-v2.0-GGUF", "prometheus-eval/prometheus-7b-v2.0-GGUF", "chargoddard/prometheus-2-llama-3-8b", "AlekseiPravdin/prometheus-7b-v2_0-gguf", "RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-4bits", "RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf", "thesven/prometheus-7b-v2.0-GPTQ", "zli12321/prometheus2-2B", "zli12321/prometheus2-0.5B", "zli12321/prometheus2-3.8B", "zli12321/prometheus2-llama3.1-8B", "zli12321/prometheus2-560M", "zli12321/prometheus2-1.1B", "RichardErkhov/chargoddard_-_prometheus-2-llama-3-8b-gguf", "avacaondata/prometheus-2-llama3.1-8b-fixed", "RichardErkhov/prometheus-eval_-_prometheus-8x7b-v2.0-gguf", "mav23/prometheus-7b-v2.0-GGUF" ]
[ "prometheus-eval/Preference-Collection", "CharlieJi/HelpSteer2_prometheus", "flowaicom/Feedback-Bench" ]
[ "featherless-ai/try-this-model", "Granther/try-this-model", "AtlaAI/judge-arena", "Darok/Featherless-Feud", "emekaboris/try-this-model", "SC999/NV_Nemotron", "burtenshaw/disticleaner", "iishanbhandarii/llm_eval", "youns2001/vsevolodl-prometheus-7b-v2.0-GGUF" ]
[ "prometheus-eval/prometheus-7b-v2.0", "prometheus-eval/prometheus-8x7b-v2.0", "vsevolodl/prometheus-7b-v2.0-GGUF", "vsevolodl/prometheus-8x7b-v2.0-GGUF", "prometheus-eval/prometheus-7b-v2.0-GGUF", "chargoddard/prometheus-2-llama-3-8b", "AlekseiPravdin/prometheus-7b-v2_0-gguf", "RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-4bits", "RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf", "thesven/prometheus-7b-v2.0-GPTQ", "zli12321/prometheus2-2B", "zli12321/prometheus2-0.5B", "zli12321/prometheus2-3.8B", "zli12321/prometheus2-llama3.1-8B", "zli12321/prometheus2-560M", "zli12321/prometheus2-1.1B", "RichardErkhov/chargoddard_-_prometheus-2-llama-3-8b-gguf", "avacaondata/prometheus-2-llama3.1-8b-fixed", "RichardErkhov/prometheus-eval_-_prometheus-8x7b-v2.0-gguf", "mav23/prometheus-7b-v2.0-GGUF" ]
[ "prometheus-eval/Preference-Collection", "CharlieJi/HelpSteer2_prometheus", "flowaicom/Feedback-Bench" ]
[ "featherless-ai/try-this-model", "Granther/try-this-model", "AtlaAI/judge-arena", "Darok/Featherless-Feud", "emekaboris/try-this-model", "SC999/NV_Nemotron", "burtenshaw/disticleaner", "iishanbhandarii/llm_eval", "youns2001/vsevolodl-prometheus-7b-v2.0-GGUF" ]
1
https://aclanthology.org/2024.emnlp-main.249.bib
https://aclanthology.org/2024.emnlp-main.249/
@inproceedings{han-etal-2024-rag, title = "{RAG}-{QA} Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering", author = "Han, Rujun and Zhang, Yuhao and Qi, Peng and Xu, Yumo and Wang, Jenyuan and Liu, Lan and Wang, William Yang and Min, Bonan and Castelli, Vittorio", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.249", pages = "4354--4374", abstract = "Question answering based on retrieval augmented generation (RAG-QA) is an important research topic in NLP and has a wide range of real-world applications. However, most existing datasets for this task are either constructed using a single source corpus or consist of short extractive answers, which fall short of evaluating large language model (LLM) based RAG-QA systems on cross-domain generalization. To address these limitations, we create Long-form RobustQA (LFRQA), a new dataset comprising human-written long-form answers that integrate short extractive answers from multiple documents into a single, coherent narrative, covering 26K queries and large corpora across seven different domains. We further propose RAG-QA Arena by directly comparing model-generated answers against LFRQA{'}s answers using LLMs as evaluators. We show via extensive experiments that RAG-QA Arena and human judgments on answer quality are highly correlated. Moreover, only 41.3{\%} of the most competitive LLM{'}s answers are preferred to LFRQA{'}s answers, demonstrating RAG-QA Arena as a challenging evaluation platform for future research.", }
Question answering based on retrieval augmented generation (RAG-QA) is an important research topic in NLP and has a wide range of real-world applications. However, most existing datasets for this task are either constructed using a single source corpus or consist of short extractive answers, which fall short of evaluating large language model (LLM) based RAG-QA systems on cross-domain generalization. To address these limitations, we create Long-form RobustQA (LFRQA), a new dataset comprising human-written long-form answers that integrate short extractive answers from multiple documents into a single, coherent narrative, covering 26K queries and large corpora across seven different domains. We further propose RAG-QA Arena by directly comparing model-generated answers against LFRQA{'}s answers using LLMs as evaluators. We show via extensive experiments that RAG-QA Arena and human judgments on answer quality are highly correlated. Moreover, only 41.3{\%} of the most competitive LLM{'}s answers are preferred to LFRQA{'}s answers, demonstrating RAG-QA Arena as a challenging evaluation platform for future research.
[ "Han, Rujun", "Zhang, Yuhao", "Qi, Peng", "Xu, Yumo", "Wang, Jenyuan", "Liu, Lan", "Wang, William Yang", "Min, Bonan", "Castelli, Vittorio" ]
RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering
emnlp-main.249
Poster
2407.13998
[ "https://github.com/awslabs/rag-qa-arena" ]
https://huggingface.co/papers/2407.13998
1
0
0
9
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.250.bib
https://aclanthology.org/2024.emnlp-main.250/
@inproceedings{zhuang-etal-2024-promptreps, title = "{P}rompt{R}eps: Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval", author = "Zhuang, Shengyao and Ma, Xueguang and Koopman, Bevan and Lin, Jimmy and Zuccon, Guido", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.250", pages = "4375--4391", abstract = "Utilizing large language models (LLMs) for zero-shot document ranking is done in one of two ways: (1) prompt-based re-ranking methods, which require no further training but are only feasible for re-ranking a handful of candidate documents due to computational costs; and (2) unsupervised contrastive trained dense retrieval methods, which can retrieve relevant documents from the entire corpus but require a large amount of paired text data for contrastive training.In this paper, we propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus. Our method only requires prompts to guide an LLM to generate query and document representations for effective document retrieval. Specifically, we prompt the LLMs to represent a given text using a single word, and then use the last token{'}s hidden states and the corresponding logits associated with the prediction of the next token to construct a hybrid document retrieval system. The retrieval system harnesses both dense text embedding and sparse bag-of-words representations given by the LLM.Our experimental evaluation on the MSMARCO, TREC deep learning and BEIR zero-shot document retrieval datasets illustrates that this simple prompt-based LLM retrieval method can achieve a similar or higher retrieval effectiveness than state-of-the-art LLM embedding methods that are trained with large amounts of unsupervised data, especially when using a larger LLM.", }
Utilizing large language models (LLMs) for zero-shot document ranking is done in one of two ways: (1) prompt-based re-ranking methods, which require no further training but are only feasible for re-ranking a handful of candidate documents due to computational costs; and (2) unsupervised contrastive trained dense retrieval methods, which can retrieve relevant documents from the entire corpus but require a large amount of paired text data for contrastive training. In this paper, we propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus. Our method only requires prompts to guide an LLM to generate query and document representations for effective document retrieval. Specifically, we prompt the LLMs to represent a given text using a single word, and then use the last token{'}s hidden states and the corresponding logits associated with the prediction of the next token to construct a hybrid document retrieval system. The retrieval system harnesses both dense text embedding and sparse bag-of-words representations given by the LLM. Our experimental evaluation on the MSMARCO, TREC deep learning and BEIR zero-shot document retrieval datasets illustrates that this simple prompt-based LLM retrieval method can achieve a similar or higher retrieval effectiveness than state-of-the-art LLM embedding methods that are trained with large amounts of unsupervised data, especially when using a larger LLM.
[ "Zhuang, Shengyao", "Ma, Xueguang", "Koopman, Bevan", "Lin, Jimmy", "Zuccon, Guido" ]
PromptReps: Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval
emnlp-main.250
Poster
2404.18424
[ "https://github.com/ielab/promptreps" ]
https://huggingface.co/papers/2404.18424
4
1
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.251.bib
https://aclanthology.org/2024.emnlp-main.251/
@inproceedings{ahia-etal-2024-voices, title = "Voices Unheard: {NLP} Resources and Models for {Y}or{\`u}b{\'a} Regional Dialects", author = "Ahia, Orevaoghene and Aremu, Anuoluwapo and Abagyan, Diana and Gonen, Hila and Adelani, David Ifeoluwa and Abolade, Daud and Smith, Noah A. and Tsvetkov, Yulia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.251", pages = "4392--4409", abstract = "Yoruba{---}an African language with roughly 47 million speakers{---}encompasses a continuum with several dialects. Recent efforts to develop NLP technologies for African languages have focused on their standard dialects, resulting in disparities for dialects and varieties for which there are little to no resources or tools. We take steps towards bridging this gap by introducing a new high-quality parallel text and speech corpus; YORULECT across three domains and four regional yoruba dialects. To develop this corpus, we engaged native speakers, traveling to communities where these dialects are spoken, to collect text and speech data. Using our newly created corpus, we conducted extensive experiments on (text) machine translation, automatic speech recognition, and speech-to-text translation. Our results reveal substantial performance disparities between standard yoruba and the other dialects across all tasks. However, we also show that with dialect-adaptive finetuning, we are able to narrow this gap. We believe our dataset and experimental analysis will contribute greatly to developing NLP tools for Yoruba and its dialects, and potentially for other African languages, by improving our understanding of existing challenges and offering a high-quality dataset for further development. We will release YORULECT dataset and models publicly under an open license.", }
Yoruba{---}an African language with roughly 47 million speakers{---}encompasses a continuum with several dialects. Recent efforts to develop NLP technologies for African languages have focused on their standard dialects, resulting in disparities for dialects and varieties for which there are little to no resources or tools. We take steps towards bridging this gap by introducing a new high-quality parallel text and speech corpus; YORULECT across three domains and four regional yoruba dialects. To develop this corpus, we engaged native speakers, traveling to communities where these dialects are spoken, to collect text and speech data. Using our newly created corpus, we conducted extensive experiments on (text) machine translation, automatic speech recognition, and speech-to-text translation. Our results reveal substantial performance disparities between standard yoruba and the other dialects across all tasks. However, we also show that with dialect-adaptive finetuning, we are able to narrow this gap. We believe our dataset and experimental analysis will contribute greatly to developing NLP tools for Yoruba and its dialects, and potentially for other African languages, by improving our understanding of existing challenges and offering a high-quality dataset for further development. We will release YORULECT dataset and models publicly under an open license.
[ "Ahia, Orevaoghene", "Aremu, Anuoluwapo", "Abagyan, Diana", "Gonen, Hila", "Adelani, David Ifeoluwa", "Abolade, Daud", "Smith, Noah A.", "Tsvetkov, Yulia" ]
Voices Unheard: NLP Resources and Models for Yorùbá Regional Dialects
emnlp-main.251
Poster
2406.19564
[ "https://github.com/orevaahia/yorulect" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.252.bib
https://aclanthology.org/2024.emnlp-main.252/
@inproceedings{byun-etal-2024-ares, title = "{ARES}: Alternating Reinforcement Learning and Supervised Fine-Tuning for Enhanced Multi-Modal Chain-of-Thought Reasoning Through Diverse {AI} Feedback", author = "Byun, Ju-Seung and Chun, Jiyun and Kil, Jihyung and Perrault, Andrew", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.252", pages = "4410--4430", abstract = "Large Multimodal Models (LMMs) excel at comprehending human instructions and demonstrate remarkable results across a broad spectrum of tasks. Reinforcement Learning from Human Feedback (RLHF) and AI Feedback (RLAIF) further refine LLMs by aligning them with specific preferences. These methods primarily use ranking-based feedback for entire generations. With advanced AI models (Teacher), such as GPT-4 and Claude 3 Opus, we can request various types of detailed feedback that are expensive for humans to provide. We propose a two-stage algorithm ARES that Alternates REinforcement Learning (RL) and Supervised Fine-Tuning (SFT). First, we ask the Teacher to score how much each sentence contributes to solving the problem in a Chain-of-Thought (CoT). This sentence-level feedback allows us to consider individual valuable segments, providing more granular rewards for the RL procedure. Second, we ask the Teacher to correct wrong reasoning after the RL stage. The RL procedure requires substantial hyperparameter tuning and often generates errors such as repetitive words and incomplete sentences. With correction feedback, we stabilize the RL fine-tuned model through SFT. We conduct experiments on the multi-modal datasets ScienceQA and A-OKVQA to demonstrate the effectiveness of our proposal. The ARES rationale achieves around 70{\%} win rate compared to baseline models judged by GPT-4o. Additionally, we observe that the improved rationale reasoning leads to a 2.5{\%} increase in inference answer accuracy on average for the multi-modal datasets.", }
Large Multimodal Models (LMMs) excel at comprehending human instructions and demonstrate remarkable results across a broad spectrum of tasks. Reinforcement Learning from Human Feedback (RLHF) and AI Feedback (RLAIF) further refine LLMs by aligning them with specific preferences. These methods primarily use ranking-based feedback for entire generations. With advanced AI models (Teacher), such as GPT-4 and Claude 3 Opus, we can request various types of detailed feedback that are expensive for humans to provide. We propose a two-stage algorithm ARES that Alternates REinforcement Learning (RL) and Supervised Fine-Tuning (SFT). First, we ask the Teacher to score how much each sentence contributes to solving the problem in a Chain-of-Thought (CoT). This sentence-level feedback allows us to consider individual valuable segments, providing more granular rewards for the RL procedure. Second, we ask the Teacher to correct wrong reasoning after the RL stage. The RL procedure requires substantial hyperparameter tuning and often generates errors such as repetitive words and incomplete sentences. With correction feedback, we stabilize the RL fine-tuned model through SFT. We conduct experiments on the multi-modal datasets ScienceQA and A-OKVQA to demonstrate the effectiveness of our proposal. The ARES rationale achieves around 70{\%} win rate compared to baseline models judged by GPT-4o. Additionally, we observe that the improved rationale reasoning leads to a 2.5{\%} increase in inference answer accuracy on average for the multi-modal datasets.
[ "Byun, Ju-Seung", "Chun, Jiyun", "Kil, Jihyung", "Perrault, Andrew" ]
ARES: Alternating Reinforcement Learning and Supervised Fine-Tuning for Enhanced Multi-Modal Chain-of-Thought Reasoning Through Diverse AI Feedback
emnlp-main.252
Poster
2407.00087
[ "https://github.com/Amyyyyeah/ARES" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.253.bib
https://aclanthology.org/2024.emnlp-main.253/
@inproceedings{zhang-etal-2024-order, title = "Order of Magnitude Speedups for {LLM} Membership Inference", author = "Zhang, Rongting and Bertran, Martin Andres and Roth, Aaron", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.253", pages = "4431--4443", abstract = "Large Language Models (LLMs) have the promise to revolutionize computing broadly, but their complexity and extensive training data also expose significant privacy vulnerabilities. One of the simplest privacy risks associated with LLMs is their susceptibility to membership inference attacks (MIAs), wherein an adversary aims to determine whether a specific data point was part of the model{'}s training set. Although this is a known risk, state of the art methodologies for MIAs rely on training multiple computationally costly {`}shadow models{'}, making risk evaluation prohibitive for large models. Here we adapt a recent line of work which uses quantile regression to mount membership inference attacks; we extend this work by proposing a low-cost MIA that leverages an ensemble of small quantile regression models to determine if a document belongs to the model{'}s training set or not. We demonstrate the effectiveness of this approach on fine-tuned LLMs of varying families (OPT, Pythia, Llama) and across multiple datasets. Across all scenarios we obtain comparable or improved accuracy compared to state of the art {`}shadow model{'} approaches, with as little as 6{\%} of their computation budget. We demonstrate increased effectiveness across multi-epoch trained target models, and architecture miss-specification robustness, that is, we can mount an effective attack against a model using a different tokenizer and architecture, without requiring knowledge on the target model.", }
Large Language Models (LLMs) have the promise to revolutionize computing broadly, but their complexity and extensive training data also expose significant privacy vulnerabilities. One of the simplest privacy risks associated with LLMs is their susceptibility to membership inference attacks (MIAs), wherein an adversary aims to determine whether a specific data point was part of the model{'}s training set. Although this is a known risk, state-of-the-art methodologies for MIAs rely on training multiple computationally costly {`}shadow models{'}, making risk evaluation prohibitive for large models. Here we adapt a recent line of work which uses quantile regression to mount membership inference attacks; we extend this work by proposing a low-cost MIA that leverages an ensemble of small quantile regression models to determine if a document belongs to the model{'}s training set or not. We demonstrate the effectiveness of this approach on fine-tuned LLMs of varying families (OPT, Pythia, Llama) and across multiple datasets. Across all scenarios we obtain comparable or improved accuracy compared to state-of-the-art {`}shadow model{'} approaches, with as little as 6{\%} of their computation budget. We demonstrate increased effectiveness across multi-epoch trained target models, and architecture misspecification robustness, that is, we can mount an effective attack against a model using a different tokenizer and architecture, without requiring knowledge of the target model.
[ "Zhang, Rongting", "Bertran, Martin Andres", "Roth, Aaron" ]
Order of Magnitude Speedups for LLM Membership Inference
emnlp-main.253
Poster
2409.14513
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.254.bib
https://aclanthology.org/2024.emnlp-main.254/
@inproceedings{fang-etal-2024-vimi, title = "{VIMI}: Grounding Video Generation through Multi-modal Instruction", author = "Fang, Yuwei and Menapace, Willi and Siarohin, Aliaksandr and Chen, Tsai-Shien and Wang, Kuan-Chieh and Skorokhodov, Ivan and Neubig, Graham and Tulyakov, Sergey", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.254", pages = "4444--4456", abstract = "Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt video datasets, resulting in a lack of visual grounding and restricting their versatility and application in multimodal integration. To address this, we construct a large-scale multimodal prompt dataset by employing retrieval methods to pair in-context examples with the given text prompts and then utilize a two-stage training strategy to enable diverse video generation tasks within a model. In the first stage, we propose a multimodal conditional video generation framework for pretraining on these augmented datasets, establishing a foundational model for grounded video generation. Secondly, we fine-tune the model from the first stage on various video generation tasks, incorporating multimodal instructions. This process further refines the model{'}s ability to handle diverse inputs and tasks, ensuring seamless integration of multimodal information. After this two-stage training process, VIMI demonstrates multimodal understanding capabilities, producing contextually rich and personalized videos grounded in the provided inputs, as shown in Figure1. Compared to previous subject-driven video generation methods, our generator can synthesize consistent and temporally coherent videos with large motion while retaining the semantic control. Our generator also achieves state-of-the-art text-to-video generation results on UCF101 benchmark.", }
Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt video datasets, resulting in a lack of visual grounding and restricting their versatility and application in multimodal integration. To address this, we construct a large-scale multimodal prompt dataset by employing retrieval methods to pair in-context examples with the given text prompts and then utilize a two-stage training strategy to enable diverse video generation tasks within a model. In the first stage, we propose a multimodal conditional video generation framework for pretraining on these augmented datasets, establishing a foundational model for grounded video generation. Secondly, we fine-tune the model from the first stage on various video generation tasks, incorporating multimodal instructions. This process further refines the model{'}s ability to handle diverse inputs and tasks, ensuring seamless integration of multimodal information. After this two-stage training process, VIMI demonstrates multimodal understanding capabilities, producing contextually rich and personalized videos grounded in the provided inputs, as shown in Figure1. Compared to previous subject-driven video generation methods, our generator can synthesize consistent and temporally coherent videos with large motion while retaining the semantic control. Our generator also achieves state-of-the-art text-to-video generation results on UCF101 benchmark.
[ "Fang, Yuwei", "Menapace, Willi", "Siarohin, Aliaks", "r", "Chen, Tsai-Shien", "Wang, Kuan-Chieh", "Skorokhodov, Ivan", "Neubig, Graham", "Tulyakov, Sergey" ]
VIMI: Grounding Video Generation through Multi-modal Instruction
emnlp-main.254
Poster
2407.06304
[ "" ]
https://huggingface.co/papers/2407.06304
6
9
1
8
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.255.bib
https://aclanthology.org/2024.emnlp-main.255/
@inproceedings{wang-etal-2024-f2rl, title = "{F}$^2${RL}: Factuality and Faithfulness Reinforcement Learning Framework for Claim-Guided Evidence-Supported Counterspeech Generation", author = "Wang, Haiyang and Pan, Yuchen and Song, Xin and Zhao, Xuechen and Hu, Minghao and Zhou, Bin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.255", pages = "4457--4470", abstract = "Hate speech (HS) on social media exacerbates misinformation and baseless prejudices. Evidence-supported counterspeech (CS) is crucial for correcting misinformation and reducing prejudices through facts. Existing methods for generating evidence-supported CS often lack clear guidance with a core claim for organizing evidence and do not adequately address factuality and faithfulness hallucinations in CS within anti-hate contexts. In this paper, to mitigate the aforementioned, we propose F$^2$RL, a Factuality and Faithfulness Reinforcement Learning framework for generating claim-guided and evidence-supported CS. Firstly, we generate counter-claims based on hate speech and design a self-evaluation mechanism to select the most appropriate one. Secondly, we propose a coarse-to-fine evidence retrieval method. This method initially generates broad queries to ensure the diversity of evidence, followed by carefully reranking the retrieved evidence to ensure its relevance to the claim. Finally, we design a reinforcement learning method with a triplet-based factuality reward model and a multi-aspect faithfulness reward model. The method rewards the generator to encourage greater factuality, more accurate refutation of hate speech, consistency with the claim, and better utilization of evidence. Extensive experiments on three benchmark datasets demonstrate that the proposed framework achieves excellent performance in CS generation, with strong factuality and faithfulness.", }
Hate speech (HS) on social media exacerbates misinformation and baseless prejudices. Evidence-supported counterspeech (CS) is crucial for correcting misinformation and reducing prejudices through facts. Existing methods for generating evidence-supported CS often lack clear guidance with a core claim for organizing evidence and do not adequately address factuality and faithfulness hallucinations in CS within anti-hate contexts. In this paper, to mitigate the aforementioned issues, we propose F$^2$RL, a Factuality and Faithfulness Reinforcement Learning framework for generating claim-guided and evidence-supported CS. Firstly, we generate counter-claims based on hate speech and design a self-evaluation mechanism to select the most appropriate one. Secondly, we propose a coarse-to-fine evidence retrieval method. This method initially generates broad queries to ensure the diversity of evidence, followed by carefully reranking the retrieved evidence to ensure its relevance to the claim. Finally, we design a reinforcement learning method with a triplet-based factuality reward model and a multi-aspect faithfulness reward model. The method rewards the generator to encourage greater factuality, more accurate refutation of hate speech, consistency with the claim, and better utilization of evidence. Extensive experiments on three benchmark datasets demonstrate that the proposed framework achieves excellent performance in CS generation, with strong factuality and faithfulness.
[ "Wang, Haiyang", "Pan, Yuchen", "Song, Xin", "Zhao, Xuechen", "Hu, Minghao", "Zhou, Bin" ]
F^2RL: Factuality and Faithfulness Reinforcement Learning Framework for Claim-Guided Evidence-Supported Counterspeech Generation
emnlp-main.255
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.256.bib
https://aclanthology.org/2024.emnlp-main.256/
@inproceedings{yang-etal-2024-deciphering, title = "Deciphering Rumors: A Multi-Task Learning Approach with Intent-aware Hierarchical Contrastive Learning", author = "Yang, Chang and Zhang, Peng and Gao, Hui and Zhang, Jing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.256", pages = "4471--4483", abstract = "Social networks are rife with noise and misleading information, presenting multifaceted challenges for rumor detection. In this paper, from the perspective of human cognitive subjectivity, we introduce the mining of individual latent intentions and propose a novel multi-task learning framework, the Intent-Aware Rumor Detection Network (IRDNet). IRDNet is designed to discern multi-level rumor semantic features and latent user intentions, addressing the challenges of robustness and key feature mining and alignment that plague existing models. In IRDNet, the multi-level semantic extraction module captures sequential and hierarchical features to generate robust semantic representations. The hierarchical contrastive learning module incorporates two complementary strategies, event-level and intent-level, to establish cognitive anchors that uncover the latent intentions of information disseminators. Event-level contrastive learning employs high-quality data augmentation and adversarial perturbations to enhance model robustness. Intent-level contrastive learning leverages the intent encoder to capture latent intent features and optimize consistency within the same intent while ensuring heterogeneity between different intents to clearly distinguish key features from irrelevant elements. Experimental results demonstrate that IRDNet significantly improves the effectiveness of rumor detection and effectively addresses the challenges present in the field of rumor detection.", }
Social networks are rife with noise and misleading information, presenting multifaceted challenges for rumor detection. In this paper, from the perspective of human cognitive subjectivity, we introduce the mining of individual latent intentions and propose a novel multi-task learning framework, the Intent-Aware Rumor Detection Network (IRDNet). IRDNet is designed to discern multi-level rumor semantic features and latent user intentions, addressing the challenges of robustness and key feature mining and alignment that plague existing models. In IRDNet, the multi-level semantic extraction module captures sequential and hierarchical features to generate robust semantic representations. The hierarchical contrastive learning module incorporates two complementary strategies, event-level and intent-level, to establish cognitive anchors that uncover the latent intentions of information disseminators. Event-level contrastive learning employs high-quality data augmentation and adversarial perturbations to enhance model robustness. Intent-level contrastive learning leverages the intent encoder to capture latent intent features and optimize consistency within the same intent while ensuring heterogeneity between different intents to clearly distinguish key features from irrelevant elements. Experimental results demonstrate that IRDNet significantly improves the effectiveness of rumor detection and effectively addresses the challenges present in the field of rumor detection.
[ "Yang, Chang", "Zhang, Peng", "Gao, Hui", "Zhang, Jing" ]
Deciphering Rumors: A Multi-Task Learning Approach with Intent-aware Hierarchical Contrastive Learning
emnlp-main.256
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.257.bib
https://aclanthology.org/2024.emnlp-main.257/
@inproceedings{zhang-etal-2024-visual, title = "Visual Prompting in {LLM}s for Enhancing Emotion Recognition", author = "Zhang, Qixuan and Wang, Zhifeng and Zhang, Dylan and Niu, Wenjia and Caldwell, Sabrina and Gedeon, Tom and Liu, Yang and Qin, Zhenyue", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.257", pages = "4484--4499", abstract = "Vision Large Language Models (VLLMs) are transforming the intersection of computer vision and natural language processing; however, the potential of using visual prompts for emotion recognition in these models remains largely unexplored and untapped. Traditional methods in VLLMs struggle with spatial localization and often discard valuable global context. We propose a novel Set-of-Vision prompting (SoV) approach that enhances zero-shot emotion recognition by using spatial information, such as bounding boxes and facial landmarks, to mark targets precisely. SoV improves accuracy in face count and emotion categorization while preserving the enriched image context. Through comprehensive experimentation and analysis of recent commercial or open-source VLLMs, we evaluate the SoV model{'}s ability to comprehend facial expressions in natural environments. Our findings demonstrate the effectiveness of integrating spatial visual prompts into VLLMs for improving emotion recognition performance.", }
Vision Large Language Models (VLLMs) are transforming the intersection of computer vision and natural language processing; however, the potential of using visual prompts for emotion recognition in these models remains largely unexplored and untapped. Traditional methods in VLLMs struggle with spatial localization and often discard valuable global context. We propose a novel Set-of-Vision prompting (SoV) approach that enhances zero-shot emotion recognition by using spatial information, such as bounding boxes and facial landmarks, to mark targets precisely. SoV improves accuracy in face count and emotion categorization while preserving the enriched image context. Through comprehensive experimentation and analysis of recent commercial or open-source VLLMs, we evaluate the SoV model{'}s ability to comprehend facial expressions in natural environments. Our findings demonstrate the effectiveness of integrating spatial visual prompts into VLLMs for improving emotion recognition performance.
[ "Zhang, Qixuan", "Wang, Zhifeng", "Zhang, Dylan", "Niu, Wenjia", "Caldwell, Sabrina", "Gedeon, Tom", "Liu, Yang", "Qin, Zhenyue" ]
Visual Prompting in LLMs for Enhancing Emotion Recognition
emnlp-main.257
Poster
2410.02244
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.258.bib
https://aclanthology.org/2024.emnlp-main.258/
@inproceedings{li-etal-2024-ideaw, title = "{IDEAW}: Robust Neural Audio Watermarking with Invertible Dual-Embedding", author = "Li, Pengcheng and Zhang, Xulong and Xiao, Jing and Wang, Jianzong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.258", pages = "4500--4511", abstract = "The audio watermarking technique embeds messages into audio and accurately extracts messages from the watermarked audio. Traditional methods develop algorithms based on expert experience to embed watermarks into the time-domain or transform-domain of signals. With the development of deep neural networks, deep learning-based neural audio watermarking has emerged. Compared to traditional algorithms, neural audio watermarking achieves better robustness by considering various attacks during training. However, current neural watermarking methods suffer from low capacity and unsatisfactory imperceptibility. Additionally, the issue of watermark locating, which is extremely important and even more pronounced in neural audio water- marking, has not been adequately studied. In this paper, we design a dual-embedding wa- termarking model for efficient locating. We also consider the impact of the attack layer on the invertible neural network in robustness training, improving the model to enhance both its reasonableness and stability. Experiments show that the proposed model, IDEAW, can withstand various attacks with higher capacity and more efficient locating ability compared to existing methods.", }
The audio watermarking technique embeds messages into audio and accurately extracts messages from the watermarked audio. Traditional methods develop algorithms based on expert experience to embed watermarks into the time-domain or transform-domain of signals. With the development of deep neural networks, deep learning-based neural audio watermarking has emerged. Compared to traditional algorithms, neural audio watermarking achieves better robustness by considering various attacks during training. However, current neural watermarking methods suffer from low capacity and unsatisfactory imperceptibility. Additionally, the issue of watermark locating, which is extremely important and even more pronounced in neural audio watermarking, has not been adequately studied. In this paper, we design a dual-embedding watermarking model for efficient locating. We also consider the impact of the attack layer on the invertible neural network in robustness training, improving the model to enhance both its reasonableness and stability. Experiments show that the proposed model, IDEAW, can withstand various attacks with higher capacity and more efficient locating ability compared to existing methods.
[ "Li, Pengcheng", "Zhang, Xulong", "Xiao, Jing", "Wang, Jianzong" ]
IDEAW: Robust Neural Audio Watermarking with Invertible Dual-Embedding
emnlp-main.258
Poster
2409.19627
[ "" ]
https://huggingface.co/papers/2409.19627
2
1
2
4
[]
[]
[ "llam/Papers" ]
[]
[]
[ "llam/Papers" ]
1
https://aclanthology.org/2024.emnlp-main.259.bib
https://aclanthology.org/2024.emnlp-main.259/
@inproceedings{tsai-etal-2024-leveraging-conflicts, title = "Leveraging Conflicts in Social Media Posts: Unintended Offense Dataset", author = "Tsai, Che Wei and Huang, Yen-Hao and Liao, Tsu-Keng and Estrada, Didier Fernando Salazar and Latifah, Retnani and Chen, Yi-Shin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.259", pages = "4512--4522", abstract = "In multi-person communications, conflicts often arise. Each individual may have their own perspective, which can differ. Additionally, commonly referenced offensive datasets frequently neglect contextual information and are primarily constructed with a focus on intended offenses. This study suggests that conflicts are pivotal in revealing a broader range of human interactions, including instances of unintended offensive language. This paper proposes a conflict-based data collection method to utilize inter-conflict cues in multi-person communications. By focusing on specific cue posts within conversation threads, our proposed approach effectively identifies relevant instances for analysis. Detailed analyses are provided to showcase the proposed approach efficiently gathers data on subtly offensive content. The experimental results indicate that incorporating elements of conflict into data collection significantly enhances the comprehensiveness and accuracy of detecting offensive language but also enriches our understanding of conflict dynamics in digital communication.", }
In multi-person communications, conflicts often arise. Each individual may have their own perspective, which can differ. Additionally, commonly referenced offensive datasets frequently neglect contextual information and are primarily constructed with a focus on intended offenses. This study suggests that conflicts are pivotal in revealing a broader range of human interactions, including instances of unintended offensive language. This paper proposes a conflict-based data collection method to utilize inter-conflict cues in multi-person communications. By focusing on specific cue posts within conversation threads, our proposed approach effectively identifies relevant instances for analysis. Detailed analyses are provided to show that the proposed approach efficiently gathers data on subtly offensive content. The experimental results indicate that incorporating elements of conflict into data collection not only significantly enhances the comprehensiveness and accuracy of detecting offensive language but also enriches our understanding of conflict dynamics in digital communication.
[ "Tsai, Che Wei", "Huang, Yen-Hao", "Liao, Tsu-Keng", "Estrada, Didier Fern", "o Salazar", "Latifah, Retnani", "Chen, Yi-Shin" ]
Leveraging Conflicts in Social Media Posts: Unintended Offense Dataset
emnlp-main.259
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.260.bib
https://aclanthology.org/2024.emnlp-main.260/
@inproceedings{hong-etal-2024-outcome, title = "Outcome-Constrained Large Language Models for Countering Hate Speech", author = "Hong, Lingzi and Luo, Pengcheng and Blanco, Eduardo and Song, Xiaoying", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.260", pages = "4523--4536", abstract = "Automatic counterspeech generation methods have been developed to assist efforts in combating hate speech. Existing research focuses on generating counterspeech with linguistic attributes such as being polite, informative, and intent-driven. However, the real impact of counterspeech in online environments is seldom considered. This study aims to develop methods for generating counterspeech constrained by conversation outcomes and evaluate their effectiveness. We experiment with large language models (LLMs) to incorporate into the text generation process two desired conversation outcomes: low conversation incivility and non-hateful hater reentry. Specifically, we experiment with instruction prompts, LLM finetuning, and LLM reinforcement learning (RL). Evaluation results show that our methods effectively steer the generation of counterspeech toward the desired outcomes. Our analyses, however, show that there are differences in the quality and style depending on the model.", }
Automatic counterspeech generation methods have been developed to assist efforts in combating hate speech. Existing research focuses on generating counterspeech with linguistic attributes such as being polite, informative, and intent-driven. However, the real impact of counterspeech in online environments is seldom considered. This study aims to develop methods for generating counterspeech constrained by conversation outcomes and evaluate their effectiveness. We experiment with large language models (LLMs) to incorporate into the text generation process two desired conversation outcomes: low conversation incivility and non-hateful hater reentry. Specifically, we experiment with instruction prompts, LLM finetuning, and LLM reinforcement learning (RL). Evaluation results show that our methods effectively steer the generation of counterspeech toward the desired outcomes. Our analyses, however, show that there are differences in the quality and style depending on the model.
[ "Hong, Lingzi", "Luo, Pengcheng", "Blanco, Eduardo", "Song, Xiaoying" ]
Outcome-Constrained Large Language Models for Countering Hate Speech
emnlp-main.260
Oral
2403.17146
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.261.bib
https://aclanthology.org/2024.emnlp-main.261/
@inproceedings{yang-etal-2024-multiple, title = "Multiple Sources are Better Than One: Incorporating External Knowledge in Low-Resource Glossing", author = "Yang, Changbing and Nicolai, Garrett and Silfverberg, Miikka", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.261", pages = "4537--4552", abstract = "In this paper, we address the data scarcity problem in automatic data-driven glossing for low-resource languages by coordinating multiple sources of linguistic expertise. We enhance models by incorporating both token-level and sentence-level translations, utilizing the extensive linguistic capabilities of modern LLMs, and incorporating available dictionary resources. Our enhancements lead to an average absolute improvement of 5{\%}-points in word-level accuracy over the previous state of the art on a typologically diverse dataset spanning six low-resource languages. The improvements are particularly noticeable for the lowest-resourced language Gitksan, where we achieve a 10{\%}-point improvement. Furthermore, in a simulated ultra-low resource setting for the same six languages, training on fewer than 100 glossed sentences, we establish an average 10{\%}-point improvement in word-level accuracy over the previous state-of-the-art system.", }
In this paper, we address the data scarcity problem in automatic data-driven glossing for low-resource languages by coordinating multiple sources of linguistic expertise. We enhance models by incorporating both token-level and sentence-level translations, utilizing the extensive linguistic capabilities of modern LLMs, and incorporating available dictionary resources. Our enhancements lead to an average absolute improvement of 5{\%}-points in word-level accuracy over the previous state of the art on a typologically diverse dataset spanning six low-resource languages. The improvements are particularly noticeable for the lowest-resourced language Gitksan, where we achieve a 10{\%}-point improvement. Furthermore, in a simulated ultra-low resource setting for the same six languages, training on fewer than 100 glossed sentences, we establish an average 10{\%}-point improvement in word-level accuracy over the previous state-of-the-art system.
[ "Yang, Changbing", "Nicolai, Garrett", "Silfverberg, Miikka" ]
Multiple Sources are Better Than One: Incorporating External Knowledge in Low-Resource Glossing
emnlp-main.261
Poster
2406.11085
[ "https://github.com/changbingy/auto_glossing_stem_translation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.262.bib
https://aclanthology.org/2024.emnlp-main.262/
@inproceedings{wang-etal-2024-adaptive, title = "Adaptive Immune-based Sound-Shape Code Substitution for Adversarial {C}hinese Text Attacks", author = "Wang, Ao and Yang, Xinghao and Li, Chen and Liu, Bao-di and Liu, Weifeng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.262", pages = "4553--4565", abstract = "Adversarial textual examples reveal the vulnerability of natural language processing (NLP) models. Most existing text attack methods are designed for English text, while the robust implementation of the second popular language, i.e., Chinese with 1 billion users, is greatly underestimated. Although several Chinese attack methods have been presented, they either directly transfer from English attacks or adopt simple greedy search to optimize the attack priority, usually leading to unnatural sentences. To address these issues, we propose an adaptive Immune-based Sound-Shape Code (ISSC) algorithm for adversarial Chinese text attacks. Firstly, we leverage the Sound-Shape code to generate natural substitutions, which comprehensively integrate multiple Chinese features. Secondly, we employ adaptive immune algorithm (IA) to determine the replacement order, which can reduce the duplication of population to improve the search ability. Extensive experimental results validate the superiority of our ISSC in producing high-quality Chinese adversarial texts. Our code and data can be found in https://github.com/nohuma/chinese-attack-issc.", }
Adversarial textual examples reveal the vulnerability of natural language processing (NLP) models. Most existing text attack methods are designed for English text, while the robust implementation of the second most popular language, i.e., Chinese with 1 billion users, is greatly underestimated. Although several Chinese attack methods have been presented, they either directly transfer from English attacks or adopt simple greedy search to optimize the attack priority, usually leading to unnatural sentences. To address these issues, we propose an adaptive Immune-based Sound-Shape Code (ISSC) algorithm for adversarial Chinese text attacks. Firstly, we leverage the Sound-Shape code to generate natural substitutions, which comprehensively integrate multiple Chinese features. Secondly, we employ an adaptive immune algorithm (IA) to determine the replacement order, which can reduce the duplication of population to improve the search ability. Extensive experimental results validate the superiority of our ISSC in producing high-quality Chinese adversarial texts. Our code and data can be found at https://github.com/nohuma/chinese-attack-issc.
[ "Wang, Ao", "Yang, Xinghao", "Li, Chen", "Liu, Bao-di", "Liu, Weifeng" ]
Adaptive Immune-based Sound-Shape Code Substitution for Adversarial Chinese Text Attacks
emnlp-main.262
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.263.bib
https://aclanthology.org/2024.emnlp-main.263/
@inproceedings{zhao-etal-2024-bootstrapped, title = "Bootstrapped Policy Learning for Task-oriented Dialogue through Goal Shaping", author = "Zhao, Yangyang and Niu, Ben and Dastani, Mehdi and Wang, Shihan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.263", pages = "4566--4580", abstract = "Reinforcement learning shows promise in optimizing dialogue policies, but addressing the challenge of reward sparsity remains crucial. While curriculum learning offers a practical solution by strategically training policies from simple to complex, it hinges on the assumption of a gradual increase in goal difficulty to ensure a smooth knowledge transition across varied complexities. In complex dialogue environments without intermediate goals, achieving seamless knowledge transitions becomes tricky. This paper proposes a novel Bootstrapped Policy Learning (BPL) framework, which adaptively tailors progressively challenging subgoal curriculum for each complex goal through goal shaping, ensuring a smooth knowledge transition. Goal shaping involves goal decomposition and evolution, decomposing complex goals into subgoals with solvable maximum difficulty and progressively increasing difficulty as the policy improves. Moreover, to enhance BPL{'}s adaptability across various environments, we explore various combinations of goal decomposition and evolution within BPL, and identify two universal curriculum patterns that remain effective across different dialogue environments, independent of specific environmental constraints. By integrating the summarized curriculum patterns, our BPL has exhibited efficacy and versatility across four publicly available datasets with different difficulty levels.", }
Reinforcement learning shows promise in optimizing dialogue policies, but addressing the challenge of reward sparsity remains crucial. While curriculum learning offers a practical solution by strategically training policies from simple to complex, it hinges on the assumption of a gradual increase in goal difficulty to ensure a smooth knowledge transition across varied complexities. In complex dialogue environments without intermediate goals, achieving seamless knowledge transitions becomes tricky. This paper proposes a novel Bootstrapped Policy Learning (BPL) framework, which adaptively tailors a progressively challenging subgoal curriculum for each complex goal through goal shaping, ensuring a smooth knowledge transition. Goal shaping involves goal decomposition and evolution, decomposing complex goals into subgoals with solvable maximum difficulty and progressively increasing difficulty as the policy improves. Moreover, to enhance BPL{'}s adaptability across various environments, we explore various combinations of goal decomposition and evolution within BPL, and identify two universal curriculum patterns that remain effective across different dialogue environments, independent of specific environmental constraints. By integrating the summarized curriculum patterns, our BPL has exhibited efficacy and versatility across four publicly available datasets with different difficulty levels.
[ "Zhao, Yangyang", "Niu, Ben", "Dastani, Mehdi", "Wang, Shihan" ]
Bootstrapped Policy Learning for Task-oriented Dialogue through Goal Shaping
emnlp-main.263
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.264.bib
https://aclanthology.org/2024.emnlp-main.264/
@inproceedings{qiu-etal-2024-psyguard, title = "{P}sy{GUARD}: An Automated System for Suicide Detection and Risk Assessment in Psychological Counseling", author = "Qiu, Huachuan and Ma, Lizhi and Lan, Zhenzhong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.264", pages = "4581--4607", abstract = "As awareness of mental health issues grows, online counseling support services are becoming increasingly prevalent worldwide. Detecting whether users express suicidal ideation in text-based counseling services is crucial for identifying and prioritizing at-risk individuals. However, the lack of domain-specific systems to facilitate fine-grained suicide detection and corresponding risk assessment in online counseling poses a significant challenge for automated crisis intervention aimed at suicide prevention. In this paper, we propose PsyGUARD, an automated system for detecting suicide ideation and assessing risk in psychological counseling. To achieve this, we first develop a detailed taxonomy for detecting suicide ideation based on foundational theories. We then curate a large-scale, high-quality dataset called PsySUICIDE for suicide detection. To evaluate the capabilities of automated systems in fine-grained suicide detection, we establish a range of baselines. Subsequently, to assist automated services in providing safe, helpful, and tailored responses for further assessment, we propose to build a suite of risk assessment frameworks. Our study not only provides an insightful analysis of the effectiveness of automated risk assessment systems based on fine-grained suicide detection but also highlights their potential to improve mental health services on online counseling platforms. Code, data, and models are available at https://github.com/qiuhuachuan/PsyGUARD.", }
As awareness of mental health issues grows, online counseling support services are becoming increasingly prevalent worldwide. Detecting whether users express suicidal ideation in text-based counseling services is crucial for identifying and prioritizing at-risk individuals. However, the lack of domain-specific systems to facilitate fine-grained suicide detection and corresponding risk assessment in online counseling poses a significant challenge for automated crisis intervention aimed at suicide prevention. In this paper, we propose PsyGUARD, an automated system for detecting suicide ideation and assessing risk in psychological counseling. To achieve this, we first develop a detailed taxonomy for detecting suicide ideation based on foundational theories. We then curate a large-scale, high-quality dataset called PsySUICIDE for suicide detection. To evaluate the capabilities of automated systems in fine-grained suicide detection, we establish a range of baselines. Subsequently, to assist automated services in providing safe, helpful, and tailored responses for further assessment, we propose to build a suite of risk assessment frameworks. Our study not only provides an insightful analysis of the effectiveness of automated risk assessment systems based on fine-grained suicide detection but also highlights their potential to improve mental health services on online counseling platforms. Code, data, and models are available at https://github.com/qiuhuachuan/PsyGUARD.
[ "Qiu, Huachuan", "Ma, Lizhi", "Lan, Zhenzhong" ]
PsyGUARD: An Automated System for Suicide Detection and Risk Assessment in Psychological Counseling
emnlp-main.264
Oral
2409.20243
[ "https://github.com/qiuhuachuan/psyguard" ]
https://huggingface.co/papers/2409.20243
0
0
0
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.265.bib
https://aclanthology.org/2024.emnlp-main.265/
@inproceedings{wang-etal-2024-world, title = "World to Code: Multi-modal Data Generation via Self-Instructed Compositional Captioning and Filtering", author = "Wang, Jiacong and Wu, Bohong and Jiang, Haiyong and Xun, Zhou and Xiao, Xin and Guo, Haoyuan and Xiao, Jun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.265", pages = "4608--4623", abstract = "Recent advances in Vision-Language Models (VLMs) and the scarcity of high-quality multi-modal alignment data have inspired numerous researches on synthetic VLM data generation. The conventional norm in VLM data construction uses a mixture of specialists in caption and OCR, or stronger VLM APIs and expensive human annotation.In this paper, we present World to Code ($W2C$), a meticulously curated multi-modal data construction pipeline that organizes the final generation output into a Python code format. The pipeline leverages the VLM itself to extract cross-modal information via different prompts and filter the generated outputs again via a consistency filtering strategy. Experiments have demonstrated the high quality of $W2C$ by improving various existing visual question answering and visual grounding benchmarks across different VLMs. Further analysis also demonstrates that the new code parsing ability of VLMs presents better cross-modal equivalence than the commonly used detail caption ability. Our code is available at https://github.com/foundation-multimodal-models/World2Code.", }
Recent advances in Vision-Language Models (VLMs) and the scarcity of high-quality multi-modal alignment data have inspired numerous studies on synthetic VLM data generation. The conventional norm in VLM data construction uses a mixture of specialists in caption and OCR, or stronger VLM APIs and expensive human annotation. In this paper, we present World to Code ($W2C$), a meticulously curated multi-modal data construction pipeline that organizes the final generation output into a Python code format. The pipeline leverages the VLM itself to extract cross-modal information via different prompts and filter the generated outputs again via a consistency filtering strategy. Experiments have demonstrated the high quality of $W2C$ by improving various existing visual question answering and visual grounding benchmarks across different VLMs. Further analysis also demonstrates that the new code parsing ability of VLMs presents better cross-modal equivalence than the commonly used detail caption ability. Our code is available at https://github.com/foundation-multimodal-models/World2Code.
[ "Wang, Jiacong", "Wu, Bohong", "Jiang, Haiyong", "Xun, Zhou", "Xiao, Xin", "Guo, Haoyuan", "Xiao, Jun" ]
World to Code: Multi-modal Data Generation via Self-Instructed Compositional Captioning and Filtering
emnlp-main.265
Poster
2409.20424
[ "https://github.com/foundation-multimodal-models/world2code" ]
https://huggingface.co/papers/2409.20424
1
0
0
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.266.bib
https://aclanthology.org/2024.emnlp-main.266/
@inproceedings{jin-etal-2024-dvd, title = "{DVD}: Dynamic Contrastive Decoding for Knowledge Amplification in Multi-Document Question Answering", author = "Jin, Jing and Wang, Houfeng and Zhang, Hao and Li, Xiaoguang and Guo, Zhijiang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.266", pages = "4624--4637", abstract = "Large language models (LLMs) are widely used in question-answering (QA) systems but often generate information with hallucinations. Retrieval-augmented generation (RAG) offers a potential remedy, yet the uneven retrieval quality and irrelevant contents may distract LLMs.In this work, we address these issues at the generation phase by treating RAG as a multi-document QA task.We propose a novel decoding strategy, Dynamic Contrastive Decoding, which dynamically amplifies knowledge from selected documents during the generation phase. involves constructing inputs batchwise, designing new selection criteria to identify documents worth amplifying, and applying contrastive decoding with a specialized weight calculation to adjust the final logits used for sampling answer tokens. Zero-shot experimental results on ALCE-ASQA, NQ, TQA and PopQA benchmarks show that our method outperforms other decoding strategies. Additionally, we conduct experiments to validate the effectiveness of our selection criteria, weight calculation, and general multi-document scenarios. Our method requires no training and can be integrated with other methods to improve the RAG performance. Our codes will be publicly available at https://github.com/JulieJin-km/Dynamic{\_}Contrastive{\_}Decoding.", }
Large language models (LLMs) are widely used in question-answering (QA) systems but often generate information with hallucinations. Retrieval-augmented generation (RAG) offers a potential remedy, yet the uneven retrieval quality and irrelevant contents may distract LLMs. In this work, we address these issues at the generation phase by treating RAG as a multi-document QA task. We propose a novel decoding strategy, Dynamic Contrastive Decoding, which dynamically amplifies knowledge from selected documents during the generation phase. It involves constructing inputs batchwise, designing new selection criteria to identify documents worth amplifying, and applying contrastive decoding with a specialized weight calculation to adjust the final logits used for sampling answer tokens. Zero-shot experimental results on ALCE-ASQA, NQ, TQA and PopQA benchmarks show that our method outperforms other decoding strategies. Additionally, we conduct experiments to validate the effectiveness of our selection criteria, weight calculation, and general multi-document scenarios. Our method requires no training and can be integrated with other methods to improve the RAG performance. Our codes will be publicly available at https://github.com/JulieJin-km/Dynamic{\_}Contrastive{\_}Decoding.
[ "Jin, Jing", "Wang, Houfeng", "Zhang, Hao", "Li, Xiaoguang", "Guo, Zhijiang" ]
DVD: Dynamic Contrastive Decoding for Knowledge Amplification in Multi-Document Question Answering
emnlp-main.266
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.267.bib
https://aclanthology.org/2024.emnlp-main.267/
@inproceedings{li-etal-2024-humans, title = "How Do Humans Write Code? Large Models Do It the Same Way Too", author = "Li, Long and He, Xuzheng and Wang, Haozhe and Wang, Linlin and He, Liang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.267", pages = "4638--4649", abstract = "Program-of-Thought (PoT) replaces natural language-based Chain-of-Thought (CoT) as the most popular method in Large Language Models (LLMs) mathematical reasoning tasks by utilizing external tool calls to circumvent computational errors. However, our evaluation of the GPT-4 and Llama series reveals that using PoT introduces more reasoning errors, such as incorrect formulas or flawed logic, compared to CoT. To address this issue, we propose Human-Think Language (HTL), which leverages a suite of strategies that help integrate PoT and CoT, encompassing: (1) a new generation paradigm that uses full CoT reasoning to control code generation. (2) Focus Attention, that directs model attention to the CoT reasoning during PoT to generate more logical code. (3) reinforcement learning that utilizes the accuracy of both CoT and PoT responses as rewards to prevent repetitive reasoning steps in LLMs when solving difficult math problems. Our method achieves an average improvement of 6.5{\%} on the Llama-Base model and 4.3{\%} on the Mistral-Base model across 8 mathematical calculation datasets. It also shows significant effectiveness on five out-of-domain datasets by controlling the model{'}s information flow, exhibiting strong transferability. Additionally, HTL shows the most significant improvement in non-mathematical natural language inference task, contributing to a unified reasoning task framework.", }
Program-of-Thought (PoT) has replaced natural language-based Chain-of-Thought (CoT) as the most popular method for mathematical reasoning tasks with Large Language Models (LLMs) by utilizing external tool calls to circumvent computational errors. However, our evaluation of the GPT-4 and Llama series reveals that using PoT introduces more reasoning errors, such as incorrect formulas or flawed logic, compared to CoT. To address this issue, we propose Human-Think Language (HTL), which leverages a suite of strategies that help integrate PoT and CoT, encompassing: (1) a new generation paradigm that uses full CoT reasoning to control code generation. (2) Focus Attention, which directs model attention to the CoT reasoning during PoT to generate more logical code. (3) reinforcement learning that utilizes the accuracy of both CoT and PoT responses as rewards to prevent repetitive reasoning steps in LLMs when solving difficult math problems. Our method achieves an average improvement of 6.5{\%} on the Llama-Base model and 4.3{\%} on the Mistral-Base model across 8 mathematical calculation datasets. It also shows significant effectiveness on five out-of-domain datasets by controlling the model{'}s information flow, exhibiting strong transferability. Additionally, HTL shows the most significant improvement in non-mathematical natural language inference tasks, contributing to a unified reasoning task framework.
[ "Li, Long", "He, Xuzheng", "Wang, Haozhe", "Wang, Linlin", "He, Liang" ]
How Do Humans Write Code? Large Models Do It the Same Way Too
emnlp-main.267
Poster
2402.15729
[ "https://github.com/seamoke/Human-Think-Language" ]
https://huggingface.co/papers/2402.15729
0
0
0
2
[ "seamoke111/HTL-CodeLlama-7B" ]
[]
[]
[ "seamoke111/HTL-CodeLlama-7B" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.268.bib
https://aclanthology.org/2024.emnlp-main.268/
@inproceedings{xiang-etal-2024-retrospex, title = "Retrospex: Language Agent Meets Offline Reinforcement Learning Critic", author = "Xiang, Yufei and Shen, Yiqun and Zhang, Yeqin and Cam-Tu, Nguyen", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.268", pages = "4650--4666", abstract = "Large language models (LLMs) possess extensive knowledge and commonsense reasoning capabilities, making them valuable for creating powerful agents. However, existing LLM agent frameworks have not fully utilized past experiences for improvement. This work introduces a new LLM-based agent framework called Retrospex, which addresses this challenge by analyzing past experiences in depth. Unlike previous approaches, Retrospex does not directly integrate experiences into the LLM{'}s context. Instead, it combines the LLM{'}s action likelihood with action values estimated by a Reinforcement Learning (RL) Critic, which is trained on past experiences through an offline {``}retrospection{''} process. Additionally, Retrospex employs a dynamic action rescoring mechanism that increases the importance of experience-based values for tasks that require more interaction with the environment. We evaluate Retrospex in ScienceWorld, ALFWorld and Webshop environments, demonstrating its advantages over strong baselines.", }
Large language models (LLMs) possess extensive knowledge and commonsense reasoning capabilities, making them valuable for creating powerful agents. However, existing LLM agent frameworks have not fully utilized past experiences for improvement. This work introduces a new LLM-based agent framework called Retrospex, which addresses this challenge by analyzing past experiences in depth. Unlike previous approaches, Retrospex does not directly integrate experiences into the LLM{'}s context. Instead, it combines the LLM{'}s action likelihood with action values estimated by a Reinforcement Learning (RL) Critic, which is trained on past experiences through an offline {``}retrospection{''} process. Additionally, Retrospex employs a dynamic action rescoring mechanism that increases the importance of experience-based values for tasks that require more interaction with the environment. We evaluate Retrospex in ScienceWorld, ALFWorld and Webshop environments, demonstrating its advantages over strong baselines.
[ "Xiang, Yufei", "Shen, Yiqun", "Zhang, Yeqin", "Cam-Tu, Nguyen" ]
Retrospex: Language Agent Meets Offline Reinforcement Learning Critic
emnlp-main.268
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.269.bib
https://aclanthology.org/2024.emnlp-main.269/
@inproceedings{liu-etal-2024-forgetting, title = "Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-Context Models", author = "Liu, Xinyu and Zhao, Runsong and Huang, Pengcheng and Xiao, Chunyang and Li, Bei and Wang, Jingang and Xiao, Tong and Zhu, JingBo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.269", pages = "4667--4682", abstract = "Numerous recent works target to extend effective context length for language models and various methods, tasks and benchmarks exist to measure model{'}s effective memory length. However, through thorough investigations, we find limitations for currently existing evaluations on model{'}s memory. We provide an extensive survey for limitations in this work and propose a new method called forgetting curve to measure the memorization capability of long-context models. We show that forgetting curve has the advantage of being robust to the tested corpus and the experimental settings, of not relying on prompt and can be applied to any model size. We apply our forgetting curve to a large variety of models involving both transformer and RNN/SSM based architectures. Our measurement provides empirical evidence for the effectiveness of transformer extension techniques while raises questions for the effective length of RNN/SSM based models. We also examine the difference between our measurement and existing benchmarks as well as popular metrics for various models.", }
Numerous recent works target to extend effective context length for language models and various methods, tasks and benchmarks exist to measure model{'}s effective memory length. However, through thorough investigations, we find limitations for currently existing evaluations on model{'}s memory. We provide an extensive survey for limitations in this work and propose a new method called forgetting curve to measure the memorization capability of long-context models. We show that forgetting curve has the advantage of being robust to the tested corpus and the experimental settings, of not relying on prompt and can be applied to any model size. We apply our forgetting curve to a large variety of models involving both transformer and RNN/SSM based architectures. Our measurement provides empirical evidence for the effectiveness of transformer extension techniques while raises questions for the effective length of RNN/SSM based models. We also examine the difference between our measurement and existing benchmarks as well as popular metrics for various models.
[ "Liu, Xinyu", "Zhao, Runsong", "Huang, Pengcheng", "Xiao, Chunyang", "Li, Bei", "Wang, Jingang", "Xiao, Tong", "Zhu, JingBo" ]
Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-Context Models
emnlp-main.269
Poster
2410.04727
[ "https://github.com/1azybug/forgettingcurve" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.270.bib
https://aclanthology.org/2024.emnlp-main.270/
@inproceedings{lyu-etal-2024-retrieve, title = "Retrieve-Plan-Generation: An Iterative Planning and Answering Framework for Knowledge-Intensive {LLM} Generation", author = "Lyu, Yuanjie and Niu, Zihan and Xie, Zheyong and Zhang, Chao and Xu, Tong and Wang, Yang and Chen, Enhong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.270", pages = "4683--4702", abstract = "Despite the significant progress of large language models (LLMs) in various tasks, they often produce factual errors due to their limited internal knowledge. Retrieval-Augmented Generation (RAG), which enhances LLMs with external knowledge sources, offers a promising solution. However, these methods can be misled by irrelevant paragraphs in retrieved documents. Due to the inherent uncertainty in LLM generation, inputting the entire document may introduce off-topic information, causing the model to deviate from the central topic and affecting the relevance of the generated content. To address these issues, we propose the Retrieve-Plan-Generation (RPG) framework. RPG generates plan tokens to guide subsequent generation in the plan stage. In the answer stage, the model selects relevant fine-grained paragraphs based on the plan and uses them for further answer generation. This plan-answer process is repeated iteratively until completion, enhancing generation relevance by focusing on specific topics. To implement this framework efficiently, we utilize a simple but effective multi-task prompt-tuning method, enabling the existing LLMs to handle both planning and answering. We comprehensively compare RPG with baselines across 5 knowledge-intensive generation tasks, demonstrating the effectiveness of our approach.", }
Despite the significant progress of large language models (LLMs) in various tasks, they often produce factual errors due to their limited internal knowledge. Retrieval-Augmented Generation (RAG), which enhances LLMs with external knowledge sources, offers a promising solution. However, these methods can be misled by irrelevant paragraphs in retrieved documents. Due to the inherent uncertainty in LLM generation, inputting the entire document may introduce off-topic information, causing the model to deviate from the central topic and affecting the relevance of the generated content. To address these issues, we propose the Retrieve-Plan-Generation (RPG) framework. RPG generates plan tokens to guide subsequent generation in the plan stage. In the answer stage, the model selects relevant fine-grained paragraphs based on the plan and uses them for further answer generation. This plan-answer process is repeated iteratively until completion, enhancing generation relevance by focusing on specific topics. To implement this framework efficiently, we utilize a simple but effective multi-task prompt-tuning method, enabling the existing LLMs to handle both planning and answering. We comprehensively compare RPG with baselines across 5 knowledge-intensive generation tasks, demonstrating the effectiveness of our approach.
[ "Lyu, Yuanjie", "Niu, Zihan", "Xie, Zheyong", "Zhang, Chao", "Xu, Tong", "Wang, Yang", "Chen, Enhong" ]
Retrieve-Plan-Generation: An Iterative Planning and Answering Framework for Knowledge-Intensive LLM Generation
emnlp-main.270
Poster
2406.14979
[ "https://github.com/haruhi-sudo/RPG" ]
https://huggingface.co/papers/2406.14979
0
0
0
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.271.bib
https://aclanthology.org/2024.emnlp-main.271/
@inproceedings{li-etal-2024-coevol, title = "{C}o{E}vol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation", author = "Li, Renhao and Tan, Minghuan and Wong, Derek F. and Yang, Min", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.271", pages = "4703--4721", abstract = "In recent years, instruction fine-tuning (IFT) on large language models (LLMs) has garnered considerable attention to enhance model performance on unseen tasks. Attempts have been made on automatic construction and effective selection for IFT data. However, we posit that previous methods have not fully harnessed the potential of LLMs for enhancing data quality. The responses within IFT data could be further enhanced by leveraging the capabilities of LLMs themselves.In this paper, we propose CoEvol, an LLM-based multi-agent cooperation framework for the improvement of responses for instructions. To effectively refine the responses, we develop an iterative framework following a {\_}debate-advise-edit-judge{\_} paradigm. A two-stage multi-agent debate strategy is further devised to ensure the diversity and reliability of editing suggestions within the framework. Empirically, models equipped with CoEvol outperform competitive baselines evaluated by MT-Bench and AlpacaEval, demonstrating its effectiveness in enhancing instruction-following capabilities for LLMs.", }
In recent years, instruction fine-tuning (IFT) on large language models (LLMs) has garnered considerable attention to enhance model performance on unseen tasks. Attempts have been made on automatic construction and effective selection for IFT data. However, we posit that previous methods have not fully harnessed the potential of LLMs for enhancing data quality. The responses within IFT data could be further enhanced by leveraging the capabilities of LLMs themselves.In this paper, we propose CoEvol, an LLM-based multi-agent cooperation framework for the improvement of responses for instructions. To effectively refine the responses, we develop an iterative framework following a {\_}debate-advise-edit-judge{\_} paradigm. A two-stage multi-agent debate strategy is further devised to ensure the diversity and reliability of editing suggestions within the framework. Empirically, models equipped with CoEvol outperform competitive baselines evaluated by MT-Bench and AlpacaEval, demonstrating its effectiveness in enhancing instruction-following capabilities for LLMs.
[ "Li, Renhao", "Tan, Minghuan", "Wong, Derek F.", "Yang, Min" ]
CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation
emnlp-main.271
Poster
2406.07054
[ "https://github.com/lirenhao1997/coevol" ]
https://huggingface.co/papers/2406.07054
0
0
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.272.bib
https://aclanthology.org/2024.emnlp-main.272/
@inproceedings{jiang-etal-2024-peek, title = "A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners", author = "Jiang, Bowen and Xie, Yangxinyu and Hao, Zhuoqun and Wang, Xiaomeng and Mallick, Tanwi and Su, Weijie J and Taylor, Camillo Jose and Roth, Dan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.272", pages = "4722--4756", abstract = "This study introduces a hypothesis-testing framework to assess whether large language models (LLMs) possess genuine reasoning abilities or primarily depend on token bias. We go beyond evaluating LLMs on accuracy; rather, we aim to investigate their token bias in solving logical reasoning tasks. Specifically, we develop carefully controlled synthetic datasets, featuring conjunction fallacy and syllogistic problems. Our framework outlines a list of hypotheses where token biases are readily identifiable, with all null hypotheses assuming genuine reasoning capabilities of LLMs. The findings in this study suggest, with statistical guarantee, that most LLMs still struggle with logical reasoning. While they may perform well on classic problems, their success largely depends on recognizing superficial patterns with strong token bias, thereby raising concerns about their actual reasoning and generalization abilities.", }
This study introduces a hypothesis-testing framework to assess whether large language models (LLMs) possess genuine reasoning abilities or primarily depend on token bias. We go beyond evaluating LLMs on accuracy; rather, we aim to investigate their token bias in solving logical reasoning tasks. Specifically, we develop carefully controlled synthetic datasets, featuring conjunction fallacy and syllogistic problems. Our framework outlines a list of hypotheses where token biases are readily identifiable, with all null hypotheses assuming genuine reasoning capabilities of LLMs. The findings in this study suggest, with statistical guarantee, that most LLMs still struggle with logical reasoning. While they may perform well on classic problems, their success largely depends on recognizing superficial patterns with strong token bias, thereby raising concerns about their actual reasoning and generalization abilities.
[ "Jiang, Bowen", "Xie, Yangxinyu", "Hao, Zhuoqun", "Wang, Xiaomeng", "Mallick, Tanwi", "Su, Weijie J", "Taylor, Camillo Jose", "Roth, Dan" ]
A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners
emnlp-main.272
Poster
2406.11050
[ "https://github.com/bowen-upenn/llm_token_bias" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.273.bib
https://aclanthology.org/2024.emnlp-main.273/
@inproceedings{gao-etal-2024-bayesian, title = "{B}ayesian Calibration of Win Rate Estimation with {LLM} Evaluators", author = "Gao, Yicheng and Xu, Gonghan and Wang, Zhe and Cohan, Arman", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.273", pages = "4757--4769", abstract = "Recent advances in large language models (LLMs) show the potential of using LLMs as evaluators for assessing the quality of text generations from LLMs. However, applying LLM evaluators naively to compare different systems can lead to unreliable results due to the inaccuracy and intrinsic bias of LLM evaluators. In order to mitigate this problem, we propose two calibration methods, Bayesian Win-Rate Sampling (BWRS) and Bayesian Dawid-Skene, both of which leverage Bayesian inference to more accurately infer the true win rate of generative language models. We empirically validate our methods on six datasets covering story generation, summarization, and instruction following tasks. We show that both our methods are effective in improving the accuracy of win rate estimation using LLMs as evaluators, offering a promising direction for reliable automatic text quality evaluation.", }
Recent advances in large language models (LLMs) show the potential of using LLMs as evaluators for assessing the quality of text generations from LLMs. However, applying LLM evaluators naively to compare different systems can lead to unreliable results due to the inaccuracy and intrinsic bias of LLM evaluators. In order to mitigate this problem, we propose two calibration methods, Bayesian Win-Rate Sampling (BWRS) and Bayesian Dawid-Skene, both of which leverage Bayesian inference to more accurately infer the true win rate of generative language models. We empirically validate our methods on six datasets covering story generation, summarization, and instruction following tasks. We show that both our methods are effective in improving the accuracy of win rate estimation using LLMs as evaluators, offering a promising direction for reliable automatic text quality evaluation.
[ "Gao, Yicheng", "Xu, Gonghan", "Wang, Zhe", "Cohan, Arman" ]
Bayesian Calibration of Win Rate Estimation with LLM Evaluators
emnlp-main.273
Poster
2411.04424
[ "https://github.com/yale-nlp/bay-calibration-llm-evaluators" ]
https://huggingface.co/papers/2411.04424
0
0
0
4
[]
[ "bay-calibration-llm-evaluators/hanna-annotated-latest", "bay-calibration-llm-evaluators/meva-annotated-latest", "bay-calibration-llm-evaluators/summeval-annotated-latest" ]
[]
[]
[ "bay-calibration-llm-evaluators/hanna-annotated-latest", "bay-calibration-llm-evaluators/meva-annotated-latest", "bay-calibration-llm-evaluators/summeval-annotated-latest" ]
[]
1
https://aclanthology.org/2024.emnlp-main.274.bib
https://aclanthology.org/2024.emnlp-main.274/
@inproceedings{yin-etal-2024-mumath, title = "{M}u{M}ath-Code: Combining Tool-Use Large Language Models with Multi-perspective Data Augmentation for Mathematical Reasoning", author = "Yin, Shuo and You, Weihao and Ji, Zhilong and Zhong, Guoqiang and Bai, Jinfeng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.274", pages = "4770--4785", abstract = "The tool-use Large Language Models (LLMs) that integrate with external Python interpreters have significantly enhanced mathematical reasoning capabilities for open-source LLMs, while tool-free methods chose another track: augmenting math reasoning data. However, a great method to integrate the above two research paths and combine their advantages remains to be explored. In this work, we firstly include new math questions via **mu**lti-perspective data augmenting methods and then synthesize **code**-nested solutions to them. The open LLMs (e.g., Llama-2) are finetuned on the augmented dataset to get the resulting models, **MuMath-Code** ($\mu$-Math-Code). During the inference phase, our MuMath-Code generates code and interacts with the external python interpreter to get the execution results. Therefore, MuMath-Code leverages the advantages of both the external tool and data augmentation. To fully leverage the advantages of our augmented data, we propose a two-stage training strategy: In Stage-1, we finetune Llama-2 on pure CoT data to get an intermediate model, which then is trained on the code-nested data in Stage-2 to get the resulting MuMath-Code.Our MuMath-Code-7B achieves 83.8{\%} on GSM8K and 52.4{\%} on MATH, while MuMath-Code-70B model achieves new state-of-the-art performance among open methods{---}achieving 90.7{\%} on GSM8K and 55.1{\%} on MATH. Extensive experiments validate the combination of tool use and data augmentation, as well as our two-stage training strategy.We release the proposed dataset along with the associated code for public use: https://github.com/youweihao-tal/MuMath-Code.", }
The tool-use Large Language Models (LLMs) that integrate with external Python interpreters have significantly enhanced mathematical reasoning capabilities for open-source LLMs, while tool-free methods chose another track: augmenting math reasoning data. However, a great method to integrate the above two research paths and combine their advantages remains to be explored. In this work, we firstly include new math questions via **mu**lti-perspective data augmenting methods and then synthesize **code**-nested solutions to them. The open LLMs (e.g., Llama-2) are finetuned on the augmented dataset to get the resulting models, **MuMath-Code** ($\mu$-Math-Code). During the inference phase, our MuMath-Code generates code and interacts with the external python interpreter to get the execution results. Therefore, MuMath-Code leverages the advantages of both the external tool and data augmentation. To fully leverage the advantages of our augmented data, we propose a two-stage training strategy: In Stage-1, we finetune Llama-2 on pure CoT data to get an intermediate model, which then is trained on the code-nested data in Stage-2 to get the resulting MuMath-Code.Our MuMath-Code-7B achieves 83.8{\%} on GSM8K and 52.4{\%} on MATH, while MuMath-Code-70B model achieves new state-of-the-art performance among open methods{---}achieving 90.7{\%} on GSM8K and 55.1{\%} on MATH. Extensive experiments validate the combination of tool use and data augmentation, as well as our two-stage training strategy.We release the proposed dataset along with the associated code for public use: https://github.com/youweihao-tal/MuMath-Code.
[ "Yin, Shuo", "You, Weihao", "Ji, Zhilong", "Zhong, Guoqiang", "Bai, Jinfeng" ]
MuMath-Code: Combining Tool-Use Large Language Models with Multi-perspective Data Augmentation for Mathematical Reasoning
emnlp-main.274
Poster
2405.07551
[ "" ]
https://huggingface.co/papers/2405.07551
1
0
1
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.275.bib
https://aclanthology.org/2024.emnlp-main.275/
@inproceedings{li-etal-2024-seeing, title = "Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients", author = "Li, Weijun and Xu, Qiongkai and Dras, Mark", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.275", pages = "4786--4798", abstract = "Recent studies have shown that distributed machine learning is vulnerable to gradient inversion attacks, where private training data can be reconstructed by analyzing the gradients of the models shared in training. Previous attacks established that such reconstructions are possible using gradients from all parameters in the entire models. However, we hypothesize that most of the involved modules, or even their sub-modules, are at risk of training data leakage, and we validate such vulnerabilities in various intermediate layers of language models. Our extensive experiments reveal that gradients from a single Transformer layer, or even a single linear component with 0.54{\%} parameters, are susceptible to training data leakage. Additionally, we show that applying differential privacy on gradients during training offers limited protection against the novel vulnerability of data disclosure.", }
Recent studies have shown that distributed machine learning is vulnerable to gradient inversion attacks, where private training data can be reconstructed by analyzing the gradients of the models shared in training. Previous attacks established that such reconstructions are possible using gradients from all parameters in the entire models. However, we hypothesize that most of the involved modules, or even their sub-modules, are at risk of training data leakage, and we validate such vulnerabilities in various intermediate layers of language models. Our extensive experiments reveal that gradients from a single Transformer layer, or even a single linear component with 0.54{\%} parameters, are susceptible to training data leakage. Additionally, we show that applying differential privacy on gradients during training offers limited protection against the novel vulnerability of data disclosure.
[ "Li, Weijun", "Xu, Qiongkai", "Dras, Mark" ]
Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients
emnlp-main.275
Poster
2406.00999
[ "https://github.com/weijun-l/partial-gradients-leakage" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.276.bib
https://aclanthology.org/2024.emnlp-main.276/
@inproceedings{gu-etal-2024-rwkv, title = "{RWKV}-{CLIP}: A Robust Vision-Language Representation Learner", author = "Gu, Tiancheng and Yang, Kaicheng and An, Xiang and Feng, Ziyong and Liu, Dongnan and Cai, Weidong and Deng, Jiankang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.276", pages = "4799--4812", abstract = "Contrastive Language-Image Pre-training (CLIP) has significantly improved performance in various vision-language tasks by expanding the dataset with image-text pairs obtained from the web. This paper further explores CLIP from the perspectives of data and model architecture. To mitigate the impact of the noise data and enhance the quality of large-scale image-text data crawled from the internet, we introduce a diverse description generation framework that can leverage Large Language Models (LLMs) to combine and refine information from web-based image-text pairs, synthetic captions, and detection tags. Additionally, we propose RWKV-CLIP, the first RWKV-driven vision-language representation learning model that combines the effective parallel training of transformers with the efficient inference of RNNs. Extensive experiments across different model scales and pre-training datasets demonstrate that RWKV-CLIP is a robust vision-language representation learner and it achieves state-of-the-art performance across multiple downstream tasks, including linear probing, zero-shot classification, and zero-shot image-text retrieval. To facilitate future research, the code and pre-trained models are released at https://github.com/deepglint/RWKV-CLIP.", }
Contrastive Language-Image Pre-training (CLIP) has significantly improved performance in various vision-language tasks by expanding the dataset with image-text pairs obtained from the web. This paper further explores CLIP from the perspectives of data and model architecture. To mitigate the impact of the noise data and enhance the quality of large-scale image-text data crawled from the internet, we introduce a diverse description generation framework that can leverage Large Language Models (LLMs) to combine and refine information from web-based image-text pairs, synthetic captions, and detection tags. Additionally, we propose RWKV-CLIP, the first RWKV-driven vision-language representation learning model that combines the effective parallel training of transformers with the efficient inference of RNNs. Extensive experiments across different model scales and pre-training datasets demonstrate that RWKV-CLIP is a robust vision-language representation learner and it achieves state-of-the-art performance across multiple downstream tasks, including linear probing, zero-shot classification, and zero-shot image-text retrieval. To facilitate future research, the code and pre-trained models are released at https://github.com/deepglint/RWKV-CLIP.
[ "Gu, Tiancheng", "Yang, Kaicheng", "An, Xiang", "Feng, Ziyong", "Liu, Dongnan", "Cai, Weidong", "Deng, Jiankang" ]
RWKV-CLIP: A Robust Vision-Language Representation Learner
emnlp-main.276
Poster
2406.06973
[ "https://github.com/deepglint/rwkv-clip" ]
https://huggingface.co/papers/2406.06973
2
0
0
7
[ "sunatte/txt2sql", "MachoMaheen/devdock4bit" ]
[ "Kaichengalex/YFCC15M" ]
[ "smarttang/blingsec" ]
[ "sunatte/txt2sql", "MachoMaheen/devdock4bit" ]
[ "Kaichengalex/YFCC15M" ]
[ "smarttang/blingsec" ]
1
https://aclanthology.org/2024.emnlp-main.277.bib
https://aclanthology.org/2024.emnlp-main.277/
@inproceedings{nayeem-rafiei-2024-kidlm, title = "{K}id{LM}: Advancing Language Models for Children {--} Early Insights and Future Directions", author = "Nayeem, Mir Tafseer and Rafiei, Davood", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.277", pages = "4813--4836", abstract = "Recent studies highlight the potential of large language models in creating educational tools for children, yet significant challenges remain in maintaining key child-specific properties such as linguistic nuances, cognitive needs, and safety standards. In this paper, we explore foundational steps toward the development of child-specific language models, emphasizing the necessity of high-quality pre-training data. We introduce a novel user-centric data collection pipeline that involves gathering and validating a corpus specifically written for and sometimes by children. Additionally, we propose a new training objective, Stratified Masking, which dynamically adjusts masking probabilities based on our domain-specific child language data, enabling models to prioritize vocabulary and concepts more suitable for children. Experimental evaluations demonstrate that our model excels in understanding lower grade-level text, maintains safety by avoiding stereotypes, and captures children{'}s unique preferences. Furthermore, we provide actionable insights for future research and development in child-specific language modeling.", }
Recent studies highlight the potential of large language models in creating educational tools for children, yet significant challenges remain in maintaining key child-specific properties such as linguistic nuances, cognitive needs, and safety standards. In this paper, we explore foundational steps toward the development of child-specific language models, emphasizing the necessity of high-quality pre-training data. We introduce a novel user-centric data collection pipeline that involves gathering and validating a corpus specifically written for and sometimes by children. Additionally, we propose a new training objective, Stratified Masking, which dynamically adjusts masking probabilities based on our domain-specific child language data, enabling models to prioritize vocabulary and concepts more suitable for children. Experimental evaluations demonstrate that our model excels in understanding lower grade-level text, maintains safety by avoiding stereotypes, and captures children{'}s unique preferences. Furthermore, we provide actionable insights for future research and development in child-specific language modeling.
[ "Nayeem, Mir Tafseer", "Rafiei, Davood" ]
KidLM: Advancing Language Models for Children – Early Insights and Future Directions
emnlp-main.277
Poster
[ "https://github.com/tafseer-nayeem/KidLM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.278.bib
https://aclanthology.org/2024.emnlp-main.278/
@inproceedings{barua-etal-2024-using, title = "Using Language Models to Disambiguate Lexical Choices in Translation", author = "Barua, Josh and Subramanian, Sanjay and Yin, Kayo and Suhr, Alane", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.278", pages = "4837--4848", abstract = "In translation, a concept represented by a single word in a source language can have multiple variations in a target language. The task of lexical selection requires using context to identify which variation is most appropriate for a source text. We work with native speakers of nine languages to create DTAiLS, a dataset of 1,377 sentence pairs that exhibit cross-lingual concept variation when translating from English. We evaluate recent LLMs and neural machine translation systems on DTAiLS, with the best-performing model, GPT-4, achieving from 67 to 85{\%} accuracy across languages. Finally, we use language models to generate English rules describing target-language concept variations. Providing weaker models with high-quality lexical rules improves accuracy substantially, in some cases reaching or outperforming GPT-4.", }
In translation, a concept represented by a single word in a source language can have multiple variations in a target language. The task of lexical selection requires using context to identify which variation is most appropriate for a source text. We work with native speakers of nine languages to create DTAiLS, a dataset of 1,377 sentence pairs that exhibit cross-lingual concept variation when translating from English. We evaluate recent LLMs and neural machine translation systems on DTAiLS, with the best-performing model, GPT-4, achieving from 67 to 85{\%} accuracy across languages. Finally, we use language models to generate English rules describing target-language concept variations. Providing weaker models with high-quality lexical rules improves accuracy substantially, in some cases reaching or outperforming GPT-4.
[ "Barua, Josh", "Subramanian, Sanjay", "Yin, Kayo", "Suhr, Alane" ]
Using Language Models to Disambiguate Lexical Choices in Translation
emnlp-main.278
Poster
2411.05781
[ "https://github.com/berkeley-nlp/lex-rules" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.279.bib
https://aclanthology.org/2024.emnlp-main.279/
@inproceedings{li-etal-2024-disclosure, title = "How Does the Disclosure of {AI} Assistance Affect the Perceptions of Writing?", author = "Li, Zhuoyan and Liang, Chen and Peng, Jing and Yin, Ming", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.279", pages = "4849--4868", abstract = "Recent advances in generative AI technologies like large language models have boosted the incorporation of AI assistance in writing workflows, leading to the rise of a new paradigm of human-AI co-creation in writing. To understand how people perceive writings that are produced under this paradigm, in this paper, we conduct an experimental study to understand whether and how the disclosure of the level and type of AI assistance in the writing process would affect people{'}s perceptions of the writing on various aspects, including their evaluation on the quality of the writing, and their ranking of different writings. Our results suggest that disclosing the AI assistance in the writing process, especially if AI has provided assistance in generating new content, decreases the average quality ratings for both argumentative essays and creative stories. This decrease in the average quality ratings often comes with an increased level of variations in different individuals{'} quality evaluations of the same writing. Indeed, factors such as an individual{'}s writing confidence and familiarity with AI writing assistants are shown to moderate the impact of AI assistance disclosure on their writing quality evaluations. We also find that disclosing the use of AI assistance may significantly reduce the proportion of writings produced with AI{'}s content generation assistance among the top-ranked writings.", }
Recent advances in generative AI technologies like large language models have boosted the incorporation of AI assistance in writing workflows, leading to the rise of a new paradigm of human-AI co-creation in writing. To understand how people perceive writings that are produced under this paradigm, in this paper, we conduct an experimental study to understand whether and how the disclosure of the level and type of AI assistance in the writing process would affect people{'}s perceptions of the writing on various aspects, including their evaluation on the quality of the writing, and their ranking of different writings. Our results suggest that disclosing the AI assistance in the writing process, especially if AI has provided assistance in generating new content, decreases the average quality ratings for both argumentative essays and creative stories. This decrease in the average quality ratings often comes with an increased level of variations in different individuals{'} quality evaluations of the same writing. Indeed, factors such as an individual{'}s writing confidence and familiarity with AI writing assistants are shown to moderate the impact of AI assistance disclosure on their writing quality evaluations. We also find that disclosing the use of AI assistance may significantly reduce the proportion of writings produced with AI{'}s content generation assistance among the top-ranked writings.
[ "Li, Zhuoyan", "Liang, Chen", "Peng, Jing", "Yin, Ming" ]
How Does the Disclosure of AI Assistance Affect the Perceptions of Writing?
emnlp-main.279
Poster
2410.04545
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.280.bib
https://aclanthology.org/2024.emnlp-main.280/
@inproceedings{edin-etal-2024-unsupervised, title = "An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records", author = "Edin, Joakim and Maistro, Maria and Maal{\o}e, Lars and Borgholt, Lasse and Havtorn, Jakob Drachmann and Ruotsalo, Tuukka", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.280", pages = "4869--4890", abstract = "Electronic healthcare records are vital for patient safety as they document conditions, plans, and procedures in both free text and medical codes. Language models have significantly enhanced the processing of such records, streamlining workflows and reducing manual data entry, thereby saving healthcare providers significant resources. However, the black-box nature of these models often leaves healthcare professionals hesitant to trust them. State-of-the-art explainability methods increase model transparency but rely on human-annotated evidence spans, which are costly. In this study, we propose an approach to produce plausible and faithful explanations without needing such annotations. We demonstrate on the automated medical coding task that adversarial robustness training improves explanation plausibility and introduce AttInGrad, a new explanation method superior to previous ones. By combining both contributions in a fully unsupervised setup, we produce explanations of comparable quality, or better, to that of a supervised approach. We release our code and model weights.", }
Electronic healthcare records are vital for patient safety as they document conditions, plans, and procedures in both free text and medical codes. Language models have significantly enhanced the processing of such records, streamlining workflows and reducing manual data entry, thereby saving healthcare providers significant resources. However, the black-box nature of these models often leaves healthcare professionals hesitant to trust them. State-of-the-art explainability methods increase model transparency but rely on human-annotated evidence spans, which are costly. In this study, we propose an approach to produce plausible and faithful explanations without needing such annotations. We demonstrate on the automated medical coding task that adversarial robustness training improves explanation plausibility and introduce AttInGrad, a new explanation method superior to previous ones. By combining both contributions in a fully unsupervised setup, we produce explanations of comparable quality, or better, to that of a supervised approach. We release our code and model weights.
[ "Edin, Joakim", "Maistro, Maria", "Maal{\\o}e, Lars", "Borgholt, Lasse", "Havtorn, Jakob Drachmann", "Ruotsalo, Tuukka" ]
An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records
emnlp-main.280
Oral
2406.08958
[ "https://github.com/JoakimEdin/explainable-medical-coding" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.281.bib
https://aclanthology.org/2024.emnlp-main.281/
@inproceedings{wang-etal-2024-crafting, title = "Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs", author = "Wang, Zheng and Li, Zhongyang and Jiang, Zeren and Tu, Dandan and Shi, Wei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.281", pages = "4891--4906", abstract = "In the age of mobile internet, user data, often referred to as memories, is continuously generated on personal devices. Effectively managing and utilizing this data to deliver services to users is a compelling research topic. In this paper, we introduce a novel task of crafting personalized agents powered by large language models (LLMs), which utilize a user{'}s smartphone memories to enhance downstream applications with advanced LLM capabilities. To achieve this goal, we introduce EMG-RAG, a solution that combines Retrieval-Augmented Generation (RAG) techniques with an Editable Memory Graph (EMG). This approach is further optimized using Reinforcement Learning to address three distinct challenges: data collection, editability, and selectability. Extensive experiments on a real-world dataset validate the effectiveness of EMG-RAG, achieving an improvement of approximately 10{\%} over the best existing approach. Additionally, the personalized agents have been transferred into a real smartphone AI assistant, which leads to enhanced usability.", }
In the age of mobile internet, user data, often referred to as memories, is continuously generated on personal devices. Effectively managing and utilizing this data to deliver services to users is a compelling research topic. In this paper, we introduce a novel task of crafting personalized agents powered by large language models (LLMs), which utilize a user{'}s smartphone memories to enhance downstream applications with advanced LLM capabilities. To achieve this goal, we introduce EMG-RAG, a solution that combines Retrieval-Augmented Generation (RAG) techniques with an Editable Memory Graph (EMG). This approach is further optimized using Reinforcement Learning to address three distinct challenges: data collection, editability, and selectability. Extensive experiments on a real-world dataset validate the effectiveness of EMG-RAG, achieving an improvement of approximately 10{\%} over the best existing approach. Additionally, the personalized agents have been transferred into a real smartphone AI assistant, which leads to enhanced usability.
[ "Wang, Zheng", "Li, Zhongyang", "Jiang, Zeren", "Tu, D", "an", "Shi, Wei" ]
Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs
emnlp-main.281
Poster
2409.19401
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.282.bib
https://aclanthology.org/2024.emnlp-main.282/
@inproceedings{liu-etal-2024-evedit, title = "{EVEDIT}: Event-based Knowledge Editing for Deterministic Knowledge Propagation", author = "Liu, Jiateng and Yu, Pengfei and Zhang, Yuji and Li, Sha and Zhang, Zixuan and Sarikaya, Ruhi and Small, Kevin and Ji, Heng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.282", pages = "4907--4926", abstract = "The dynamic nature of real-world information necessitates knowledge editing (KE) in large language models (LLMs). The edited knowledge should propagate and facilitate the deduction of new information based on existing model knowledge. We term the existing related knowledge in LLM serving as the origination of knowledge propagation as {''}deduction anchors{''}. However, current KE approaches, which only operate on (subject, relation, object) triple. We both theoretically and empirically observe that this simplified setting often leads to uncertainty when determining the deduction anchors, causing low confidence in their answers. To mitigate this issue, we propose a novel task of event-based knowledge editing that pairs facts with event descriptions. This task manifests not only a closer simulation of real-world editing scenarios but also a more logically sound setting, implicitly defining the deduction anchor and enabling LLMs to propagate knowledge confidently. We curate a new benchmark dataset Evedit derived from the CounterFact dataset and validate its superiority in improving model confidence. Moreover, while we observe that the event-based setting is significantly challenging for existing approaches, we propose a novel approach Self-Edit that showcases stronger performance, achieving 55.6{\%} consistency improvement while maintaining the naturalness of generation.", }
The dynamic nature of real-world information necessitates knowledge editing (KE) in large language models (LLMs). The edited knowledge should propagate and facilitate the deduction of new information based on existing model knowledge. We term the existing related knowledge in LLM serving as the origination of knowledge propagation as {''}deduction anchors{''}. However, current KE approaches, which only operate on (subject, relation, object) triple. We both theoretically and empirically observe that this simplified setting often leads to uncertainty when determining the deduction anchors, causing low confidence in their answers. To mitigate this issue, we propose a novel task of event-based knowledge editing that pairs facts with event descriptions. This task manifests not only a closer simulation of real-world editing scenarios but also a more logically sound setting, implicitly defining the deduction anchor and enabling LLMs to propagate knowledge confidently. We curate a new benchmark dataset Evedit derived from the CounterFact dataset and validate its superiority in improving model confidence. Moreover, while we observe that the event-based setting is significantly challenging for existing approaches, we propose a novel approach Self-Edit that showcases stronger performance, achieving 55.6{\%} consistency improvement while maintaining the naturalness of generation.
[ "Liu, Jiateng", "Yu, Pengfei", "Zhang, Yuji", "Li, Sha", "Zhang, Zixuan", "Sarikaya, Ruhi", "Small, Kevin", "Ji, Heng" ]
EVEDIT: Event-based Knowledge Editing for Deterministic Knowledge Propagation
emnlp-main.282
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.283.bib
https://aclanthology.org/2024.emnlp-main.283/
@inproceedings{aoyama-schneider-2024-modeling, title = "Modeling Nonnative Sentence Processing with {L}2 Language Models", author = "Aoyama, Tatsuya and Schneider, Nathan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.283", pages = "4927--4940", abstract = "We study LMs pretrained sequentially on two languages ({``}L2LMs{''}) for modeling nonnative sentence processing. In particular, we pretrain GPT2 on 6 different first languages (L1s), followed by English as the second language (L2). We examine the effect of the choice of pretraining L1 on the model{'}s ability to predict human reading times, evaluating on English readers from a range of L1 backgrounds. Experimental results show that, while all of the LMs{'} word surprisals improve prediction of L2 reading times, especially for human L1s distant from English, there is no reliable effect of the choice of L2LM{'}s L1. We also evaluate the learning trajectory of a monolingual English LM: for predicting L2 as opposed to L1 reading, it peaks much earlier and immediately falls off, possibly mirroring the difference in proficiency between the native and nonnative populations. Lastly, we provide examples of L2LMs{'} surprisals, which could potentially generate hypotheses about human L2 reading.", }
We study LMs pretrained sequentially on two languages ({``}L2LMs{''}) for modeling nonnative sentence processing. In particular, we pretrain GPT2 on 6 different first languages (L1s), followed by English as the second language (L2). We examine the effect of the choice of pretraining L1 on the model{'}s ability to predict human reading times, evaluating on English readers from a range of L1 backgrounds. Experimental results show that, while all of the LMs{'} word surprisals improve prediction of L2 reading times, especially for human L1s distant from English, there is no reliable effect of the choice of L2LM{'}s L1. We also evaluate the learning trajectory of a monolingual English LM: for predicting L2 as opposed to L1 reading, it peaks much earlier and immediately falls off, possibly mirroring the difference in proficiency between the native and nonnative populations. Lastly, we provide examples of L2LMs{'} surprisals, which could potentially generate hypotheses about human L2 reading.
[ "Aoyama, Tatsuya", "Schneider, Nathan" ]
Modeling Nonnative Sentence Processing with L2 Language Models
emnlp-main.283
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.284.bib
https://aclanthology.org/2024.emnlp-main.284/
@inproceedings{cheng-etal-2024-least, title = "From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis", author = "Cheng, Chuanqi and Guan, Jian and Wu, Wei and Yan, Rui", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.284", pages = "4941--4957", abstract = "We explore multi-step reasoning in vision-language models (VLMs). The problem is challenging, as reasoning data consisting of multiple steps of visual and language processing are barely available. To overcome the challenge, we first introduce a least-to-most visual reasoning paradigm, which interleaves steps of decomposing a question into sub-questions and invoking external tools for resolving sub-questions. Based on the paradigm, we further propose a novel data synthesis approach that can automatically create questions and multi-step reasoning paths for an image in a bottom-up manner. Our approach divides the complex synthesis task into a few simple sub-tasks, and (almost entirely) relies on open-sourced models to accomplish the sub-tasks. Therefore, the entire synthesis process is reproducible and cost-efficient, and the synthesized data is quality guaranteed. With the approach, we construct 50k visual reasoning examples. Then, we develop a visual reasoner through supervised fine-tuning, which is capable of generally enhancing the reasoning abilities of a wide range of existing VLMs in a plug-and-play fashion. Extensive experiments indicate that the visual reasoner can consistently and significantly improve four VLMs on four VQA benchmarks.", }
We explore multi-step reasoning in vision-language models (VLMs). The problem is challenging, as reasoning data consisting of multiple steps of visual and language processing are barely available. To overcome the challenge, we first introduce a least-to-most visual reasoning paradigm, which interleaves steps of decomposing a question into sub-questions and invoking external tools for resolving sub-questions. Based on the paradigm, we further propose a novel data synthesis approach that can automatically create questions and multi-step reasoning paths for an image in a bottom-up manner. Our approach divides the complex synthesis task into a few simple sub-tasks, and (almost entirely) relies on open-sourced models to accomplish the sub-tasks. Therefore, the entire synthesis process is reproducible and cost-efficient, and the synthesized data is quality guaranteed. With the approach, we construct 50k visual reasoning examples. Then, we develop a visual reasoner through supervised fine-tuning, which is capable of generally enhancing the reasoning abilities of a wide range of existing VLMs in a plug-and-play fashion. Extensive experiments indicate that the visual reasoner can consistently and significantly improve four VLMs on four VQA benchmarks.
[ "Cheng, Chuanqi", "Guan, Jian", "Wu, Wei", "Yan, Rui" ]
From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis
emnlp-main.284
Poster
2406.19934
[ "https://github.com/steven-ccq/visualreasoner" ]
https://huggingface.co/papers/2406.19934
0
0
0
4
[]
[ "orange-sk/VisualReasoner-1M" ]
[]
[]
[ "orange-sk/VisualReasoner-1M" ]
[]
1
https://aclanthology.org/2024.emnlp-main.285.bib
https://aclanthology.org/2024.emnlp-main.285/
@inproceedings{iskander-etal-2024-quality, title = "Quality Matters: Evaluating Synthetic Data for Tool-Using {LLM}s", author = "Iskander, Shadi and Tolmach, Sofia and Shapira, Ori and Cohen, Nachshon and Karnin, Zohar", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.285", pages = "4958--4976", abstract = "Training large language models (LLMs) for external tool usage is a rapidly expanding field, with recent research focusing on generating synthetic data to address the shortage of available data. However, the absence of systematic data quality checks poses complications for properly training and testing models. To that end, we propose two approaches for assessing the reliability of data for training LLMs to use external tools. The first approach uses intuitive, human-defined correctness criteria. The second approach uses a model-driven assessment with in-context evaluation. We conduct a thorough evaluation of data quality on two popular benchmarks, followed by an extrinsic evaluation that showcases the impact of data quality on model performance. Our results demonstrate that models trained on high-quality data outperform those trained on unvalidated data, even when trained with a smaller quantity of data. These findings empirically support the significance of assessing and ensuring the reliability of training data for tool-using LLMs.", }
Training large language models (LLMs) for external tool usage is a rapidly expanding field, with recent research focusing on generating synthetic data to address the shortage of available data. However, the absence of systematic data quality checks poses complications for properly training and testing models. To that end, we propose two approaches for assessing the reliability of data for training LLMs to use external tools. The first approach uses intuitive, human-defined correctness criteria. The second approach uses a model-driven assessment with in-context evaluation. We conduct a thorough evaluation of data quality on two popular benchmarks, followed by an extrinsic evaluation that showcases the impact of data quality on model performance. Our results demonstrate that models trained on high-quality data outperform those trained on unvalidated data, even when trained with a smaller quantity of data. These findings empirically support the significance of assessing and ensuring the reliability of training data for tool-using LLMs.
[ "Isk", "er, Shadi", "Tolmach, Sofia", "Shapira, Ori", "Cohen, Nachshon", "Karnin, Zohar" ]
Quality Matters: Evaluating Synthetic Data for Tool-Using LLMs
emnlp-main.285
Poster
2409.16341
[ "" ]
https://huggingface.co/papers/2409.16341
2
0
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.286.bib
https://aclanthology.org/2024.emnlp-main.286/
@inproceedings{li-etal-2024-cross, title = "Cross-Domain Audio Deepfake Detection: Dataset and Analysis", author = "Li, Yuang and Zhang, Min and Ren, Mengxin and Qiao, Xiaosong and Ma, Miaomiao and Wei, Daimeng and Yang, Hao", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.286", pages = "4977--4983", abstract = "Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy. Recent zero-shot text-to-speech (TTS) models pose higher risks as they can clone voices with a single utterance. However, the existing ADD datasets are outdated, leading to suboptimal generalization of detection models. In this paper, we construct a new cross-domain ADD dataset comprising over 300 hours of speech data that is generated by five advanced zero-shot TTS models. To simulate real-world scenarios, we employ diverse attack methods and audio prompts from different datasets. Experiments show that, through novel attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve equal error rates of 4.1{\%} and 6.5{\%} respectively. Additionally, we demonstrate our models{'} outstanding few-shot ADD ability by fine-tuning with just one minute of target-domain data. Nonetheless, neural codec compressors greatly affect the detection accuracy, necessitating further research. Our dataset is publicly available (https://github.com/leolya/CD-ADD).", }
Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy. Recent zero-shot text-to-speech (TTS) models pose higher risks as they can clone voices with a single utterance. However, the existing ADD datasets are outdated, leading to suboptimal generalization of detection models. In this paper, we construct a new cross-domain ADD dataset comprising over 300 hours of speech data that is generated by five advanced zero-shot TTS models. To simulate real-world scenarios, we employ diverse attack methods and audio prompts from different datasets. Experiments show that, through novel attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve equal error rates of 4.1% and 6.5% respectively. Additionally, we demonstrate our models' outstanding few-shot ADD ability by fine-tuning with just one minute of target-domain data. Nonetheless, neural codec compressors greatly affect the detection accuracy, necessitating further research. Our dataset is publicly available (https://github.com/leolya/CD-ADD).
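The equal error rates quoted above are a standard detection metric. Below is a minimal, generic sketch of how an EER can be estimated from detection scores and labels; it is illustrative evaluation code, not the paper's released evaluation script, and the toy scores are invented.

```python
import numpy as np

def equal_error_rate(scores: np.ndarray, labels: np.ndarray) -> float:
    """Rough EER estimate: sweep thresholds over the observed scores and
    return the operating point where false-acceptance and false-rejection
    rates are closest. `scores` are higher-means-more-likely-fake;
    `labels` are 1 for spoofed audio, 0 for bona fide audio."""
    best_gap, eer = 1.0, 1.0
    for t in np.sort(np.unique(scores)):
        far = float(np.mean(scores[labels == 0] >= t))  # real flagged as fake
        frr = float(np.mean(scores[labels == 1] < t))   # fake passed as real
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy usage with invented scores and labels.
s = np.array([0.9, 0.8, 0.75, 0.3, 0.2, 0.1])
y = np.array([1, 1, 0, 1, 0, 0])
print(round(equal_error_rate(s, y), 3))
```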
[ "Li, Yuang", "Zhang, Min", "Ren, Mengxin", "Qiao, Xiaosong", "Ma, Miaomiao", "Wei, Daimeng", "Yang, Hao" ]
Cross-Domain Audio Deepfake Detection: Dataset and Analysis
emnlp-main.286
Poster
2404.04904
[ "" ]
https://huggingface.co/papers/2404.04904
0
0
0
6
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.287.bib
https://aclanthology.org/2024.emnlp-main.287/
@inproceedings{liu-etal-2024-mapper, title = "{M}a{PPER}: Multimodal Prior-guided Parameter Efficient Tuning for Referring Expression Comprehension", author = "Liu, Ting and Xu, Zunnan and Hu, Yue and Shi, Liangtao and Wang, Zhiqiang and Yin, Quanjun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.287", pages = "4984--4994", abstract = "Referring Expression Comprehension (REC), which aims to ground a local visual region via natural language, is a task that heavily relies on multimodal alignment. Most existing methods utilize powerful pre-trained models to transfer visual/linguistic knowledge by full fine-tuning. However, full fine-tuning the entire backbone not only breaks the rich prior knowledge embedded in the pre-training, but also incurs significant computational costs. Motivated by the recent emergence of Parameter-Efficient Transfer Learning (PETL) methods, we aim to solve the REC task in an effective and efficient manner. Directly applying these PETL methods to the REC task is inappropriate, as they lack the specific-domain abilities for precise local visual perception and visual-language alignment. Therefore, we propose a novel framework of Multimodal Prior-guided Parameter Efficient Tuning, namely MaPPER. Specifically, MaPPER comprises Dynamic Prior Adapters guided by a aligned prior, and Local Convolution Adapters to extract precise local semantics for better visual perception. Moreover, the Prior-Guided Text module is proposed to further utilize the prior for facilitating the cross-modal alignment. Experimental results on three widely-used benchmarks demonstrate that MaPPER achieves the best accuracy compared to the full fine-tuning and other PETL methods with only 1.41{\%} tunable backbone parameters.", }
Referring Expression Comprehension (REC), which aims to ground a local visual region via natural language, is a task that heavily relies on multimodal alignment. Most existing methods utilize powerful pre-trained models to transfer visual/linguistic knowledge by full fine-tuning. However, full fine-tuning the entire backbone not only breaks the rich prior knowledge embedded in the pre-training, but also incurs significant computational costs. Motivated by the recent emergence of Parameter-Efficient Transfer Learning (PETL) methods, we aim to solve the REC task in an effective and efficient manner. Directly applying these PETL methods to the REC task is inappropriate, as they lack the specific-domain abilities for precise local visual perception and visual-language alignment. Therefore, we propose a novel framework of Multimodal Prior-guided Parameter Efficient Tuning, namely MaPPER. Specifically, MaPPER comprises Dynamic Prior Adapters guided by an aligned prior, and Local Convolution Adapters to extract precise local semantics for better visual perception. Moreover, the Prior-Guided Text module is proposed to further utilize the prior for facilitating the cross-modal alignment. Experimental results on three widely-used benchmarks demonstrate that MaPPER achieves the best accuracy compared to the full fine-tuning and other PETL methods with only 1.41% tunable backbone parameters.
[ "Liu, Ting", "Xu, Zunnan", "Hu, Yue", "Shi, Liangtao", "Wang, Zhiqiang", "Yin, Quanjun" ]
MaPPER: Multimodal Prior-guided Parameter Efficient Tuning for Referring Expression Comprehension
emnlp-main.287
Poster
2409.13609
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.288.bib
https://aclanthology.org/2024.emnlp-main.288/
@inproceedings{ko-etal-2024-hierarchical, title = "Hierarchical Deconstruction of {LLM} Reasoning: A Graph-Based Framework for Analyzing Knowledge Utilization", author = "Ko, Miyoung and Park, Sue Hyun and Park, Joonsuk and Seo, Minjoon", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.288", pages = "4995--5027", abstract = "Despite the advances in large language models (LLMs), how they use their knowledge for reasoning is not yet well understood.In this study, we propose a method that deconstructs complex real-world questions into a graph, representing each question as a node with predecessors of background knowledge needed to solve the question. We develop the DepthQA dataset, deconstructing questions into three depths: (i) recalling conceptual knowledge, (ii) applying procedural knowledge, and (iii) analyzing strategic knowledge. Based on a hierarchical graph, we quantify forward discrepancy, a discrepancy in LLM performance on simpler sub-problems versus complex questions. We also measure backward discrepancy where LLMs answer complex questions but struggle with simpler ones. Our analysis shows that smaller models exhibit more discrepancies than larger models. Distinct patterns of discrepancies are observed across model capacity and possibility of training data memorization. Additionally, guiding models from simpler to complex questions through multi-turn interactions improves performance across model sizes, highlighting the importance of structured intermediate steps in knowledge reasoning. This work enhances our understanding of LLM reasoning and suggests ways to improve their problem-solving abilities.", }
Despite the advances in large language models (LLMs), how they use their knowledge for reasoning is not yet well understood. In this study, we propose a method that deconstructs complex real-world questions into a graph, representing each question as a node with predecessors of background knowledge needed to solve the question. We develop the DepthQA dataset, deconstructing questions into three depths: (i) recalling conceptual knowledge, (ii) applying procedural knowledge, and (iii) analyzing strategic knowledge. Based on a hierarchical graph, we quantify forward discrepancy, a discrepancy in LLM performance on simpler sub-problems versus complex questions. We also measure backward discrepancy where LLMs answer complex questions but struggle with simpler ones. Our analysis shows that smaller models exhibit more discrepancies than larger models. Distinct patterns of discrepancies are observed across model capacity and possibility of training data memorization. Additionally, guiding models from simpler to complex questions through multi-turn interactions improves performance across model sizes, highlighting the importance of structured intermediate steps in knowledge reasoning. This work enhances our understanding of LLM reasoning and suggests ways to improve their problem-solving abilities.
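The forward/backward discrepancy idea described above can be made concrete with a toy dependency graph of questions. The counting scheme below is a hedged paraphrase for illustration, not the paper's exact metric, and every name in the snippet is hypothetical.

```python
from typing import Dict, List

def discrepancies(correct: Dict[str, bool], predecessors: Dict[str, List[str]]):
    """Toy illustration of forward/backward discrepancy over a dependency
    graph of questions. `correct[q]` says whether the model answered q;
    `predecessors[q]` lists the simpler background questions q depends on."""
    forward = backward = evaluated = 0
    for q, preds in predecessors.items():
        if not preds:
            continue
        evaluated += 1
        solved_preds = all(correct[p] for p in preds)
        if solved_preds and not correct[q]:
            forward += 1      # easier parts solved, complex question missed
        if correct[q] and not solved_preds:
            backward += 1     # complex question solved despite missed parts
    return forward / evaluated, backward / evaluated

# Hypothetical usage on a three-node chain of increasing depth.
graph = {"recall": [], "apply": ["recall"], "analyze": ["recall", "apply"]}
answers = {"recall": True, "apply": False, "analyze": True}
print(discrepancies(answers, graph))
```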
[ "Ko, Miyoung", "Park, Sue Hyun", "Park, Joonsuk", "Seo, Minjoon" ]
Hierarchical Deconstruction of LLM Reasoning: A Graph-Based Framework for Analyzing Knowledge Utilization
emnlp-main.288
Poster
2406.19502
[ "https://github.com/kaistai/knowledge-reasoning" ]
https://huggingface.co/papers/2406.19502
1
2
0
4
[]
[ "kaist-ai/DepthQA" ]
[]
[]
[ "kaist-ai/DepthQA" ]
[]
1
https://aclanthology.org/2024.emnlp-main.289.bib
https://aclanthology.org/2024.emnlp-main.289/
@inproceedings{huang-etal-2024-aligning, title = "Aligning Translation-Specific Understanding to General Understanding in Large Language Models", author = "Huang, Yichong and Li, Baohang and Feng, Xiaocheng and Huo, Wenshuai and Fu, Chengpeng and Liu, Ting and Qin, Bing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.289", pages = "5028--5041", abstract = "Large Language models (LLMs) have exhibited remarkable abilities in understanding complex texts, offering a promising path towards human-like translation performance. However, this study reveals the misalignment between the translation-specific understanding and the general understanding inside LLMs. This understanding misalignment leads to LLMs mistakenly or literally translating some complicated concepts that they accurately comprehend in the general scenarios (e.g., QA). To align the translation-specific understanding to the general one, we propose a novel translation process, DUAT (Difficult words Understanding Aligned Translation), explicitly incorporating the general understanding on the complicated content incurring inconsistent understandings to guide the translation. Specifically, DUAT performs cross-lingual interpretation for the difficult-to-translate words and enhances the translation with the generated interpretations. Furthermore, we reframe the external tools to improve DUAT in detecting difficult words and generating helpful interpretations. We conduct experiments on the self-constructed benchmark Challenge-WMT, consisting of samples that are prone to mistranslation. Human evaluation results on high-resource and low-resource language pairs indicate that DUAT significantly facilitates the understanding alignment, which improves the translation quality (up to +3.85 COMET) and reduces translation literalness by -25{\%} ∼ -51{\%}.", }
Large Language models (LLMs) have exhibited remarkable abilities in understanding complex texts, offering a promising path towards human-like translation performance. However, this study reveals the misalignment between the translation-specific understanding and the general understanding inside LLMs. This understanding misalignment leads to LLMs mistakenly or literally translating some complicated concepts that they accurately comprehend in the general scenarios (e.g., QA). To align the translation-specific understanding to the general one, we propose a novel translation process, DUAT (Difficult words Understanding Aligned Translation), explicitly incorporating the general understanding on the complicated content incurring inconsistent understandings to guide the translation. Specifically, DUAT performs cross-lingual interpretation for the difficult-to-translate words and enhances the translation with the generated interpretations. Furthermore, we reframe the external tools to improve DUAT in detecting difficult words and generating helpful interpretations. We conduct experiments on the self-constructed benchmark Challenge-WMT, consisting of samples that are prone to mistranslation. Human evaluation results on high-resource and low-resource language pairs indicate that DUAT significantly facilitates the understanding alignment, which improves the translation quality (up to +3.85 COMET) and reduces translation literalness by -25% ∼ -51%.
[ "Huang, Yichong", "Li, Baohang", "Feng, Xiaocheng", "Huo, Wenshuai", "Fu, Chengpeng", "Liu, Ting", "Qin, Bing" ]
Aligning Translation-Specific Understanding to General Understanding in Large Language Models
emnlp-main.289
Poster
2401.05072
[ "https://github.com/orangeinsouth/challengewmt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.290.bib
https://aclanthology.org/2024.emnlp-main.290/
@inproceedings{ballout-etal-2024-fool, title = "{FOOL} {ME} {IF} {YOU} {CAN}! An Adversarial Dataset to Investigate the Robustness of {LM}s in Word Sense Disambiguation", author = {Ballout, Mohamad and Dedert, Anne and Abdelmoneim, Nohayr Muhammad and Krumnack, Ulf and Heidemann, Gunther and K{\"u}hnberger, Kai-Uwe}, editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.290", pages = "5042--5059", abstract = "Word sense disambiguation (WSD) is a key task in natural language processing and lexical semantics. Pre-trained language models with contextualized word embeddings have significantly improved performance in regular WSD tasks. However, these models still struggle with recognizing semantic boundaries and often misclassify homonyms in adversarial context. Therefore, we propose FOOL: FOur-fold Obscure Lexical, a new coarse-grained WSD dataset, which includes four different test sets designed to assess the robustness of language models in WSD tasks. Two sets feature typical WSD scenarios, while the other two include sentences with opposing contexts to challenge the models further.We tested two types of models on the proposed dataset: models with encoders, such as the BERT and T5 series of varying sizes by probing their embeddings, and state-of-the-art large decoder models like GPT-4o and the LlaMA3 family, using zero shot prompting. Across different state-of-the-art language models, we observed a decrease in performance in the latter two sets compared to the first two, with some models being affected more than others. We show interesting findings where small models like T5-large and BERT-large performed better than GPT-4o on Set 3 of the dataset. This indicates that, despite excelling in regular WSD tasks, these models still struggle to correctly disambiguate homonyms in artificial (Set 3) or realistic adversarial contexts (Set 4).", }
Word sense disambiguation (WSD) is a key task in natural language processing and lexical semantics. Pre-trained language models with contextualized word embeddings have significantly improved performance in regular WSD tasks. However, these models still struggle with recognizing semantic boundaries and often misclassify homonyms in adversarial contexts. Therefore, we propose FOOL: FOur-fold Obscure Lexical, a new coarse-grained WSD dataset, which includes four different test sets designed to assess the robustness of language models in WSD tasks. Two sets feature typical WSD scenarios, while the other two include sentences with opposing contexts to challenge the models further. We tested two types of models on the proposed dataset: models with encoders, such as the BERT and T5 series of varying sizes, by probing their embeddings, and state-of-the-art large decoder models like GPT-4o and the LlaMA3 family, using zero-shot prompting. Across different state-of-the-art language models, we observed a decrease in performance in the latter two sets compared to the first two, with some models being affected more than others. We show interesting findings where small models like T5-large and BERT-large performed better than GPT-4o on Set 3 of the dataset. This indicates that, despite excelling in regular WSD tasks, these models still struggle to correctly disambiguate homonyms in artificial (Set 3) or realistic adversarial contexts (Set 4).
[ "Ballout, Mohamad", "Dedert, Anne", "Abdelmoneim, Nohayr Muhammad", "Krumnack, Ulf", "Heidemann, Gunther", "K{\\\"u}hnberger, Kai-Uwe" ]
FOOL ME IF YOU CAN! An Adversarial Dataset to Investigate the Robustness of LMs in Word Sense Disambiguation
emnlp-main.290
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.291.bib
https://aclanthology.org/2024.emnlp-main.291/
@inproceedings{lee-etal-2024-concept, title = "Concept-skill Transferability-based Data Selection for Large Vision-Language Models", author = "Lee, Jaewoo and Li, Boyang and Hwang, Sung Ju", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.291", pages = "5060--5080", abstract = "Instruction tuning, or supervised finetuning on extensive task-specific data, is necessary for Large Vision-Language Models (LVLMs) to generalize well across a broad range of vision-language (VL) tasks. However, training on large VL datasets can become prohibitively expensive. In this work, we introduce COINCIDE, an effective and scalable data selection technique that uses a small model as a reference model to select visual instruction tuning data for efficient finetuning of a target LVLM, focusing on diversity and transferability. Specifically, we cluster the training data using internal activations from a small model, which identifies VL concept-skill compositions needed by a target LVLM. We then sample data from these diverse clusters by considering their density and transferability, or the ability to transfer well to other concept-skill compositions. This approach ensures the diversity of these compositions, which is vital for LVLM generalization. Extensive experiments demonstrate that COINCIDE achieves superior performance and data selection efficiency against 8 strong baselines on two distinct datasets: LLaVA-1.5 and Vision-Flan. Using only 20{\%} of the LLaVA-1.5 dataset, COINCIDE achieves performance comparable to the LVLM finetuned on the whole dataset, with 70{\%} reduction of the wall-clock running time. On the Vision-Flan dataset, our method achieves superior results with only 16.7{\%} of the training data.", }
Instruction tuning, or supervised finetuning on extensive task-specific data, is necessary for Large Vision-Language Models (LVLMs) to generalize well across a broad range of vision-language (VL) tasks. However, training on large VL datasets can become prohibitively expensive. In this work, we introduce COINCIDE, an effective and scalable data selection technique that uses a small model as a reference model to select visual instruction tuning data for efficient finetuning of a target LVLM, focusing on diversity and transferability. Specifically, we cluster the training data using internal activations from a small model, which identifies VL concept-skill compositions needed by a target LVLM. We then sample data from these diverse clusters by considering their density and transferability, or the ability to transfer well to other concept-skill compositions. This approach ensures the diversity of these compositions, which is vital for LVLM generalization. Extensive experiments demonstrate that COINCIDE achieves superior performance and data selection efficiency against 8 strong baselines on two distinct datasets: LLaVA-1.5 and Vision-Flan. Using only 20% of the LLaVA-1.5 dataset, COINCIDE achieves performance comparable to the LVLM finetuned on the whole dataset, with 70% reduction of the wall-clock running time. On the Vision-Flan dataset, our method achieves superior results with only 16.7% of the training data.
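A rough sketch of the cluster-then-sample selection pattern described above follows, assuming scikit-learn's KMeans for clustering; the weighting function standing in for density/transferability is hypothetical and not COINCIDE's actual scoring.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_subset(activations: np.ndarray, weights_fn, k: int, budget: int, seed: int = 0):
    """Illustrative cluster-then-sample selection: cluster small-model
    activations, weight each cluster (stand-in for density/transferability),
    and draw a budgeted sample from each cluster."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(activations)
    weights = np.array([weights_fn(activations[labels == c]) for c in range(k)], dtype=float)
    weights = weights / weights.sum()
    chosen = []
    for c, w in enumerate(weights):
        idx = np.flatnonzero(labels == c)
        n_c = min(len(idx), int(round(w * budget)))
        chosen.extend(rng.choice(idx, size=n_c, replace=False).tolist())
    return chosen

# Hypothetical usage: 200 fake activation vectors, uniform cluster weights.
acts = np.random.default_rng(1).normal(size=(200, 16))
picked = select_subset(acts, lambda members: 1.0, k=5, budget=40)
print(len(picked))
```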
[ "Lee, Jaewoo", "Li, Boyang", "Hwang, Sung Ju" ]
Concept-skill Transferability-based Data Selection for Large Vision-Language Models
emnlp-main.291
Poster
2406.10995
[ "https://github.com/g-jwlee/coincide_code" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.292.bib
https://aclanthology.org/2024.emnlp-main.292/
@inproceedings{du-etal-2024-llms, title = "{LLM}s Assist {NLP} Researchers: Critique Paper (Meta-)Reviewing", author = "Du, Jiangshu and Wang, Yibo and Zhao, Wenting and Deng, Zhongfen and Liu, Shuaiqi and Lou, Renze and Zou, Henry Peng and Narayanan Venkit, Pranav and Zhang, Nan and Srinath, Mukund and Zhang, Haoran Ranran and Gupta, Vipul and Li, Yinghui and Li, Tao and Wang, Fei and Liu, Qin and Liu, Tianlin and Gao, Pengzhi and Xia, Congying and Xing, Chen and Jiayang, Cheng and Wang, Zhaowei and Su, Ying and Shah, Raj Sanjay and Guo, Ruohao and Gu, Jing and Li, Haoran and Wei, Kangda and Wang, Zihao and Cheng, Lu and Ranathunga, Surangika and Fang, Meng and Fu, Jie and Liu, Fei and Huang, Ruihong and Blanco, Eduardo and Cao, Yixin and Zhang, Rui and Yu, Philip S. and Yin, Wenpeng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.292", pages = "5081--5099", abstract = "Claim: This work is not advocating the use of LLMs for paper (meta-)reviewing. Instead, wepresent a comparative analysis to identify and distinguish LLM activities from human activities. Two research goals: i) Enable better recognition of instances when someone implicitly uses LLMs for reviewing activities; ii) Increase community awareness that LLMs, and AI in general, are currently inadequate for performing tasks that require a high level of expertise and nuanced judgment.This work is motivated by two key trends. On one hand, large language models (LLMs) have shown remarkable versatility in various generative tasks such as writing, drawing, and question answering, significantly reducing the time required for many routine tasks. On the other hand, researchers, whose work is not only time-consuming but also highly expertise-demanding, face increasing challenges as they have to spend more time reading, writing, and reviewing papers. This raises the question: how can LLMs potentially assist researchers in alleviating their heavy workload?This study focuses on the topic of LLMs as NLP Researchers, particularly examining the effectiveness of LLMs in assisting paper (meta-)reviewing and its recognizability. To address this, we constructed the ReviewCritique dataset, which includes two types of information: (i) NLP papers (initial submissions rather than camera-ready) with both human-written and LLM-generated reviews, and (ii) each review comes with {``}deficiency{''} labels and corresponding explanations for individual segments, annotated by experts. Using ReviewCritique, this study explores two threads of research questions: (i) {``}LLMs as Reviewers{''}, how do reviews generated by LLMs compare with those written by humans in terms of quality and distinguishability? (ii) {``}LLMs as Metareviewers{''}, how effectively can LLMs identify potential issues, such as Deficient or unprofessional review segments, within individual paper reviews? To our knowledge, this is the first work to provide such a comprehensive analysis.", }
Claim: This work is not advocating the use of LLMs for paper (meta-)reviewing. Instead, we present a comparative analysis to identify and distinguish LLM activities from human activities. Two research goals: i) Enable better recognition of instances when someone implicitly uses LLMs for reviewing activities; ii) Increase community awareness that LLMs, and AI in general, are currently inadequate for performing tasks that require a high level of expertise and nuanced judgment. This work is motivated by two key trends. On one hand, large language models (LLMs) have shown remarkable versatility in various generative tasks such as writing, drawing, and question answering, significantly reducing the time required for many routine tasks. On the other hand, researchers, whose work is not only time-consuming but also highly expertise-demanding, face increasing challenges as they have to spend more time reading, writing, and reviewing papers. This raises the question: how can LLMs potentially assist researchers in alleviating their heavy workload? This study focuses on the topic of LLMs as NLP Researchers, particularly examining the effectiveness of LLMs in assisting paper (meta-)reviewing and its recognizability. To address this, we constructed the ReviewCritique dataset, which includes two types of information: (i) NLP papers (initial submissions rather than camera-ready) with both human-written and LLM-generated reviews, and (ii) each review comes with "deficiency" labels and corresponding explanations for individual segments, annotated by experts. Using ReviewCritique, this study explores two threads of research questions: (i) "LLMs as Reviewers", how do reviews generated by LLMs compare with those written by humans in terms of quality and distinguishability? (ii) "LLMs as Metareviewers", how effectively can LLMs identify potential issues, such as Deficient or unprofessional review segments, within individual paper reviews? To our knowledge, this is the first work to provide such a comprehensive analysis.
[ "Du, Jiangshu", "Wang, Yibo", "Zhao, Wenting", "Deng, Zhongfen", "Liu, Shuaiqi", "Lou, Renze", "Zou, Henry Peng", "Narayanan Venkit, Pranav", "Zhang, Nan", "Srinath, Mukund", "Zhang, Haoran Ranran", "Gupta, Vipul", "Li, Yinghui", "Li, Tao", "Wang, Fei", "Liu, Qin", "Liu, Tianlin", "Gao, Pengzhi", "Xia, Congying", "Xing, Chen", "Jiayang, Cheng", "Wang, Zhaowei", "Su, Ying", "Shah, Raj Sanjay", "Guo, Ruohao", "Gu, Jing", "Li, Haoran", "Wei, Kangda", "Wang, Zihao", "Cheng, Lu", "Ranathunga, Surangika", "Fang, Meng", "Fu, Jie", "Liu, Fei", "Huang, Ruihong", "Blanco, Eduardo", "Cao, Yixin", "Zhang, Rui", "Yu, Philip S.", "Yin, Wenpeng" ]
LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing
emnlp-main.292
Poster
2406.16253
[ "https://github.com/jiangshdd/reviewcritique" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.293.bib
https://aclanthology.org/2024.emnlp-main.293/
@inproceedings{dredze-etal-2024-academics, title = "Academics Can Contribute to Domain-Specialized Language Models", author = "Dredze, Mark and Winata, Genta Indra and Kambadur, Prabhanjan and Wu, Shijie and Irsoy, Ozan and Lu, Steven and Dabravolski, Vadim and Rosenberg, David S and Gehrmann, Sebastian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.293", pages = "5100--5110", abstract = "Commercially available models dominate academic leaderboards. While impressive, this has concentrated research on creating and adapting general-purpose models to improve NLP leaderboard standings for large language models. However, leaderboards collect many individual tasks and general-purpose models often underperform in specialized domains; domain-specific or adapted models yield superior results. This focus on large general-purpose models excludes many academics and draws attention away from areas where they can make important contributions. We advocate for a renewed focus on developing and evaluating domain- and task-specific models, and highlight the unique role of academics in this endeavor.", }
Commercially available models dominate academic leaderboards. While impressive, this has concentrated research on creating and adapting general-purpose models to improve NLP leaderboard standings for large language models. However, leaderboards collect many individual tasks and general-purpose models often underperform in specialized domains; domain-specific or adapted models yield superior results. This focus on large general-purpose models excludes many academics and draws attention away from areas where they can make important contributions. We advocate for a renewed focus on developing and evaluating domain- and task-specific models, and highlight the unique role of academics in this endeavor.
[ "Dredze, Mark", "Winata, Genta Indra", "Kambadur, Prabhanjan", "Wu, Shijie", "Irsoy, Ozan", "Lu, Steven", "Dabravolski, Vadim", "Rosenberg, David S", "Gehrmann, Sebastian" ]
Academics Can Contribute to Domain-Specialized Language Models
emnlp-main.293
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.294.bib
https://aclanthology.org/2024.emnlp-main.294/
@inproceedings{noh-etal-2024-beyond, title = "Beyond Reference: Evaluating High Quality Translations Better than Human References", author = "Noh, Keonwoong and Oh, Seokjin and Jung, Woohwan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.294", pages = "5111--5127", abstract = "In Machine Translation (MT) evaluations, the conventional approach is to compare a translated sentence against its human-created reference sentence. MT metrics provide an absolute score (e.g., from 0 to 1) to a candidate sentence based on the similarity with the reference sentence. Thus, existing MT metrics give the maximum score to the reference sentence. However, this approach overlooks the potential for a candidate sentence to exceed the reference sentence in terms of quality. In particular, recent advancements in Large Language Models (LLMs) have highlighted this issue, as LLM-generated sentences often exceed the quality of human-written sentences. To address the problem, we introduce the Residual score Metric (ResuMe), which evaluates the relative quality between reference and candidate sentences. ResuMe assigns a positive score to candidate sentences that outperform their reference sentences, and a negative score when they fall short. By adding the residual scores from ResuMe to the absolute scores from MT metrics, it can be possible to allocate higher scores to candidate sentences than what reference sentences are received from MT metrics. Experimental results demonstrate that ResuMe enhances the alignments between MT metrics and human judgments both at the segment-level and the system-level.", }
In Machine Translation (MT) evaluations, the conventional approach is to compare a translated sentence against its human-created reference sentence. MT metrics provide an absolute score (e.g., from 0 to 1) to a candidate sentence based on the similarity with the reference sentence. Thus, existing MT metrics give the maximum score to the reference sentence. However, this approach overlooks the potential for a candidate sentence to exceed the reference sentence in terms of quality. In particular, recent advancements in Large Language Models (LLMs) have highlighted this issue, as LLM-generated sentences often exceed the quality of human-written sentences. To address the problem, we introduce the Residual score Metric (ResuMe), which evaluates the relative quality between reference and candidate sentences. ResuMe assigns a positive score to candidate sentences that outperform their reference sentences, and a negative score when they fall short. By adding the residual scores from ResuMe to the absolute scores from MT metrics, it becomes possible to assign candidate sentences higher scores than their reference sentences receive from MT metrics. Experimental results demonstrate that ResuMe enhances the alignment between MT metrics and human judgments at both the segment level and the system level.
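The combination step described above, an absolute metric score plus a residual term, can be sketched as follows; the callables are hypothetical stand-ins, not ResuMe's released API.

```python
def combined_score(absolute_metric, residual_metric, source, candidate, reference):
    """Illustrative combination: an absolute reference-based metric score
    plus a residual term that can push a candidate above the reference.
    Both metric callables are placeholders for the real scorers."""
    base = absolute_metric(source, candidate, reference)      # e.g. in [0, 1]
    residual = residual_metric(source, candidate, reference)  # > 0 if candidate beats reference
    return base + residual

# Hypothetical usage with constant stand-in scorers.
print(combined_score(lambda s, c, r: 0.82, lambda s, c, r: 0.07, "src", "cand", "ref"))
```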
[ "Noh, Keonwoong", "Oh, Seokjin", "Jung, Woohwan" ]
Beyond Reference: Evaluating High Quality Translations Better than Human References
emnlp-main.294
Oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.295.bib
https://aclanthology.org/2024.emnlp-main.295/
@inproceedings{zhan-etal-2024-unveiling, title = "Unveiling the Lexical Sensitivity of {LLM}s: Combinatorial Optimization for Prompt Enhancement", author = "Zhan, Pengwei and Xu, Zhen and Tan, Qian and Song, Jie and Xie, Ru", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.295", pages = "5128--5154", abstract = "Large language models (LLMs) demonstrate exceptional instruct-following ability to complete various downstream tasks. Although this impressive ability makes LLMs flexible task solvers, their performance in solving tasks also heavily relies on instructions. In this paper, we reveal that LLMs are over-sensitive to lexical variations in task instructions, even when the variations are imperceptible to humans. By providing models with neighborhood instructions, which are closely situated in the latent representation space and differ by only one semantically similar word, the performance on downstream tasks can be vastly different. Following this property, we propose a black-box Combinatorial Optimization framework for Prompt Lexical Enhancement (COPLE). COPLE performs iterative lexical optimization according to the feedback from a batch of proxy tasks, using a search strategy related to word influence. Experiments show that even widely-used human-crafted prompts for current benchmarks suffer from the lexical sensitivity of models, and COPLE recovers the declined model ability in both instruct-following and solving downstream tasks.", }
Large language models (LLMs) demonstrate exceptional instruct-following ability to complete various downstream tasks. Although this impressive ability makes LLMs flexible task solvers, their performance in solving tasks also heavily relies on instructions. In this paper, we reveal that LLMs are over-sensitive to lexical variations in task instructions, even when the variations are imperceptible to humans. By providing models with neighborhood instructions, which are closely situated in the latent representation space and differ by only one semantically similar word, the performance on downstream tasks can be vastly different. Following this property, we propose a black-box Combinatorial Optimization framework for Prompt Lexical Enhancement (COPLE). COPLE performs iterative lexical optimization according to the feedback from a batch of proxy tasks, using a search strategy related to word influence. Experiments show that even widely-used human-crafted prompts for current benchmarks suffer from the lexical sensitivity of models, and COPLE recovers the declined model ability in both instruct-following and solving downstream tasks.
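The black-box lexical search described above can be loosely illustrated with a generic greedy word-substitution loop; this is not the COPLE algorithm itself, and the proxy score, substitution lists, and names below are assumptions for illustration.

```python
from typing import Callable, Dict, List

def greedy_lexical_search(
    prompt_words: List[str],
    candidates: Dict[str, List[str]],     # hypothetical per-word substitution lists
    proxy_score: Callable[[str], float],  # stand-in for proxy-task feedback
    max_rounds: int = 3,
) -> str:
    """Generic greedy sketch of black-box prompt lexical optimization:
    repeatedly try word-level substitutions and keep any swap that
    improves a proxy score."""
    words = list(prompt_words)
    best = proxy_score(" ".join(words))
    for _ in range(max_rounds):
        improved = False
        for i, w in enumerate(words):
            for alt in candidates.get(w, []):
                trial = words[:i] + [alt] + words[i + 1:]
                s = proxy_score(" ".join(trial))
                if s > best:
                    words, best, improved = trial, s, True
        if not improved:
            break
    return " ".join(words)

# Hypothetical usage: the proxy score simply rewards the word "concise".
subs = {"short": ["concise", "brief"], "answer": ["reply"]}
print(greedy_lexical_search(["give", "a", "short", "answer"], subs,
                            lambda p: float("concise" in p)))
```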
[ "Zhan, Pengwei", "Xu, Zhen", "Tan, Qian", "Song, Jie", "Xie, Ru" ]
Unveiling the Lexical Sensitivity of LLMs: Combinatorial Optimization for Prompt Enhancement
emnlp-main.295
Poster
2405.20701
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.296.bib
https://aclanthology.org/2024.emnlp-main.296/
@inproceedings{lovenia-etal-2024-seacrowd, title = "{SEAC}rowd: A Multilingual Multimodal Data Hub and Benchmark Suite for {S}outheast {A}sian Languages", author = {Lovenia, Holy and Mahendra, Rahmad and Akbar, Salsabil Maulana and Miranda, Lester James Validad and Santoso, Jennifer and Aco, Elyanah and Fadhilah, Akhdan and Mansurov, Jonibek and Imperial, Joseph Marvin and Kampman, Onno P. and Moniz, Joel Ruben Antony and Habibi, Muhammad Ravi Shulthan and Hudi, Frederikus and Montalan, Jann Railey and Hadiwijaya, Ryan Ignatius and Lopo, Joanito Agili and Nixon, William and Karlsson, B{\"o}rje F. and Jaya, James and Diandaru, Ryandito and Gao, Yuze and Irawan, Patrick Amadeus and Wang, Bin and Cruz, Jan Christian Blaise and Whitehouse, Chenxi and Parmonangan, Ivan Halim and Khelli, Maria and Zhang, Wenyu and Susanto, Lucky and Ryanda, Reynard Adha and Hermawan, Sonny Lazuardi and Velasco, Dan John and Kautsar, Muhammad Dehan Al and Hendria, Willy Fitra and Moslem, Yasmin and Flynn, Noah and Adilazuarda, Muhammad Farid and Li, Haochen and Lee, Johanes and Damanhuri, R. and Sun, Shuo and Qorib, Muhammad Reza and Djanibekov, Amirbek and Leong, Wei Qi and Do, Quyet V. and Muennighoff, Niklas and Pansuwan, Tanrada and Putra, Ilham Firdausi and Xu, Yan and Chia, Tai Ngee and Purwarianti, Ayu and Ruder, Sebastian and Tjhi, William Chandra and Limkonchotiwat, Peerat and Aji, Alham Fikri and Keh, Sedrick and Winata, Genta Indra and Zhang, Ruochen and Koto, Fajri and Yong, Zheng Xin and Cahyawijaya, Samuel}, editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.296", pages = "5155--5203", abstract = "Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, through a collaborative movement, we introduce SEACrowd, a comprehensive resource center that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in Southeast Asia.", }
Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, through a collaborative movement, we introduce SEACrowd, a comprehensive resource center that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in Southeast Asia.
[ "Lovenia, Holy", "Mahendra, Rahmad", "Akbar, Salsabil Maulana", "Mir", "a, Lester James Validad", "Santoso, Jennifer", "Aco, Elyanah", "Fadhilah, Akhdan", "Mansurov, Jonibek", "Imperial, Joseph Marvin", "Kampman, Onno P.", "Moniz, Joel Ruben Antony", "Habibi, Muhammad Ravi Shulthan", "Hudi, Frederikus", "Montalan, Jann Railey", "Hadiwijaya, Ryan Ignatius", "Lopo, Joanito Agili", "Nixon, William", "Karlsson, B{\\\"o}rje F.", "Jaya, James", "Di", "aru, Ry", "ito", "Gao, Yuze", "Irawan, Patrick Amadeus", "Wang, Bin", "Cruz, Jan Christian Blaise", "Whitehouse, Chenxi", "Parmonangan, Ivan Halim", "Khelli, Maria", "Zhang, Wenyu", "Susanto, Lucky", "Ry", "a, Reynard Adha", "Hermawan, Sonny Lazuardi", "Velasco, Dan John", "Kautsar, Muhammad Dehan Al", "Hendria, Willy Fitra", "Moslem, Yasmin", "Flynn, Noah", "Adilazuarda, Muhammad Farid", "Li, Haochen", "Lee, Johanes", "Damanhuri, R.", "Sun, Shuo", "Qorib, Muhammad Reza", "Djanibekov, Amirbek", "Leong, Wei Qi", "Do, Quyet V.", "Muennighoff, Niklas", "Pansuwan, Tanrada", "Putra, Ilham Firdausi", "Xu, Yan", "Chia, Tai Ngee", "Purwarianti, Ayu", "Ruder, Sebastian", "Tjhi, William Ch", "ra", "Limkonchotiwat, Peerat", "Aji, Alham Fikri", "Keh, Sedrick", "Winata, Genta Indra", "Zhang, Ruochen", "Koto, Fajri", "Yong, Zheng Xin", "Cahyawijaya, Samuel" ]
SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages
emnlp-main.296
Poster
2406.10118
[ "https://github.com/SEACrowd/seacrowd-datahub" ]
https://huggingface.co/papers/2406.10118
16
30
1
61
[ "SEACrowd/mdeberta-v3_sea_translationese" ]
[ "SEACrowd/indo4b", "SEACrowd/sea_translationese_resampled", "SEACrowd/sentiment_nathasa_review", "SEACrowd/x_fact", "SEACrowd/flores200", "SEACrowd/indo_religious_mt_en_id", "SEACrowd/wisesight_thai_sentiment", "SEACrowd/hoasa", "SEACrowd/id_google_play_review", "SEACrowd/indo_general_mt_en_id", "SEACrowd/vimmrc", "SEACrowd/wikiann", "SEACrowd/nusax_mt", "SEACrowd/indo4b_plus", "SEACrowd/vintext", "SEACrowd/cc3m_35l", "SEACrowd/up2", "SEACrowd/vivos", "SEACrowd/wili_2018", "SEACrowd/pfsa_id", "SEACrowd/fleurs", "SEACrowd/cosem", "SEACrowd/aya_collection_templated", "SEACrowd/indowiki", "SEACrowd/tatoeba", "SEACrowd/muse", "SEACrowd/tatabahasa", "SEACrowd/burmese_romanize", "SEACrowd/vihealthqa", "SEACrowd/indonesian_news_dataset", "SEACrowd/indonesiannmt", "SEACrowd/mypos", "SEACrowd/uit_vion", "SEACrowd/idk_mrc_nli", "SEACrowd/uit_visfd", "SEACrowd/culturax", "SEACrowd/uit_vsfc", "SEACrowd/vivqa", "SEACrowd/melayu_sabah", "SEACrowd/kde4", "SEACrowd/icon", "SEACrowd/xnli", "SEACrowd/wit", "SEACrowd/multilingual_nli_26lang", "SEACrowd/m3ls", "SEACrowd/uit_viwikiqa", "SEACrowd/copal", "SEACrowd/seaeval", "SEACrowd/cc_aligned_sent", "SEACrowd/indosmd", "SEACrowd/hplt", "SEACrowd/unimorph", "SEACrowd/uit_viic", "SEACrowd/phoatis", "SEACrowd/alorese", "SEACrowd/tydiqa", "SEACrowd/fsl_105", "SEACrowd/mkqa", "SEACrowd/glotstorybook", "SEACrowd/uit_vicov19qa", "SEACrowd/wikihow_gosc", "SEACrowd/aya_evaluation_suite", "SEACrowd/xquadr", "SEACrowd/toxicity_200", "SEACrowd/filwordnet", "SEACrowd/ucla_phonetic", "SEACrowd/multispider", "SEACrowd/okapi_m_mmlu", "SEACrowd/thai_tnhc2_books", "SEACrowd/indocamrest", "SEACrowd/commonvoice_120", "SEACrowd/melayu_brunei", "SEACrowd/thai_databricks_dolly", "SEACrowd/palito", "SEACrowd/crosssum", "SEACrowd/uit_victsd", "SEACrowd/etos", "SEACrowd/total_defense_meme", "SEACrowd/gatitos", "SEACrowd/asr_ibsc", "SEACrowd/bloom_lm", "SEACrowd/xm3600", "SEACrowd/bud500", "SEACrowd/creole_rc", "SEACrowd/mabl", "SEACrowd/qed", "SEACrowd/scb_mt_en_th", "SEACrowd/id_newspaper_2018", "SEACrowd/belebele", "SEACrowd/idner_news_2k", "SEACrowd/mysentence", "SEACrowd/wongnai_reviews", "SEACrowd/vsolscsum", "SEACrowd/filipino_hatespeech_election", "SEACrowd/vilexnorm", "SEACrowd/lexitron", "SEACrowd/tha_lao_embassy_parcor", "SEACrowd/lazada_review_filipino", "SEACrowd/cebuaner", "SEACrowd/thaigov" ]
[]
[ "SEACrowd/mdeberta-v3_sea_translationese" ]
[ "SEACrowd/indo4b", "SEACrowd/sea_translationese_resampled", "SEACrowd/sentiment_nathasa_review", "SEACrowd/x_fact", "SEACrowd/flores200", "SEACrowd/indo_religious_mt_en_id", "SEACrowd/wisesight_thai_sentiment", "SEACrowd/hoasa", "SEACrowd/id_google_play_review", "SEACrowd/indo_general_mt_en_id", "SEACrowd/vimmrc", "SEACrowd/wikiann", "SEACrowd/nusax_mt", "SEACrowd/indo4b_plus", "SEACrowd/vintext", "SEACrowd/cc3m_35l", "SEACrowd/up2", "SEACrowd/vivos", "SEACrowd/wili_2018", "SEACrowd/pfsa_id", "SEACrowd/fleurs", "SEACrowd/cosem", "SEACrowd/aya_collection_templated", "SEACrowd/indowiki", "SEACrowd/tatoeba", "SEACrowd/muse", "SEACrowd/tatabahasa", "SEACrowd/burmese_romanize", "SEACrowd/vihealthqa", "SEACrowd/indonesian_news_dataset", "SEACrowd/indonesiannmt", "SEACrowd/mypos", "SEACrowd/uit_vion", "SEACrowd/idk_mrc_nli", "SEACrowd/uit_visfd", "SEACrowd/culturax", "SEACrowd/uit_vsfc", "SEACrowd/vivqa", "SEACrowd/melayu_sabah", "SEACrowd/kde4", "SEACrowd/icon", "SEACrowd/xnli", "SEACrowd/wit", "SEACrowd/multilingual_nli_26lang", "SEACrowd/m3ls", "SEACrowd/uit_viwikiqa", "SEACrowd/copal", "SEACrowd/seaeval", "SEACrowd/cc_aligned_sent", "SEACrowd/indosmd", "SEACrowd/hplt", "SEACrowd/unimorph", "SEACrowd/uit_viic", "SEACrowd/phoatis", "SEACrowd/alorese", "SEACrowd/tydiqa", "SEACrowd/fsl_105", "SEACrowd/mkqa", "SEACrowd/glotstorybook", "SEACrowd/uit_vicov19qa", "SEACrowd/wikihow_gosc", "SEACrowd/aya_evaluation_suite", "SEACrowd/xquadr", "SEACrowd/toxicity_200", "SEACrowd/filwordnet", "SEACrowd/ucla_phonetic", "SEACrowd/multispider", "SEACrowd/okapi_m_mmlu", "SEACrowd/thai_tnhc2_books", "SEACrowd/indocamrest", "SEACrowd/commonvoice_120", "SEACrowd/melayu_brunei", "SEACrowd/thai_databricks_dolly", "SEACrowd/palito", "SEACrowd/crosssum", "SEACrowd/uit_victsd", "SEACrowd/etos", "SEACrowd/total_defense_meme", "SEACrowd/gatitos", "SEACrowd/asr_ibsc", "SEACrowd/bloom_lm", "SEACrowd/xm3600", "SEACrowd/bud500", "SEACrowd/creole_rc", "SEACrowd/mabl", "SEACrowd/qed", "SEACrowd/scb_mt_en_th", "SEACrowd/id_newspaper_2018", "SEACrowd/belebele", "SEACrowd/idner_news_2k", "SEACrowd/mysentence", "SEACrowd/wongnai_reviews", "SEACrowd/vsolscsum", "SEACrowd/filipino_hatespeech_election", "SEACrowd/vilexnorm", "SEACrowd/lexitron", "SEACrowd/tha_lao_embassy_parcor", "SEACrowd/lazada_review_filipino", "SEACrowd/cebuaner", "SEACrowd/thaigov" ]
[]
1
https://aclanthology.org/2024.emnlp-main.297.bib
https://aclanthology.org/2024.emnlp-main.297/
@inproceedings{chen-etal-2024-induct, title = "Induct-Learn: Short Phrase Prompting with Instruction Induction", author = "Chen, Po-Chun and Wei, Sheng-Lun and Huang, Hen-Hsen and Chen, Hsin-Hsi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.297", pages = "5204--5231", abstract = "Large Language Models (LLMs) have demonstrated capability in {``}instruction induction,{''} generating instructions from demonstrations (input-output pairs). However, existing methods often rely on large datasets or numerous examples, which is impractical and costly in real-world scenarios. In this work, we propose a low-cost, task-level framework called Induct-Learn. It induces pseudo instructions from a few demonstrations and a short phrase, adding a CoT process into existing demonstrations. When encountering new problems, the learned pseudo instructions and demonstrations with the pseudo CoT process can be combined into a prompt to guide the LLM{'}s problem-solving process. We validate our approach on the BBH-Induct and Evals-Induct datasets, and the results show that the Induct-Learn framework outperforms state-of-the-art methods. We also exhibit cross-model adaptability and achieve superior performance at a lower cost compared to existing methods.", }
Large Language Models (LLMs) have demonstrated capability in "instruction induction," generating instructions from demonstrations (input-output pairs). However, existing methods often rely on large datasets or numerous examples, which is impractical and costly in real-world scenarios. In this work, we propose a low-cost, task-level framework called Induct-Learn. It induces pseudo instructions from a few demonstrations and a short phrase, adding a CoT process into existing demonstrations. When encountering new problems, the learned pseudo instructions and demonstrations with the pseudo CoT process can be combined into a prompt to guide the LLM's problem-solving process. We validate our approach on the BBH-Induct and Evals-Induct datasets, and the results show that the Induct-Learn framework outperforms state-of-the-art methods. We also exhibit cross-model adaptability and achieve superior performance at a lower cost compared to existing methods.
[ "Chen, Po-Chun", "Wei, Sheng-Lun", "Huang, Hen-Hsen", "Chen, Hsin-Hsi" ]
Induct-Learn: Short Phrase Prompting with Instruction Induction
emnlp-main.297
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.298.bib
https://aclanthology.org/2024.emnlp-main.298/
@inproceedings{mingcong-etal-2024-multi, title = "Multi-Granularity History and Entity Similarity Learning for Temporal Knowledge Graph Reasoning", author = "Mingcong, Shi and Zhu, Chunjiang and Zhang, Detian and Wen, Shiting and Qing, Li", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.298", pages = "5232--5243", }
No abstract found
[ "Mingcong, Shi", "Zhu, Chunjiang", "Zhang, Detian", "Wen, Shiting", "Qing, Li" ]
Multi-Granularity History and Entity Similarity Learning for Temporal Knowledge Graph Reasoning
emnlp-main.298
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.299.bib
https://aclanthology.org/2024.emnlp-main.299/
@inproceedings{zhang-etal-2024-luq, title = "{LUQ}: Long-text Uncertainty Quantification for {LLM}s", author = "Zhang, Caiqi and Liu, Fangyu and Basaldella, Marco and Collier, Nigel", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.299", pages = "5244--5262", abstract = "Large Language Models (LLMs) have demonstrated remarkable capability in a variety of NLP tasks. However, LLMs are also prone to generate nonfactual content. Uncertainty Quantification (UQ) is pivotal in enhancing our understanding of a model{'}s confidence on its generation, thereby aiding in the mitigation of nonfactual outputs. Existing research on UQ predominantly targets short text generation, typically yielding brief, word-limited responses. However, real-world applications frequently necessitate much longer responses. Our study first highlights the limitations of current UQ methods in handling long text generation. We then introduce Luq and its two variations, a series of novel sampling-based UQ approaches specifically designed for long text. Our findings reveal that Luq outperforms existing baseline methods in correlating with the model{'}s factuality scores (negative coefficient of -0.85 observed for Gemini Pro). To further improve the factuality of LLM responses, we propose Luq-Ensemble, a method that ensembles responses from multiple models and selects the response with the lowest uncertainty. The ensembling method greatly improves the response factuality upon the best standalone LLM.", }
Large Language Models (LLMs) have demonstrated remarkable capability in a variety of NLP tasks. However, LLMs are also prone to generate nonfactual content. Uncertainty Quantification (UQ) is pivotal in enhancing our understanding of a model's confidence on its generation, thereby aiding in the mitigation of nonfactual outputs. Existing research on UQ predominantly targets short text generation, typically yielding brief, word-limited responses. However, real-world applications frequently necessitate much longer responses. Our study first highlights the limitations of current UQ methods in handling long text generation. We then introduce Luq and its two variations, a series of novel sampling-based UQ approaches specifically designed for long text. Our findings reveal that Luq outperforms existing baseline methods in correlating with the model's factuality scores (negative coefficient of -0.85 observed for Gemini Pro). To further improve the factuality of LLM responses, we propose Luq-Ensemble, a method that ensembles responses from multiple models and selects the response with the lowest uncertainty. The ensembling method greatly improves the response factuality upon the best standalone LLM.
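The Luq-Ensemble selection step mentioned at the end of the abstract (keep the response with the lowest estimated uncertainty) can be sketched as below; the uncertainty estimator here is a placeholder, not a LUQ implementation.

```python
from typing import Callable, List, Tuple

def select_by_uncertainty(
    responses: List[Tuple[str, str]],     # (model_name, response_text)
    uncertainty: Callable[[str], float],  # stand-in for a sampling-based estimator
) -> Tuple[str, str, float]:
    """Return the (model, response, uncertainty) triple with the lowest
    estimated uncertainty, mirroring the ensemble selection idea."""
    scored = [(m, r, uncertainty(r)) for m, r in responses]
    return min(scored, key=lambda t: t[2])

# Hypothetical usage with a dummy estimator (shorter = "more certain" here,
# which is NOT the paper's method, just a placeholder to make this runnable).
dummy = lambda text: len(text) / 1000.0
best = select_by_uncertainty(
    [("model_a", "Paris is the capital of France."),
     ("model_b", "Paris, a city in Europe, is likely the capital of France, I think.")],
    dummy,
)
print(best[0], round(best[2], 3))
```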
[ "Zhang, Caiqi", "Liu, Fangyu", "Basaldella, Marco", "Collier, Nigel" ]
LUQ: Long-text Uncertainty Quantification for LLMs
emnlp-main.299
Poster
2403.20279
[ "https://github.com/caiqizh/LUQ" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.300.bib
https://aclanthology.org/2024.emnlp-main.300/
@inproceedings{zhang-etal-2024-pretraining, title = "Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method", author = "Zhang, Weichao and Zhang, Ruqing and Guo, Jiafeng and de Rijke, Maarten and Fan, Yixing and Cheng, Xueqi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.300", pages = "5263--5274", abstract = "As the scale of training corpora for large language models (LLMs) grows, model developers become increasingly reluctant to disclose details on their data. This lack of transparency poses challenges to scientific evaluation and ethical deployment. Recently, pretraining data detection approaches, which infer whether a given text was part of an LLM{'}s training data through black-box access, have been explored. The Min-K{\%} Prob method, which has achieved state-of-the-art results, assumes that a non-training example tends to contain a few outlier words with low token probabilities. However, the effectiveness may be limited as it tends to misclassify non-training texts that contain many common words with high probabilities predicted by LLMs. To address this issue, we introduce a divergence-based calibration method, inspired by the divergence-from-randomness concept, to calibrate token probabilities for pretraining data detection. We compute the cross-entropy (i.e., the divergence) between the token probability distribution and the token frequency distribution to derive a detection score.We have developed a Chinese-language benchmark, PatentMIA, to assess the performance of detection approaches for LLMs on Chinese text. Experimental results on English-language benchmarks and PatentMIA demonstrate that our proposed method significantly outperforms existing methods. Our code and PatentMIA benchmark are available at \url{https://github.com/zhang-wei-chao/DC-PDD}.", }
As the scale of training corpora for large language models (LLMs) grows, model developers become increasingly reluctant to disclose details on their data. This lack of transparency poses challenges to scientific evaluation and ethical deployment. Recently, pretraining data detection approaches, which infer whether a given text was part of an LLM's training data through black-box access, have been explored. The Min-K% Prob method, which has achieved state-of-the-art results, assumes that a non-training example tends to contain a few outlier words with low token probabilities. However, the effectiveness may be limited as it tends to misclassify non-training texts that contain many common words with high probabilities predicted by LLMs. To address this issue, we introduce a divergence-based calibration method, inspired by the divergence-from-randomness concept, to calibrate token probabilities for pretraining data detection. We compute the cross-entropy (i.e., the divergence) between the token probability distribution and the token frequency distribution to derive a detection score. We have developed a Chinese-language benchmark, PatentMIA, to assess the performance of detection approaches for LLMs on Chinese text. Experimental results on English-language benchmarks and PatentMIA demonstrate that our proposed method significantly outperforms existing methods. Our code and PatentMIA benchmark are available at https://github.com/zhang-wei-chao/DC-PDD.
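The scoring idea sketched in this abstract, a cross-entropy between the model's token probabilities and a corpus token-frequency distribution, can be illustrated minimally as below; the smoothing, tokenization, and thresholding choices are assumptions for illustration, not the DC-PDD reference implementation.

```python
import math
from collections import Counter

def detection_score(token_probs, corpus_token_counts):
    """Toy divergence-style detection score: cross-entropy between per-token
    model probabilities and a corpus-level token frequency distribution.

    token_probs: list of (token, model_probability) pairs for one text
    corpus_token_counts: Counter of token frequencies from a reference corpus
    """
    total = sum(corpus_token_counts.values())
    score = 0.0
    for token, p_model in token_probs:
        # Frequency-based probability with add-one smoothing so unseen
        # tokens do not produce log(0).
        p_freq = (corpus_token_counts.get(token, 0) + 1) / (total + len(corpus_token_counts) + 1)
        score += -p_freq * math.log(p_model + 1e-12)
    return score / max(len(token_probs), 1)

# Hypothetical usage: the resulting score would then be thresholded to decide
# membership; the actual calibration procedure is described in the paper.
corpus = Counter({"the": 1000, "of": 600, "model": 50, "quantum": 2})
example = [("the", 0.20), ("model", 0.05), ("quantum", 0.001)]
print(round(detection_score(example, corpus), 4))
```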
[ "Zhang, Weichao", "Zhang, Ruqing", "Guo, Jiafeng", "de Rijke, Maarten", "Fan, Yixing", "Cheng, Xueqi" ]
Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method
emnlp-main.300
Poster
2409.14781
[ "https://github.com/zhang-wei-chao/dc-pdd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0