bibtex_url | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | proceedings | Models | Datasets | Spaces | paper_page_exists_pre_conf
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
https://aclanthology.org/2024.acl-long.701.bib | @inproceedings{shen-etal-2024-learning,
title = "Learning to Decode Collaboratively with Multiple Language Models",
author = "Shen, Zejiang and
Lang, Hunter and
Wang, Bailin and
Kim, Yoon and
Sontag, David",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.701",
pages = "12974--12990",
abstract = "We propose a method to teach multiple large language models (LLM) to collaborate by interleaving their generations at the token level. We model the decision of which LLM generates the next token as a latent variable. By optimizing the marginal likelihood of a training set under our latent variable model, the base LLM automatically learns when to generate itself and when to call on one of the {``}assistant{''} language models to generate, all without direct supervision. Token-level collaboration during decoding allows for a fusion of each model{'}s expertise in a manner tailored to the specific task at hand. Our collaborative decoding is especially useful in cross-domain settings where a generalist base LLM learns to invoke domain expert models. On instruction-following, domain-specific QA, and reasoning tasks, we show that the performance of the joint system exceeds that of the individual models. Through qualitative analysis, we show models trained with our method exhibit several interesting collaboration patterns, e.g., template-filling, by visualizing the learned latent decisions.",
}
| We propose a method to teach multiple large language models (LLM) to collaborate by interleaving their generations at the token level. We model the decision of which LLM generates the next token as a latent variable. By optimizing the marginal likelihood of a training set under our latent variable model, the base LLM automatically learns when to generate itself and when to call on one of the {``}assistant{''} language models to generate, all without direct supervision. Token-level collaboration during decoding allows for a fusion of each model{'}s expertise in a manner tailored to the specific task at hand. Our collaborative decoding is especially useful in cross-domain settings where a generalist base LLM learns to invoke domain expert models. On instruction-following, domain-specific QA, and reasoning tasks, we show that the performance of the joint system exceeds that of the individual models. Through qualitative analysis, we show models trained with our method exhibit several interesting collaboration patterns, e.g., template-filling, by visualizing the learned latent decisions. | [
"Shen, Zejiang",
"Lang, Hunter",
"Wang, Bailin",
"Kim, Yoon",
"Sontag, David"
] | Learning to Decode Collaboratively with Multiple Language Models | acl-long.701 | Poster | 2403.03870 | [
"https://github.com/clinicalml/co-llm"
] | https://huggingface.co/papers/2403.03870 | 3 | 17 | 4 | 5 | https://aclanthology.org/2024.acl-long.701/ | [] | [] | [] | 1 |
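The Co-LLM record above models which LLM emits the next token as a latent variable. Below is a minimal sketch of the resulting inference-time loop, with hypothetical stand-in models and a stand-in learned gate; it is not the authors' implementation (that lives at the linked repo, https://github.com/clinicalml/co-llm).

```python
# Hedged sketch of token-level collaborative decoding. The latent variable
# Z_t chooses which model emits token t; training marginalizes over Z_t,
# and at inference we greedily follow the learned gate. All functions
# below are toy stand-ins, not the paper's API.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50

def base_logits(prefix):       # stand-in for the base LLM
    return rng.normal(size=VOCAB)

def assistant_logits(prefix):  # stand-in for a domain-expert assistant
    return rng.normal(size=VOCAB)

def gate_prob_defer(prefix):   # stand-in for learned P(Z_t = assistant | prefix)
    return rng.uniform()

def collaborative_decode(prompt_ids, max_new=20, threshold=0.5):
    ids = list(prompt_ids)
    for _ in range(max_new):
        # Interleave generations token by token, deferring to the
        # assistant whenever the gate prefers it.
        use_assistant = gate_prob_defer(ids) > threshold
        logits = assistant_logits(ids) if use_assistant else base_logits(ids)
        ids.append(int(np.argmax(logits)))
    return ids

print(collaborative_decode([1, 2, 3]))
```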
https://aclanthology.org/2024.acl-long.702.bib | @inproceedings{su-etal-2024-dragin,
title = "{DRAGIN}: Dynamic Retrieval Augmented Generation based on the Real-time Information Needs of Large Language Models",
author = "Su, Weihang and
Tang, Yichen and
Ai, Qingyao and
Wu, Zhijing and
Liu, Yiqun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.702",
pages = "12991--13013",
abstract = "Dynamic retrieval augmented generation (RAG) paradigm actively decides when and what to retrieve during the text generation process of Large Language Models (LLMs).There are two key elements of this paradigm: identifying the optimal moment to activate the retrieval module (deciding when to retrieve) and crafting the appropriate query once retrieval is triggered (determining what to retrieve).However, current dynamic RAG methods fall short in both aspects. Firstly, the strategies for deciding when to retrieve often rely on static rules. Moreover, the strategies for deciding what to retrieve typically limit themselves to the LLM{'}s most recent sentence or the last few tokens, while the LLM{'}s information needs may span across the entire context.To overcome these limitations, we introduce a new framework, DRAGIN, i.e., Dynamic Retrieval Augmented Generation based on the Information Needs of LLMs. Our framework is specifically designed to make decisions on when and what to retrieve based on the LLM{'}s information needs during the text generation process.We evaluate DRAGIN along with existing methods comprehensively over 4 knowledge-intensive generation datasets. Experimental results show that DRAGIN achieves superior performance on all tasks, demonstrating the effectiveness of our method.",
}
| The dynamic retrieval augmented generation (RAG) paradigm actively decides when and what to retrieve during the text generation process of Large Language Models (LLMs). There are two key elements of this paradigm: identifying the optimal moment to activate the retrieval module (deciding when to retrieve) and crafting the appropriate query once retrieval is triggered (determining what to retrieve). However, current dynamic RAG methods fall short in both aspects. First, the strategies for deciding when to retrieve often rely on static rules. Moreover, the strategies for deciding what to retrieve typically limit themselves to the LLM{'}s most recent sentence or the last few tokens, while the LLM{'}s information needs may span the entire context. To overcome these limitations, we introduce a new framework, DRAGIN, i.e., Dynamic Retrieval Augmented Generation based on the Information Needs of LLMs. Our framework is specifically designed to make decisions on when and what to retrieve based on the LLM{'}s information needs during the text generation process. We evaluate DRAGIN along with existing methods comprehensively over four knowledge-intensive generation datasets. Experimental results show that DRAGIN achieves superior performance on all tasks, demonstrating the effectiveness of our method. | [
"Su, Weihang",
"Tang, Yichen",
"Ai, Qingyao",
"Wu, Zhijing",
"Liu, Yiqun"
] | DRAGIN: Dynamic Retrieval Augmented Generation based on the Real-time Information Needs of Large Language Models | acl-long.702 | Oral | | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.702/ | [] | [] | [] | 0 |
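The DRAGIN record above hinges on two decisions: when to retrieve (when generation uncertainty spikes) and what to retrieve (a query built from the model's attention over the whole context, not just the last sentence). Below is a minimal numpy sketch of that control loop; the entropy threshold, toy attention weights, and query format are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a dynamic-RAG trigger and query builder.
import numpy as np

def entropy(probs):
    probs = np.asarray(probs)
    return float(-(probs * np.log(probs + 1e-12)).sum())

def should_retrieve(next_token_probs, threshold=2.5):
    # "When to retrieve": fire when next-token uncertainty is high.
    return entropy(next_token_probs) > threshold

def build_query(tokens, attention_weights, k=5):
    # "What to retrieve": keep the k context tokens the model attends
    # to most, wherever they occur in the full context.
    top = np.argsort(attention_weights)[-k:]
    return " ".join(tokens[i] for i in sorted(top))

tokens = "the treaty was signed after the second conference in geneva".split()
attn = np.random.default_rng(1).uniform(size=len(tokens))  # toy attention
probs = np.full(50, 1 / 50)  # maximally uncertain toy distribution
if should_retrieve(probs):
    print("query:", build_query(tokens, attn))
```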
https://aclanthology.org/2024.acl-long.703.bib | @inproceedings{su-etal-2024-living,
title = "Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?",
author = "Su, Zhaochen and
Li, Juntao and
Zhang, Jun and
Zhu, Tong and
Qu, Xiaoye and
Zhou, Pan and
Bowen, Yan and
Cheng, Yu and
Zhang, Min",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.703",
pages = "13014--13033",
abstract = "Temporal reasoning is fundamental for large language models (LLMs) to comprehend the world. Current temporal reasoning datasets are limited to questions about single or isolated events, falling short in mirroring the realistic temporal characteristics involving concurrent nature and intricate temporal interconnections. In this paper, we introduce CoTempQA, a comprehensive co-temporal Question Answering (QA) benchmark containing four co-temporal scenarios (Equal, Overlap, During, Mix) with 4,748 samples for evaluating the co-temporal comprehension and reasoning abilities of LLMs. Our extensive experiments reveal a significant gap between the performance of current LLMs and human-level reasoning on CoTempQA tasks. Even when enhanced with Chain of Thought (CoT) methodologies, models consistently struggle with our task. In our preliminary exploration, we discovered that mathematical reasoning plays a significant role in handling co-temporal events and proposed a strategy to boost LLMs{'} co-temporal reasoning from a mathematical perspective. We hope that our CoTempQA datasets will encourage further advancements in improving the co-temporal reasoning capabilities of LLMs.",
}
| Temporal reasoning is fundamental for large language models (LLMs) to comprehend the world. Current temporal reasoning datasets are limited to questions about single or isolated events, falling short in mirroring the realistic temporal characteristics involving concurrent nature and intricate temporal interconnections. In this paper, we introduce CoTempQA, a comprehensive co-temporal Question Answering (QA) benchmark containing four co-temporal scenarios (Equal, Overlap, During, Mix) with 4,748 samples for evaluating the co-temporal comprehension and reasoning abilities of LLMs. Our extensive experiments reveal a significant gap between the performance of current LLMs and human-level reasoning on CoTempQA tasks. Even when enhanced with Chain of Thought (CoT) methodologies, models consistently struggle with our task. In our preliminary exploration, we discovered that mathematical reasoning plays a significant role in handling co-temporal events and proposed a strategy to boost LLMs{'} co-temporal reasoning from a mathematical perspective. We hope that our CoTempQA datasets will encourage further advancements in improving the co-temporal reasoning capabilities of LLMs. | [
"Su, Zhaochen",
"Li, Juntao",
"Zhang, Jun",
"Zhu, Tong",
"Qu, Xiaoye",
"Zhou, Pan",
"Bowen, Yan",
"Cheng, Yu",
"Zhang, Min"
] | Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning? | acl-long.703 | Oral | 2406.09072 | [
"https://github.com/zhaochen0110/cotempqa"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.703/ | [] | [] | [] | 0 |
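CoTempQA's scenarios (Equal, Overlap, During, Mix) are relations over co-occurring time spans. Below is a toy sketch of classifying the relation between two (start, end) intervals; the dataset's actual schema is not reproduced here, and "Mix" (multiple relations inside one question) is a scenario category rather than a pairwise relation.

```python
# Toy co-temporal relation classifier over (start, end) year intervals.
def co_temporal_relation(a, b):
    (s1, e1), (s2, e2) = a, b
    if (s1, e1) == (s2, e2):
        return "Equal"    # identical spans
    if (s2 >= s1 and e2 <= e1) or (s1 >= s2 and e1 <= e2):
        return "During"   # one span contained in the other
    if s1 <= e2 and s2 <= e1:
        return "Overlap"  # partial intersection
    return "Disjoint"

print(co_temporal_relation((2001, 2005), (2003, 2008)))  # Overlap
print(co_temporal_relation((2001, 2010), (2003, 2005)))  # During
```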
https://aclanthology.org/2024.acl-long.704.bib | @inproceedings{ke-etal-2024-critiquellm,
title = "{C}ritique{LLM}: Towards an Informative Critique Generation Model for Evaluation of Large Language Model Generation",
author = "Ke, Pei and
Wen, Bosi and
Feng, Andrew and
Liu, Xiao and
Lei, Xuanyu and
Cheng, Jiale and
Wang, Shengyuan and
Zeng, Aohan and
Dong, Yuxiao and
Wang, Hongning and
Tang, Jie and
Huang, Minlie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.704",
pages = "13034--13054",
abstract = "Since the natural language processing (NLP) community started to make large language models (LLMs) act as a critic to evaluate the quality of generated texts, most of the existing works train a critique generation model on the evaluation data labeled by GPT-4{'}s direct prompting. We observe that these models lack the ability to generate informative critiques in both pointwise grading and pairwise comparison especially without references. As a result, their generated critiques cannot provide fine-grained distinguishability on generated texts, causing unsatisfactory evaluation performance. In this paper, we propose a simple yet effective method called Eval-Instruct, which can first acquire pointwise grading critiques with pseudo references and then revise these critiques via multi-path prompting to obtain informative evaluation data in different tasks and settings, including pointwise grading and pairwise comparison with / without references. After fine-tuning on these data, the resulting model CritiqueLLM is empirically shown to outperform ChatGPT and all the open-source baselines and even achieve comparable evaluation performance to GPT-4 in system-level correlations of pointwise grading. We also demonstrate that our generated critiques can act as scalable feedback to further improve the generation quality of strong LLMs like ChatGPT.",
}
| Since the natural language processing (NLP) community started to make large language models (LLMs) act as a critic to evaluate the quality of generated texts, most of the existing works train a critique generation model on the evaluation data labeled by GPT-4{'}s direct prompting. We observe that these models lack the ability to generate informative critiques in both pointwise grading and pairwise comparison especially without references. As a result, their generated critiques cannot provide fine-grained distinguishability on generated texts, causing unsatisfactory evaluation performance. In this paper, we propose a simple yet effective method called Eval-Instruct, which can first acquire pointwise grading critiques with pseudo references and then revise these critiques via multi-path prompting to obtain informative evaluation data in different tasks and settings, including pointwise grading and pairwise comparison with / without references. After fine-tuning on these data, the resulting model CritiqueLLM is empirically shown to outperform ChatGPT and all the open-source baselines and even achieve comparable evaluation performance to GPT-4 in system-level correlations of pointwise grading. We also demonstrate that our generated critiques can act as scalable feedback to further improve the generation quality of strong LLMs like ChatGPT. | [
"Ke, Pei",
"Wen, Bosi",
"Feng, Andrew",
"Liu, Xiao",
"Lei, Xuanyu",
"Cheng, Jiale",
"Wang, Shengyuan",
"Zeng, Aohan",
"Dong, Yuxiao",
"Wang, Hongning",
"Tang, Jie",
"Huang, Minlie"
] | CritiqueLLM: Towards an Informative Critique Generation Model for Evaluation of Large Language Model Generation | acl-long.704 | Poster | 2311.18702 | [
"https://github.com/thu-coai/critiquellm"
] | https://huggingface.co/papers/2311.18702 | 0 | 0 | 0 | 12 | https://aclanthology.org/2024.acl-long.704/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.705.bib | @inproceedings{chen-etal-2024-llmarena,
title = "{LLMA}rena: Assessing Capabilities of Large Language Models in Dynamic Multi-Agent Environments",
author = "Chen, Junzhe and
Hu, Xuming and
Liu, Shuodi and
Huang, Shiyu and
Tu, Wei-Wei and
He, Zhaofeng and
Wen, Lijie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.705",
pages = "13055--13077",
abstract = "Recent advancements in large language models (LLMs) have revealed their potential for achieving autonomous agents possessing human-level intelligence. However, existing benchmarks for evaluating LLM Agents either use static datasets, potentially leading to data leakage or focus only on single-agent scenarios, overlooking the complexities of multi-agent interactions. There is a lack of a benchmark that evaluates the diverse capabilities of LLM agents in multi-agent, dynamic environments. To this end, we introduce LLMArena, a novel and easily extensible framework for evaluating the diverse capabilities of LLM in multi-agent dynamic environments. LLMArena encompasses seven distinct gaming environments, employing Trueskill scoring to assess crucial abilities in LLM agents, including spatial reasoning, strategic planning, numerical reasoning, risk assessment, communication, opponent modeling, and team collaboration. We conduct an extensive experiment and human evaluation among different sizes and types of LLMs, showing that LLMs still have a significant journey ahead in their development towards becoming fully autonomous agents, especially in opponent modeling and team collaboration. We hope LLMArena could guide future research towards enhancing these capabilities in LLMs, ultimately leading to more sophisticated and practical applications in dynamic, multi-agent settings.",
}
| Recent advancements in large language models (LLMs) have revealed their potential for achieving autonomous agents possessing human-level intelligence. However, existing benchmarks for evaluating LLM Agents either use static datasets, potentially leading to data leakage or focus only on single-agent scenarios, overlooking the complexities of multi-agent interactions. There is a lack of a benchmark that evaluates the diverse capabilities of LLM agents in multi-agent, dynamic environments. To this end, we introduce LLMArena, a novel and easily extensible framework for evaluating the diverse capabilities of LLM in multi-agent dynamic environments. LLMArena encompasses seven distinct gaming environments, employing Trueskill scoring to assess crucial abilities in LLM agents, including spatial reasoning, strategic planning, numerical reasoning, risk assessment, communication, opponent modeling, and team collaboration. We conduct an extensive experiment and human evaluation among different sizes and types of LLMs, showing that LLMs still have a significant journey ahead in their development towards becoming fully autonomous agents, especially in opponent modeling and team collaboration. We hope LLMArena could guide future research towards enhancing these capabilities in LLMs, ultimately leading to more sophisticated and practical applications in dynamic, multi-agent settings. | [
"Chen, Junzhe",
"Hu, Xuming",
"Liu, Shuodi",
"Huang, Shiyu",
"Tu, Wei-Wei",
"He, Zhaofeng",
"Wen, Lijie"
] | LLMArena: Assessing Capabilities of Large Language Models in Dynamic Multi-Agent Environments | acl-long.705 | Poster | 2402.16499 | [
"https://github.com/THU-BPM/LLMArena"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.705/ | [] | [] | [] | 0 |
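LLMArena ranks agents with TrueSkill over head-to-head game episodes. Below is a minimal sketch using the `trueskill` PyPI package (`pip install trueskill`); the model names and match outcomes are fabricated placeholders, not results from the paper.

```python
# TrueSkill rating updates from pairwise match outcomes.
import trueskill

ratings = {name: trueskill.Rating() for name in ["model-a", "model-b", "model-c"]}

# Each entry is (winner, loser) from one head-to-head episode (toy data).
matches = [("model-a", "model-c"), ("model-a", "model-b"), ("model-b", "model-c")]

for winner, loser in matches:
    ratings[winner], ratings[loser] = trueskill.rate_1vs1(ratings[winner], ratings[loser])

# Rank by skill estimate mu; sigma is the remaining uncertainty.
for name, r in sorted(ratings.items(), key=lambda kv: -kv[1].mu):
    print(f"{name}: mu={r.mu:.2f} sigma={r.sigma:.2f}")
```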
https://aclanthology.org/2024.acl-long.706.bib | @inproceedings{ravi-etal-2024-small,
title = "Small But Funny: A Feedback-Driven Approach to Humor Distillation",
author = "Ravi, Sahithya and
Huber, Patrick and
Shrivastava, Akshat and
Shwartz, Vered and
Einolghozati, Arash",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.706",
pages = "13078--13090",
abstract = "The emergence of Large Language Models (LLMs) has brought to light promising language generation capabilities, particularly in performing tasks like complex reasoning and creative writing. Consequently, distillation through imitation of teacher responses has emerged as a popular technique to transfer knowledge from LLMs to more accessible, Small Language Models (SLMs). While this works well for simpler tasks, there is a substantial performance gap on tasks requiring intricate language comprehension and creativity, such as humor generation. We hypothesize that this gap may stem from the fact that creative tasks might be hard to learn by imitation alone and explore whether an approach, involving supplementary guidance from the teacher, could yield higher performance. To address this, we study the effect of assigning a dual role to the LLM - as a {``}teacher{''} generating data, as well as a {``}critic{''} evaluating the student{'}s performance. Our experiments on humor generation reveal that the incorporation of feedback significantly narrows the performance gap between SLMs and their larger counterparts compared to merely relying on imitation. As a result, our research highlights the potential of using feedback as an additional dimension to data when transferring complex language abilities via distillation.",
}
| The emergence of Large Language Models (LLMs) has brought to light promising language generation capabilities, particularly in performing tasks like complex reasoning and creative writing. Consequently, distillation through imitation of teacher responses has emerged as a popular technique to transfer knowledge from LLMs to more accessible, Small Language Models (SLMs). While this works well for simpler tasks, there is a substantial performance gap on tasks requiring intricate language comprehension and creativity, such as humor generation. We hypothesize that this gap may stem from the fact that creative tasks might be hard to learn by imitation alone and explore whether an approach, involving supplementary guidance from the teacher, could yield higher performance. To address this, we study the effect of assigning a dual role to the LLM - as a {``}teacher{''} generating data, as well as a {``}critic{''} evaluating the student{'}s performance. Our experiments on humor generation reveal that the incorporation of feedback significantly narrows the performance gap between SLMs and their larger counterparts compared to merely relying on imitation. As a result, our research highlights the potential of using feedback as an additional dimension to data when transferring complex language abilities via distillation. | [
"Ravi, Sahithya",
"Huber, Patrick",
"Shrivastava, Akshat",
"Shwartz, Vered",
"Einolghozati, Arash"
] | Small But Funny: A Feedback-Driven Approach to Humor Distillation | acl-long.706 | Oral | 2402.18113 | [
""
] | https://huggingface.co/papers/2402.18113 | 0 | 0 | 0 | 7 | https://aclanthology.org/2024.acl-long.706/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.707.bib | @inproceedings{xu-etal-2024-symbol,
title = "Symbol-{LLM}: Towards Foundational Symbol-centric Interface For Large Language Models",
author = "Xu, Fangzhi and
Wu, Zhiyong and
Sun, Qiushi and
Ren, Siyu and
Yuan, Fei and
Yuan, Shuai and
Lin, Qika and
Qiao, Yu and
Liu, Jun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.707",
pages = "13091--13116",
abstract = "Although Large Language Models (LLMs) demonstrate remarkable ability in processing and generating human-like text, they do have limitations when it comes to comprehending and expressing world knowledge that extends beyond the boundaries of natural language(e.g., chemical molecular formula). Injecting a collection of symbolic data directly into the training of LLMs can be problematic, as it disregards the synergies among different symbolic families and overlooks the need for a balanced mixture of natural and symbolic data. In this work, we tackle these challenges from both a data and framework perspective and introduce Symbol-LLM series models. First, we curated a data collection consisting of 34 tasks and incorporating 20 distinct symbolic families, intending to capture the interrelations and foster synergies between symbols. Then, a two-stage tuning framework succeeds in injecting symbolic knowledge without loss of the generality ability. Extensive experiments on both symbol- and NL-centric tasks demonstrate the balanced and superior performances of Symbol-LLM series models.",
}
| Although Large Language Models (LLMs) demonstrate remarkable ability in processing and generating human-like text, they do have limitations when it comes to comprehending and expressing world knowledge that extends beyond the boundaries of natural language(e.g., chemical molecular formula). Injecting a collection of symbolic data directly into the training of LLMs can be problematic, as it disregards the synergies among different symbolic families and overlooks the need for a balanced mixture of natural and symbolic data. In this work, we tackle these challenges from both a data and framework perspective and introduce Symbol-LLM series models. First, we curated a data collection consisting of 34 tasks and incorporating 20 distinct symbolic families, intending to capture the interrelations and foster synergies between symbols. Then, a two-stage tuning framework succeeds in injecting symbolic knowledge without loss of the generality ability. Extensive experiments on both symbol- and NL-centric tasks demonstrate the balanced and superior performances of Symbol-LLM series models. | [
"Xu, Fangzhi",
"Wu, Zhiyong",
"Sun, Qiushi",
"Ren, Siyu",
"Yuan, Fei",
"Yuan, Shuai",
"Lin, Qika",
"Qiao, Yu",
"Liu, Jun"
] | Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models | acl-long.707 | Poster | 2311.09278 | [
""
] | https://huggingface.co/papers/2311.09278 | 3 | 7 | 0 | 9 | https://aclanthology.org/2024.acl-long.707/ | [
"Symbol-LLM/Symbol-LLM-7B-Instruct",
"Symbol-LLM/Symbol-LLM-13B-Instruct",
"Symbol-LLM/Symbol-LLM-8B-Instruct"
] | [
"Symbol-LLM/Symbolic_Collection"
] | [] | 1 |
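The Models column above lists released Symbol-LLM checkpoints on the Hub. Below is a standard `transformers` loading sketch; the prompt format is an assumption, so check the model card for the intended instruction template.

```python
# Load a released Symbol-LLM checkpoint and generate from it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Symbol-LLM/Symbol-LLM-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical symbol-centric prompt; the real template may differ.
prompt = "Translate to first-order logic: every student reads some book."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```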
https://aclanthology.org/2024.acl-long.708.bib | @inproceedings{ghosh-etal-2024-sights,
title = "From Sights to Insights: Towards Summarization of Multimodal Clinical Documents",
author = "Ghosh, Akash and
Tomar, Mohit and
Tiwari, Abhisek and
Saha, Sriparna and
Salve, Jatin and
Sinha, Setu",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.708",
pages = "13117--13129",
abstract = "The advancement of Artificial Intelligence is pivotal in reshaping healthcare, enhancing diagnostic precision, and facilitating personalized treatment strategies. One major challenge for healthcare professionals is quickly navigating through long clinical documents to provide timely and effective solutions. Doctors often struggle to draw quick conclusions from these extensive documents. To address this issue and save time for healthcare professionals, an effective summarization model is essential. Most current models assume the data is only text-based. However, patients often include images of their medical conditions in clinical documents. To effectively summarize these multimodal documents, we introduce \textbf{ \textit{EDI-Summ}}, an innovative Image-Guided Encoder-Decoder Model. This model uses modality-aware contextual attention on the encoder and an image cross-attention mechanism on the decoder, enhancing the BART base model to create detailed visual-guided summaries. We have tested our model extensively on three multimodal clinical benchmarks involving multimodal question and dialogue summarization tasks. Our analysis demonstrates that \textbf{ \textit{EDI-Summ}} outperforms state-of-the-art large language and vision-aware models in these summarization tasks. \textbf{Disclaimer}: The work includes vivid medical illustrations, depicting the essential aspects of the subject matter.",
}
| The advancement of Artificial Intelligence is pivotal in reshaping healthcare, enhancing diagnostic precision, and facilitating personalized treatment strategies. One major challenge for healthcare professionals is quickly navigating through long clinical documents to provide timely and effective solutions. Doctors often struggle to draw quick conclusions from these extensive documents. To address this issue and save time for healthcare professionals, an effective summarization model is essential. Most current models assume the data is only text-based. However, patients often include images of their medical conditions in clinical documents. To effectively summarize these multimodal documents, we introduce \textbf{ \textit{EDI-Summ}}, an innovative Image-Guided Encoder-Decoder Model. This model uses modality-aware contextual attention on the encoder and an image cross-attention mechanism on the decoder, enhancing the BART base model to create detailed visual-guided summaries. We have tested our model extensively on three multimodal clinical benchmarks involving multimodal question and dialogue summarization tasks. Our analysis demonstrates that \textbf{ \textit{EDI-Summ}} outperforms state-of-the-art large language and vision-aware models in these summarization tasks. \textbf{Disclaimer}: The work includes vivid medical illustrations, depicting the essential aspects of the subject matter. | [
"Ghosh, Akash",
"Tomar, Mohit",
"Tiwari, Abhisek",
"Saha, Sriparna",
"Salve, Jatin",
"Sinha, Setu"
] | From Sights to Insights: Towards Summarization of Multimodal Clinical Documents | acl-long.708 | Poster | | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.708/ | [] | [] | [] | 0 |
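EDI-Summ's decoder-side image cross-attention can be sketched in a few lines of PyTorch; the hidden size, patch count, and residual-plus-norm fusion below are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch of an image cross-attention block for a text decoder.
import torch
import torch.nn as nn

class ImageCrossAttention(nn.Module):
    def __init__(self, d_model=768, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, decoder_states, image_feats):
        # Queries come from the text decoder; keys/values from visual
        # features, so each target token can attend to image regions.
        attended, _ = self.attn(decoder_states, image_feats, image_feats)
        return self.norm(decoder_states + attended)

dec = torch.randn(2, 20, 768)  # (batch, target tokens, hidden)
img = torch.randn(2, 49, 768)  # (batch, image patches, hidden)
print(ImageCrossAttention()(dec, img).shape)  # torch.Size([2, 20, 768])
```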
https://aclanthology.org/2024.acl-long.709.bib | @inproceedings{wang-etal-2024-phrases,
title = "When Phrases Meet Probabilities: Enabling Open Relation Extraction with Cooperating Large Language Models",
author = "Wang, Jiaxin and
Zhang, Lingling and
Lee, Wee Sun and
Zhong, Yujie and
Kang, Liwei and
Liu, Jun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.709",
pages = "13130--13147",
abstract = "Current clustering-based open relation extraction (OpenRE) methods usually apply clustering algorithms on top of pre-trained language models. However, this practice has three drawbacks. First, embeddings from language models are high-dimensional and anisotropic, so using simple metrics to calculate distances between these embeddings may not accurately reflect the relational similarity. Second, there exists a gap between the pre-trained language models and downstream clustering for their different objective forms. Third, clustering with embeddings deviates from the primary aim of relation extraction, as it does not directly obtain relations. In this work, we propose a new idea for OpenRE in the era of LLMs, that is, extracting relational phrases and directly exploiting the knowledge in LLMs to assess the semantic similarity between phrases without relying on any additional metrics. Based on this idea, we developed a framework, oreLLM, that makes two LLMs work collaboratively to achieve clustering and address the above issues. Experimental results on different datasets show that oreLLM outperforms current baselines by $1.4\%\sim 3.13\%$ in terms of clustering accuracy.",
}
| Current clustering-based open relation extraction (OpenRE) methods usually apply clustering algorithms on top of pre-trained language models. However, this practice has three drawbacks. First, embeddings from language models are high-dimensional and anisotropic, so using simple metrics to calculate distances between these embeddings may not accurately reflect the relational similarity. Second, there exists a gap between the pre-trained language models and downstream clustering for their different objective forms. Third, clustering with embeddings deviates from the primary aim of relation extraction, as it does not directly obtain relations. In this work, we propose a new idea for OpenRE in the era of LLMs, that is, extracting relational phrases and directly exploiting the knowledge in LLMs to assess the semantic similarity between phrases without relying on any additional metrics. Based on this idea, we developed a framework, oreLLM, that makes two LLMs work collaboratively to achieve clustering and address the above issues. Experimental results on different datasets show that oreLLM outperforms current baselines by $1.4\%\sim 3.13\%$ in terms of clustering accuracy. | [
"Wang, Jiaxin",
"Zhang, Lingling",
"Lee, Wee Sun",
"Zhong, Yujie",
"Kang, Liwei",
"Liu, Jun"
] | When Phrases Meet Probabilities: Enabling Open Relation Extraction with Cooperating Large Language Models | acl-long.709 | Poster | | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.709/ | [] | [] | [] | 0 |
https://aclanthology.org/2024.acl-long.710.bib | @inproceedings{cegin-etal-2024-effects,
title = "Effects of diversity incentives on sample diversity and downstream model performance in {LLM}-based text augmentation",
author = "Cegin, Jan and
Pecher, Branislav and
Simko, Jakub and
Srba, Ivan and
Bielikova, Maria and
Brusilovsky, Peter",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.710",
pages = "13148--13171",
abstract = "The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess how different prompts, seed data selection strategies, filtering methods, or model settings affect the quality of paraphrased data (and downstream models). In this study, we investigate three text diversity incentive methods well established in crowdsourcing: taboo words, hints by previous outlier solutions, and chaining on previous outlier solutions. Using these incentive methods as part of instructions to LLMs augmenting text datasets, we measure their effects on generated texts{'} lexical diversity and downstream model performance. We compare the effects over 5 different LLMs, 6 datasets and 2 downstream models. We show that diversity is most increased by taboo words, but downstream model performance is highest with hints.",
}
| The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess how different prompts, seed data selection strategies, filtering methods, or model settings affect the quality of paraphrased data (and downstream models). In this study, we investigate three text diversity incentive methods well established in crowdsourcing: taboo words, hints by previous outlier solutions, and chaining on previous outlier solutions. Using these incentive methods as part of instructions to LLMs augmenting text datasets, we measure their effects on generated texts{'} lexical diversity and downstream model performance. We compare the effects over 5 different LLMs, 6 datasets and 2 downstream models. We show that diversity is most increased by taboo words, but downstream model performance is highest with hints. | [
"Cegin, Jan",
"Pecher, Branislav",
"Simko, Jakub",
"Srba, Ivan",
"Bielikova, Maria",
"Brusilovsky, Peter"
] | Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation | acl-long.710 | Poster | 2401.06643 | [
"https://github.com/kinit-sk/llm-div-incts"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.710/ | [] | [] | [] | 0 |
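Of the three incentives studied above, taboo words is the one that most increases lexical diversity. Below is a small sketch of constructing such a prompt from earlier outputs; the prompt wording, word filter, and tokenizer are simplifying assumptions, not the paper's protocol.

```python
# Build a paraphrase prompt that forbids frequent words from prior outputs.
from collections import Counter
import re

def taboo_words(previous_outputs, k=3):
    words = re.findall(r"[a-z']+", " ".join(previous_outputs).lower())
    # Most common longer words become taboo, pushing lexical diversity.
    return [w for w, _ in Counter(w for w in words if len(w) > 3).most_common(k)]

def build_prompt(seed_text, previous_outputs):
    taboo = taboo_words(previous_outputs)
    ban = f" Do not use the words: {', '.join(taboo)}." if taboo else ""
    return f"Paraphrase the following text.{ban}\nText: {seed_text}"

prev = ["the flight was delayed by the storm", "the storm delayed my flight badly"]
print(build_prompt("my flight got delayed because of a storm", prev))
```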
https://aclanthology.org/2024.acl-long.711.bib | @inproceedings{el-kheir-etal-2024-beyond,
title = "Beyond Orthography: Automatic Recovery of Short Vowels and Dialectal Sounds in {A}rabic",
author = "El Kheir, Yassine and
Mubarak, Hamdy and
Ali, Ahmed and
Chowdhury, Shammur",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.711",
pages = "13172--13184",
abstract = "This paper presents a novel Dialectal Sound and Vowelization Recovery framework, designed to recognize borrowed and dialectal sounds within phonologically diverse and dialect-rich languages, that extends beyond its standard orthographic sound sets. The proposed framework utilized quantized sequence of input with(out) continuous pretrained self-supervised representation. We show the efficacy of the pipeline using limited data for Arabic, a dialect-rich language containing more than 22 major dialects. Phonetically correct transcribed speech resources for dialectal Arabic is scare. Therefore, we introduce ArabVoice15, a first of its kind, curated test set featuring 5 hours of dialectal speech across 15 Arab countries, with phonetically accurate transcriptions, including borrowed and dialect-specific sounds. We described in detail the annotation guideline along with the analysis of the dialectal confusion pairs. Our extensive evaluation includes both subjective {--} human perception tests and objective measures. Our empirical results, reported with three test sets, show that with only one and half hours of training data, our model improve character error rate by {\mbox{$\approx$}}7{\%} in ArabVoice15 compared to the baseline.",
}
| This paper presents a novel Dialectal Sound and Vowelization Recovery framework, designed to recognize borrowed and dialectal sounds within phonologically diverse and dialect-rich languages, extending beyond their standard orthographic sound sets. The proposed framework utilizes quantized input sequences with(out) continuous pretrained self-supervised representations. We show the efficacy of the pipeline using limited data for Arabic, a dialect-rich language containing more than 22 major dialects. Phonetically correct transcribed speech resources for dialectal Arabic are scarce. Therefore, we introduce ArabVoice15, a first-of-its-kind curated test set featuring 5 hours of dialectal speech across 15 Arab countries, with phonetically accurate transcriptions, including borrowed and dialect-specific sounds. We describe the annotation guidelines in detail, along with an analysis of the dialectal confusion pairs. Our extensive evaluation includes both subjective {--} human perception tests and objective measures. Our empirical results, reported with three test sets, show that with only one and a half hours of training data, our model improves character error rate by {\mbox{$\approx$}}7{\%} in ArabVoice15 compared to the baseline. | [
"El Kheir, Yassine",
"Mubarak, Hamdy",
"Ali, Ahmed",
"Chowdhury, Shammur"
] | Beyond Orthography: Automatic Recovery of Short Vowels and Dialectal Sounds in Arabic | acl-long.711 | Poster | 2408.02430 | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.711/ | [] | [] | [] | 0 |
https://aclanthology.org/2024.acl-long.712.bib | @inproceedings{pal-etal-2024-document,
title = "Document-Level Machine Translation with Large-Scale Public Parallel Corpora",
author = "Pal, Proyag and
Birch, Alexandra and
Heafield, Kenneth",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.712",
pages = "13185--13197",
abstract = "Despite the fact that document-level machine translation has inherent advantages over sentence-level machine translation due to additional information available to a model from document context, most translation systems continue to operate at a sentence level. This is primarily due to the severe lack of publicly available large-scale parallel corpora at the document level. We release a large-scale open parallel corpus with document context extracted from ParaCrawl in five language pairs, along with code to compile document-level datasets for any language pair supported by ParaCrawl. We train context-aware models on these datasets and find improvements in terms of overall translation quality and targeted document-level phenomena. We also analyse how much long-range information is useful to model some of these discourse phenomena and find models are able to utilise context from several preceding sentences.",
}
| Despite the fact that document-level machine translation has inherent advantages over sentence-level machine translation due to additional information available to a model from document context, most translation systems continue to operate at a sentence level. This is primarily due to the severe lack of publicly available large-scale parallel corpora at the document level. We release a large-scale open parallel corpus with document context extracted from ParaCrawl in five language pairs, along with code to compile document-level datasets for any language pair supported by ParaCrawl. We train context-aware models on these datasets and find improvements in terms of overall translation quality and targeted document-level phenomena. We also analyse how much long-range information is useful to model some of these discourse phenomena and find models are able to utilise context from several preceding sentences. | [
"Pal, Proyag",
"Birch, Alex",
"ra",
"Heafield, Kenneth"
] | Document-Level Machine Translation with Large-Scale Public Parallel Corpora | acl-long.712 | Poster | | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.712/ | [] | [] | [] | 0 |
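The context-aware models above condition on preceding sentences from the same document. Below is a sketch of preparing such training pairs from sentence-aligned document data; the `<sep>` marker and the three-sentence window are assumptions, not the paper's exact configuration.

```python
# Build context-augmented source/target pairs for document-level MT.
def make_context_examples(src_sents, tgt_sents, window=3, sep=" <sep> "):
    examples = []
    for i, (src, tgt) in enumerate(zip(src_sents, tgt_sents)):
        # Prepend up to `window` preceding source sentences as context.
        context = src_sents[max(0, i - window):i]
        examples.append((sep.join(context + [src]), tgt))
    return examples

src = ["Er kam spät an.", "Der Zug hatte Verspätung.", "Niemand wartete."]
tgt = ["He arrived late.", "The train was delayed.", "Nobody was waiting."]
for x, y in make_context_examples(src, tgt):
    print(x, "->", y)
```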
https://aclanthology.org/2024.acl-long.713.bib | @inproceedings{lan-etal-2024-bridging,
title = "Bridging the Empirical-Theoretical Gap in Neural Network Formal Language Learning Using Minimum Description Length",
author = "Lan, Nur and
Chemla, Emmanuel and
Katzir, Roni",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.713",
pages = "13198--13210",
abstract = "Neural networks offer good approximation to many tasks but consistently fail to reach perfect generalization, even when theoretical work shows that such perfect solutions can be expressed by certain architectures. Using the task of formal language learning, we focus on one simple formal language and show that the theoretically correct solution is in fact not an optimum of commonly used objectives {---} even with regularization techniques that according to common wisdom should lead to simple weights and good generalization (L1, L2) or other meta-heuristics (early-stopping, dropout). On the other hand, replacing standard targets with the Minimum Description Length objective (MDL) results in the correct solution being an optimum.",
}
| Neural networks offer good approximation to many tasks but consistently fail to reach perfect generalization, even when theoretical work shows that such perfect solutions can be expressed by certain architectures. Using the task of formal language learning, we focus on one simple formal language and show that the theoretically correct solution is in fact not an optimum of commonly used objectives {---} even with regularization techniques that according to common wisdom should lead to simple weights and good generalization (L1, L2) or other meta-heuristics (early-stopping, dropout). On the other hand, replacing standard targets with the Minimum Description Length objective (MDL) results in the correct solution being an optimum. | [
"Lan, Nur",
"Chemla, Emmanuel",
"Katzir, Roni"
] | Bridging the Empirical-Theoretical Gap in Neural Network Formal Language Learning Using Minimum Description Length | acl-long.713 | Poster | 2402.10013 | [
"https://github.com/0xnurl/mdl-lstm"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.713/ | [] | [] | [] | 0 |
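The MDL objective above scores a hypothesis by a two-part code: bits to describe the network plus bits to encode the data given the network, so a compact net that fits exactly can beat a large approximate one. Below is a schematic illustration with a toy encoding scheme; the paper defines its own encoding.

```python
# Two-part Minimum Description Length code, in bits (toy encoding).
import math

def data_code_length_bits(probs_of_observed):
    # -log2 likelihood of the corpus under the hypothesis.
    return sum(-math.log2(p) for p in probs_of_observed)

def model_code_length_bits(num_weights, bits_per_weight=8):
    # Toy description length for the network's parameters.
    return num_weights * bits_per_weight

def mdl(probs_of_observed, num_weights):
    return model_code_length_bits(num_weights) + data_code_length_bits(probs_of_observed)

# A small exact net beats a large sloppy one even at similar likelihoods:
print(mdl([1.0, 1.0, 0.5, 0.5], num_weights=4))    # compact, near-exact
print(mdl([0.9, 0.9, 0.6, 0.6], num_weights=400))  # overparameterized
```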
https://aclanthology.org/2024.acl-long.714.bib | @inproceedings{du-etal-2024-context,
title = "Context versus Prior Knowledge in Language Models",
author = "Du, Kevin and
Sn{\ae}bjarnarson, V{\'e}steinn and
Stoehr, Niklas and
White, Jennifer and
Schein, Aaron and
Cotterell, Ryan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.714",
pages = "13211--13235",
abstract = "To answer a question, language models often need to integrate prior knowledge learned during pretraining and new information presented in context. We hypothesize that models perform this integration in a predictable way across different questions and contexts: models will rely more on prior knowledge for questions about entities (e.g., persons, places, etc.) that they are more familiar with due to higher exposure in the training corpus, and be more easily persuaded by some contexts than others. To formalize this problem, we propose two mutual information-based metrics to measure a model{'}s dependency on a context and on its prior about an entity: first, the persuasion score of a given context represents how much a model depends on the context in its decision, and second, the susceptibility score of a given entity represents how much the model can be swayed away from its original answer distribution about an entity. We empirically test our metrics for their validity and reliability. Finally, we explore and find a relationship between the scores and the model{'}s expected familiarity with an entity, and provide two use cases to illustrate their benefits.",
}
| To answer a question, language models often need to integrate prior knowledge learned during pretraining and new information presented in context. We hypothesize that models perform this integration in a predictable way across different questions and contexts: models will rely more on prior knowledge for questions about entities (e.g., persons, places, etc.) that they are more familiar with due to higher exposure in the training corpus, and be more easily persuaded by some contexts than others. To formalize this problem, we propose two mutual information-based metrics to measure a model{'}s dependency on a context and on its prior about an entity: first, the persuasion score of a given context represents how much a model depends on the context in its decision, and second, the susceptibility score of a given entity represents how much the model can be swayed away from its original answer distribution about an entity. We empirically test our metrics for their validity and reliability. Finally, we explore and find a relationship between the scores and the model{'}s expected familiarity with an entity, and provide two use cases to illustrate their benefits. | [
"Du, Kevin",
"Sn{\\ae}bjarnarson, V{\\'e}steinn",
"Stoehr, Niklas",
"White, Jennifer",
"Schein, Aaron",
"Cotterell, Ryan"
] | Context versus Prior Knowledge in Language Models | acl-long.714 | Poster | 2404.04633 | [
""
] | https://huggingface.co/papers/2404.04633 | 0 | 5 | 0 | 6 | https://aclanthology.org/2024.acl-long.714/ | [] | [] | [] | 1 |
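The persuasion and susceptibility scores above are mutual-information based: how far does conditioning on a context move the model's answer distribution for an entity? Below is a simplified numpy estimate over a handful of sampled contexts, assuming a uniform distribution over contexts; it is a proxy for the paper's estimator, not its exact definition.

```python
# Per-context persuasion proxy and a susceptibility (MI) estimate.
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float((p * np.log((p + 1e-12) / (q + 1e-12))).sum())

# Rows: P(answer | entity, context_i) for sampled contexts (toy numbers).
p_given_ctx = np.array([
    [0.70, 0.20, 0.10],  # context 1 barely moves the model
    [0.10, 0.80, 0.10],  # context 2 is persuasive
])
marginal = p_given_ctx.mean(axis=0)  # answer dist. marginalized over contexts

persuasion = [kl(p, marginal) for p in p_given_ctx]  # per-context score
susceptibility = float(np.mean(persuasion))          # MI estimate for entity
print([round(s, 3) for s in persuasion], round(susceptibility, 3))
```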
https://aclanthology.org/2024.acl-long.715.bib | @inproceedings{li-etal-2024-word,
title = "Word Matters: What Influences Domain Adaptation in Summarization?",
author = "Li, Yinghao and
Miao, Siyu and
Huang, Heyan and
Gao, Yang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.715",
pages = "13236--13249",
abstract = "Domain adaptation aims to enable Large Language Models (LLMs) to generalize domain datasets unseen effectively during the training phase. However, factors such as the size of the model parameters and the scale of training data are general influencers and do not reflect the nuances of domain adaptation performance. This paper investigates the fine-grained factors affecting domain adaptation performance, analyzing the specific impact of {`}words{'} in training data on summarization tasks. We propose quantifying dataset learning difficulty as the learning difficulty of generative summarization, which is determined by two indicators: word-based compression rate and abstraction level. Our experiments conclude that, when considering dataset learning difficulty, the cross-domain overlap and the performance gain in summarization tasks exhibit an approximate linear relationship, which is not directly related to the number of words. Based on this finding, predicting a model{'}s performance on unknown domain datasets is possible without undergoing training. Source code and scripts are available at https://github.com/li-aolong/Word-Matters.",
}
| Domain adaptation aims to enable Large Language Models (LLMs) to generalize effectively to domain datasets unseen during the training phase. However, factors such as the size of the model parameters and the scale of training data are general influencers and do not reflect the nuances of domain adaptation performance. This paper investigates the fine-grained factors affecting domain adaptation performance, analyzing the specific impact of {`}words{'} in training data on summarization tasks. We propose quantifying dataset learning difficulty as the learning difficulty of generative summarization, which is determined by two indicators: word-based compression rate and abstraction level. Our experiments conclude that, when considering dataset learning difficulty, the cross-domain overlap and the performance gain in summarization tasks exhibit an approximate linear relationship, which is not directly related to the number of words. Based on this finding, predicting a model{'}s performance on unknown domain datasets is possible without undergoing training. Source code and scripts are available at https://github.com/li-aolong/Word-Matters. | [
"Li, Yinghao",
"Miao, Siyu",
"Huang, Heyan",
"Gao, Yang"
] | Word Matters: What Influences Domain Adaptation in Summarization? | acl-long.715 | Poster | 2406.14828 | [
"https://github.com/li-aolong/Word-Matters"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.715/ | [] | [] | [] | 0 |
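The two difficulty indicators named above can be computed at the word level. Below is a sketch reconstructed from the abstract's wording, so treat these as approximations of the paper's exact formulas.

```python
# Word-based compression rate and abstraction level of a summary.
import re

def words(text):
    return re.findall(r"\w+", text.lower())

def compression_rate(document, summary):
    # How much shorter the summary is, in words.
    return len(words(summary)) / max(1, len(words(document)))

def abstraction_level(document, summary):
    # Fraction of summary words never seen in the source document.
    doc_vocab = set(words(document))
    summ = words(summary)
    return sum(w not in doc_vocab for w in summ) / max(1, len(summ))

doc = "The committee met on Tuesday and approved the annual budget after debate."
summ = "Committee approves budget."
print(compression_rate(doc, summ), abstraction_level(doc, summ))
```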
https://aclanthology.org/2024.acl-long.716.bib | @inproceedings{li-etal-2024-visualization,
title = "Visualization Recommendation with Prompt-based Reprogramming of Large Language Models",
author = "Li, Xinhang and
Zhou, Jingbo and
Chen, Wei and
Xu, Derong and
Xu, Tong and
Chen, Enhong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.716",
pages = "13250--13262",
abstract = "Visualization recommendations, which aim to automatically match proper visual charts for specific data tables, can significantly simplify the data analysis process. Traditional approaches in this domain have primarily relied on rule-based or machine learning-based methodologies. These methods often demand extensive manual maintenance and yet fail to fully comprehend the tabular data, leading to unsatisfactory performance. Recently, Large Language Models (LLMs) have emerged as powerful tools, exhibiting strong reasoning capabilities. This advancement suggests their substantial promise in addressing visualization recommendation challenges. However, effectively harnessing LLMs to discern and rationalize patterns in tabular data, and consequently deduce the essential information for chart generation, remains an unresolved challenge. To this end, we introduce a novel Hierarchical Table Prompt-based reprogramming framework, named HTP. This framework aims to integrate multi-dimensional tabular data into LLMs through a strategically crafted prompt learning method while keeping the LLMs{'} backbone and weights unaltered. The HTP framework uniquely incorporates a four-level prompt structure, encompassing general, instance, cluster, and column levels. This multi-level approach is engineered to provide a comprehensive understanding of both general distribution and multifaceted fine-grained features of tabular data, before inputting the tabular data into the frozen LLM. Our empirical studies confirm that the HTP framework achieves state-of-the-art performance, marking an advancement in the field of data visualization and analysis. The code and data will be made publicly available upon acceptance.",
}
| Visualization recommendations, which aim to automatically match proper visual charts for specific data tables, can significantly simplify the data analysis process. Traditional approaches in this domain have primarily relied on rule-based or machine learning-based methodologies. These methods often demand extensive manual maintenance and yet fail to fully comprehend the tabular data, leading to unsatisfactory performance. Recently, Large Language Models (LLMs) have emerged as powerful tools, exhibiting strong reasoning capabilities. This advancement suggests their substantial promise in addressing visualization recommendation challenges. However, effectively harnessing LLMs to discern and rationalize patterns in tabular data, and consequently deduce the essential information for chart generation, remains an unresolved challenge. To this end, we introduce a novel Hierarchical Table Prompt-based reprogramming framework, named HTP. This framework aims to integrate multi-dimensional tabular data into LLMs through a strategically crafted prompt learning method while keeping the LLMs{'} backbone and weights unaltered. The HTP framework uniquely incorporates a four-level prompt structure, encompassing general, instance, cluster, and column levels. This multi-level approach is engineered to provide a comprehensive understanding of both general distribution and multifaceted fine-grained features of tabular data, before inputting the tabular data into the frozen LLM. Our empirical studies confirm that the HTP framework achieves state-of-the-art performance, marking an advancement in the field of data visualization and analysis. The code and data will be made publicly available upon acceptance. | [
"Li, Xinhang",
"Zhou, Jingbo",
"Chen, Wei",
"Xu, Derong",
"Xu, Tong",
"Chen, Enhong"
] | Visualization Recommendation with Prompt-based Reprogramming of Large Language Models | acl-long.716 | Poster | | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.716/ | [] | [] | [] | 0 |
https://aclanthology.org/2024.acl-long.717.bib | @inproceedings{panda-etal-2024-holmes,
title = "{HOLMES}: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering using {LLM}s",
author = "Panda, Pranoy and
Agarwal, Ankush and
Devaguptapu, Chaitanya and
Kaul, Manohar and
Ap, Prathosh",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.717",
pages = "13263--13282",
abstract = "Given unstructured text, Large Language Models (LLMs) are adept at answering simple (single-hop) questions. However, as the complexity of the questions increase, the performance of LLMs degrade. We believe this is due to the overhead associated with understanding the complex question followed by filtering and aggregating unstructured information in the raw text. Recent methods try to reduce this burden by integrating structured knowledge triples into the raw text, aiming to provide a structured overview that simplifies information processing. However, this simplistic approach is query-agnostic and the extracted facts are ambiguous as they lack context. To address these drawbacks and to enable LLMs to answer complex (multi-hop) questions with ease, we propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information. The use of our compressed distilled KG as input to the LLM results in our method utilizing up to 67{\%} fewer tokens to represent the query relevant information present in the supporting documents, compared to the state-of-the-art (SoTA) method.Our experiments show consistent improvements over the SoTA across several metrics (EM, F1, BERTScore, and Human Eval) on two popular benchmark datasets (HotpotQA and MuSiQue).",
}
| Given unstructured text, Large Language Models (LLMs) are adept at answering simple (single-hop) questions. However, as the complexity of the questions increases, the performance of LLMs degrades. We believe this is due to the overhead associated with understanding the complex question followed by filtering and aggregating unstructured information in the raw text. Recent methods try to reduce this burden by integrating structured knowledge triples into the raw text, aiming to provide a structured overview that simplifies information processing. However, this simplistic approach is query-agnostic and the extracted facts are ambiguous as they lack context. To address these drawbacks and to enable LLMs to answer complex (multi-hop) questions with ease, we propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information. The use of our compressed distilled KG as input to the LLM results in our method utilizing up to 67{\%} fewer tokens to represent the query-relevant information present in the supporting documents, compared to the state-of-the-art (SoTA) method. Our experiments show consistent improvements over the SoTA across several metrics (EM, F1, BERTScore, and Human Eval) on two popular benchmark datasets (HotpotQA and MuSiQue). | [
"P",
"a, Pranoy",
"Agarwal, Ankush",
"Devaguptapu, Chaitanya",
"Kaul, Manohar",
"Ap, Prathosh"
] | HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering using LLMs | acl-long.717 | Poster | 2406.06027 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.717/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.718.bib | @inproceedings{ross-andreas-2024-toward,
title = "Toward In-Context Teaching: Adapting Examples to Students{'} Misconceptions",
author = "Ross, Alexis and
Andreas, Jacob",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.718",
pages = "13283--13310",
abstract = "When a teacher provides examples for a student to study, these examples must be informative, enabling a student to progress from their current state toward a target concept or skill. Good teachers must therefore simultaneously infer what students already know and adapt their teaching to students{'} changing state of knowledge. There is increasing interest in using computational models, particularly large language models, as pedagogical tools. As students, language models in particular have shown a remarkable ability to adapt to new tasks given small numbers of examples. But how effectively can these models adapt as teachers to students of different types? To study this question, we introduce a suite of models and evaluation methods we call AdapT. AdapT has two components: (1) a collection of simulated Bayesian student models that can be used for evaluation of automated teaching methods; (2) a platform for evaluation with human students, to characterize the real-world effectiveness of these methods. We additionally introduce (3) AToM, a new probabilistic method for adaptive teaching that jointly infers students{'} past beliefs and optimizes for the correctness of future beliefs. In evaluations of simulated students across three learning domains (fraction arithmetic, English morphology, function learning), AToM systematically outperforms LLM-based and standard Bayesian teaching methods. In human experiments, both AToM and LLMs outperform non-adaptive random example selection. Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive methods for solving it.",
}
| When a teacher provides examples for a student to study, these examples must be informative, enabling a student to progress from their current state toward a target concept or skill. Good teachers must therefore simultaneously infer what students already know and adapt their teaching to students{'} changing state of knowledge. There is increasing interest in using computational models, particularly large language models, as pedagogical tools. As students, language models in particular have shown a remarkable ability to adapt to new tasks given small numbers of examples. But how effectively can these models adapt as teachers to students of different types? To study this question, we introduce a suite of models and evaluation methods we call AdapT. AdapT has two components: (1) a collection of simulated Bayesian student models that can be used for evaluation of automated teaching methods; (2) a platform for evaluation with human students, to characterize the real-world effectiveness of these methods. We additionally introduce (3) AToM, a new probabilistic method for adaptive teaching that jointly infers students{'} past beliefs and optimizes for the correctness of future beliefs. In evaluations of simulated students across three learning domains (fraction arithmetic, English morphology, function learning), AToM systematically outperforms LLM-based and standard Bayesian teaching methods. In human experiments, both AToM and LLMs outperform non-adaptive random example selection. Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive methods for solving it. | [
"Ross, Alexis",
"Andreas, Jacob"
] | Toward In-Context Teaching: Adapting Examples to Students' Misconceptions | acl-long.718 | Poster | 2405.04495 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.718/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.719.bib | @inproceedings{tian-etal-2024-bridging,
title = "Bridging Word-Pair and Token-Level Metaphor Detection with Explainable Domain Mining",
author = "Tian, Yuan and
Zhang, Ruike and
Xu, Nan and
Mao, Wenji",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.719",
pages = "13311--13325",
abstract = "Metaphor detection aims to identify whether a linguistic expression in text is metaphorical or literal. Most existing research tackles this problem either using word-pair or token-level information as input, and thus treats word-pair and token-level metaphor detection as distinct subtasks. Benefited from the simplified structure of word pairs, recent methods for word-pair metaphor detection can provide intermediate explainable clues for the detection results, which remains a challenging issue for token-level metaphor detection. To mitigate this issue in token-level metaphor detection and take advantage of word pairs, in this paper, we make the first attempt to bridge word-pair and token-level metaphor detection via modeling word pairs within a sentence as explainable intermediate information. As the central role of verb in metaphorical expressions, we focus on token-level verb metaphor detection and propose a novel explainable Word Pair based Domain Mining (WPDM) method. Our work is inspired by conceptual metaphor theory (CMT). We first devise an approach for conceptual domain mining utilizing semantic role mapping and resources at cognitive, commonsense and lexical levels. We then leverage the inconsistency between source and target domains for core word pair modeling to facilitate the explainability. Experiments on four datasets verify the effectiveness of our method and demonstrate its capability to provide the core word pair and corresponding conceptual domains as explainable clues for metaphor detection.",
}
| Metaphor detection aims to identify whether a linguistic expression in text is metaphorical or literal. Most existing research tackles this problem either using word-pair or token-level information as input, and thus treats word-pair and token-level metaphor detection as distinct subtasks. Benefiting from the simplified structure of word pairs, recent methods for word-pair metaphor detection can provide intermediate explainable clues for the detection results, which remains a challenging issue for token-level metaphor detection. To mitigate this issue in token-level metaphor detection and take advantage of word pairs, in this paper, we make the first attempt to bridge word-pair and token-level metaphor detection via modeling word pairs within a sentence as explainable intermediate information. Given the central role of verbs in metaphorical expressions, we focus on token-level verb metaphor detection and propose a novel explainable Word Pair based Domain Mining (WPDM) method. Our work is inspired by conceptual metaphor theory (CMT). We first devise an approach for conceptual domain mining utilizing semantic role mapping and resources at cognitive, commonsense and lexical levels. We then leverage the inconsistency between source and target domains for core word pair modeling to facilitate the explainability. Experiments on four datasets verify the effectiveness of our method and demonstrate its capability to provide the core word pair and corresponding conceptual domains as explainable clues for metaphor detection. | [
"Tian, Yuan",
"Zhang, Ruike",
"Xu, Nan",
"Mao, Wenji"
] | Bridging Word-Pair and Token-Level Metaphor Detection with Explainable Domain Mining | acl-long.719 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.719/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.720.bib | @inproceedings{xu-etal-2024-faithful,
title = "Faithful Logical Reasoning via Symbolic Chain-of-Thought",
author = "Xu, Jundong and
Fei, Hao and
Pan, Liangming and
Liu, Qian and
Lee, Mong-Li and
Hsu, Wynne",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.720",
pages = "13326--13365",
abstract = "While the recent Chain-of-Thought (CoT) technique enhances the reasoning ability of large language models (LLMs) with the theory of mind, it might still struggle in handling logical reasoning that relies much on symbolic expressions and rigid deducing rules. To strengthen the logical reasoning capability of LLMs, we propose a novel Symbolic Chain-of-Thought, namely SymbCoT, a fully LLM-based framework that integrates symbolic expressions and logic rules with CoT prompting. Technically, building upon an LLM, SymbCoT 1) first translates the natural language context into the symbolic format, and then 2) derives a step-by-step plan to solve the problem with symbolic logical rules, 3) followed by a verifier to check the translation and reasoning chain. Via thorough evaluations on 5 standard datasets with both First-Order Logic and Constraint Optimization symbolic expressions, SymbCoT shows striking improvements over the CoT method consistently, meanwhile refreshing the current state-of-the-art performances. We further demonstrate that our system advances in more faithful, flexible, and explainable logical reasoning. To our knowledge, this is the first attempt at combining symbolic expressions and rules into CoT for logical reasoning with LLMs. Code is open at https://github.com/Aiden0526/SymbCoT.",
}
| While the recent Chain-of-Thought (CoT) technique enhances the reasoning ability of large language models (LLMs) with the theory of mind, it might still struggle in handling logical reasoning that relies heavily on symbolic expressions and rigid deduction rules. To strengthen the logical reasoning capability of LLMs, we propose a novel Symbolic Chain-of-Thought, namely SymbCoT, a fully LLM-based framework that integrates symbolic expressions and logic rules with CoT prompting. Technically, building upon an LLM, SymbCoT 1) first translates the natural language context into the symbolic format, and then 2) derives a step-by-step plan to solve the problem with symbolic logical rules, 3) followed by a verifier to check the translation and reasoning chain. Via thorough evaluations on 5 standard datasets with both First-Order Logic and Constraint Optimization symbolic expressions, SymbCoT consistently shows striking improvements over the CoT method, while refreshing the current state-of-the-art performance. We further demonstrate that our system advances in more faithful, flexible, and explainable logical reasoning. To our knowledge, this is the first attempt at combining symbolic expressions and rules into CoT for logical reasoning with LLMs. Code is open at https://github.com/Aiden0526/SymbCoT. | [
"Xu, Jundong",
"Fei, Hao",
"Pan, Liangming",
"Liu, Qian",
"Lee, Mong-Li",
"Hsu, Wynne"
] | Faithful Logical Reasoning via Symbolic Chain-of-Thought | acl-long.720 | Poster | 2405.18357 | [
"https://github.com/aiden0526/symbcot"
] | https://huggingface.co/papers/2405.18357 | 0 | 0 | 0 | 6 | https://aclanthology.org/2024.acl-long.720/ | [
"seandearnaley/phi-3-mini-4k-june-symbolic-sentiment-analysis-july-03-2024-2-epoch",
"seandearnaley/neuraldaredevil-8b-abliterated-sentiment-analysis-june-05-2024-1-epoch"
] | [] | [
"featherless-ai/try-this-model",
"Darok/Featherless-Feud"
] | 1 |
https://aclanthology.org/2024.acl-long.721.bib | @inproceedings{chen-etal-2024-s2gsl,
title = "{S}$^2${GSL}: Incorporating Segment to Syntactic Enhanced Graph Structure Learning for Aspect-based Sentiment Analysis",
author = "Chen, Bingfeng and
Ouyang, Qihan and
Luo, Yongqi and
Xu, Boyan and
Cai, Ruichu and
Hao, Zhifeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.721",
pages = "13366--13379",
abstract = "Previous graph-based approaches in Aspect-based Sentiment Analysis(ABSA) have demonstrated impressive performance by utilizing graph neural networks and attention mechanisms to learn structures of static dependency trees and dynamic latent trees. However, incorporating both semantic and syntactic information simultaneously within complex global structures can introduce irrelevant contexts and syntactic dependencies during the process of graph structure learning, potentially resulting in inaccurate predictions. In order to address the issues above, we propose S$^2$GSL, incorporating Segment to Syntactic enhanced Graph Structure Learning for ABSA. Specifically, S$^2$GSL is featured with a segment-aware semantic graph learning and a syntax-based latent graph learning enabling the removal of irrelevant contexts and dependencies, respectively. We further propose a self-adaptive aggregation network that facilitates the fusion of two graph learning branches, thereby achieving complementarity across diverse structures. Experimental results on four benchmarks demonstrate the effectiveness of our framework.",
}
| Previous graph-based approaches in Aspect-based Sentiment Analysis (ABSA) have demonstrated impressive performance by utilizing graph neural networks and attention mechanisms to learn structures of static dependency trees and dynamic latent trees. However, incorporating both semantic and syntactic information simultaneously within complex global structures can introduce irrelevant contexts and syntactic dependencies during the process of graph structure learning, potentially resulting in inaccurate predictions. In order to address the issues above, we propose S$^2$GSL, incorporating Segment to Syntactic enhanced Graph Structure Learning for ABSA. Specifically, S$^2$GSL features segment-aware semantic graph learning and syntax-based latent graph learning, enabling the removal of irrelevant contexts and dependencies, respectively. We further propose a self-adaptive aggregation network that facilitates the fusion of two graph learning branches, thereby achieving complementarity across diverse structures. Experimental results on four benchmarks demonstrate the effectiveness of our framework. | [
"Chen, Bingfeng",
"Ouyang, Qihan",
"Luo, Yongqi",
"Xu, Boyan",
"Cai, Ruichu",
"Hao, Zhifeng"
] | S^2GSL: Incorporating Segment to Syntactic Enhanced Graph Structure Learning for Aspect-based Sentiment Analysis | acl-long.721 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.721/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.722.bib | @inproceedings{martinelli-etal-2024-maverick,
title = "Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends",
author = "Martinelli, Giuliano and
Barba, Edoardo and
Navigli, Roberto",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.722",
pages = "13380--13394",
abstract = "Large autoregressive generative models have emerged as the cornerstone for achieving the highest performance across several Natural Language Processing tasks. However, the urge to attain superior results has, at times, led to the premature replacement of carefully designed task-specific approaches without exhaustive experimentation. The Coreference Resolution task is no exception; all recent state-of-the-art solutions adopt large generative autoregressive models that outperform encoder-based discriminative systems. In this work, we challenge this recent trend by introducing Maverick, a carefully designed {--} yet simple {--} pipeline, which enables running a state-of-the-art Coreference Resolution system within the constraints of an academic budget, outperforming models with up to 13 billion parameters with as few as 500 million parameters. Maverick achieves state-of-the-art performance on the CoNLL-2012 benchmark, training with up to 0.006x the memory resources and obtaining a 170x faster inference compared to previous state-of-the-art systems. We extensively validate the robustness of the Maverick framework with an array of diverse experiments, reporting improvements over prior systems in data-scarce, long-document, and out-of-domain settings. We release our code and models for research purposes at https://github.com/SapienzaNLP/maverick-coref.",
}
| Large autoregressive generative models have emerged as the cornerstone for achieving the highest performance across several Natural Language Processing tasks. However, the urge to attain superior results has, at times, led to the premature replacement of carefully designed task-specific approaches without exhaustive experimentation. The Coreference Resolution task is no exception; all recent state-of-the-art solutions adopt large generative autoregressive models that outperform encoder-based discriminative systems. In this work, we challenge this recent trend by introducing Maverick, a carefully designed {--} yet simple {--} pipeline, which enables running a state-of-the-art Coreference Resolution system within the constraints of an academic budget, outperforming models with up to 13 billion parameters with as few as 500 million parameters. Maverick achieves state-of-the-art performance on the CoNLL-2012 benchmark, training with up to 0.006x the memory resources and obtaining a 170x faster inference compared to previous state-of-the-art systems. We extensively validate the robustness of the Maverick framework with an array of diverse experiments, reporting improvements over prior systems in data-scarce, long-document, and out-of-domain settings. We release our code and models for research purposes at https://github.com/SapienzaNLP/maverick-coref. | [
"Martinelli, Giuliano",
"Barba, Edoardo",
"Navigli, Roberto"
] | Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends | acl-long.722 | Poster | 2407.21489 | [
"https://github.com/sapienzanlp/maverick-coref"
] | https://huggingface.co/papers/2407.21489 | 1 | 0 | 0 | 3 | https://aclanthology.org/2024.acl-long.722/ | [
"sapienzanlp/maverick-mes-preco",
"sapienzanlp/maverick-mes-litbank",
"sapienzanlp/maverick-mes-ontonotes"
] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.723.bib | @inproceedings{zhang-etal-2024-escot,
title = "{ESC}o{T}: Towards Interpretable Emotional Support Dialogue Systems",
author = "Zhang, Tenggan and
Zhang, Xinjie and
Zhao, Jinming and
Zhou, Li and
Jin, Qin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.723",
pages = "13395--13412",
abstract = "Understanding the reason for emotional support response is crucial for establishing connections between users and emotional support dialogue systems. Previous works mostly focus on generating better responses but ignore interpretability, which is extremely important for constructing reliable dialogue systems. To empower the system with better interpretability, we propose an emotional support response generation scheme, named $\textbf{E}$motion-Focused and $\textbf{S}$trategy-Driven $\textbf{C}$hain-$\textbf{o}$f-$\textbf{T}$hought ($\textbf{ESCoT}$), mimicking the process of $\textit{identifying}$, $\textit{understanding}$, and $\textit{regulating}$ emotions. Specially, we construct a new dataset with ESCoT in two steps: (1) $\textit{Dialogue Generation}$ where we first generate diverse conversation situations, then enhance dialogue generation using richer emotional support strategies based on these situations; (2) $\textit{Chain Supplement}$ where we focus on supplementing selected dialogues with elements such as emotion, stimuli, appraisal, and strategy reason, forming the manually verified chains. Additionally, we further develop a model to generate dialogue responses with better interpretability. We also conduct extensive experiments and human evaluations to validate the effectiveness of the proposed ESCoT and generated dialogue responses. Our dataset, code, and model will be released.",
}
| Understanding the reason for an emotional support response is crucial for establishing connections between users and emotional support dialogue systems. Previous works mostly focus on generating better responses but ignore interpretability, which is extremely important for constructing reliable dialogue systems. To empower the system with better interpretability, we propose an emotional support response generation scheme, named $\textbf{E}$motion-Focused and $\textbf{S}$trategy-Driven $\textbf{C}$hain-$\textbf{o}$f-$\textbf{T}$hought ($\textbf{ESCoT}$), mimicking the process of $\textit{identifying}$, $\textit{understanding}$, and $\textit{regulating}$ emotions. Specifically, we construct a new dataset with ESCoT in two steps: (1) $\textit{Dialogue Generation}$ where we first generate diverse conversation situations, then enhance dialogue generation using richer emotional support strategies based on these situations; (2) $\textit{Chain Supplement}$ where we focus on supplementing selected dialogues with elements such as emotion, stimuli, appraisal, and strategy reason, forming the manually verified chains. Additionally, we further develop a model to generate dialogue responses with better interpretability. We also conduct extensive experiments and human evaluations to validate the effectiveness of the proposed ESCoT and generated dialogue responses. Our dataset, code, and model will be released. | [
"Zhang, Tenggan",
"Zhang, Xinjie",
"Zhao, Jinming",
"Zhou, Li",
"Jin, Qin"
] | ESCoT: Towards Interpretable Emotional Support Dialogue Systems | acl-long.723 | Poster | 2406.10960 | [
"https://github.com/teigenzhang/escot"
] | https://huggingface.co/papers/2406.10960 | 1 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.723/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.724.bib | @inproceedings{xu-etal-2024-pathreasoner,
title = "{P}ath{R}easoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering",
author = "Xu, Fangzhi and
Lin, Qika and
Zhao, Tianzhe and
JiaweiHan, JiaweiHan and
Liu, Jun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.724",
pages = "13413--13429",
abstract = "Logical reasoning task has attracted great interest since it was proposed. Faced with such a task, current competitive models, even large language models (e.g., ChatGPT and PaLM 2), still perform badly. Previous promising LMs struggle in logical consistency modeling and logical structure perception. To this end, we model the logical reasoning task by transforming each logical sample into reasoning paths and propose an architecture PathReasoner. It addresses the task from the views of both data and model. To expand the diversity of the logical samples, we propose an atom extension strategy supported by equivalent logical formulas, to form new reasoning paths. From the model perspective, we design a stack of transformer-style blocks. In particular, we propose a path-attention module to joint model in-atom and cross-atom relations with the high-order diffusion strategy. Experiments show that PathReasoner achieves competitive performances on two logical reasoning benchmarks and great generalization abilities.",
}
| The logical reasoning task has attracted great interest since it was proposed. Faced with such a task, current competitive models, even large language models (e.g., ChatGPT and PaLM 2), still perform badly. Previous promising LMs struggle in logical consistency modeling and logical structure perception. To this end, we model the logical reasoning task by transforming each logical sample into reasoning paths and propose an architecture, PathReasoner. It addresses the task from the views of both data and model. To expand the diversity of the logical samples, we propose an atom extension strategy supported by equivalent logical formulas, to form new reasoning paths. From the model perspective, we design a stack of transformer-style blocks. In particular, we propose a path-attention module to jointly model in-atom and cross-atom relations with the high-order diffusion strategy. Experiments show that PathReasoner achieves competitive performance on two logical reasoning benchmarks and exhibits strong generalization abilities. | [
"Xu, Fangzhi",
"Lin, Qika",
"Zhao, Tianzhe",
"JiaweiHan, JiaweiHan",
"Liu, Jun"
] | PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering | acl-long.724 | Poster | 2405.19109 | [
""
] | https://huggingface.co/papers/2405.19109 | 2 | 2 | 0 | 5 | https://aclanthology.org/2024.acl-long.724/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.725.bib | @inproceedings{shetty-etal-2024-warden,
title = "{WARDEN}: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection",
author = "Shetty, Anudeex and
Teng, Yue and
He, Ke and
Xu, Qiongkai",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.725",
pages = "13430--13444",
abstract = "Embedding as a Service (EaaS) has become a widely adopted solution, which offers feature extraction capabilities for addressing various downstream tasks in Natural Language Processing (NLP). Prior studies have shown that EaaS can be prone to model extraction attacks; nevertheless, this concern could be mitigated by adding backdoor watermarks to the text embeddings and subsequently verifying the attack models post-publication. Through the analysis of the recent watermarking strategy for EaaS, EmbMarker, we design a novel CSE (Clustering, Selection, Elimination) attack that removes the backdoor watermark while maintaining the high utility of embeddings, indicating that the previous watermarking approach can be breached. In response to this new threat, we propose a new protocol to make the removal of watermarks more challenging by incorporating multiple possible watermark directions. Our defense approach, WARDEN, notably increases the stealthiness of watermarks and has been empirically shown to be effective against CSE attack.",
}
| Embedding as a Service (EaaS) has become a widely adopted solution, which offers feature extraction capabilities for addressing various downstream tasks in Natural Language Processing (NLP). Prior studies have shown that EaaS can be prone to model extraction attacks; nevertheless, this concern could be mitigated by adding backdoor watermarks to the text embeddings and subsequently verifying the attack models post-publication. Through the analysis of the recent watermarking strategy for EaaS, EmbMarker, we design a novel CSE (Clustering, Selection, Elimination) attack that removes the backdoor watermark while maintaining the high utility of embeddings, indicating that the previous watermarking approach can be breached. In response to this new threat, we propose a new protocol to make the removal of watermarks more challenging by incorporating multiple possible watermark directions. Our defense approach, WARDEN, notably increases the stealthiness of watermarks and has been empirically shown to be effective against CSE attack. | [
"Shetty, Anudeex",
"Teng, Yue",
"He, Ke",
"Xu, Qiongkai"
] | WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection | acl-long.725 | Poster | 2403.01472 | [
"https://github.com/anudeex/warden"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.725/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.726.bib | @inproceedings{wu-etal-2024-advancing,
title = "Advancing Parameter Efficiency in Fine-tuning via Representation Editing",
author = "Wu, Muling and
Liu, Wenhao and
Wang, Xiaohua and
Li, Tianlong and
Lv, Changze and
Ling, Zixuan and
JianHao, Zhu and
Zhang, Cenyuan and
Zheng, Xiaoqing and
Huang, Xuanjing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.726",
pages = "13445--13464",
abstract = "Parameter Efficient Fine-Tuning (PEFT) has gained significant attention for its ability to achieve competitive results while updating only a small subset of trainable parameters. Despite the promising performance of current PEFT methods, they present challenges in hyperparameter selection, such as determining the rank of LoRA or Adapter, or specifying the length of soft prompts. In addressing these challenges, we propose a novel approach to fine-tuning neural models, termed Representation EDiting (RED), which scales and biases the representation produced at each layer. RED substantially reduces the number of trainable parameters by a factor of 25,700 compared to full parameter fine-tuning, and by a factor of 32 compared to LoRA. Remarkably, RED achieves comparable or superior results to full parameter fine-tuning and other PEFT methods. Extensive experiments were conducted across models of varying architectures and scales, including RoBERTa, GPT-2, T5, and Llama-2, and the results demonstrate the efficiency and efficacy of RED, positioning it as a promising PEFT approach for large neural models.",
}
| Parameter Efficient Fine-Tuning (PEFT) has gained significant attention for its ability to achieve competitive results while updating only a small subset of trainable parameters. Despite the promising performance of current PEFT methods, they present challenges in hyperparameter selection, such as determining the rank of LoRA or Adapter, or specifying the length of soft prompts. In addressing these challenges, we propose a novel approach to fine-tuning neural models, termed Representation EDiting (RED), which scales and biases the representation produced at each layer. RED substantially reduces the number of trainable parameters by a factor of 25,700 compared to full parameter fine-tuning, and by a factor of 32 compared to LoRA. Remarkably, RED achieves comparable or superior results to full parameter fine-tuning and other PEFT methods. Extensive experiments were conducted across models of varying architectures and scales, including RoBERTa, GPT-2, T5, and Llama-2, and the results demonstrate the efficiency and efficacy of RED, positioning it as a promising PEFT approach for large neural models. | [
"Wu, Muling",
"Liu, Wenhao",
"Wang, Xiaohua",
"Li, Tianlong",
"Lv, Changze",
"Ling, Zixuan",
"JianHao, Zhu",
"Zhang, Cenyuan",
"Zheng, Xiaoqing",
"Huang, Xuanjing"
] | Advancing Parameter Efficiency in Fine-tuning via Representation Editing | acl-long.726 | Poster | 2402.15179 | [
"https://github.com/mlwu22/red"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.726/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.727.bib | @inproceedings{zhong-etal-2024-context,
title = "Context Consistency between Training and Inference in Simultaneous Machine Translation",
author = "Zhong, Meizhi and
Liu, Lemao and
Chen, Kehai and
Yang, Mingming and
Zhang, Min",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.727",
pages = "13465--13476",
abstract = "Simultaneous Machine Translation (SiMT) aims to yield a real-time partial translation with a monotonically growing source-side context.However, there is a counterintuitive phenomenon about the context usage between training and inference: *e.g.*, in wait-$k$ inference, model consistently trained with wait-$k$ is much worse than that model inconsistently trained with wait-$k'$ ($k'\neq k$) in terms of translation quality. To this end, we first investigate the underlying reasons behind this phenomenon and uncover the following two factors: 1) the limited correlation between translation quality and training loss; 2) exposure bias between training and inference. Based on both reasons, we then propose an effective training approach called context consistency training accordingly, which encourages consistent context usage between training and inference by optimizing translation quality and latency as bi-objectives and exposing the predictions to the model during the training. The experiments on three language pairs demonstrate that our SiMT system encouraging context consistency outperforms existing SiMT systems with context inconsistency for the first time.",
}
| Simultaneous Machine Translation (SiMT) aims to yield a real-time partial translation with a monotonically growing source-side context. However, there is a counterintuitive phenomenon about the context usage between training and inference: *e.g.*, in wait-$k$ inference, a model consistently trained with wait-$k$ performs much worse than a model inconsistently trained with wait-$k'$ ($k'\neq k$) in terms of translation quality. To this end, we first investigate the underlying reasons behind this phenomenon and uncover the following two factors: 1) the limited correlation between translation quality and training loss; 2) exposure bias between training and inference. Based on both reasons, we then propose an effective training approach called context consistency training, which encourages consistent context usage between training and inference by optimizing translation quality and latency as bi-objectives and exposing the predictions to the model during the training. The experiments on three language pairs demonstrate that our SiMT system encouraging context consistency outperforms existing SiMT systems with context inconsistency for the first time. | [
"Zhong, Meizhi",
"Liu, Lemao",
"Chen, Kehai",
"Yang, Mingming",
"Zhang, Min"
] | Context Consistency between Training and Inference in Simultaneous Machine Translation | acl-long.727 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.727/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.728.bib | @inproceedings{he-etal-2024-using,
title = "Using Natural Language Explanations to Improve Robustness of In-context Learning",
author = "He, Xuanli and
Wu, Yuxiang and
Camburu, Oana-Maria and
Minervini, Pasquale and
Stenetorp, Pontus",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.728",
pages = "13477--13499",
abstract = "Recent studies demonstrated that large language models (LLMs) can excel in many tasks via in-context learning (ICL). However, recentworks show that ICL-prompted models tend to produce inaccurate results when presented with adversarial inputs. In this work, we investigate whether augmenting ICL with natural language explanations (NLEs) improves the robustness of LLMs on adversarial datasets covering natural language inference and paraphrasing identification. We prompt LLMs with a small set of human-generated NLEs to produce further NLEs, yielding more accurate results than both a zero-shot-ICL setting and using only human-generated NLEs. Our results on five popular LLMs (GPT3.5-turbo, Llama2, Vicuna, Zephyr, and Mistral) show that our approach yields over 6{\%} improvement over baseline approaches for eight adversarial datasets: HANS, ISCS, NaN, ST, PICD, PISP, ANLI, and PAWS. Furthermore, previous studies have demonstrated that prompt selection strategies significantly enhance ICL on in-distribution test sets. However, our findings reveal that these strategies do not match the efficacy of our approach for robustness evaluations, resulting in an accuracy drop of 8{\%} compared to the proposed approach.",
}
| Recent studies have demonstrated that large language models (LLMs) can excel in many tasks via in-context learning (ICL). However, recent works show that ICL-prompted models tend to produce inaccurate results when presented with adversarial inputs. In this work, we investigate whether augmenting ICL with natural language explanations (NLEs) improves the robustness of LLMs on adversarial datasets covering natural language inference and paraphrase identification. We prompt LLMs with a small set of human-generated NLEs to produce further NLEs, yielding more accurate results than both a zero-shot-ICL setting and using only human-generated NLEs. Our results on five popular LLMs (GPT3.5-turbo, Llama2, Vicuna, Zephyr, and Mistral) show that our approach yields over 6{\%} improvement over baseline approaches for eight adversarial datasets: HANS, ISCS, NaN, ST, PICD, PISP, ANLI, and PAWS. Furthermore, previous studies have demonstrated that prompt selection strategies significantly enhance ICL on in-distribution test sets. However, our findings reveal that these strategies do not match the efficacy of our approach for robustness evaluations, resulting in an accuracy drop of 8{\%} compared to the proposed approach. | [
"He, Xuanli",
"Wu, Yuxiang",
"Camburu, Oana-Maria",
"Minervini, Pasquale",
"Stenetorp, Pontus"
] | Using Natural Language Explanations to Improve Robustness of In-context Learning | acl-long.728 | Poster | 2311.07556 | [
"https://github.com/xlhex/acl2024_xicl"
] | https://huggingface.co/papers/2311.07556 | 1 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.728/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.729.bib | @inproceedings{xie-etal-2024-chunk,
title = "Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers",
author = "Xie, Jiawen and
Cheng, Pengyu and
Liang, Xiao and
Dai, Yong and
Du, Nan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.729",
pages = "13500--13519",
abstract = "Although dominant in natural language processing, transformer-based models still struggle with long-sequence processing, due to the computational costs of their self-attention operations, which increase exponentially as the length of the input sequence grows. To address this challenge, we propose a **Sim**ple framework to enhance the long-content processing of off-the-shelf pre-trained transformers via three steps: **C**hunk, **A**lign, and **S**elect (SimCAS). More specifically, we first divide each long-sequence input into a batch of chunks, then align the inter-chunk information during the encoding steps, and finally, select the most representative hidden states from the encoder for the decoding process. With our SimCAS, the computation and memory costs can be reduced to linear complexity. In experiments, we demonstrate the effectiveness of the proposed method on various real-world long-text summarization and reading comprehension tasks, in which SimCAS significantly outperforms prior long-sequence processing baselines. The code is at [https://github.com/xjw-nlp/SimCAS](https://github.com/xjw-nlp/SimCAS).",
}
| Although dominant in natural language processing, transformer-based models still struggle with long-sequence processing, due to the computational costs of their self-attention operations, which increase quadratically as the length of the input sequence grows. To address this challenge, we propose a **Sim**ple framework to enhance the long-content processing of off-the-shelf pre-trained transformers via three steps: **C**hunk, **A**lign, and **S**elect (SimCAS). More specifically, we first divide each long-sequence input into a batch of chunks, then align the inter-chunk information during the encoding steps, and finally, select the most representative hidden states from the encoder for the decoding process. With our SimCAS, the computation and memory costs can be reduced to linear complexity. In experiments, we demonstrate the effectiveness of the proposed method on various real-world long-text summarization and reading comprehension tasks, in which SimCAS significantly outperforms prior long-sequence processing baselines. The code is at [https://github.com/xjw-nlp/SimCAS](https://github.com/xjw-nlp/SimCAS). | [
"Xie, Jiawen",
"Cheng, Pengyu",
"Liang, Xiao",
"Dai, Yong",
"Du, Nan"
] | Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers | acl-long.729 | Poster | 2308.13191 | [
"https://github.com/xjw-nlp/simcas"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.729/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.730.bib | @inproceedings{han-etal-2024-archcode,
title = "{A}rch{C}ode: Incorporating Software Requirements in Code Generation with Large Language Models",
author = "Han, Hojae and
Kim, Jaejin and
Yoo, Jaeseok and
Lee, Youngwon and
Hwang, Seung-won",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.730",
pages = "13520--13552",
abstract = "This paper aims to extend the code generation capability of large language models (LLMs) to automatically manage comprehensive software requirements from given textual descriptions. Such requirements include both functional (i.e. achieving expected behavior for inputs) and non-functional (e.g., time/space performance, robustness, maintainability) requirements. However, textual descriptions can either express requirements verbosely or may even omit some of them. We introduce ARCHCODE, a novel framework that leverages in-context learning to organize requirements observed in descriptions and to extrapolate unexpressed requirements from them. ARCHCODE generates requirements from given descriptions, conditioning them to produce code snippets and test cases. Each test case is tailored to one of the requirements, allowing for the ranking of code snippets based on the compliance of their execution results with the requirements. Public benchmarks show that ARCHCODE enhances to satisfy functional requirements, significantly improving Pass@k scores.Furthermore, we introduce HumanEval-NFR, the first evaluation of LLMs{'} non-functional requirements in code generation, demonstrating ARCHCODE{'}s superiority over baseline methods. The implementation of ARCHCODE and the HumanEval-NFR benchmark are both publicly accessible.",
}
| This paper aims to extend the code generation capability of large language models (LLMs) to automatically manage comprehensive software requirements from given textual descriptions. Such requirements include both functional (i.e., achieving expected behavior for inputs) and non-functional (e.g., time/space performance, robustness, maintainability) requirements. However, textual descriptions can either express requirements verbosely or may even omit some of them. We introduce ARCHCODE, a novel framework that leverages in-context learning to organize requirements observed in descriptions and to extrapolate unexpressed requirements from them. ARCHCODE generates requirements from given descriptions, conditioning them to produce code snippets and test cases. Each test case is tailored to one of the requirements, allowing for the ranking of code snippets based on the compliance of their execution results with the requirements. Public benchmarks show that ARCHCODE enhances the satisfaction of functional requirements, significantly improving Pass@k scores. Furthermore, we introduce HumanEval-NFR, the first evaluation of LLMs{'} non-functional requirements in code generation, demonstrating ARCHCODE{'}s superiority over baseline methods. The implementation of ARCHCODE and the HumanEval-NFR benchmark are both publicly accessible. | [
"Han, Hojae",
"Kim, Jaejin",
"Yoo, Jaeseok",
"Lee, Youngwon",
"Hwang, Seung-won"
] | ArchCode: Incorporating Software Requirements in Code Generation with Large Language Models | acl-long.730 | Oral | 2408.00994 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.730/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.731.bib | @inproceedings{jia-etal-2024-combining,
title = "Combining Supervised Learning and Reinforcement Learning for Multi-Label Classification Tasks with Partial Labels",
author = "Jia, Zixia and
Li, Junpeng and
Zhang, Shichuan and
Liu, Anji and
Zheng, Zilong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.731",
pages = "13553--13569",
abstract = "Traditional supervised learning heavily relies on human-annotated datasets, especially in data-hungry neural approaches. However, various tasks, especially multi-label tasks like document-level relation extraction, pose challenges in fully manual annotation due to the specific domain knowledge and large class sets. Therefore, we address the multi-label positive-unlabelled learning (MLPUL) problem, where only a subset of positive classes is annotated. We propose Mixture Learner for Partially Annotated Classification (MLPAC), an RL-based framework combining the exploration ability of reinforcement learning and the exploitation ability of supervised learning. Experimental results across various tasks, including document-level relation extraction, multi-label image classification, and binary PU learning, demonstrate the generalization and effectiveness of our framework.",
}
| Traditional supervised learning heavily relies on human-annotated datasets, especially in data-hungry neural approaches. However, various tasks, especially multi-label tasks like document-level relation extraction, pose challenges in fully manual annotation due to the specific domain knowledge and large class sets. Therefore, we address the multi-label positive-unlabelled learning (MLPUL) problem, where only a subset of positive classes is annotated. We propose Mixture Learner for Partially Annotated Classification (MLPAC), an RL-based framework combining the exploration ability of reinforcement learning and the exploitation ability of supervised learning. Experimental results across various tasks, including document-level relation extraction, multi-label image classification, and binary PU learning, demonstrate the generalization and effectiveness of our framework. | [
"Jia, Zixia",
"Li, Junpeng",
"Zhang, Shichuan",
"Liu, Anji",
"Zheng, Zilong"
] | Combining Supervised Learning and Reinforcement Learning for Multi-Label Classification Tasks with Partial Labels | acl-long.731 | Poster | 2406.16293 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.731/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.732.bib | @inproceedings{wang-etal-2024-mulfe,
title = "{MULFE}: A Multi-Level Benchmark for Free Text Model Editing",
author = "Wang, Chenhao and
Cao, Pengfei and
Jin, Zhuoran and
Chen, Yubo and
Zeng, Daojian and
Liu, Kang and
Zhao, Jun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.732",
pages = "13570--13587",
abstract = "Adjusting the outdated behaviors of large langugae models (LLMs) after deployment remains a significant challenge. It motivates the model editing research, which is however mainly explored in a restricted task form with triple-based edit requests. Recent works have initiated a transition to a more practical and unified editing task that takes free-form text as edit requests. However, there are gaps in nuanced benchmark designs and re-evaluation of existing methods. To bridge the gaps, we introduce a multi-level benchmark for free text model editing (MULFE). The benchmark categorizes probe queries into three levels of generalization, ranging from basic literal memory to deeper understanding and reasoning. Based on the benchmark, we conduct extensive experiments across various base models, edit sizes, and editing methods, including adaptations of mainstream locate-and-edit and hypernetwork methods. The results highlight the inconsistent behaviors of edited models on different generalization levels. Higher-level generalization remains a significant challenge. Based on the findings, we propose SIDE, a simple yet effective method based on in-context distillation to enhance the generalization performance. The benchmark dataset and evaluation scripts are publicly available at http://github.com/wchrepo/mulfe.",
}
| Adjusting the outdated behaviors of large language models (LLMs) after deployment remains a significant challenge. This motivates model editing research, which is, however, mainly explored in a restricted task form with triple-based edit requests. Recent works have initiated a transition to a more practical and unified editing task that takes free-form text as edit requests. However, there are gaps in nuanced benchmark designs and re-evaluation of existing methods. To bridge the gaps, we introduce a multi-level benchmark for free text model editing (MULFE). The benchmark categorizes probe queries into three levels of generalization, ranging from basic literal memory to deeper understanding and reasoning. Based on the benchmark, we conduct extensive experiments across various base models, edit sizes, and editing methods, including adaptations of mainstream locate-and-edit and hypernetwork methods. The results highlight the inconsistent behaviors of edited models on different generalization levels. Higher-level generalization remains a significant challenge. Based on the findings, we propose SIDE, a simple yet effective method based on in-context distillation to enhance the generalization performance. The benchmark dataset and evaluation scripts are publicly available at http://github.com/wchrepo/mulfe. | [
"Wang, Chenhao",
"Cao, Pengfei",
"Jin, Zhuoran",
"Chen, Yubo",
"Zeng, Daojian",
"Liu, Kang",
"Zhao, Jun"
] | MULFE: A Multi-Level Benchmark for Free Text Model Editing | acl-long.732 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.732/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.733.bib | @inproceedings{ji-etal-2024-mobilespeech,
title = "{M}obile{S}peech: A Fast and High-Fidelity Framework for Mobile Zero-Shot Text-to-Speech",
author = "Ji, Shengpeng and
Jiang, Ziyue and
Wang, Hanting and
Zuo, Jialong and
Zhao, Zhou",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.733",
pages = "13588--13600",
abstract = "Zero-shot text-to-speech (TTS) has gained significant attention due to its powerful voice cloning capabilities, requiring only a few seconds of unseen speaker voice prompts. However, all previous work has been developed for cloud-based systems. Taking autoregressive models as an example, although these approaches achieve high-fidelity voice cloning, they fall short in terms of inference speed, model size, and robustness. Therefore, we propose MobileSpeech, which is a fast, lightweight, and robust zero-shot text-to-speech system based on mobile devices for the first time. Specifically: 1) leveraging discrete codec, we design a parallel speech mask decoder module called SMD, which incorporates hierarchical information from the speech codec and weight mechanisms across different codec layers during the generation process. Moreover, to bridge the gap between text and speech, we introduce a high-level probabilistic mask that simulates the progression of information flow from less to more during speech generation. 2) For speaker prompts, we extract fine-grained prompt duration from the prompt speech and incorporate text, prompt speech by cross attention in SMD. We demonstrate the effectiveness of MobileSpeech on multilingual datasets at different levels, achieving state-of-the-art results in terms of generating speed and speech quality. MobileSpeech achieves RTF of 0.09 on a single A100 GPU and we have successfully deployed MobileSpeech on mobile devices. Audio samples are available at https://mobilespeech.github.io/",
}
| Zero-shot text-to-speech (TTS) has gained significant attention due to its powerful voice cloning capabilities, requiring only a few seconds of unseen speaker voice prompts. However, all previous work has been developed for cloud-based systems. Taking autoregressive models as an example, although these approaches achieve high-fidelity voice cloning, they fall short in terms of inference speed, model size, and robustness. Therefore, we propose MobileSpeech, which is a fast, lightweight, and robust zero-shot text-to-speech system based on mobile devices for the first time. Specifically: 1) leveraging a discrete codec, we design a parallel speech mask decoder module called SMD, which incorporates hierarchical information from the speech codec and weight mechanisms across different codec layers during the generation process. Moreover, to bridge the gap between text and speech, we introduce a high-level probabilistic mask that simulates the progression of information flow from less to more during speech generation. 2) For speaker prompts, we extract fine-grained prompt duration from the prompt speech and incorporate the text and prompt speech via cross-attention in SMD. We demonstrate the effectiveness of MobileSpeech on multilingual datasets at different levels, achieving state-of-the-art results in terms of generation speed and speech quality. MobileSpeech achieves an RTF of 0.09 on a single A100 GPU and we have successfully deployed MobileSpeech on mobile devices. Audio samples are available at https://mobilespeech.github.io/ | [
"Ji, Shengpeng",
"Jiang, Ziyue",
"Wang, Hanting",
"Zuo, Jialong",
"Zhao, Zhou"
] | MobileSpeech: A Fast and High-Fidelity Framework for Mobile Zero-Shot Text-to-Speech | acl-long.733 | Poster | 2402.09378 | [
""
] | https://huggingface.co/papers/2402.09378 | 0 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.733/ | [] | [] | [] | 1 |
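To make the parallel mask-decoding idea in the MobileSpeech abstract above concrete, here is a minimal Python sketch of confidence-based iterative mask decoding with a cosine schedule, the generic recipe that SMD-style generation builds on. The function names and the random stand-in confidences are illustrative assumptions, not the authors' code.

```python
import math
import random

def cosine_mask_ratio(step: int, total_steps: int) -> float:
    """Fraction of codec tokens still masked after `step` of `total_steps`."""
    return math.cos(0.5 * math.pi * step / total_steps)

def parallel_masked_decode(seq_len: int, total_steps: int = 8):
    """Toy parallel mask-decoding loop: each iteration commits the most
    'confident' masked positions and keeps the rest masked (confidences
    are random stand-ins for model probabilities here)."""
    committed = [None] * seq_len
    for step in range(1, total_steps + 1):
        n_keep_masked = int(cosine_mask_ratio(step, total_steps) * seq_len)
        masked = [i for i, tok in enumerate(committed) if tok is None]
        conf = {i: random.random() for i in masked}
        to_commit = sorted(masked, key=conf.get, reverse=True)[: max(len(masked) - n_keep_masked, 0)]
        for i in to_commit:
            committed[i] = f"tok@{i}"  # placeholder for a sampled codec token
    return committed

print(parallel_masked_decode(16))
```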
https://aclanthology.org/2024.acl-long.734.bib | @inproceedings{gopinathan-etal-2024-spatially,
title = "Spatially-Aware Speaker for Vision-and-Language Navigation Instruction Generation",
author = "Gopinathan, Muraleekrishna and
Masek, Martin and
Abu-Khalaf, Jumana and
Suter, David",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.734",
pages = "13601--13614",
abstract = "Embodied AI aims to develop robots that can \textit{understand} and execute human language instructions, as well as communicate in natural languages. On this front, we study the task of generating highly detailed navigational instructions for the embodied robots to follow. Although recent studies have demonstrated significant leaps in the generation of step-by-step instructions from sequences of images, the generated instructions lack variety in terms of their referral to objects and landmarks. Existing speaker models learn strategies to evade the evaluation metrics and obtain higher scores even for low-quality sentences. In this work, we propose SAS (Spatially-Aware Speaker), an instruction generator or \textit{Speaker} model that utilises both structural and semantic knowledge of the environment to produce richer instructions. For training, we employ a reward learning method in an adversarial setting to avoid systematic bias introduced by language evaluation metrics. Empirically, our method outperforms existing instruction generation models, evaluated using standard metrics. Our code is available at https://github.com/gmuraleekrishna/SAS.",
}
| Embodied AI aims to develop robots that can \textit{understand} and execute human language instructions, as well as communicate in natural languages. On this front, we study the task of generating highly detailed navigational instructions for the embodied robots to follow. Although recent studies have demonstrated significant leaps in the generation of step-by-step instructions from sequences of images, the generated instructions lack variety in terms of their referral to objects and landmarks. Existing speaker models learn strategies to evade the evaluation metrics and obtain higher scores even for low-quality sentences. In this work, we propose SAS (Spatially-Aware Speaker), an instruction generator or \textit{Speaker} model that utilises both structural and semantic knowledge of the environment to produce richer instructions. For training, we employ a reward learning method in an adversarial setting to avoid systematic bias introduced by language evaluation metrics. Empirically, our method outperforms existing instruction generation models, evaluated using standard metrics. Our code is available at https://github.com/gmuraleekrishna/SAS. | [
"Gopinathan, Muraleekrishna",
"Masek, Martin",
"Abu-Khalaf, Jumana",
"Suter, David"
] | Spatially-Aware Speaker for Vision-and-Language Navigation Instruction Generation | acl-long.734 | Poster | 2409.05583 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.734/ | [] | [] | [] | 0 |
|
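The SAS abstract above trains the speaker with reward learning in an adversarial setting so the generator cannot game n-gram metrics. A minimal REINFORCE-style sketch of that idea follows; `policy_logprob` and `discriminator_score` are hypothetical stand-ins for the speaker model and the learned critic.

```python
import random

def reinforce_step(policy_logprob, discriminator_score, instructions, baseline=0.5, lr=1e-3):
    """One policy-gradient step where the reward is a learned discriminator's
    score of instruction quality, not a text-overlap metric the speaker could
    learn to game."""
    losses = []
    for inst in instructions:
        reward = discriminator_score(inst)               # adversarially trained critic
        advantage = reward - baseline                    # simple variance reduction
        losses.append(-advantage * policy_logprob(inst)) # REINFORCE loss term
    return sum(losses) / len(losses) * lr

# Toy stand-ins: a real system would call the speaker LM and a trained critic.
loss = reinforce_step(lambda s: -0.01 * len(s), lambda s: random.random(),
                      ["Turn left at the sofa.", "Walk past the lamp."])
print(loss)
```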
https://aclanthology.org/2024.acl-long.735.bib | @inproceedings{zhang-etal-2024-hirope,
title = "{H}i{R}o{PE}: Length Extrapolation for Code Models Using Hierarchical Position",
author = "Zhang, Kechi and
Li, Ge and
Zhang, Huangzhao and
Jin, Zhi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.735",
pages = "13615--13627",
abstract = "Addressing the limitation of context length in large language models for code-related tasks is the primary focus of this paper. Existing LLMs are constrained by their pre-trained context lengths, leading to performance issues in handling long complex code sequences. Inspired by how human programmers navigate code, we introduce Hierarchical Rotary Position Embedding (HiRoPE), a novel approach that enhances the traditional rotary position embedding into a hierarchical format based on the hierarchical structure of source code. HiRoPE offers easy integration into existing LLMs without extra training costs. Our method is extensively evaluated with various LLMs, demonstrating stable performance in tasks such as language modeling and long code completion. We also introduce a new long code understanding task with real-world code projects, in hopes of promoting further development in this code-related field. Theoretically and experimentally, we find that HiRoPE also addresses the out-of-distribution issue in position encoding. Our HiRoPE significantly expands the context length capabilities of LLMs, enabling inference at lengths exponentially greater than the training length.",
}
| Addressing the limitation of context length in large language models for code-related tasks is the primary focus of this paper. Existing LLMs are constrained by their pre-trained context lengths, leading to performance issues in handling long complex code sequences. Inspired by how human programmers navigate code, we introduce Hierarchical Rotary Position Embedding (HiRoPE), a novel approach that enhances the traditional rotary position embedding into a hierarchical format based on the hierarchical structure of source code. HiRoPE offers easy integration into existing LLMs without extra training costs. Our method is extensively evaluated with various LLMs, demonstrating stable performance in tasks such as language modeling and long code completion. We also introduce a new long code understanding task with real-world code projects, in hopes of promoting further development in this code-related field. Theoretically and experimentally, we find that HiRoPE also addresses the out-of-distribution issue in position encoding. Our HiRoPE significantly expands the context length capabilities of LLMs, enabling inference at lengths exponentially greater than the training length. | [
"Zhang, Kechi",
"Li, Ge",
"Zhang, Huangzhao",
"Jin, Zhi"
] | HiRoPE: Length Extrapolation for Code Models Using Hierarchical Position | acl-long.735 | Poster | 2403.19115 | [
""
] | https://huggingface.co/papers/2403.19115 | 0 | 0 | 0 | 4 | https://aclanthology.org/2024.acl-long.735/ | [] | [
"zkcpku/CodeSymbolUnderstanding"
] | [] | 1 |
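HiRoPE's core move, per the abstract above, is to make rotary position embeddings hierarchical. Below is a minimal sketch under the assumption of a two-level hierarchy (token position within a function, plus function index), splitting the rotary frequency pairs between the two levels; the dimensions and the half/half split are illustrative, not the paper's exact formulation.

```python
import torch

def hierarchical_rope(x, token_pos, func_pos, base=10000.0):
    """Rotary embedding where the first half of the rotary pairs rotate with
    fine-grained token position and the second half with a coarser level
    (which function the token belongs to). x: (seq, dim), dim divisible by 4."""
    seq, dim = x.shape
    half = dim // 2                                  # number of (cos, sin) pairs
    inv_freq = base ** (-torch.arange(half) / half)
    pos = torch.stack([token_pos] * (half // 2) + [func_pos] * (half - half // 2), dim=1)
    angles = pos * inv_freq                          # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

x = torch.randn(6, 8)
token_pos = torch.tensor([0., 1., 2., 0., 1., 2.])   # position inside each function
func_pos = torch.tensor([0., 0., 0., 1., 1., 1.])    # which function the token is in
print(hierarchical_rope(x, token_pos, func_pos).shape)
```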
https://aclanthology.org/2024.acl-long.736.bib | @inproceedings{he-etal-2024-never,
title = "Never Lost in the Middle: Mastering Long-Context Question Answering with Position-Agnostic Decompositional Training",
author = "He, Junqing and
Pan, Kunhao and
Dong, Xiaoqun and
Song, Zhuoyang and
LiuYiBo, LiuYiBo and
Qianguosun, Qianguosun and
Liang, Yuxin and
Wang, Hao and
Zhang, Enming and
Zhang, Jiaxing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.736",
pages = "13628--13642",
abstract = "While large language models (LLMs) are equipped with longer text input capabilities than before, they are struggling to seek correct information in long contexts. The {``}lost in the middle{''} problem challenges most LLMs, referring to the dramatic decline in accuracy when correct information is located in the middle. To overcome this crucial issue, this paper proposes to enhance the information searching and reflection ability of LLMs in long contexts via specially designed tasks called Position-Agnostic Multi-step QA (PAM QA). Trained in this task, our model excels in focusing more precisely on the desired information. Experimental results show substantial improvement in Multi-doc QA and other benchmarks, superior to state-of-the-art models by 13.7{\%} absolute gain in shuffled settings, by 21.5{\%} in passage retrieval task. We release our model and code to promote related research in the community.",
}
| While large language models (LLMs) are equipped with longer text input capabilities than before, they struggle to locate correct information in long contexts. The {``}lost in the middle{''} problem challenges most LLMs, referring to the dramatic decline in accuracy when the correct information is located in the middle of the context. To overcome this crucial issue, this paper proposes to enhance the information searching and reflection ability of LLMs in long contexts via specially designed tasks called Position-Agnostic Multi-step QA (PAM QA). Trained on these tasks, our model excels at focusing more precisely on the desired information. Experimental results show substantial improvement on Multi-doc QA and other benchmarks, surpassing state-of-the-art models by a 13.7{\%} absolute gain in shuffled settings and by 21.5{\%} on the passage retrieval task. We release our model and code to promote related research in the community. | [
"He, Junqing",
"Pan, Kunhao",
"Dong, Xiaoqun",
"Song, Zhuoyang",
"LiuYiBo, LiuYiBo",
"Qianguosun, Qianguosun",
"Liang, Yuxin",
"Wang, Hao",
"Zhang, Enming",
"Zhang, Jiaxing"
] | Never Lost in the Middle: Mastering Long-Context Question Answering with Position-Agnostic Decompositional Training | acl-long.736 | Poster | 2311.09198 | [
"https://github.com/hejunqing/never-lost-in-the-middle"
] | https://huggingface.co/papers/2311.09198 | 3 | 3 | 0 | 11 | https://aclanthology.org/2024.acl-long.736/ | [
"IDEA-CCNL/Ziya-Reader-13B-v1.0",
"qihoo360/360Zhinao-7B-Chat-360K",
"qihoo360/360Zhinao-7B-Chat-360K-Int4",
"qihoo360/360Zhinao-7B-Base",
"qihoo360/360Zhinao-7B-Chat-32K-Int4",
"qihoo360/360Zhinao-7B-Chat-32K",
"qihoo360/360Zhinao-7B-Chat-4K-Int4",
"qihoo360/360Zhinao-7B-Chat-4K"
] | [] | [] | 1 |
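Position-agnostic training, as described in the abstract above, amounts to making sure the gold evidence can appear anywhere in the context rather than in a fixed slot. A minimal sketch of constructing such an example follows; the field names and prompt format are assumptions.

```python
import random

def build_position_agnostic_example(question, gold_doc, distractors, seed=None):
    """Place the gold document at a random position among distractors so the
    model cannot rely on the answer appearing first or last."""
    rng = random.Random(seed)
    docs = distractors[:]
    gold_idx = rng.randint(0, len(docs))
    docs.insert(gold_idx, gold_doc)
    context = "\n\n".join(f"[Doc {i + 1}] {d}" for i, d in enumerate(docs))
    return {"prompt": f"{context}\n\nQuestion: {question}", "gold_position": gold_idx + 1}

ex = build_position_agnostic_example("Who wrote Hamlet?",
                                     "Hamlet was written by William Shakespeare.",
                                     ["Doc about whales.", "Doc about Mars."], seed=0)
print(ex["gold_position"])
```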
https://aclanthology.org/2024.acl-long.737.bib | @inproceedings{zhang-etal-2024-codeagent,
title = "{C}ode{A}gent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges",
author = "Zhang, Kechi and
Li, Jia and
Li, Ge and
Shi, Xianjie and
Jin, Zhi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.737",
pages = "13643--13658",
abstract = "Large Language Models (LLMs) have shown promise in automated code generation but typically excel only in simpler tasks such as generating standalone code units. However, real-world software development often involves complex code repositories with complex dependencies and extensive documentation. To enable LLMs to handle these realworld repo-level code generation, we present CodeAgent, a novel LLM-based agent framework that employs external tools for effective repo-level code generation. CodeAgent integrates five programming tools, enabling interaction with software artifacts for information retrieval, code implementation, and code testing. We implement four agent strategies to optimize these tools{'} usage. To the best of our knowledge, CodeAgent is the first agent tool framework specifically for repo-level code generation. In order to measure the effectiveness of our method at the repository level, we have introduced a benchmark dataset CodAgentBench. The performance on this dataset shows a significant improvement brought by our method, with improvements of pass rate ranging from 2.0 to 15.8. Further tests on the HumanEval benchmark confirm CodeAgent{'}s adaptability and efficacy across various code generation tasks. Notably, CodeAgent outperforms commercial products like Github Copilot, showcasing superior accuracy and efficiency. These results demonstrate CodeAgent{'}s robust capabilities in code generation, highlighting its potential for real-world repo-level coding challenges.",
}
| Large Language Models (LLMs) have shown promise in automated code generation but typically excel only in simpler tasks such as generating standalone code units. However, real-world software development often involves complex code repositories with intricate dependencies and extensive documentation. To enable LLMs to handle these real-world repo-level code generation tasks, we present CodeAgent, a novel LLM-based agent framework that employs external tools for effective repo-level code generation. CodeAgent integrates five programming tools, enabling interaction with software artifacts for information retrieval, code implementation, and code testing. We implement four agent strategies to optimize these tools{'} usage. To the best of our knowledge, CodeAgent is the first agent tool framework specifically for repo-level code generation. In order to measure the effectiveness of our method at the repository level, we have introduced a benchmark dataset, CodeAgentBench. The performance on this dataset shows a significant improvement brought by our method, with gains in pass rate ranging from 2.0 to 15.8. Further tests on the HumanEval benchmark confirm CodeAgent{'}s adaptability and efficacy across various code generation tasks. Notably, CodeAgent outperforms commercial products like GitHub Copilot, showcasing superior accuracy and efficiency. These results demonstrate CodeAgent{'}s robust capabilities in code generation, highlighting its potential for real-world repo-level coding challenges. | [
"Zhang, Kechi",
"Li, Jia",
"Li, Ge",
"Shi, Xianjie",
"Jin, Zhi"
] | CodeAgent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges | acl-long.737 | Poster | 2401.07339 | [
""
] | https://huggingface.co/papers/2401.07339 | 0 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.737/ | [] | [] | [] | 1 |
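The CodeAgent abstract above describes an agent that interleaves LLM decisions with programming-tool calls. A minimal, library-free sketch of such a tool-dispatch loop follows; the tool names and the `llm` action format are hypothetical, not the paper's interface.

```python
def run_agent(llm, tools, task, max_turns=8):
    """Minimal tool-integrated coding agent loop. `llm` maps a transcript to
    either ("call", tool_name, argument) or ("finish", code); `tools` maps
    names (e.g. documentation search, code testing) to callables."""
    transcript = [f"TASK: {task}"]
    for _ in range(max_turns):
        action = llm(transcript)
        if action[0] == "finish":
            return action[1]
        _, name, arg = action
        observation = tools[name](arg) if name in tools else f"unknown tool {name}"
        transcript.append(f"CALL {name}({arg!r}) -> {observation}")
    return None  # gave up within the turn budget

# Toy stand-ins for illustration only.
tools = {"search_docs": lambda q: f"doc snippet for {q}"}
script = iter([("call", "search_docs", "csv reader"), ("finish", "import csv")])
print(run_agent(lambda t: next(script), tools, "read a CSV file"))
```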
https://aclanthology.org/2024.acl-long.738.bib | @inproceedings{chen-etal-2024-tree,
title = "When is Tree Search Useful for {LLM} Planning? It Depends on the Discriminator",
author = "Chen, Ziru and
White, Michael and
Mooney, Ray and
Payani, Ali and
Su, Yu and
Sun, Huan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.738",
pages = "13659--13678",
abstract = "In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method. We investigate the practical utility of two advanced planning methods, iterative correction and tree search. We present a comprehensive analysis of how discrimination accuracy affects the overall performance of agents when using these two methods or a simpler method, re-ranking. Experiments on two tasks, text-to-SQL parsing and mathematical reasoning, show that: (1) advanced planning methods demand discriminators with at least 90{\%} accuracy to achieve significant improvements over re-ranking; (2) current LLMs{'} discrimination abilities have not met the needs of advanced planning methods to achieve such improvements; (3) with LLM-based discriminators, advanced planning methods may not adequately balance accuracy and efficiency. For example, compared to the other two methods, tree search is at least 10{--}20 times slower but leads to negligible performance gains, which hinders its real-world applications.",
}
| In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method. We investigate the practical utility of two advanced planning methods, iterative correction and tree search. We present a comprehensive analysis of how discrimination accuracy affects the overall performance of agents when using these two methods or a simpler method, re-ranking. Experiments on two tasks, text-to-SQL parsing and mathematical reasoning, show that: (1) advanced planning methods demand discriminators with at least 90{\%} accuracy to achieve significant improvements over re-ranking; (2) current LLMs{'} discrimination abilities have not met the needs of advanced planning methods to achieve such improvements; (3) with LLM-based discriminators, advanced planning methods may not adequately balance accuracy and efficiency. For example, compared to the other two methods, tree search is at least 10{--}20 times slower but leads to negligible performance gains, which hinders its real-world applications. | [
"Chen, Ziru",
"White, Michael",
"Mooney, Ray",
"Payani, Ali",
"Su, Yu",
"Sun, Huan"
] | When is Tree Search Useful for LLM Planning? It Depends on the Discriminator | acl-long.738 | Poster | 2402.10890 | [
"https://github.com/osu-nlp-group/llm-planning-eval"
] | https://huggingface.co/papers/2402.10890 | 0 | 0 | 0 | 6 | https://aclanthology.org/2024.acl-long.738/ | [] | [] | [] | 1 |
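Re-ranking, the simple baseline the paper above compares against, just scores sampled candidates with a discriminator and keeps the best. The sketch below also simulates how often re-ranking recovers the truly best candidate as discriminator reliability varies, echoing the paper's roughly-90% threshold finding; the noise model (keep the true score with probability `accuracy`, else resample) is a toy assumption.

```python
import random

def rerank(candidates, score):
    """Pick the candidate the discriminator scores highest."""
    return max(candidates, key=score)

def simulate(accuracy: float, n_trials: int = 10000, n_cands: int = 5) -> float:
    """Fraction of trials where re-ranking with a noisy discriminator
    recovers the truly best of n_cands candidates."""
    rng = random.Random(0)
    hits = 0
    for _ in range(n_trials):
        true_quality = [rng.random() for _ in range(n_cands)]
        noisy = [q if rng.random() < accuracy else rng.random() for q in true_quality]
        best = max(range(n_cands), key=lambda i: true_quality[i])
        hits += rerank(range(n_cands), lambda i: noisy[i]) == best
    return hits / n_trials

for acc in (0.6, 0.8, 0.95):
    print(acc, simulate(acc))
```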
https://aclanthology.org/2024.acl-long.739.bib | @inproceedings{parmar-etal-2024-logicbench,
title = "{L}ogic{B}ench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models",
author = "Parmar, Mihir and
Patel, Nisarg and
Varshney, Neeraj and
Nakamura, Mutsumi and
Luo, Man and
Mashetty, Santosh and
Mitra, Arindam and
Baral, Chitta",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.739",
pages = "13679--13707",
abstract = "Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks. But, can they really {``}reason{''} over the natural language? This question has been receiving significant research attention and many reasoning skills such as commonsense, numerical, and qualitative have been studied. However, the crucial skill pertaining to {`}logical reasoning{'} has remained underexplored. Existing work investigating this reasoning ability of LLMs has focused only on a couple of inference rules (such as modus ponens and modus tollens) of propositional and first-order logic. Addressing the above limitation, we comprehensively evaluate the logical reasoning ability of LLMs on 25 different reasoning patterns spanning over propositional, first-order, and non-monotonic logics. To enable systematic evaluation, we introduce LogicBench, a natural language question-answering dataset focusing on the use of a single inference rule. We conduct detailed analysis with a range of LLMs such as GPT-4, ChatGPT, Gemini, Llama-2, and Mistral using chain-of-thought prompting. Experimental results show that existing LLMs do not fare well on LogicBench; especially, they struggle with instances involving complex reasoning and negations. Furthermore, they sometimes tend to prioritize parametric knowledge over contextual information and overlook the correct reasoning chain. We believe that our work and findings facilitate future research for evaluating and enhancing the logical reasoning ability of LLMs.",
}
| Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks. But, can they really {``}reason{''} over natural language? This question has been receiving significant research attention and many reasoning skills such as commonsense, numerical, and qualitative have been studied. However, the crucial skill pertaining to {`}logical reasoning{'} has remained underexplored. Existing work investigating this reasoning ability of LLMs has focused only on a couple of inference rules (such as modus ponens and modus tollens) of propositional and first-order logic. Addressing the above limitation, we comprehensively evaluate the logical reasoning ability of LLMs on 25 different reasoning patterns spanning propositional, first-order, and non-monotonic logics. To enable systematic evaluation, we introduce LogicBench, a natural language question-answering dataset focusing on the use of a single inference rule. We conduct detailed analysis with a range of LLMs such as GPT-4, ChatGPT, Gemini, Llama-2, and Mistral using chain-of-thought prompting. Experimental results show that existing LLMs do not fare well on LogicBench; in particular, they struggle with instances involving complex reasoning and negations. Furthermore, they sometimes tend to prioritize parametric knowledge over contextual information and overlook the correct reasoning chain. We believe that our work and findings facilitate future research for evaluating and enhancing the logical reasoning ability of LLMs. | [
"Parmar, Mihir",
"Patel, Nisarg",
"Varshney, Neeraj",
"Nakamura, Mutsumi",
"Luo, Man",
"Mashetty, Santosh",
"Mitra, Arindam",
"Baral, Chitta"
] | LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models | acl-long.739 | Poster | 2404.15522 | [
"https://github.com/mihir3009/logicbench"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.739/ | [] | [] | [] | 0 |
|
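As a concrete picture of a single-inference-rule QA item of the kind LogicBench contains, here is a minimal generator for a modus ponens instance; the template wording and field names are illustrative assumptions, not taken from the dataset.

```python
def modus_ponens_instance(p: str, q: str):
    """Build one natural-language QA item testing modus ponens:
    from 'if P then Q' and 'P', conclude 'Q'."""
    context = f"If {p}, then {q}. {p.capitalize()}."
    return {"context": context,
            "question": f"Does it follow that {q}?",
            "answer": "yes"}

print(modus_ponens_instance("it rains", "the ground gets wet"))
```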
https://aclanthology.org/2024.acl-long.740.bib | @inproceedings{guo-etal-2024-meta,
title = "Meta-Tuning {LLM}s to Leverage Lexical Knowledge for Generalizable Language Style Understanding",
author = "Guo, Ruohao and
Xu, Wei and
Ritter, Alan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.740",
pages = "13708--13731",
abstract = "Language style is often used by writers to convey their intentions, identities, and mastery of language. In this paper, we show that current large language models struggle to capture some language styles without fine-tuning. To address this challenge, we investigate whether LLMs can be meta-trained based on representative lexicons to recognize new styles they have not been fine-tuned on. Experiments on 13 established style classification tasks, as well as 63 novel tasks generated using LLMs, demonstrate that meta-training with style lexicons consistently improves zero-shot transfer across styles. We release the code and data at https://github.com/octaviaguo/Style-LLM.",
}
| Language style is often used by writers to convey their intentions, identities, and mastery of language. In this paper, we show that current large language models struggle to capture some language styles without fine-tuning. To address this challenge, we investigate whether LLMs can be meta-trained based on representative lexicons to recognize new styles they have not been fine-tuned on. Experiments on 13 established style classification tasks, as well as 63 novel tasks generated using LLMs, demonstrate that meta-training with style lexicons consistently improves zero-shot transfer across styles. We release the code and data at https://github.com/octaviaguo/Style-LLM. | [
"Guo, Ruohao",
"Xu, Wei",
"Ritter, Alan"
] | Meta-Tuning LLMs to Leverage Lexical Knowledge for Generalizable Language Style Understanding | acl-long.740 | Poster | 2305.14592 | [
"https://github.com/octaviaguo/style-llm"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.740/ | [] | [] | [] | 0 |
|
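Meta-training on representative lexicons, per the abstract above, means a style is presented to the model through example words, so an unseen style can be queried the same way at test time. A minimal prompt-construction sketch follows; the prompt format is an assumption.

```python
def style_prompt(text: str, style: str, lexicon: list[str]) -> str:
    """Represent a style by its name plus representative lexicon words, so the
    same template works for styles never seen during fine-tuning."""
    words = ", ".join(lexicon[:10])
    return (f"Style '{style}' is characterized by words such as: {words}.\n"
            f"Text: {text}\n"
            f"Question: Is the text written in style '{style}'? Answer yes or no.")

print(style_prompt("We cordially request your presence.", "formal",
                   ["cordially", "hereby", "request", "sincerely"]))
```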
https://aclanthology.org/2024.acl-long.741.bib | @inproceedings{dou-etal-2024-reducing,
title = "Reducing Privacy Risks in Online Self-Disclosures with Language Models",
author = "Dou, Yao and
Krsek, Isadora and
Naous, Tarek and
Kabra, Anubha and
Das, Sauvik and
Ritter, Alan and
Xu, Wei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.741",
pages = "13732--13754",
abstract = "Self-disclosure, while being common and rewarding in social media interaction, also poses privacy risks. In this paper, we take the initiative to protect the user-side privacy associated with online self-disclosure through detection and abstraction. We develop a taxonomy of 19 self-disclosure categories and curate a large corpus consisting of 4.8K annotated disclosure spans. We then fine-tune a language model for detection, achieving over 65{\%} partial span F$_1$. We further conduct an HCI user study, with 82{\%} of participants viewing the model positively, highlighting its real-world applicability. Motivated by the user feedback, we introduce the task of self-disclosure abstraction, which is rephrasing disclosures into less specific terms while preserving their utility, e.g., {``}Im 16F{''} to {``}I{'}m a teenage girl{''}. We explore various fine-tuning strategies, and our best model can generate diverse abstractions that moderately reduce privacy risks while maintaining high utility according to human evaluation. To help users in deciding which disclosures to abstract, we present a task of rating their importance for context understanding. Our fine-tuned model achieves 80{\%} accuracy, on-par with GPT-3.5. Given safety and privacy considerations, we will only release our corpus and models to researcher who agree to the ethical guidelines outlined in Ethics Statement.",
}
| Self-disclosure, while being common and rewarding in social media interaction, also poses privacy risks. In this paper, we take the initiative to protect the user-side privacy associated with online self-disclosure through detection and abstraction. We develop a taxonomy of 19 self-disclosure categories and curate a large corpus consisting of 4.8K annotated disclosure spans. We then fine-tune a language model for detection, achieving over 65{\%} partial span F$_1$. We further conduct an HCI user study, with 82{\%} of participants viewing the model positively, highlighting its real-world applicability. Motivated by the user feedback, we introduce the task of self-disclosure abstraction, which is rephrasing disclosures into less specific terms while preserving their utility, e.g., {``}Im 16F{''} to {``}I{'}m a teenage girl{''}. We explore various fine-tuning strategies, and our best model can generate diverse abstractions that moderately reduce privacy risks while maintaining high utility according to human evaluation. To help users decide which disclosures to abstract, we present a task of rating their importance for context understanding. Our fine-tuned model achieves 80{\%} accuracy, on par with GPT-3.5. Given safety and privacy considerations, we will only release our corpus and models to researchers who agree to the ethical guidelines outlined in the Ethics Statement. | [
"Dou, Yao",
"Krsek, Isadora",
"Naous, Tarek",
"Kabra, Anubha",
"Das, Sauvik",
"Ritter, Alan",
"Xu, Wei"
] | Reducing Privacy Risks in Online Self-Disclosures with Language Models | acl-long.741 | Poster | 2311.09538 | [
""
] | https://huggingface.co/papers/2311.09538 | 1 | 0 | 0 | 7 | https://aclanthology.org/2024.acl-long.741/ | [
"douy/Llama-2-7B-lora-instruction-ft-abstraction-three-span",
"douy/deberta-v3-large-self-disclosure-detection"
] | [] | [] | 1 |
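Self-disclosure abstraction, as defined in the abstract above, rewrites a detected span into less specific terms while leaving the rest of the post intact. A minimal span-replacement sketch follows, with the detector and rephraser outputs supplied by hand rather than by the fine-tuned models described above.

```python
def abstract_disclosure(text: str, span: str, abstraction: str) -> str:
    """Replace a detected disclosure span with a less specific rephrasing,
    keeping the surrounding sentence unchanged."""
    assert span in text, "detector output should be a span of the input"
    return text.replace(span, abstraction, 1)

post = "Im 16F and I just moved to Austin for school."
print(abstract_disclosure(post, "Im 16F", "I'm a teenage girl"))
```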
https://aclanthology.org/2024.acl-long.742.bib | @inproceedings{lin-etal-2024-navigating,
title = "Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models",
author = "Lin, Zihao and
Beigi, Mohammad and
Li, Hongxuan and
Zhou, Yufan and
Zhang, Yuxiang and
Wang, Qifan and
Yin, Wenpeng and
Huang, Lifu",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.742",
pages = "13755--13772",
abstract = "Memory Editing (ME) has emerged as an efficient method to modify erroneous facts or inject new facts into Large Language Models (LLMs). Two mainstream ME methods exist: parameter-modifying ME and parameter-preserving ME (integrating extra modules while preserving original parameters). Regrettably, previous studies on ME evaluation have two critical limitations: (i) evaluating LLMs with single edit only, neglecting the need for continuous editing, and (ii) evaluations focusing solely on basic factual triples, overlooking broader LLM capabilities like logical reasoning and reading understanding. This study addresses these limitations with contributions threefold: (i) We explore how ME affects a wide range of fundamental capabilities of LLMs under sequential editing. Experimental results reveal an intriguing phenomenon: Most parameter-modifying ME consistently degrade performance across all tasks after a few sequential edits. In contrast, parameter-preserving ME effectively maintains LLMs{'} fundamental capabilities but struggles to accurately recall edited knowledge presented in a different format. (ii) We extend our evaluation to different editing settings, such as layers to edit, model size, instruction tuning, etc. Experimental findings indicate several strategies that can potentially mitigate the adverse effects of ME. (iii) We further explain why parameter-modifying damages LLMs from three dimensions: parameter changes after editing, language modeling capability, and the in-context learning capability. Our in-depth study advocates more careful use of ME in real-world scenarios.",
}
| Memory Editing (ME) has emerged as an efficient method to modify erroneous facts or inject new facts into Large Language Models (LLMs). Two mainstream ME methods exist: parameter-modifying ME and parameter-preserving ME (integrating extra modules while preserving original parameters). Regrettably, previous studies on ME evaluation have two critical limitations: (i) evaluating LLMs with a single edit only, neglecting the need for continuous editing, and (ii) evaluations focusing solely on basic factual triples, overlooking broader LLM capabilities like logical reasoning and reading comprehension. This study addresses these limitations with three contributions: (i) We explore how ME affects a wide range of fundamental capabilities of LLMs under sequential editing. Experimental results reveal an intriguing phenomenon: Most parameter-modifying ME methods consistently degrade performance across all tasks after a few sequential edits. In contrast, parameter-preserving ME effectively maintains LLMs{'} fundamental capabilities but struggles to accurately recall edited knowledge presented in a different format. (ii) We extend our evaluation to different editing settings, such as layers to edit, model size, instruction tuning, etc. Experimental findings indicate several strategies that can potentially mitigate the adverse effects of ME. (iii) We further explain why parameter-modifying ME damages LLMs from three dimensions: parameter changes after editing, language modeling capability, and the in-context learning capability. Our in-depth study advocates more careful use of ME in real-world scenarios. | [
"Lin, Zihao",
"Beigi, Mohammad",
"Li, Hongxuan",
"Zhou, Yufan",
"Zhang, Yuxiang",
"Wang, Qifan",
"Yin, Wenpeng",
"Huang, Lifu"
] | Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models | acl-long.742 | Poster | 2402.11122 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.742/ | [] | [] | [] | 0 |
|
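The evaluation protocol above, unlike single-edit studies, applies edits sequentially and re-measures fundamental capabilities along the way. A minimal sketch of that loop follows; `editor` and the benchmark callables are stand-ins for a real ME method (e.g., a ROME/MEMIT-style update) and an eval harness.

```python
def sequential_editing_eval(model, editor, edits, benchmarks, every=10):
    """Apply memory edits one after another and periodically re-run a suite
    of capability benchmarks, recording scores over the edit sequence."""
    history = []
    for i, edit in enumerate(edits, 1):
        model = editor(model, edit)                  # one ME update
        if i % every == 0:
            scores = {name: bench(model) for name, bench in benchmarks.items()}
            history.append((i, scores))
    return history

# Toy stand-ins: the 'model' is a dict of facts, the benchmark counts them.
hist = sequential_editing_eval({}, lambda m, e: {**m, e[0]: e[1]},
                               [(f"fact{i}", i) for i in range(30)],
                               {"n_facts": len}, every=10)
print(hist)
```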
https://aclanthology.org/2024.acl-long.743.bib | @inproceedings{patil-etal-2024-refinesumm,
title = "{REFINESUMM}: Self-Refining {MLLM} for Generating a Multimodal Summarization Dataset",
author = "Patil, Vaidehi and
Ribeiro, Leonardo and
Liu, Mengwen and
Bansal, Mohit and
Dreyer, Markus",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.743",
pages = "13773--13786",
abstract = "Multimodal Large Language Models (MLLMs) excel at synthesizing key information from diverse sources. However, generating accurate and faithful multimodal summaries is challenging, primarily due to the lack of appropriate multimodal datasets for fine-tuning that meaningfully integrate textual and visual modalities. To address this gap, we present a new dataset designed specifically for image-text multimodal summarization, harnessing the capabilities of state-of-the-art MLLMs. We generate summaries from Wikipedia sections and corresponding images and evaluate them across text-based, visual and multimodal dimensions, employing reference-free metrics. To refine the dataset, we: (1) Filter the MLLM-generated summaries by training a critic model on human annotations and using its predictions to remove low-quality summaries; (2) Fine-tune the MLLM with the filtered high-quality summaries; (3) Use the fine-tuned model in turn to regenerate the summaries. This self-refinement process significantly improves summary quality, as measured by human judgements and automatic multimodal metrics, resulting in a valuable dataset for multimodal summarization research. The dataset is publicly available at https://github.com/amazon-science/refinesumm.",
}
| Multimodal Large Language Models (MLLMs) excel at synthesizing key information from diverse sources. However, generating accurate and faithful multimodal summaries is challenging, primarily due to the lack of appropriate multimodal datasets for fine-tuning that meaningfully integrate textual and visual modalities. To address this gap, we present a new dataset designed specifically for image-text multimodal summarization, harnessing the capabilities of state-of-the-art MLLMs. We generate summaries from Wikipedia sections and corresponding images and evaluate them across text-based, visual and multimodal dimensions, employing reference-free metrics. To refine the dataset, we: (1) Filter the MLLM-generated summaries by training a critic model on human annotations and using its predictions to remove low-quality summaries; (2) Fine-tune the MLLM with the filtered high-quality summaries; (3) Use the fine-tuned model in turn to regenerate the summaries. This self-refinement process significantly improves summary quality, as measured by human judgements and automatic multimodal metrics, resulting in a valuable dataset for multimodal summarization research. The dataset is publicly available at https://github.com/amazon-science/refinesumm. | [
"Patil, Vaidehi",
"Ribeiro, Leonardo",
"Liu, Mengwen",
"Bansal, Mohit",
"Dreyer, Markus"
] | REFINESUMM: Self-Refining MLLM for Generating a Multimodal Summarization Dataset | acl-long.743 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.743/ | [] | [] | [] | 0 |
||
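The filter, fine-tune, and regenerate cycle described in the REFINESUMM abstract above can be pictured as a short loop over generator, critic, and trainer callables. A minimal sketch with all three as stand-ins follows; none of the names are the authors' API.

```python
def self_refine(mllm_generate, critic, finetune, sections, rounds=1, threshold=0.5):
    """Generate summaries, keep only those the critic scores highly,
    fine-tune on the keepers, then regenerate with the improved model."""
    model = mllm_generate
    for _ in range(rounds):
        summaries = [(s, model(s)) for s in sections]
        keep = [(s, y) for s, y in summaries if critic(s, y) >= threshold]
        model = finetune(model, keep)                # returns an improved generator
    return [model(s) for s in sections]

# Toy stand-ins for illustration.
out = self_refine(lambda s: s[:20], lambda s, y: 1.0, lambda m, d: m,
                  ["A long wikipedia section about optics ..."])
print(out)
```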
https://aclanthology.org/2024.acl-long.744.bib | @inproceedings{alzahrani-etal-2024-benchmarks,
title = "When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards",
author = "Alzahrani, Norah and
Alyahya, Hisham and
Alnumay, Yazeed and
AlRashed, Sultan and
Alsubaie, Shaykhah and
Almushayqih, Yousef and
Mirza, Faisal and
Alotaibi, Nouf and
Al-Twairesh, Nora and
Alowisheq, Areeb and
Bari, M Saiful and
Khan, Haidar",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.744",
pages = "13787--13805",
abstract = "Large Language Model (LLM) leaderboards based on benchmark rankings are regularly used to guide practitioners in model selection. Often, the published leaderboard rankings are taken at face value {---} we show this is a (potentially costly) mistake. Under existing leaderboards, the relative performance of LLMs is highly sensitive to (often minute) details. We show that for popular multiple-choice question benchmarks (e.g., MMLU), minor perturbations to the benchmark, such as changing the order of choices or the method of answer selection, result in changes in rankings up to 8 positions. We explain this phenomenon by conducting systematic experiments over three broad categories of benchmark perturbations and identifying the sources of this behavior. Our analysis results in several best-practice recommendations, including the advantage of a *hybrid* scoring method for answer selection. Our study highlights the dangers of relying on simple benchmark evaluations and charts the path for more robust evaluation schemes on the existing benchmarks. The code for this paper is available at [https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness](https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness).",
}
| Large Language Model (LLM) leaderboards based on benchmark rankings are regularly used to guide practitioners in model selection. Often, the published leaderboard rankings are taken at face value {---} we show this is a (potentially costly) mistake. Under existing leaderboards, the relative performance of LLMs is highly sensitive to (often minute) details. We show that for popular multiple-choice question benchmarks (e.g., MMLU), minor perturbations to the benchmark, such as changing the order of choices or the method of answer selection, result in changes in rankings up to 8 positions. We explain this phenomenon by conducting systematic experiments over three broad categories of benchmark perturbations and identifying the sources of this behavior. Our analysis results in several best-practice recommendations, including the advantage of a *hybrid* scoring method for answer selection. Our study highlights the dangers of relying on simple benchmark evaluations and charts the path for more robust evaluation schemes on the existing benchmarks. The code for this paper is available at [https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness](https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness). | [
"Alzahrani, Norah",
"Alyahya, Hisham",
"Alnumay, Yazeed",
"AlRashed, Sultan",
"Alsubaie, Shaykhah",
"Almushayqih, Yousef",
"Mirza, Faisal",
"Alotaibi, Nouf",
"Al-Twairesh, Nora",
"Alowisheq, Areeb",
"Bari, M Saiful",
"Khan, Haidar"
] | When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards | acl-long.744 | Poster | 2402.01781 | [
"https://github.com/national-center-for-ai-saudi-arabia/lm-evaluation-harness"
] | https://huggingface.co/papers/2402.01781 | 0 | 0 | 0 | 12 | https://aclanthology.org/2024.acl-long.744/ | [] | [] | [] | 1 |
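One of the minor perturbations studied in the paper above is simply shuffling the order of MCQ answer choices. A minimal sketch that shuffles choices while tracking where the gold answer moved; the item schema is an assumption, and comparing leaderboard ranks before and after such perturbations is what exposes their sensitivity.

```python
import random

def perturb_choice_order(item: dict, seed: int = 0) -> dict:
    """Shuffle an MCQ item's choices and remap the gold-answer index."""
    rng = random.Random(seed)
    idx = list(range(len(item["choices"])))
    rng.shuffle(idx)
    return {"question": item["question"],
            "choices": [item["choices"][i] for i in idx],
            "answer": idx.index(item["answer"])}

item = {"question": "2+2=?", "choices": ["3", "4", "5", "22"], "answer": 1}
print(perturb_choice_order(item))
```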
https://aclanthology.org/2024.acl-long.745.bib | @inproceedings{hashemi-etal-2024-llm,
title = "{LLM}-Rubric: A Multidimensional, Calibrated Approach to Automated Evaluation of Natural Language Texts",
author = "Hashemi, Helia and
Eisner, Jason and
Rosset, Corby and
Van Durme, Benjamin and
Kedzie, Chris",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.745",
pages = "13806--13834",
abstract = "This paper introduces a framework for the automated evaluation of natural language texts. A manually constructed rubric describes how to assess multiple dimensions of interest. To evaluate a text, a large language model (LLM) is prompted with each rubric question and produces a distribution over potential responses. The LLM predictions often fail to agree well with human judges{---}indeed, the humans do not fully agree with one another. However, the multiple LLM distributions can be {\_}combined{\_} to {\_}predict{\_} each human judge{'}s annotations on all questions, including a summary question that assesses overall quality or relevance. LLM-Rubric accomplishes this by training a small feed-forward neural network that includes both judge-specific and judge-independent parameters. When evaluating dialogue systems in a human-AI information-seeking task, we find that LLM-Rubric with 9 questions (assessing dimensions such as naturalness, conciseness, and citation quality) predicts human judges{'} assessment of overall user satisfaction, on a scale of 1{--}4, with RMS error {\textless} 0.5, a 2{\mbox{$\times$}} improvement over the uncalibrated baseline.",
}
| This paper introduces a framework for the automated evaluation of natural language texts. A manually constructed rubric describes how to assess multiple dimensions of interest. To evaluate a text, a large language model (LLM) is prompted with each rubric question and produces a distribution over potential responses. The LLM predictions often fail to agree well with human judges{---}indeed, the humans do not fully agree with one another. However, the multiple LLM distributions can be {\_}combined{\_} to {\_}predict{\_} each human judge{'}s annotations on all questions, including a summary question that assesses overall quality or relevance. LLM-Rubric accomplishes this by training a small feed-forward neural network that includes both judge-specific and judge-independent parameters. When evaluating dialogue systems in a human-AI information-seeking task, we find that LLM-Rubric with 9 questions (assessing dimensions such as naturalness, conciseness, and citation quality) predicts human judges{'} assessment of overall user satisfaction, on a scale of 1{--}4, with RMS error {\textless} 0.5, a 2{\mbox{$\times$}} improvement over the uncalibrated baseline. | [
"Hashemi, Helia",
"Eisner, Jason",
"Rosset, Corby",
"Van Durme, Benjamin",
"Kedzie, Chris"
] | LLM-Rubric: A Multidimensional, Calibrated Approach to Automated Evaluation of Natural Language Texts | acl-long.745 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.745/ | [] | [] | [] | 0 |
||
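The calibration step described in the LLM-Rubric abstract above feeds the LLM's per-question response distributions, together with a judge identity, into a small feed-forward network that predicts that judge's overall rating. A minimal PyTorch sketch follows; all sizes and the concatenation layout are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RubricCalibrator(nn.Module):
    """Combine per-rubric-question LLM distributions with a learned judge
    embedding to predict that judge's 1-4 overall rating."""
    def __init__(self, n_questions=9, n_options=4, n_judges=12, hidden=64):
        super().__init__()
        self.judge_emb = nn.Embedding(n_judges, 16)        # judge-specific params
        self.net = nn.Sequential(                          # judge-independent params
            nn.Linear(n_questions * n_options + 16, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 4),                          # logits over ratings 1-4
        )

    def forward(self, llm_dists, judge_id):
        x = torch.cat([llm_dists.flatten(1), self.judge_emb(judge_id)], dim=1)
        return self.net(x)

model = RubricCalibrator()
dists = torch.softmax(torch.randn(2, 9, 4), dim=-1)   # per-question LLM distributions
print(model(dists, torch.tensor([0, 3])).shape)        # -> torch.Size([2, 4])
```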
https://aclanthology.org/2024.acl-long.746.bib | @inproceedings{zhu-frank-2024-lieder,
title = "{LIEDER}: Linguistically-Informed Evaluation for Discourse Entity Recognition",
author = "Zhu, Xiaomeng and
Frank, Robert",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.746",
pages = "13835--13850",
abstract = "Discourse Entity (DE) recognition is the task of identifying novel and known entities introduced within a text. While previous work has found that large language models have basic, if imperfect, DE recognition abilities (Schuster and Linzen, 2022), it remains largely unassessed which of the fundamental semantic properties that govern the introduction and subsequent reference to DEs they have knowledge of. We propose the Linguistically-Informed Evaluation for Discourse Entity Recognition (LIEDER) dataset that allows for a detailed examination of language models{'} knowledge of four crucial semantic properties: existence, uniqueness, plurality, and novelty. We find evidence that state-of-the-art large language models exhibit sensitivity to all of these properties except novelty, which demonstrates that they have yet to reach human-level language understanding abilities.",
}
| Discourse Entity (DE) recognition is the task of identifying novel and known entities introduced within a text. While previous work has found that large language models have basic, if imperfect, DE recognition abilities (Schuster and Linzen, 2022), it remains largely unassessed which of the fundamental semantic properties that govern the introduction and subsequent reference to DEs they have knowledge of. We propose the Linguistically-Informed Evaluation for Discourse Entity Recognition (LIEDER) dataset that allows for a detailed examination of language models{'} knowledge of four crucial semantic properties: existence, uniqueness, plurality, and novelty. We find evidence that state-of-the-art large language models exhibit sensitivity to all of these properties except novelty, which demonstrates that they have yet to reach human-level language understanding abilities. | [
"Zhu, Xiaomeng",
"Frank, Robert"
] | LIEDER: Linguistically-Informed Evaluation for Discourse Entity Recognition | acl-long.746 | Poster | 2403.06301 | [
"https://github.com/xiaomeng-zhu/lieder"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.746/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.747.bib | @inproceedings{maharana-etal-2024-evaluating,
title = "Evaluating Very Long-Term Conversational Memory of {LLM} Agents",
author = "Maharana, Adyasha and
Lee, Dong-Ho and
Tulyakov, Sergey and
Bansal, Mohit and
Barbieri, Francesco and
Fang, Yuwei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.747",
pages = "13851--13870",
abstract = "Existing works on long-term open-domain dialogues focus on evaluating model responses within contexts spanning no more than five chat sessions. Despite advancements in long-context large language models (LLMs) and retrieval augmented generation (RAG) techniques, their efficacy in very long-term dialogues remains unexplored. To address this research gap, we introduce a machine-human pipeline to generate high-quality, very long-term dialogues by leveraging LLM-based agent architectures and grounding their dialogues on personas and temporal event graphs. Moreover, we equip each agent with the capability of sharing and reacting to images. The generated conversations are verified and edited by human annotators for long-range consistency and grounding to the event graphs. Using this pipeline, we collect LoCoMo, a dataset of very long-term conversations, each encompassing 600 turns and 16K tokens on avg., over up to 32 sessions. Based on LoCoMo, we present a comprehensive evaluation benchmark to measure long-term memory in models, encompassing question answering, event summarization, and multi-modal dialogue generation tasks. Our experimental results indicate that LLMs exhibit challenges in understanding lengthy conversations and comprehending long-range temporal and causal dynamics within dialogues. Employing strategies like long-context LLMs or RAG can offer improvements but these models still substantially lag behind human performance.",
}
| Existing works on long-term open-domain dialogues focus on evaluating model responses within contexts spanning no more than five chat sessions. Despite advancements in long-context large language models (LLMs) and retrieval augmented generation (RAG) techniques, their efficacy in very long-term dialogues remains unexplored. To address this research gap, we introduce a machine-human pipeline to generate high-quality, very long-term dialogues by leveraging LLM-based agent architectures and grounding their dialogues on personas and temporal event graphs. Moreover, we equip each agent with the capability of sharing and reacting to images. The generated conversations are verified and edited by human annotators for long-range consistency and grounding to the event graphs. Using this pipeline, we collect LoCoMo, a dataset of very long-term conversations, each encompassing 600 turns and 16K tokens on avg., over up to 32 sessions. Based on LoCoMo, we present a comprehensive evaluation benchmark to measure long-term memory in models, encompassing question answering, event summarization, and multi-modal dialogue generation tasks. Our experimental results indicate that LLMs exhibit challenges in understanding lengthy conversations and comprehending long-range temporal and causal dynamics within dialogues. Employing strategies like long-context LLMs or RAG can offer improvements but these models still substantially lag behind human performance. | [
"Maharana, Adyasha",
"Lee, Dong-Ho",
"Tulyakov, Sergey",
"Bansal, Mohit",
"Barbieri, Francesco",
"Fang, Yuwei"
] | Evaluating Very Long-Term Conversational Memory of LLM Agents | acl-long.747 | Poster | 2402.17753 | [
""
] | https://huggingface.co/papers/2402.17753 | 2 | 17 | 2 | 6 | https://aclanthology.org/2024.acl-long.747/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.748.bib | @inproceedings{zhang-etal-2024-prototypical,
title = "Prototypical Reward Network for Data-Efficient Model Alignment",
author = "Zhang, Jinghan and
Wang, Xiting and
Jin, Yiqiao and
Chen, Changyu and
Zhang, Xinhao and
Liu, Kunpeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.748",
pages = "13871--13884",
abstract = "The reward model for Reinforcement Learning from Human Feedback (RLHF) has proven effective in fine-tuning Large Language Models (LLMs). This paper explores enhancing RLHF with Prototypical Networks to improve reward models. We propose a framework utilizing Prototypical Networks to enhance reward models under limited human feedback, enabling more stable and reliable structural learning from fewer samples. This enhances the model{'}s adaptability and accuracy in interpreting human preferences. Our experiments demonstrate that this approach significantly improves the performance of reward models and LLMs in human feedback tasks, surpassing traditional methods, especially in data-limited scenarios.",
}
| The reward model for Reinforcement Learning from Human Feedback (RLHF) has proven effective in fine-tuning Large Language Models (LLMs). This paper explores enhancing RLHF with Prototypical Networks to improve reward models. We propose a framework utilizing Prototypical Networks to enhance reward models under limited human feedback, enabling more stable and reliable structural learning from fewer samples. This enhances the model{'}s adaptability and accuracy in interpreting human preferences. Our experiments demonstrate that this approach significantly improves the performance of reward models and LLMs in human feedback tasks, surpassing traditional methods, especially in data-limited scenarios. | [
"Zhang, Jinghan",
"Wang, Xiting",
"Jin, Yiqiao",
"Chen, Changyu",
"Zhang, Xinhao",
"Liu, Kunpeng"
] | Prototypical Reward Network for Data-Efficient Model Alignment | acl-long.748 | Oral | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.748/ | [] | [] | [] | 0 |
||
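A prototypical network scores an input by its distance to class prototypes (means of a few labeled embeddings); applied to reward modeling, human-preferred and rejected responses supply the prototypes. A minimal sketch of that scoring rule follows, implementing the standard prototypical-network recipe rather than the paper's exact architecture.

```python
import torch

def prototype_reward(candidate_emb, preferred_protos, rejected_protos):
    """Score response embeddings by closeness to preferred prototypes minus
    closeness to rejected ones; higher means more human-preferred."""
    d_pos = torch.cdist(candidate_emb, preferred_protos).min(dim=1).values
    d_neg = torch.cdist(candidate_emb, rejected_protos).min(dim=1).values
    return d_neg - d_pos

cand = torch.randn(3, 8)                     # 3 response embeddings, dim 8
print(prototype_reward(cand, torch.randn(4, 8), torch.randn(4, 8)))
```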
https://aclanthology.org/2024.acl-long.749.bib | @inproceedings{zheng-etal-2024-neo,
title = "{NEO}-{BENCH}: Evaluating Robustness of Large Language Models with Neologisms",
author = "Zheng, Jonathan and
Ritter, Alan and
Xu, Wei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.749",
pages = "13885--13906",
abstract = "The performance of Large Language Models (LLMs) degrades from the temporal drift between data used for model training and newer text seen during inference. One understudied avenue of language change causing data drift is the emergence of neologisms {--} new word forms {--} over time. We create a diverse resource of recent English neologisms by using several popular collection methods. We analyze temporal drift using neologisms by comparing sentences containing new words with near-identical sentences that replace neologisms with existing substitute words. Model performance is nearly halved in machine translation when a single neologism is introduced in a sentence. Motivated by these results, we construct a benchmark to evaluate LLMs{'} ability to generalize to neologisms with various natural language understanding tasks and model perplexity. Models with later knowledge cutoff dates yield lower perplexities and perform better in downstream tasks. LLMs are also affected differently based on the linguistic origins of words, indicating that neologisms are complex for static LLMs to address. We will release our benchmark and code for reproducing our experiments.",
}
| The performance of Large Language Models (LLMs) degrades from the temporal drift between data used for model training and newer text seen during inference. One understudied avenue of language change causing data drift is the emergence of neologisms {--} new word forms {--} over time. We create a diverse resource of recent English neologisms by using several popular collection methods. We analyze temporal drift using neologisms by comparing sentences containing new words with near-identical sentences that replace neologisms with existing substitute words. Model performance is nearly halved in machine translation when a single neologism is introduced in a sentence. Motivated by these results, we construct a benchmark to evaluate LLMs{'} ability to generalize to neologisms with various natural language understanding tasks and model perplexity. Models with later knowledge cutoff dates yield lower perplexities and perform better in downstream tasks. LLMs are also affected differently based on the linguistic origins of words, indicating that neologisms are complex for static LLMs to address. We will release our benchmark and code for reproducing our experiments. | [
"Zheng, Jonathan",
"Ritter, Alan",
"Xu, Wei"
] | NEO-BENCH: Evaluating Robustness of Large Language Models with Neologisms | acl-long.749 | Poster | 2402.12261 | [
"https://github.com/jonathanqzheng/neo-bench"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.749/ | [] | [] | [] | 0 |
|
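The paired-sentence probe described in the NEO-BENCH abstract above compares model perplexity on a sentence containing a neologism against a near-identical sentence using an established substitute word. A minimal, model-agnostic sketch follows; `logprob_fn` is a stand-in for real LM scoring, and the whitespace token count is a simplifying assumption.

```python
import math

def perplexity_gap(logprob_fn, neologism_sent: str, substitute_sent: str):
    """Return ppl(neologism sentence) - ppl(substitute sentence); a positive
    gap suggests the model finds the neologism harder to process."""
    def ppl(sent):
        n_tokens = max(len(sent.split()), 1)
        return math.exp(-logprob_fn(sent) / n_tokens)
    return ppl(neologism_sent) - ppl(substitute_sent)

# Toy stand-in scorer; a real probe would sum an LM's token log-likelihoods.
gap = perplexity_gap(lambda s: -2.0 * len(s.split()),
                     "She got the jab yesterday.", "She got the vaccine yesterday.")
print(gap)
```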
https://aclanthology.org/2024.acl-long.750.bib | @inproceedings{hanneman-etal-2024-impacts,
title = "Impacts of Misspelled Queries on Translation and Product Search",
author = "Hanneman, Greg and
Monaikul, Natawut and
Nakatani, Taichi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.750",
pages = "13907--13920",
abstract = "Machine translation is used in e-commerce to translate second-language queries into the primary language of the store, to be matched by the search system against the product catalog. However, many queries contain spelling mistakes. We first present an analysis of the spelling-robustness of a population of MT systems, quantifying how spelling variations affect MT output, the list of returned products, and ultimately user behavior. We then present two sets of practical experiments illustrating how spelling-robustness may be specifically improved. For MT, reducing the number of BPE operations significantly improves spelling-robustness in six language pairs. In end-to-end e-commerce, the inclusion of a dedicated spelling correction model, and the augmentation of that model{'}s training data with language-relevant phenomena, each improve robustness and consistency of search results.",
}
| Machine translation is used in e-commerce to translate second-language queries into the primary language of the store, to be matched by the search system against the product catalog. However, many queries contain spelling mistakes. We first present an analysis of the spelling-robustness of a population of MT systems, quantifying how spelling variations affect MT output, the list of returned products, and ultimately user behavior. We then present two sets of practical experiments illustrating how spelling-robustness may be specifically improved. For MT, reducing the number of BPE operations significantly improves spelling-robustness in six language pairs. In end-to-end e-commerce, the inclusion of a dedicated spelling correction model, and the augmentation of that model{'}s training data with language-relevant phenomena, each improve robustness and consistency of search results. | [
"Hanneman, Greg",
"Monaikul, Natawut",
"Nakatani, Taichi"
] | Impacts of Misspelled Queries on Translation and Product Search | acl-long.750 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.750/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.751.bib | @inproceedings{sel-etal-2024-skin,
title = "Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in {LLM}s",
author = "Sel, Bilgehan and
Shanmugasundaram, Priya and
Kachuee, Mohammad and
Zhou, Kun and
Jia, Ruoxi and
Jin, Ming",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.751",
pages = "13921--13959",
abstract = "Large Language Models (LLMs) have shown remarkable capabilities in tasks such as summarization, arithmetic reasoning, and question answering. However, they encounter significant challenges in the domain of moral reasoning and ethical decision-making, especially in complex scenarios with multiple stakeholders. This paper introduces the Skin-in-the-Game (SKIG) framework, aimed at enhancing moral reasoning in LLMs by exploring decisions{'} consequences from multiple stakeholder perspectives. The core components of the framework consist of simulating accountability for decisions, conducting empathy exercises on different stakeholders, and evaluating the risks associated with the impacts of potential actions. We study SKIG{'}s performance across various moral reasoning benchmarks with proprietary and open-source LLMs, and investigate its crucial components through extensive ablation analyses. Our framework exhibits marked improvements in performance compared to baselines across different language models and benchmarks.",
}
| Large Language Models (LLMs) have shown remarkable capabilities in tasks such as summarization, arithmetic reasoning, and question answering. However, they encounter significant challenges in the domain of moral reasoning and ethical decision-making, especially in complex scenarios with multiple stakeholders. This paper introduces the Skin-in-the-Game (SKIG) framework, aimed at enhancing moral reasoning in LLMs by exploring decisions{'} consequences from multiple stakeholder perspectives. The core components of the framework consist of simulating accountability for decisions, conducting empathy exercises on different stakeholders, and evaluating the risks associated with the impacts of potential actions. We study SKIG{'}s performance across various moral reasoning benchmarks with proprietary and open-source LLMs, and investigate its crucial components through extensive ablation analyses. Our framework exhibits marked improvements in performance compared to baselines across different language models and benchmarks. | [
"Sel, Bilgehan",
"Shanmugasundaram, Priya",
"Kachuee, Mohammad",
"Zhou, Kun",
"Jia, Ruoxi",
"Jin, Ming"
] | Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in LLMs | acl-long.751 | Poster | 2405.12933 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.751/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.752.bib | @inproceedings{zhang-etal-2024-mersa,
title = "The {MERSA} Dataset and a Transformer-Based Approach for Speech Emotion Recognition",
author = "Zhang, Enshi and
Trujillo, Rafael and
Poellabauer, Christian",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.752",
pages = "13960--13970",
abstract = "Research in the field of speech emotion recognition (SER) relies on the availability of comprehensive datasets to make it possible to design accurate emotion detection models. This study introduces the Multimodal Emotion Recognition and Sentiment Analysis (MERSA) dataset, which includes both natural and scripted speech recordings, transcribed text, physiological data, and self-reported emotional surveys from 150 participants collected over a two-week period. This work also presents a novel emotion recognition approach that uses a transformer-based model, integrating pre-trained wav2vec 2.0 and BERT for feature extractions and additional LSTM layers to learn hidden representations from fused representations from speech and text. Our model predicts emotions on dimensions of arousal, valence, and dominance. We trained and evaluated the model on the MSP-PODCAST dataset and achieved competitive results from the best-performing model regarding the concordance correlation coefficient (CCC). Further, this paper demonstrates the effectiveness of this model through cross-domain evaluations on both IEMOCAP and MERSA datasets.",
}
| Research in the field of speech emotion recognition (SER) relies on the availability of comprehensive datasets to make it possible to design accurate emotion detection models. This study introduces the Multimodal Emotion Recognition and Sentiment Analysis (MERSA) dataset, which includes both natural and scripted speech recordings, transcribed text, physiological data, and self-reported emotional surveys from 150 participants collected over a two-week period. This work also presents a novel emotion recognition approach that uses a transformer-based model, integrating pre-trained wav2vec 2.0 and BERT for feature extraction and additional LSTM layers to learn hidden representations from the fused speech and text representations. Our model predicts emotions on dimensions of arousal, valence, and dominance. We trained and evaluated the model on the MSP-PODCAST dataset, and our best-performing model achieved competitive results in terms of the concordance correlation coefficient (CCC). Further, this paper demonstrates the effectiveness of this model through cross-domain evaluations on both IEMOCAP and MERSA datasets. | [
"Zhang, Enshi",
"Trujillo, Rafael",
"Poellabauer, Christian"
] | The MERSA Dataset and a Transformer-Based Approach for Speech Emotion Recognition | acl-long.752 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.752/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.753.bib | @inproceedings{ramos-etal-2024-transparent,
title = "Transparent and Scrutable Recommendations Using Natural Language User Profiles",
author = "Ramos, Jerome and
Rahmani, Hossein A. and
Wang, Xi and
Fu, Xiao and
Lipani, Aldo",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.753",
pages = "13971--13984",
abstract = "Recent state-of-the-art recommender systems predominantly rely on either implicit or explicit feedback from users to suggest new items. While effective in recommending novel options, many recommender systems often use uninterpretable embeddings to represent user preferences. This lack of transparency not only limits user understanding of why certain items are suggested but also reduces the user{'}s ability to scrutinize and modify their preferences, thereby affecting their ability to receive a list of preferred recommendations. Given the recent advances in Large Language Models (LLMs), we investigate how a properly crafted prompt can be used to summarize a user{'}s preferences from past reviews and recommend items based only on language-based preferences. In particular, we study how LLMs can be prompted to generate a natural language (NL) user profile that holistically describe a user{'}s preferences. These NL profiles can then be leveraged to fine-tune a LLM using only NL profiles to make transparent and scrutable recommendations. Furthermore, we validate the scrutability of our user profile-based recommender by investigating the impact on recommendation changes after editing NL user profiles. According to our evaluations of the model{'}s rating prediction performance on two benchmarking rating prediction datasets, we observe that this novel approach maintains a performance level on par with established recommender systems in a warm-start setting. With a systematic analysis into the effect of updating user profiles and system prompts, we show the advantage of our approach in easier adjustment of user preferences and a greater autonomy over users{'} received recommendations.",
}
| Recent state-of-the-art recommender systems predominantly rely on either implicit or explicit feedback from users to suggest new items. While effective in recommending novel options, many recommender systems often use uninterpretable embeddings to represent user preferences. This lack of transparency not only limits user understanding of why certain items are suggested but also reduces the user{'}s ability to scrutinize and modify their preferences, thereby affecting their ability to receive a list of preferred recommendations. Given the recent advances in Large Language Models (LLMs), we investigate how a properly crafted prompt can be used to summarize a user{'}s preferences from past reviews and recommend items based only on language-based preferences. In particular, we study how LLMs can be prompted to generate a natural language (NL) user profile that holistically describes a user{'}s preferences. These NL profiles can then be leveraged to fine-tune an LLM using only NL profiles to make transparent and scrutable recommendations. Furthermore, we validate the scrutability of our user profile-based recommender by investigating the impact on recommendation changes after editing NL user profiles. According to our evaluations of the model{'}s rating prediction performance on two benchmarking rating prediction datasets, we observe that this novel approach maintains a performance level on par with established recommender systems in a warm-start setting. With a systematic analysis of the effect of updating user profiles and system prompts, we show the advantage of our approach in easier adjustment of user preferences and greater autonomy over users{'} received recommendations. | [
"Ramos, Jerome",
"Rahmani, Hossein A.",
"Wang, Xi",
"Fu, Xiao",
"Lipani, Aldo"
] | Transparent and Scrutable Recommendations Using Natural Language User Profiles | acl-long.753 | Poster | 2402.05810 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.753/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.754.bib | @inproceedings{schroeder-etal-2024-fora,
title = "Fora: A corpus and framework for the study of facilitated dialogue",
author = "Schroeder, Hope and
Roy, Deb and
Kabbara, Jad",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.754",
pages = "13985--14001",
abstract = "Facilitated dialogue is increasingly popular as a method of civic engagement and as a method for gathering social insight, but resources for its study are scant. We present Fora, a unique collection of annotated facilitated dialogues. We compile 262 facilitated conversations that were hosted with partner organizations seeking to engage their members and surface insights regarding issues like education, elections, and public health, primarily through the sharing of personal experience. Alongside this corpus of 39,911 speaker turns, we present a framework for the analysis of facilitated dialogue. We taxonomize key personal sharing behaviors and facilitation strategies in the corpus, annotate a 25{\%} sample (10,000+ speaker turns) of the data accordingly, and evaluate and establish baselines on a number of tasks essential to the identification of these phenomena in dialogue. We describe the data, and relate facilitator behavior to turn-taking and participant sharing. We outline how this research can inform future work in understanding and improving facilitated dialogue, parsing spoken conversation, and improving the behavior of dialogue agents.",
}
| Facilitated dialogue is increasingly popular as a method of civic engagement and as a method for gathering social insight, but resources for its study are scant. We present Fora, a unique collection of annotated facilitated dialogues. We compile 262 facilitated conversations that were hosted with partner organizations seeking to engage their members and surface insights regarding issues like education, elections, and public health, primarily through the sharing of personal experience. Alongside this corpus of 39,911 speaker turns, we present a framework for the analysis of facilitated dialogue. We taxonomize key personal sharing behaviors and facilitation strategies in the corpus, annotate a 25{\%} sample (10,000+ speaker turns) of the data accordingly, and evaluate and establish baselines on a number of tasks essential to the identification of these phenomena in dialogue. We describe the data, and relate facilitator behavior to turn-taking and participant sharing. We outline how this research can inform future work in understanding and improving facilitated dialogue, parsing spoken conversation, and improving the behavior of dialogue agents. | [
"Schroeder, Hope",
"Roy, Deb",
"Kabbara, Jad"
] | Fora: A corpus and framework for the study of facilitated dialogue | acl-long.754 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.754/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.755.bib | @inproceedings{yu-etal-2024-explanation,
title = "Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning",
author = "Yu, Yue and
Shen, Jiaming and
Liu, Tianqi and
Qin, Zhen and
Yan, Jing Nathan and
Liu, Jialu and
Zhang, Chao and
Bendersky, Michael",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.755",
pages = "14002--14024",
abstract = "Large language models (LLMs) have shown remarkable capabilities in various natural language understanding tasks with a few demonstration examples via in-context learning. Common strategies to boost such {``}in-context{''} learning ability are to ensemble multiple model decoded results and require the model to generate an explanation along with the prediction. However, these models often treat different class predictions equally and neglect the potential discrepancy between the explanations and predictions. To fully unleash the power of explanations, we propose EASE, an \textit{Explanation-Aware Soft Ensemble} framework to empower in-context learning with LLMs. We design two techniques, explanation-guided ensemble, and soft probability aggregation, to mitigate the effect of unreliable explanations and improve the consistency between explanations and final predictions. Experiments on seven natural language understanding tasks and four varying-size LLMs demonstrate the effectiveness of our proposed framework.",
}
| Large language models (LLMs) have shown remarkable capabilities in various natural language understanding tasks with a few demonstration examples via in-context learning. Common strategies to boost such {``}in-context{''} learning ability are to ensemble multiple model-decoded results and require the model to generate an explanation along with the prediction. However, these models often treat different class predictions equally and neglect the potential discrepancy between the explanations and predictions. To fully unleash the power of explanations, we propose EASE, an \textit{Explanation-Aware Soft Ensemble} framework to empower in-context learning with LLMs. We design two techniques, explanation-guided ensemble and soft probability aggregation, to mitigate the effect of unreliable explanations and improve the consistency between explanations and final predictions. Experiments on seven natural language understanding tasks and four varying-size LLMs demonstrate the effectiveness of our proposed framework. | [
"Yu, Yue",
"Shen, Jiaming",
"Liu, Tianqi",
"Qin, Zhen",
"Yan, Jing Nathan",
"Liu, Jialu",
"Zhang, Chao",
"Bendersky, Michael"
] | Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning | acl-long.755 | Poster | 2311.07099 | [
""
] | https://huggingface.co/papers/2311.07099 | 3 | 1 | 0 | 8 | https://aclanthology.org/2024.acl-long.755/ | [] | [] | [] | 1 |
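The EASE row above names two components, explanation-guided ensembling and soft probability aggregation, without giving their exact form. Below is a minimal sketch of one plausible reading, in which sampled (explanation, prediction) pairs are weighted by an explanation-reliability score and full class distributions are averaged instead of majority-voted; the scoring function and weighting scheme are assumptions, not the paper's definitions.

```python
import numpy as np

def ease_aggregate(samples):
    """Soft probability aggregation sketch for an EASE-style ensemble.

    samples: list of (class_probs, explanation_score) pairs, one per
    decoded (explanation, prediction) sample. How explanation_score is
    computed is an assumption here; the abstract only names the component.
    """
    probs = np.stack([p for p, _ in samples])              # (n, num_classes)
    weights = np.array([s for _, s in samples], dtype=float)
    weights /= weights.sum()                               # normalize reliability scores
    # Average the class distributions, weighted by explanation reliability,
    # rather than taking a hard majority vote over predictions.
    return int((weights[:, None] * probs).sum(axis=0).argmax())
```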
https://aclanthology.org/2024.acl-long.756.bib | @inproceedings{wang-etal-2024-best,
title = "What is the Best Way for {C}hat{GPT} to Translate Poetry?",
author = "Wang, Shanshan and
Wong, Derek and
Yao, Jingming and
Chao, Lidia",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.756",
pages = "14025--14043",
abstract = "Machine translation (MT) has historically faced significant challenges when applied to literary works, particularly in the domain of poetry translation. The advent of Large Language Models such as ChatGPT holds potential for innovation in this field. This study examines ChatGPT{'}s capabilities in English-Chinese poetry translation tasks, utilizing targeted prompts and small sample scenarios to ascertain optimal performance. Despite promising outcomes, our analysis reveals persistent issues in the translations generated by ChatGPT that warrant attention. To address these shortcomings, we propose an Explanation-Assisted Poetry Machine Translation (EAPMT) method, which leverages monolingual poetry explanation as a guiding information for the translation process. Furthermore, we refine existing evaluation criteria to better suit the nuances of modern poetry translation. We engaged a panel of professional poets for assessments, complemented evaluations by using GPT-4. The results from both human and machine evaluations demonstrate that our EAPMT method outperforms traditional translation methods of ChatGPT and the existing online systems. This paper validates the efficacy of our method and contributes a novel perspective to machine-assisted literary translation.",
}
| Machine translation (MT) has historically faced significant challenges when applied to literary works, particularly in the domain of poetry translation. The advent of Large Language Models such as ChatGPT holds potential for innovation in this field. This study examines ChatGPT{'}s capabilities in English-Chinese poetry translation tasks, utilizing targeted prompts and small sample scenarios to ascertain optimal performance. Despite promising outcomes, our analysis reveals persistent issues in the translations generated by ChatGPT that warrant attention. To address these shortcomings, we propose an Explanation-Assisted Poetry Machine Translation (EAPMT) method, which leverages monolingual poetry explanation as guiding information for the translation process. Furthermore, we refine existing evaluation criteria to better suit the nuances of modern poetry translation. We engaged a panel of professional poets for assessments, complemented by evaluations using GPT-4. The results from both human and machine evaluations demonstrate that our EAPMT method outperforms traditional translation methods of ChatGPT and the existing online systems. This paper validates the efficacy of our method and contributes a novel perspective to machine-assisted literary translation. | [
"Wang, Shanshan",
"Wong, Derek",
"Yao, Jingming",
"Chao, Lidia"
] | What is the Best Way for ChatGPT to Translate Poetry? | acl-long.756 | Poster | 2406.03450 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.756/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.757.bib | @inproceedings{maini-etal-2024-rephrasing,
title = "Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling",
author = "Maini, Pratyush and
Seto, Skyler and
Bai, Richard and
Grangier, David and
Zhang, Yizhe and
Jaitly, Navdeep",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.757",
pages = "14044--14072",
abstract = "Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased. Current scaling laws show that learning from such data requires an abundance of both compute and data, which grows with the size of the model being trained. This is infeasible both because of the large compute costs and duration associated with pre-training, and the impending scarcity of high-quality data on the web. In this work, we propose Web Rephrase Augmented Pre-training (WRAP) that uses an off-the-shelf instruction-tuned model prompted to paraphrase documents on the web in specific styles such as {``}like Wikipedia{''} or in {``}question-answer format{''} to jointly pre-train LLMs on real and synthetic rephrases. First, we show that using WRAP on the C4 dataset, which is naturally noisy, speeds up pre-training by {\textasciitilde}3x. At the same pre-training compute budget, it improves perplexity by more than 50{\%} on average across different subsets of the Pile, and improves zero-shot question answer accuracy across 13 tasks by more than 2{\%}. Second, we investigate the impact of the re-phrasing style on the performance of the model, offering insights into how the composition of the training data can impact the performance of LLMs in OOD settings. Our gains are attributed to the fact that re-phrased synthetic data has higher utility than just real data because it (i) incorporates style diversity that closely reflects downstream evaluation style, and (ii) has higher {`}quality{'} than web-scraped data.",
}
| Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased. Current scaling laws show that learning from such data requires an abundance of both compute and data, which grows with the size of the model being trained. This is infeasible both because of the large compute costs and duration associated with pre-training, and the impending scarcity of high-quality data on the web. In this work, we propose Web Rephrase Augmented Pre-training (WRAP) that uses an off-the-shelf instruction-tuned model prompted to paraphrase documents on the web in specific styles such as {``}like Wikipedia{''} or in {``}question-answer format{''} to jointly pre-train LLMs on real and synthetic rephrases. First, we show that using WRAP on the C4 dataset, which is naturally noisy, speeds up pre-training by {\textasciitilde}3x. At the same pre-training compute budget, it improves perplexity by more than 50{\%} on average across different subsets of the Pile, and improves zero-shot question answer accuracy across 13 tasks by more than 2{\%}. Second, we investigate the impact of the re-phrasing style on the performance of the model, offering insights into how the composition of the training data can impact the performance of LLMs in OOD settings. Our gains are attributed to the fact that re-phrased synthetic data has higher utility than just real data because it (i) incorporates style diversity that closely reflects downstream evaluation style, and (ii) has higher {`}quality{'} than web-scraped data. | [
"Maini, Pratyush",
"Seto, Skyler",
"Bai, Richard",
"Grangier, David",
"Zhang, Yizhe",
"Jaitly, Navdeep"
] | Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling | acl-long.757 | Poster | 2401.16380 | [
""
] | https://huggingface.co/papers/2401.16380 | 4 | 46 | 7 | 6 | https://aclanthology.org/2024.acl-long.757/ | [] | [] | [] | 1 |
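The WRAP row above describes its recipe concretely enough to sketch: an off-the-shelf instruction-tuned model paraphrases each web document in a target style ("like Wikipedia" or question-answer format), and pre-training then mixes real and synthetic text. In the sketch below, the model checkpoint, prompt wording, and generation settings are illustrative assumptions, not the authors' configuration.

```python
from transformers import pipeline

# Assumed rephraser checkpoint; the paper's choice of instruction-tuned
# model and its decoding settings may differ.
rephraser = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

STYLE_PROMPTS = {
    "wikipedia": "Rewrite the following text in a high-quality, encyclopedic style:\n\n{doc}\n\nRewritten text:",
    "qa": "Convert the following text into question-answer pairs:\n\n{doc}\n\nQ&A:",
}

def rephrase(doc: str, style: str) -> str:
    prompt = STYLE_PROMPTS[style].format(doc=doc)
    out = rephraser(prompt, max_new_tokens=512, do_sample=False)
    # Strip the prompt prefix so only the synthetic rephrase remains.
    return out[0]["generated_text"][len(prompt):].strip()

def build_pretraining_mix(web_docs, styles=("wikipedia", "qa")):
    # Joint pre-training data: each real document plus its rephrases.
    mix = list(web_docs)
    for doc in web_docs:
        mix.extend(rephrase(doc, s) for s in styles)
    return mix
```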
https://aclanthology.org/2024.acl-long.758.bib | @inproceedings{wu-etal-2024-decot,
title = "{D}e{C}o{T}: Debiasing Chain-of-Thought for Knowledge-Intensive Tasks in Large Language Models via Causal Intervention",
author = "Wu, Junda and
Yu, Tong and
Chen, Xiang and
Wang, Haoliang and
Rossi, Ryan and
Kim, Sungchul and
Rao, Anup and
McAuley, Julian",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.758",
pages = "14073--14087",
abstract = "Large language models (LLMs) often require task-relevant knowledge to augment their internal knowledge through prompts. However, simply injecting external knowledge into prompts does not guarantee that LLMs can identify and use relevant information in the prompts to conduct chain-of-thought reasoning, especially when the LLM{'}s internal knowledge is derived from biased information on the pretraining data. In this paper, we propose a novel causal view to formally explain the internal knowledge bias of LLMs via a Structural Causal Model (SCM). We review the chain-of-thought (CoT) prompting from a causal perspective and discover that the biased information from pretrained models can impair LLMs{'} reasoning abilities. When the CoT reasoning paths are misled by irrelevant information from prompts and are logically incorrect, simply editing factual information is insufficient to reach the correct answer. To estimate the confounding effect on CoT reasoning in LLMs, we use external knowledge as an instrumental variable. We further introduce CoT as a mediator to conduct front-door adjustment and generate logically correct CoTs where the spurious correlation between LLMs{'} pretrained knowledge and task queries is reduced. With extensive experiments, we validate that our approach enables more accurate CoT reasoning and enhances LLM generation on knowledge-intensive tasks.",
}
| Large language models (LLMs) often require task-relevant knowledge to augment their internal knowledge through prompts. However, simply injecting external knowledge into prompts does not guarantee that LLMs can identify and use relevant information in the prompts to conduct chain-of-thought reasoning, especially when the LLM{'}s internal knowledge is derived from biased information in the pretraining data. In this paper, we propose a novel causal view to formally explain the internal knowledge bias of LLMs via a Structural Causal Model (SCM). We review chain-of-thought (CoT) prompting from a causal perspective and discover that the biased information from pretrained models can impair LLMs{'} reasoning abilities. When the CoT reasoning paths are misled by irrelevant information from prompts and are logically incorrect, simply editing factual information is insufficient to reach the correct answer. To estimate the confounding effect on CoT reasoning in LLMs, we use external knowledge as an instrumental variable. We further introduce CoT as a mediator to conduct front-door adjustment and generate logically correct CoTs where the spurious correlation between LLMs{'} pretrained knowledge and task queries is reduced. With extensive experiments, we validate that our approach enables more accurate CoT reasoning and enhances LLM generation on knowledge-intensive tasks. | [
"Wu, Junda",
"Yu, Tong",
"Chen, Xiang",
"Wang, Haoliang",
"Rossi, Ryan",
"Kim, Sungchul",
"Rao, Anup",
"McAuley, Julian"
] | DeCoT: Debiasing Chain-of-Thought for Knowledge-Intensive Tasks in Large Language Models via Causal Intervention | acl-long.758 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.758/ | [] | [] | [] | 0 |
||
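The DeCoT row above builds on the standard front-door adjustment with the chain-of-thought as mediator; for reference, the textbook form of that adjustment is below (the mapping of query Q, chain-of-thought mediator c, and answer A onto the paper's SCM variables is an assumption):

```latex
P(A \mid \mathrm{do}(Q)) \;=\; \sum_{c} P(c \mid Q) \sum_{q'} P(A \mid c, q')\, P(q')
```

The inner sum averages the answer distribution over queries q' drawn independently of the observed query, which is what blocks the confounding path through the model's pretrained knowledge.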
https://aclanthology.org/2024.acl-long.759.bib | @inproceedings{hu-etal-2024-representation,
title = "Representation Learning with Conditional Information Flow Maximization",
author = "Hu, Dou and
Wei, Lingwei and
Zhou, Wei and
Hu, Songlin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.759",
pages = "14088--14103",
abstract = "This paper proposes an information-theoretic representation learning framework, named conditional information flow maximization, to extract noise-invariant sufficient representations for the input data and target task. It promotes the learned representations have good feature uniformity and sufficient predictive ability, which can enhance the generalization of pre-trained language models (PLMs) for the target task. Firstly, an information flow maximization principle is proposed to learn more sufficient representations for the input and target by simultaneously maximizing both input-representation and representation-label mutual information. Unlike the information bottleneck, we handle the input-representation information in an opposite way to avoid the over-compression issue of latent representations. Besides, to mitigate the negative effect of potential redundant features from the input, we design a conditional information minimization principle to eliminate negative redundant features while preserve noise-invariant features. Experiments on 13 language understanding benchmarks demonstrate that our method effectively improves the performance of PLMs for classification and regression. Extensive experiments show that the learned representations are more sufficient, robust and transferable.",
}
| This paper proposes an information-theoretic representation learning framework, named conditional information flow maximization, to extract noise-invariant sufficient representations for the input data and target task. It encourages the learned representations to have good feature uniformity and sufficient predictive ability, which can enhance the generalization of pre-trained language models (PLMs) for the target task. Firstly, an information flow maximization principle is proposed to learn more sufficient representations for the input and target by simultaneously maximizing both input-representation and representation-label mutual information. Unlike the information bottleneck, we handle the input-representation information in an opposite way to avoid the over-compression issue of latent representations. Besides, to mitigate the negative effect of potential redundant features from the input, we design a conditional information minimization principle to eliminate negative redundant features while preserving noise-invariant features. Experiments on 13 language understanding benchmarks demonstrate that our method effectively improves the performance of PLMs for classification and regression. Extensive experiments show that the learned representations are more sufficient, robust and transferable. | [
"Hu, Dou",
"Wei, Lingwei",
"Zhou, Wei",
"Hu, Songlin"
] | Representation Learning with Conditional Information Flow Maximization | acl-long.759 | Poster | 2406.05510 | [
"https://github.com/zerohd4869/CIFM"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.759/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.760.bib | @inproceedings{felkner-etal-2024-gpt,
title = "{GPT} is Not an Annotator: The Necessity of Human Annotation in Fairness Benchmark Construction",
author = "Felkner, Virginia and
Thompson, Jennifer and
May, Jonathan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.760",
pages = "14104--14115",
abstract = "Social biases in LLMs are usually measured via bias benchmark datasets. Current benchmarks have limitations in scope, grounding, quality, and human effort required. Previous work has shown success with a community-sourced, rather than crowd-sourced, approach to benchmark development. However, this work still required considerable effort from annotators with relevant lived experience. This paper explores whether an LLM (specifically, GPT-3.5-Turbo) can assist with the task of developing a bias benchmark dataset from responses to an open-ended community survey. We also extend the previous work to a new community and set of biases: the Jewish community and antisemitism. Our analysis shows that GPT-3.5-Turbo has poor performance on this annotation task and produces unacceptable quality issues in its output. Thus, we conclude that GPT-3.5-Turbo is not an appropriate substitute for human annotation in sensitive tasks related to social biases, and that its use actually negates many of the benefits of community-sourcing bias benchmarks.",
}
| Social biases in LLMs are usually measured via bias benchmark datasets. Current benchmarks have limitations in scope, grounding, quality, and human effort required. Previous work has shown success with a community-sourced, rather than crowd-sourced, approach to benchmark development. However, this work still required considerable effort from annotators with relevant lived experience. This paper explores whether an LLM (specifically, GPT-3.5-Turbo) can assist with the task of developing a bias benchmark dataset from responses to an open-ended community survey. We also extend the previous work to a new community and set of biases: the Jewish community and antisemitism. Our analysis shows that GPT-3.5-Turbo has poor performance on this annotation task and produces unacceptable quality issues in its output. Thus, we conclude that GPT-3.5-Turbo is not an appropriate substitute for human annotation in sensitive tasks related to social biases, and that its use actually negates many of the benefits of community-sourcing bias benchmarks. | [
"Felkner, Virginia",
"Thompson, Jennifer",
"May, Jonathan"
] | GPT is Not an Annotator: The Necessity of Human Annotation in Fairness Benchmark Construction | acl-long.760 | Poster | 2405.15760 | [
""
] | https://huggingface.co/papers/2405.15760 | 2 | 1 | 0 | 3 | https://aclanthology.org/2024.acl-long.760/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.761.bib | @inproceedings{riddell-etal-2024-quantifying,
title = "Quantifying Contamination in Evaluating Code Generation Capabilities of Language Models",
author = "Riddell, Martin and
Ni, Ansong and
Cohan, Arman",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.761",
pages = "14116--14137",
abstract = "While large language models have achieved remarkable performance on various code generation benchmarks, there have been growing concerns regarding potential contamination of these benchmarks as they may be leaked into pretraining and finetuning data. While recent work has investigated contamination in natural language generation and understanding tasks, there has been less extensive research into how data contamination impacts the evaluation of code generation, which is critical for understanding the robustness and reliability of LLMs in programming contexts. In this work, we perform a comprehensive study of data contamination of popular code generation benchmarks, and precisely quantify their overlap with pretraining corpus through both surface-level and semantic-level matching. In our experiments, we show that there are substantial overlap between popular code generation benchmarks and open training corpus, and models perform significantly better on the subset of the benchmarks where similar solutions are seen during training. We also conduct extensive analysis on the factors that affect model memorization and generalization, such as model size, problem difficulty, and question length. We release all resulting files from our matching pipeline for future research.",
}
| While large language models have achieved remarkable performance on various code generation benchmarks, there have been growing concerns regarding potential contamination of these benchmarks as they may be leaked into pretraining and finetuning data. While recent work has investigated contamination in natural language generation and understanding tasks, there has been less extensive research into how data contamination impacts the evaluation of code generation, which is critical for understanding the robustness and reliability of LLMs in programming contexts. In this work, we perform a comprehensive study of data contamination of popular code generation benchmarks, and precisely quantify their overlap with the pretraining corpus through both surface-level and semantic-level matching. In our experiments, we show that there is substantial overlap between popular code generation benchmarks and open training corpora, and models perform significantly better on the subset of the benchmarks where similar solutions are seen during training. We also conduct extensive analysis on the factors that affect model memorization and generalization, such as model size, problem difficulty, and question length. We release all resulting files from our matching pipeline for future research. | [
"Riddell, Martin",
"Ni, Ansong",
"Cohan, Arman"
] | Quantifying Contamination in Evaluating Code Generation Capabilities of Language Models | acl-long.761 | Poster | 2403.04811 | [
"https://github.com/yale-nlp/code-llm-contamination"
] | https://huggingface.co/papers/2403.04811 | 0 | 0 | 0 | 3 | https://aclanthology.org/2024.acl-long.761/ | [] | [] | [] | 1 |
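The contamination row above measures surface-level overlap between benchmark solutions and the pretraining corpus; below is a minimal sketch of one standard way to do that with verbatim n-gram matching. The choice of n, the whitespace tokenization, and the precomputed corpus n-gram set are assumptions, not the paper's exact pipeline.

```python
def surface_overlap(solution: str, corpus_ngrams: set, n: int = 10) -> float:
    """Fraction of the solution's token n-grams found verbatim in the corpus.

    corpus_ngrams is assumed to be a precomputed set of token n-grams
    extracted from the open pretraining corpus.
    """
    toks = solution.split()
    grams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    if not grams:
        return 0.0
    return sum(g in corpus_ngrams for g in grams) / len(grams)
```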
https://aclanthology.org/2024.acl-long.762.bib | @inproceedings{bhardwaj-etal-2024-language,
title = "Language Models are {H}omer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic",
author = "Bhardwaj, Rishabh and
Do, Duc Anh and
Poria, Soujanya",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.762",
pages = "14138--14149",
abstract = "We propose RESTA to perform LLM realignment towards safety, which gets compromised due to downstream task fine-tuning. RESTA stands for REstoring Safety through Task Arithmetic. At its core, it involves a simple arithmetic addition of a safety vector to the weights of the compromised model. We demonstrate the effectiveness of RESTA in both parameter-efficient and full fine-tuning, covering a wide range of downstream tasks, including instruction following in Chinese, English, and Hindi, as well as problem-solving capabilities in Code and Math. We also showcase the generalizability of RESTA on three existing safety evaluation benchmarks and a multilingual benchmark dataset proposed as a part of this work, consisting of 550 harmful questions covering 11 categories, each with 5 sub-categories of harm. Overall, RESTA decreases the harmfulness of the compromised model from 18.6{\%} to 5.1{\%} and from 9.2{\%} to 1.5{\%} in parameter-efficient and full fine-tuning, respectively, while maintaining most of the model{'}s performance on the task. We release the source codes at: https://github.com/declare-lab/resta.",
}
| We propose RESTA to perform LLM realignment towards safety, which gets compromised due to downstream task fine-tuning. RESTA stands for REstoring Safety through Task Arithmetic. At its core, it involves a simple arithmetic addition of a safety vector to the weights of the compromised model. We demonstrate the effectiveness of RESTA in both parameter-efficient and full fine-tuning, covering a wide range of downstream tasks, including instruction following in Chinese, English, and Hindi, as well as problem-solving capabilities in Code and Math. We also showcase the generalizability of RESTA on three existing safety evaluation benchmarks and a multilingual benchmark dataset proposed as a part of this work, consisting of 550 harmful questions covering 11 categories, each with 5 sub-categories of harm. Overall, RESTA decreases the harmfulness of the compromised model from 18.6{\%} to 5.1{\%} and from 9.2{\%} to 1.5{\%} in parameter-efficient and full fine-tuning, respectively, while maintaining most of the model{'}s performance on the task. We release the source codes at: https://github.com/declare-lab/resta. | [
"Bhardwaj, Rishabh",
"Do, Duc Anh",
"Poria, Soujanya"
] | Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | acl-long.762 | Poster | 2402.11746 | [
"https://github.com/declare-lab/resta"
] | https://huggingface.co/papers/2402.11746 | 2 | 2 | 0 | 3 | https://aclanthology.org/2024.acl-long.762/ | [
"declare-lab/starling-7B",
"sunatte/txt2sql"
] | [
"declare-lab/HarmfulQA",
"declare-lab/CategoricalHarmfulQA",
"walledai/CatHarmfulQA",
"d-llm/HarmfulQA"
] | [
"Justinrune/LLaMA-Factory",
"spacemonkAI87/declare-lab-starling-7B",
"smarttang/blingsec"
] | 1 |
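The RESTA row above states the core operation outright: add a safety vector to the weights of the compromised model. Below is a minimal sketch over state dicts; constructing the safety vector as an aligned-minus-unaligned weight delta, and the scaling coefficient, follow the usual task-arithmetic convention and are assumptions beyond what the abstract says.

```python
import torch

def safety_vector(aligned_sd: dict, unaligned_sd: dict) -> dict:
    # Assumed task-arithmetic construction: the weight delta between a
    # safety-aligned checkpoint and its unaligned counterpart.
    return {k: aligned_sd[k] - unaligned_sd[k] for k in aligned_sd}

@torch.no_grad()
def resta_realign(compromised_sd: dict, safety_vec: dict, scale: float = 1.0) -> dict:
    # The operation named in the abstract: simple arithmetic addition of the
    # safety vector to the fine-tuned (compromised) model's weights.
    return {k: w + scale * safety_vec[k] for k, w in compromised_sd.items()}
```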
https://aclanthology.org/2024.acl-long.763.bib | @inproceedings{spangher-etal-2024-tracking,
title = "Tracking the Newsworthiness of Public Documents",
author = "Spangher, Alexander and
Tumgoren, Serdar and
Welsh, Ben and
Peng, Nanyun and
Ferrara, Emilio and
May, Jonathan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.763",
pages = "14150--14168",
abstract = "Journalists regularly make decisions on whether or not to report stories, based on {``}news values{''}. In this work, we wish to explicitly model these decisions to explore {\_}when{\_} and {\_}why{\_} certain stories get press attention. This is challenging because very few labelled links between source documents and news articles exist and language use between corpora is very different. We address this problem by implementing a novel {\_}probabilistic relational modeling{\_} framework, which we show is a low-annotation linking methodology that outperforms other, more state-of-the-art retrieval-based baselines. Next, we define a new task: {\_}{\_}newsworthiness prediction{\_}{\_}, to predict if a policy item will get covered. We focus on news coverage of local public policy in the San Francisco Bay Area by the {\_}San Francisco Chronicle{\_}. We gather 15k policies discussed across 10 years of public policy meetings, and transcribe over 3,200 hours of public discussion. In general, we find limited impact of public discussion on newsworthiness prediction accuracy, suggesting that some of the most important stories barely get discussed in public.Finally, we show that newsworthiness predictions can be a useful assistive tool for journalists seeking to keep abreast of local government. We perform human evaluation with expert journalists and show our systems identify policies they consider newsworthy with 68{\%} F1 and our coverage recommendations are helpful with an 84{\%} win-rate against baseline. We release all code and data to our work here: https://github.com/alex2awesome/newsworthiness-public.",
}
| Journalists regularly make decisions on whether or not to report stories, based on {``}news values{''}. In this work, we wish to explicitly model these decisions to explore {\_}when{\_} and {\_}why{\_} certain stories get press attention. This is challenging because very few labelled links between source documents and news articles exist and language use between corpora is very different. We address this problem by implementing a novel {\_}probabilistic relational modeling{\_} framework, which we show is a low-annotation linking methodology that outperforms other, more state-of-the-art retrieval-based baselines. Next, we define a new task: {\_}{\_}newsworthiness prediction{\_}{\_}, to predict if a policy item will get covered. We focus on news coverage of local public policy in the San Francisco Bay Area by the {\_}San Francisco Chronicle{\_}. We gather 15k policies discussed across 10 years of public policy meetings, and transcribe over 3,200 hours of public discussion. In general, we find limited impact of public discussion on newsworthiness prediction accuracy, suggesting that some of the most important stories barely get discussed in public. Finally, we show that newsworthiness predictions can be a useful assistive tool for journalists seeking to keep abreast of local government. We perform human evaluation with expert journalists and show our systems identify policies they consider newsworthy with 68{\%} F1 and our coverage recommendations are helpful with an 84{\%} win-rate against baseline. We release all code and data to our work here: https://github.com/alex2awesome/newsworthiness-public. | [
"Spangher, Alex",
"er",
"Tumgoren, Serdar",
"Welsh, Ben",
"Peng, Nanyun",
"Ferrara, Emilio",
"May, Jonathan"
] | Tracking the Newsworthiness of Public Documents | acl-long.763 | Poster | 2311.09734 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.763/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.764.bib | @inproceedings{dehghan-etal-2024-ewek,
title = "{EWEK}-{QA} : Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems",
author = "Dehghan, Mohammad and
Alomrani, Mohammad and
Bagga, Sunyam and
Alfonso-Hermelo, David and
Bibi, Khalil and
Ghaddar, Abbas and
Zhang, Yingxue and
Li, Xiaoguang and
Hao, Jianye and
Liu, Qun and
Lin, Jimmy and
Chen, Boxing and
Parthasarathi, Prasanna and
Biparva, Mahdi and
Rezagholizadeh, Mehdi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.764",
pages = "14169--14187",
abstract = "The emerging citation-based QA systems are gaining more attention especially in generative AI search applications. The importance of extracted knowledge provided to these systems is vital from both accuracy (completeness of information) and efficiency (extracting the information in a timely manner). In this regard, citation-based QA systems are suffering from two shortcomings. First, they usually rely only on web as a source of extracted knowledge and adding other external knowledge sources can hamper the efficiency of the system. Second, web-retrieved contents are usually obtained by some simple heuristics such as fixed length or breakpoints which might lead to splitting information into pieces. To mitigate these issues, we propose our enhanced web and efficient knowledge graph (KG) retrieval solution (EWEK-QA) to enrich the content of the extracted knowledge fed to the system. This has been done through designing an adaptive web retriever and incorporating KGs triples in an efficient manner. We demonstrate the effectiveness of over the open-source state-of-the-art (SoTA) web-based and KG baseline models using a comprehensive set of quantitative and human evaluation experiments. Our model is able to: first, improve the web-retriever baseline in terms of extracting more relevant passages ({\textgreater}20{\%}), the coverage of answer span ({\textgreater}25{\%}) and self containment ({\textgreater}35{\%}); second, obtain and integrate KG triples into its pipeline very efficiently (by avoiding any LLM calls) to outperform the web-only and KG-only SoTA baselines significantly in 7 quantitative QA tasks and our human evaluation.",
}
| The emerging citation-based QA systems are gaining more attention, especially in generative AI search applications. The knowledge provided to these systems is vital from the standpoints of both accuracy (completeness of information) and efficiency (extracting the information in a timely manner). In this regard, citation-based QA systems suffer from two shortcomings. First, they usually rely only on the web as a source of extracted knowledge, and adding other external knowledge sources can hamper the efficiency of the system. Second, web-retrieved contents are usually obtained by some simple heuristics such as fixed length or breakpoints, which might lead to splitting information into pieces. To mitigate these issues, we propose our enhanced web and efficient knowledge graph (KG) retrieval solution (EWEK-QA) to enrich the content of the extracted knowledge fed to the system. This has been done through designing an adaptive web retriever and incorporating KG triples in an efficient manner. We demonstrate the effectiveness of EWEK-QA over the open-source state-of-the-art (SoTA) web-based and KG baseline models using a comprehensive set of quantitative and human evaluation experiments. Our model is able to: first, improve the web-retriever baseline in terms of extracting more relevant passages ({\textgreater}20{\%}), the coverage of answer span ({\textgreater}25{\%}) and self containment ({\textgreater}35{\%}); second, obtain and integrate KG triples into its pipeline very efficiently (by avoiding any LLM calls) to outperform the web-only and KG-only SoTA baselines significantly in 7 quantitative QA tasks and our human evaluation. | [
"Dehghan, Mohammad",
"Alomrani, Mohammad",
"Bagga, Sunyam",
"Alfonso-Hermelo, David",
"Bibi, Khalil",
"Ghaddar, Abbas",
"Zhang, Yingxue",
"Li, Xiaoguang",
"Hao, Jianye",
"Liu, Qun",
"Lin, Jimmy",
"Chen, Boxing",
"Parthasarathi, Prasanna",
"Biparva, Mahdi",
"Rezagholizadeh, Mehdi"
] | EWEK-QA : Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems | acl-long.764 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.764/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.765.bib | @inproceedings{li-etal-2024-multi,
title = "Multi-modal Preference Alignment Remedies Degradation of Visual Instruction Tuning on Language Models",
author = "Li, Shengzhi and
Lin, Rongyu and
Pei, Shichao",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.765",
pages = "14188--14200",
abstract = "Multi-modal large language models (MLLMs) are expected to support multi-turn queries of interchanging image and text modalities in production. However, the current MLLMs trained with visual-question-answering (VQA) datasets could suffer from degradation, as VQA datasets lack the diversity and complexity of the original text instruction datasets with which the underlying language model was trained. To address this degradation, we first collect a lightweight, 5k-sample VQA preference dataset where answers were annotated by Gemini for five quality metrics in a granular fashion and investigate standard Supervised Fine-tuning, rejection sampling, Direct Preference Optimization (DPO) and SteerLM algorithms. Our findings indicate that with DPO, we can surpass the instruction-following capabilities of the language model, achieving a 6.73 score on MT-Bench, compared to Vicuna{'}s 6.57 and LLaVA{'}s 5.99. This enhancement in textual instruction-following capability correlates with boosted visual instruction performance (+4.9{\%} on MM-Vet, +6{\%} on LLaVA-Bench), with minimal alignment tax on visual knowledge benchmarks compared to the previous RLHF approach. In conclusion, we propose a distillation-based multi-modal alignment model with fine-grained annotations on a small dataset that restores and boosts MLLM{'}s language capability after visual instruction tuning.",
}
| Multi-modal large language models (MLLMs) are expected to support multi-turn queries of interchanging image and text modalities in production. However, the current MLLMs trained with visual-question-answering (VQA) datasets could suffer from degradation, as VQA datasets lack the diversity and complexity of the original text instruction datasets with which the underlying language model was trained. To address this degradation, we first collect a lightweight, 5k-sample VQA preference dataset where answers were annotated by Gemini for five quality metrics in a granular fashion and investigate standard Supervised Fine-tuning, rejection sampling, Direct Preference Optimization (DPO) and SteerLM algorithms. Our findings indicate that with DPO, we can surpass the instruction-following capabilities of the language model, achieving a 6.73 score on MT-Bench, compared to Vicuna{'}s 6.57 and LLaVA{'}s 5.99. This enhancement in textual instruction-following capability correlates with boosted visual instruction performance (+4.9{\%} on MM-Vet, +6{\%} on LLaVA-Bench), with minimal alignment tax on visual knowledge benchmarks compared to the previous RLHF approach. In conclusion, we propose a distillation-based multi-modal alignment model with fine-grained annotations on a small dataset that restores and boosts MLLM{'}s language capability after visual instruction tuning. | [
"Li, Shengzhi",
"Lin, Rongyu",
"Pei, Shichao"
] | Multi-modal Preference Alignment Remedies Degradation of Visual Instruction Tuning on Language Models | acl-long.765 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.765/ | [] | [] | [] | 0 |
||
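The row above finds Direct Preference Optimization the strongest of the compared alignment methods; for reference, below is a minimal sketch of the standard DPO loss it applies. Computing per-response log-probabilities from the policy and a frozen reference model is left as an assumed upstream step.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    # Standard DPO objective: push the policy to prefer the chosen response
    # over the rejected one, measured relative to the frozen reference model.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_rewards - rejected_rewards)).mean()
```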
https://aclanthology.org/2024.acl-long.766.bib | @inproceedings{zhao-etal-2024-multistage,
title = "Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation",
author = "Zhao, Jiachen and
Zhao, Wenlong and
Drozdov, Andrew and
Rozonoyer, Benjamin and
Sultan, Md Arafat and
Lee, Jay-Yoon and
Iyyer, Mohit and
McCallum, Andrew",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.766",
pages = "14201--14214",
abstract = "We study semi-supervised sequence generation tasks, where the few labeled examples are too scarce to finetune a model, and meanwhile, few-shot prompted large language models (LLMs) exhibit room for improvement. In this paper, we present the discovery that a student model distilled from a few-shot prompted LLM can commonly generalize better than its teacher to unseen examples on such tasks. We find that the student is able to learn a general pattern from the high-quality pseudolabels produced by the teacher during knowledge distillation (KD), and favorably not a general pattern from the low-quality pseudolabels. Leveraging this discovery, we propose a new method, Multistage Collaborative Knowledge Distillation from an LLM (MCKD), for these tasks. MCKD first few-shot prompts an LLM to produce pseudolabels for unlabeled data. Then at each stage of an iterative KD process, a new pair of students is trained on disjoint partitions of the pseudolabeled data, and produces new and improved pseudolabels for their unseen partitions. We conduct extensive experiments on four syntactic and semantic parsing datasets and show the effectiveness of MCKD for low-resource semi-supervised sequence generation. On CRAFT biomedical parsing, for example, 3-stage MCKD with 50 labeled examples outperforms an LLM teacher and vanilla KD by 7.5{\%} and 3.7{\%} parsing F1, respectively, and matches the performance of supervised finetuning with 500 labeled examples.",
}
| We study semi-supervised sequence generation tasks, where the few labeled examples are too scarce to finetune a model, and meanwhile, few-shot prompted large language models (LLMs) exhibit room for improvement. In this paper, we present the discovery that a student model distilled from a few-shot prompted LLM can commonly generalize better than its teacher to unseen examples on such tasks. We find that the student is able to learn a general pattern from the high-quality pseudolabels produced by the teacher during knowledge distillation (KD), and favorably not a general pattern from the low-quality pseudolabels. Leveraging this discovery, we propose a new method, Multistage Collaborative Knowledge Distillation from an LLM (MCKD), for these tasks. MCKD first few-shot prompts an LLM to produce pseudolabels for unlabeled data. Then at each stage of an iterative KD process, a new pair of students is trained on disjoint partitions of the pseudolabeled data, and produces new and improved pseudolabels for their unseen partitions. We conduct extensive experiments on four syntactic and semantic parsing datasets and show the effectiveness of MCKD for low-resource semi-supervised sequence generation. On CRAFT biomedical parsing, for example, 3-stage MCKD with 50 labeled examples outperforms an LLM teacher and vanilla KD by 7.5{\%} and 3.7{\%} parsing F1, respectively, and matches the performance of supervised finetuning with 500 labeled examples. | [
"Zhao, Jiachen",
"Zhao, Wenlong",
"Drozdov, Andrew",
"Rozonoyer, Benjamin",
"Sultan, Md Arafat",
"Lee, Jay-Yoon",
"Iyyer, Mohit",
"McCallum, Andrew"
] | Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation | acl-long.766 | Poster | 2311.08640 | [
"https://github.com/andotalao24/multistage-collaborative-knowledge-distillation"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.766/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.767.bib | @inproceedings{yu-etal-2024-controlled,
title = "Controlled Text Generation for Black-box Language Models via Score-based Progressive Editor",
author = "Yu, Sangwon and
Lee, Changmin and
Lee, Hojin and
Yoon, Sungroh",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.767",
pages = "14215--14237",
abstract = "Controlled text generation, aiming to ensure that language models produce text containing only the desired domain or corpus attributes, is immensely crucial in the practical application of language models. Existing methods, however, are inapplicable to black-box models or suffer a significant trade-off between control and fluency in text generation. This paper introduces the Score-based Progressive Editor (ScoPE), a novel approach designed to overcome these issues. ScoPE modifies the context at the token level during the generation process of a backbone language model. This modification guides the subsequent text to naturally include the target attributes. To facilitate this process, ScoPE employs a training objective that maximizes a target score, comprehensively considering both control and fluency. Experimental results on diverse controlled generation tasks demonstrate that ScoPE can effectively regulate the attributes of the generated text while effectively utilizing the capability of the backbone large language models.",
}
| Controlled text generation, aiming to ensure that language models produce text containing only the desired domain or corpus attributes, is immensely crucial in the practical application of language models. Existing methods, however, are inapplicable to black-box models or suffer a significant trade-off between control and fluency in text generation. This paper introduces the Score-based Progressive Editor (ScoPE), a novel approach designed to overcome these issues. ScoPE modifies the context at the token level during the generation process of a backbone language model. This modification guides the subsequent text to naturally include the target attributes. To facilitate this process, ScoPE employs a training objective that maximizes a target score, comprehensively considering both control and fluency. Experimental results on diverse controlled generation tasks demonstrate that ScoPE can effectively regulate the attributes of the generated text while effectively utilizing the capability of the backbone large language models. | [
"Yu, Sangwon",
"Lee, Changmin",
"Lee, Hojin",
"Yoon, Sungroh"
] | Controlled Text Generation for Black-box Language Models via Score-based Progressive Editor | acl-long.767 | Poster | 2311.07430 | [
"https://github.com/ysw1021/scope"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.767/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.768.bib | @inproceedings{chen-etal-2024-logogramnlp,
title = "{L}ogogram{NLP}: Comparing Visual and Textual Representations of Ancient Logographic Writing Systems for {NLP}",
author = "Chen, Danlu and
Shi, Freda and
Agarwal, Aditi and
Myerston, Jacobo and
Berg-Kirkpatrick, Taylor",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.768",
pages = "14238--14254",
abstract = "Standard natural language processing (NLP) pipelines operate on symbolic representations of language, which typically consist of sequences of discrete tokens. However, creating an analogous representation for ancient logographic writing systems is an extremely labor-intensive process that requires expert knowledge. At present, a large portion of logographic data persists in a purely visual form due to the absence of transcription{---}this issue poses a bottleneck for researchers seeking to apply NLP toolkits to study ancient logographic languages: most of the relevant data are images of writing. This paper investigates whether direct processing of visual representations of language offers a potential solution. We introduce LogogramNLP, the first benchmark enabling NLP analysis of ancient logographic languages, featuring both transcribed and visual datasetsfor four writing systems along with annotations for tasks like classification, translation, and parsing. Our experiments compare systems thatemploy recent visual and text encoding strategies as backbones. The results demonstrate that visual representations outperform textual representations for some investigated tasks, suggesting that visual processing pipelines may unlock a large amount of cultural heritage data of logographic languages for NLP-based analyses. Data and code are available at https: //logogramNLP.github.io/.",
}
| Standard natural language processing (NLP) pipelines operate on symbolic representations of language, which typically consist of sequences of discrete tokens. However, creating an analogous representation for ancient logographic writing systems is an extremely labor-intensive process that requires expert knowledge. At present, a large portion of logographic data persists in a purely visual form due to the absence of transcription{---}this issue poses a bottleneck for researchers seeking to apply NLP toolkits to study ancient logographic languages: most of the relevant data are images of writing. This paper investigates whether direct processing of visual representations of language offers a potential solution. We introduce LogogramNLP, the first benchmark enabling NLP analysis of ancient logographic languages, featuring both transcribed and visual datasets for four writing systems along with annotations for tasks like classification, translation, and parsing. Our experiments compare systems that employ recent visual and text encoding strategies as backbones. The results demonstrate that visual representations outperform textual representations for some investigated tasks, suggesting that visual processing pipelines may unlock a large amount of cultural heritage data of logographic languages for NLP-based analyses. Data and code are available at https://logogramNLP.github.io/. | [
"Chen, Danlu",
"Shi, Freda",
"Agarwal, Aditi",
"Myerston, Jacobo",
"Berg-Kirkpatrick, Taylor"
] | LogogramNLP: Comparing Visual and Textual Representations of Ancient Logographic Writing Systems for NLP | acl-long.768 | Poster | 2408.04628 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.768/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.769.bib | @inproceedings{li-etal-2024-superfiltering,
title = "Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning",
author = "Li, Ming and
Zhang, Yong and
He, Shwai and
Li, Zhitao and
Zhao, Hongyu and
Wang, Jianzong and
Cheng, Ning and
Zhou, Tianyi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.769",
pages = "14255--14273",
abstract = "Instruction tuning is critical to improve LLMs but usually suffers from low-quality and redundant data. Data filtering for instruction tuning has proved important in improving both the efficiency and performance of the tuning process. But it also leads to extra cost and computation due to the involvement of LLMs in this process. To reduce the filtering cost, we study Superfiltering: Can we use a smaller and weaker model to select data for finetuning a larger and stronger model? Despite the performance gap between weak and strong language models, we find their highly consistent capability to perceive instruction difficulty and data selection results. This enables us to use a much smaller and more efficient model to filter the instruction data used to train a larger language model. Not only does it largely speed up the data filtering, but the filtered-data-finetuned LLM achieves even better performance on standard benchmarks. Extensive experiments validate the efficacy and efficiency of our approach.",
}
| Instruction tuning is critical to improve LLMs but usually suffers from low-quality and redundant data. Data filtering for instruction tuning has proved important in improving both the efficiency and performance of the tuning process. But it also leads to extra cost and computation due to the involvement of LLMs in this process. To reduce the filtering cost, we study Superfiltering: Can we use a smaller and weaker model to select data for finetuning a larger and stronger model? Despite the performance gap between weak and strong language models, we find their highly consistent capability to perceive instruction difficulty and data selection results. This enables us to use a much smaller and more efficient model to filter the instruction data used to train a larger language model. Not only does it largely speed up the data filtering, but the filtered-data-finetuned LLM achieves even better performance on standard benchmarks. Extensive experiments validate the efficacy and efficiency of our approach. | [
"Li, Ming",
"Zhang, Yong",
"He, Shwai",
"Li, Zhitao",
"Zhao, Hongyu",
"Wang, Jianzong",
"Cheng, Ning",
"Zhou, Tianyi"
] | Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning | acl-long.769 | Poster | 2402.00530 | [
"https://github.com/tianyi-lab/superfiltering"
] | https://huggingface.co/papers/2402.00530 | 3 | 1 | 0 | 8 | https://aclanthology.org/2024.acl-long.769/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.770.bib | @inproceedings{sui-etal-2024-confabulation,
title = "Confabulation: The Surprising Value of Large Language Model Hallucinations",
author = "Sui, Peiqi and
Duede, Eamon and
Wu, Sophie and
So, Richard",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.770",
pages = "14274--14284",
abstract = "This paper presents a systematic defense of large language model (LLM) hallucinations or {`}confabulations{'} as a potential resource instead of a categorically negative pitfall. The standard view is that confabulations are inherently problematic and AI research should eliminate this flaw. In this paper, we argue and empirically demonstrate that measurable semantic characteristics of LLM confabulations mirror a human propensity to utilize increased narrativity as a cognitive resource for sense-making and communication. In other words, it has potential value. Specifically, we analyze popular hallucination benchmarks and reveal that hallucinated outputs display increased levels of narrativity and semantic coherence relative to veridical outputs. This finding reveals a tension in our usually dismissive understandings of confabulation. It suggests, counter-intuitively, that the tendency for LLMs to confabulate may be intimately associated with a positive capacity for coherent narrative-text generation.",
}
| This paper presents a systematic defense of large language model (LLM) hallucinations or {`}confabulations{'} as a potential resource instead of a categorically negative pitfall. The standard view is that confabulations are inherently problematic and AI research should eliminate this flaw. In this paper, we argue and empirically demonstrate that measurable semantic characteristics of LLM confabulations mirror a human propensity to utilize increased narrativity as a cognitive resource for sense-making and communication. In other words, it has potential value. Specifically, we analyze popular hallucination benchmarks and reveal that hallucinated outputs display increased levels of narrativity and semantic coherence relative to veridical outputs. This finding reveals a tension in our usually dismissive understandings of confabulation. It suggests, counter-intuitively, that the tendency for LLMs to confabulate may be intimately associated with a positive capacity for coherent narrative-text generation. | [
"Sui, Peiqi",
"Duede, Eamon",
"Wu, Sophie",
"So, Richard"
] | Confabulation: The Surprising Value of Large Language Model Hallucinations | acl-long.770 | Poster | 2406.04175 | [
""
] | https://huggingface.co/papers/2406.04175 | 0 | 0 | 0 | 4 | https://aclanthology.org/2024.acl-long.770/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.771.bib | @inproceedings{zhu-etal-2024-iapt,
title = "{IAPT}: Instance-Aware Prompt Tuning for Large Language Models",
author = "Zhu, Wei and
Tian, Aaron and
Yin, Congrui and
Ni, Yuan and
Wang, Xiaoling and
Xie, Guotong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.771",
pages = "14285--14304",
abstract = "Soft prompt tuning is a widely studied parameter-efficient fine-tuning method. However, it has a clear drawback: many soft tokens must be inserted into the input sequences to guarantee downstream performance. As a result, soft prompt tuning is less considered than Low-rank adaptation (LoRA) in the large language modeling (LLM) era. In this work, we propose a novel prompt tuning method, Instruction-Aware Prompt Tuning (IAPT), that requires only four soft tokens. First, we install a parameter-efficient soft prompt generator at each Transformer layer to generate idiosyncratic soft prompts for each input instruction. The generated soft prompts can be seen as a semantic summary of the input instructions and can effectively guide the output generation. Second, the soft prompt generators are modules with a bottleneck architecture consisting of a self-attention pooling operation, two linear projections, and an activation function. Pilot experiments show that prompt generators at different Transformer layers require different activation functions. Thus, we propose to learn the idiosyncratic activation functions for prompt generators automatically with the help of rational functions. We have conducted experiments on various tasks, and the experimental results demonstrate that (a) our IAPT method can outperform the recent baselines with comparable tunable parameters. (b) Our IAPT method is more efficient than LoRA under the single-backbone multi-tenant setting.",
}
| Soft prompt tuning is a widely studied parameter-efficient fine-tuning method. However, it has a clear drawback: many soft tokens must be inserted into the input sequences to guarantee downstream performance. As a result, soft prompt tuning is less considered than Low-rank adaptation (LoRA) in the large language modeling (LLM) era. In this work, we propose a novel prompt tuning method, Instruction-Aware Prompt Tuning (IAPT), that requires only four soft tokens. First, we install a parameter-efficient soft prompt generator at each Transformer layer to generate idiosyncratic soft prompts for each input instruction. The generated soft prompts can be seen as a semantic summary of the input instructions and can effectively guide the output generation. Second, the soft prompt generators are modules with a bottleneck architecture consisting of a self-attention pooling operation, two linear projections, and an activation function. Pilot experiments show that prompt generators at different Transformer layers require different activation functions. Thus, we propose to learn the idiosyncratic activation functions for prompt generators automatically with the help of rational functions. We have conducted experiments on various tasks, and the experimental results demonstrate that (a) our IAPT method can outperform the recent baselines with comparable tunable parameters. (b) Our IAPT method is more efficient than LoRA under the single-backbone multi-tenant setting. | [
"Zhu, Wei",
"Tian, Aaron",
"Yin, Congrui",
"Ni, Yuan",
"Wang, Xiaoling",
"Xie, Guotong"
] | IAPT: Instance-Aware Prompt Tuning for Large Language Models | acl-long.771 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.771/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.1.bib | @inproceedings{wang-etal-2024-language,
title = "Can Language Models Serve as Text-Based World Simulators?",
author = "Wang, Ruoyao and
Todd, Graham and
Xiao, Ziang and
Yuan, Xingdi and
C{\^o}t{\'e}, Marc-Alexandre and
Clark, Peter and
Jansen, Peter",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.1",
pages = "1--17",
abstract = "Virtual environments play a key role in benchmarking advances in complex planning and decision-making tasks but are expensive and complicated to build by hand. Can current language models themselves serve as world simulators, correctly predicting how actions change different world states, thus bypassing the need for extensive manual coding? Our goal is to answer this question in the context of text-based simulators. Our approach is to build and use a new benchmark, called ByteSized32-State-Prediction, containing a dataset of text game state transitions and accompanying game tasks. We use this to directly quantify, for the first time, how well LLMs can serve as text-based world simulators. We test GPT-4 on this dataset and find that, despite its impressive performance, it is still an unreliable world simulator without further innovations. This work thus contributes both new insights into current LLM{'}s capabilities and weaknesses, as well as a novel benchmark to track future progress as new models appear.",
}
| Virtual environments play a key role in benchmarking advances in complex planning and decision-making tasks but are expensive and complicated to build by hand. Can current language models themselves serve as world simulators, correctly predicting how actions change different world states, thus bypassing the need for extensive manual coding? Our goal is to answer this question in the context of text-based simulators. Our approach is to build and use a new benchmark, called ByteSized32-State-Prediction, containing a dataset of text game state transitions and accompanying game tasks. We use this to directly quantify, for the first time, how well LLMs can serve as text-based world simulators. We test GPT-4 on this dataset and find that, despite its impressive performance, it is still an unreliable world simulator without further innovations. This work thus contributes both new insights into current LLM{'}s capabilities and weaknesses, as well as a novel benchmark to track future progress as new models appear. | [
"Wang, Ruoyao",
"Todd, Graham",
"Xiao, Ziang",
"Yuan, Xingdi",
"C{\\^o}t{\\'e}, Marc-Alex",
"re",
"Clark, Peter",
"Jansen, Peter"
] | Can Language Models Serve as Text-Based World Simulators? | acl-short.1 | Poster | 2406.06485 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.1/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.2.bib | @inproceedings{zhu-etal-2024-fanoutqa,
title = "{F}an{O}ut{QA}: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models",
author = "Zhu, Andrew and
Hwang, Alyssa and
Dugan, Liam and
Callison-Burch, Chris",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.2",
pages = "18--37",
abstract = "One type of question that is commonly found in day-to-day scenarios is {``}fan-out{''} questions, complex multi-hop, multi-document reasoning questions that require finding information about a large number of entities. However, there exist few resources to evaluate this type of question-answering capability among large language models. To evaluate complex reasoning in LLMs more fully, we present FanOutQA, a high-quality dataset of fan-out question-answer pairs and human-annotated decompositions with English Wikipedia as the knowledge base. We formulate three benchmark settings across our dataset and benchmark 7 LLMs, including GPT-4, LLaMA 2, Claude-2.1, and Mixtral-8x7B, finding that contemporary models still have room to improve reasoning over inter-document dependencies in a long context. We provide our dataset, along with open-source tools to run models to encourage evaluation.",
}
| One type of question that is commonly found in day-to-day scenarios is {``}fan-out{''} questions, complex multi-hop, multi-document reasoning questions that require finding information about a large number of entities. However, there exist few resources to evaluate this type of question-answering capability among large language models. To evaluate complex reasoning in LLMs more fully, we present FanOutQA, a high-quality dataset of fan-out question-answer pairs and human-annotated decompositions with English Wikipedia as the knowledge base. We formulate three benchmark settings across our dataset and benchmark 7 LLMs, including GPT-4, LLaMA 2, Claude-2.1, and Mixtral-8x7B, finding that contemporary models still have room to improve reasoning over inter-document dependencies in a long context. We provide our dataset, along with open-source tools to run models to encourage evaluation. | [
"Zhu, Andrew",
"Hwang, Alyssa",
"Dugan, Liam",
"Callison-Burch, Chris"
] | FanOutQA: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models | acl-short.2 | Poster | 2402.14116 | [
"https://github.com/zhudotexe/fanoutqa"
] | https://huggingface.co/papers/2402.14116 | 1 | 0 | 0 | 4 | https://aclanthology.org/2024.acl-short.2/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-short.3.bib | @inproceedings{song-etal-2024-revisiting,
title = "Revisiting Code Similarity Evaluation with Abstract Syntax Tree Edit Distance",
author = "Song, Yewei and
Lothritz, Cedric and
Tang, Xunzhu and
Bissyand{\'e}, Tegawend{\'e} and
Klein, Jacques",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.3",
pages = "38--46",
abstract = "This paper revisits recent code similarity evaluation metrics, particularly focusing on the application of Abstract Syntax Tree (AST) editing distance in diverse programming languages. In particular, we explore the usefulness of these metrics and compare them to traditional sequence similarity metrics. Our experiments showcase the effectiveness of AST editing distance in capturing intricate code structures, revealing a high correlation with established metrics. Furthermore, we explore the strengths and weaknesses of AST editing distance and prompt-based GPT similarity scores in comparison to BLEU score, execution match, and Jaccard Similarity. We propose, optimize, and publish an adaptable metric that demonstrates effectiveness across all tested languages, representing an enhanced version of Tree Similarity of Edit Distance (TSED).",
}
| This paper revisits recent code similarity evaluation metrics, particularly focusing on the application of Abstract Syntax Tree (AST) editing distance in diverse programming languages. In particular, we explore the usefulness of these metrics and compare them to traditional sequence similarity metrics. Our experiments showcase the effectiveness of AST editing distance in capturing intricate code structures, revealing a high correlation with established metrics. Furthermore, we explore the strengths and weaknesses of AST editing distance and prompt-based GPT similarity scores in comparison to BLEU score, execution match, and Jaccard Similarity. We propose, optimize, and publish an adaptable metric that demonstrates effectiveness across all tested languages, representing an enhanced version of Tree Similarity of Edit Distance (TSED). | [
"Song, Yewei",
"Lothritz, Cedric",
"Tang, Xunzhu",
"Bissy",
"{\\'e}, Tegawend{\\'e}",
"Klein, Jacques"
] | Revisiting Code Similarity Evaluation with Abstract Syntax Tree Edit Distance | acl-short.3 | Poster | 2404.08817 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.3/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.4.bib | @inproceedings{muradoglu-etal-2024-resisting,
title = "Resisting the Lure of the Skyline: Grounding Practices in Active Learning for Morphological Inflection",
author = "Muradoglu, Saliha and
Ginn, Michael and
Silfverberg, Miikka and
Hulden, Mans",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.4",
pages = "47--55",
abstract = "Active learning (AL) aims to lower the demand of annotation by selecting informative unannotated samples for the model building. In this paper, we explore the importance of conscious experimental design in the language documentation and description setting, particularly the distribution of the unannotated sample pool. We focus on the task of morphological inflection using a Transformer model. We propose context motivated benchmarks: a baseline and skyline. The baseline describes the frequency weighted distribution encountered in natural speech. We simulate this using Wikipedia texts. The skyline defines the more common approach, uniform sampling from a large, balanced corpus (UniMorph, in our case), which often yields mixed results. We note the unrealistic nature of this unannotated pool. When these factors are considered, our results show a clear benefit to targeted sampling.",
}
| Active learning (AL) aims to lower the demand for annotation by selecting informative unannotated samples for model building. In this paper, we explore the importance of conscious experimental design in the language documentation and description setting, particularly the distribution of the unannotated sample pool. We focus on the task of morphological inflection using a Transformer model. We propose context-motivated benchmarks: a baseline and a skyline. The baseline describes the frequency-weighted distribution encountered in natural speech. We simulate this using Wikipedia texts. The skyline defines the more common approach, uniform sampling from a large, balanced corpus (UniMorph, in our case), which often yields mixed results. We note the unrealistic nature of this unannotated pool. When these factors are considered, our results show a clear benefit to targeted sampling. | [
"Muradoglu, Saliha",
"Ginn, Michael",
"Silfverberg, Miikka",
"Hulden, Mans"
] | Resisting the Lure of the Skyline: Grounding Practices in Active Learning for Morphological Inflection | acl-short.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.4/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.5.bib | @inproceedings{yuan-etal-2024-speculative,
title = "Speculative Contrastive Decoding",
author = "Yuan, Hongyi and
Lu, Keming and
Huang, Fei and
Yuan, Zheng and
Zhou, Chang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.5",
pages = "56--64",
abstract = "Large language models (LLMs) exhibit exceptional performance in language tasks, yet their auto-regressive inference is limited due to high computational requirements and is sub-optimal due to the exposure bias. Inspired by speculative decoding and contrastive decoding, we introduce Speculative Contrastive Decoding (SCD), a straightforward yet powerful decoding approach that leverages predictions from smaller language models (LMs) to achieve both decoding acceleration and quality improvement. Extensive evaluations and analyses on four diverse language tasks demonstrate the effectiveness of SCD, showing that decoding efficiency and quality can compatibly benefit from one smaller LM.",
}
| Large language models (LLMs) exhibit exceptional performance in language tasks, yet their auto-regressive inference is limited due to high computational requirements and is sub-optimal due to the exposure bias. Inspired by speculative decoding and contrastive decoding, we introduce Speculative Contrastive Decoding (SCD), a straightforward yet powerful decoding approach that leverages predictions from smaller language models (LMs) to achieve both decoding acceleration and quality improvement. Extensive evaluations and analyses on four diverse language tasks demonstrate the effectiveness of SCD, showing that decoding efficiency and quality can compatibly benefit from one smaller LM. | [
"Yuan, Hongyi",
"Lu, Keming",
"Huang, Fei",
"Yuan, Zheng",
"Zhou, Chang"
] | Speculative Contrastive Decoding | acl-short.5 | Poster | 2311.08981 | [
""
] | https://huggingface.co/papers/2311.08981 | 1 | 2 | 0 | 5 | https://aclanthology.org/2024.acl-short.5/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-short.6.bib | @inproceedings{wang-etal-2024-rdrec,
title = "{RDR}ec: Rationale Distillation for {LLM}-based Recommendation",
author = "Wang, Xinfeng and
Cui, Jin and
Suzuki, Yoshimi and
Fukumoto, Fumiyo",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.6",
pages = "65--74",
abstract = "Large language model (LLM)-based recommender models that bridge users and items through textual prompts for effective semantic reasoning have gained considerable attention. However, few methods consider the underlying rationales behind interactions, such as user preferences and item attributes, limiting the reasoning ability of LLMs for recommendations. This paper proposes a rationale distillation recommender (RDRec), a compact model designed to learn rationales generated by a larger language model (LM). By leveraging rationales from reviews related to users and items, RDRec remarkably specifies their profiles for recommendations. Experiments show that RDRec achieves state-of-the-art (SOTA) performance in both top-N and sequential recommendations. Our code is available online.",
}
| Large language model (LLM)-based recommender models that bridge users and items through textual prompts for effective semantic reasoning have gained considerable attention. However, few methods consider the underlying rationales behind interactions, such as user preferences and item attributes, limiting the reasoning ability of LLMs for recommendations. This paper proposes a rationale distillation recommender (RDRec), a compact model designed to learn rationales generated by a larger language model (LM). By leveraging rationales from reviews related to users and items, RDRec remarkably specifies their profiles for recommendations. Experiments show that RDRec achieves state-of-the-art (SOTA) performance in both top-N and sequential recommendations. Our code is available online. | [
"Wang, Xinfeng",
"Cui, Jin",
"Suzuki, Yoshimi",
"Fukumoto, Fumiyo"
] | RDRec: Rationale Distillation for LLM-based Recommendation | acl-short.6 | Poster | 2405.10587 | [
"https://github.com/wangxfng/rdrec"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.6/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.7.bib | @inproceedings{mickus-etal-2024-isotropy,
title = "Isotropy, Clusters, and Classifiers",
author = {Mickus, Timothee and
Gr{\"o}nroos, Stig-Arne and
Attieh, Joseph},
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.7",
pages = "75--84",
abstract = "Whether embedding spaces use all their dimensions equally, i.e., whether they are isotropic, has been a recent subject of discussion. Evidence has been accrued both for and against enforcing isotropy in embedding spaces. In the present paper, we stress that isotropy imposes requirements on the embedding space that are not compatible with the presence of clusters{---}which also negatively impacts linear classification objectives. We demonstrate this fact both empirically and mathematically and use it to shed light on previous results from the literature.",
}
| Whether embedding spaces use all their dimensions equally, i.e., whether they are isotropic, has been a recent subject of discussion. Evidence has been accrued both for and against enforcing isotropy in embedding spaces. In the present paper, we stress that isotropy imposes requirements on the embedding space that are not compatible with the presence of clusters{---}which also negatively impacts linear classification objectives. We demonstrate this fact both empirically and mathematically and use it to shed light on previous results from the literature. | [
"Mickus, Timothee",
"Gr{\\\"o}nroos, Stig-Arne",
"Attieh, Joseph"
] | Isotropy, Clusters, and Classifiers | acl-short.7 | Poster | 2402.03191 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.7/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.8.bib | @inproceedings{gambardella-etal-2024-language,
title = "Language Models Do Hard Arithmetic Tasks Easily and Hardly Do Easy Arithmetic Tasks",
author = "Gambardella, Andrew and
Iwasawa, Yusuke and
Matsuo, Yutaka",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.8",
pages = "85--91",
abstract = "The ability (and inability) of large language models (LLMs) to perform arithmetic tasks has been the subject of much theoretical and practical debate. We show that LLMs are frequently able to correctly and confidently predict the first digit of $n$-digit by $m$-digit multiplication tasks without using chain of thought reasoning, despite these tasks require compounding operations to solve. Simultaneously, LLMs in practice often fail to correctly or confidently predict the last digit of an $n$-digit by $m$-digit multiplication, a task equivalent to 1-digit by 1-digit multiplication which can be easily learned or memorized. We show that the latter task can be solved more robustly when the LLM is conditioned on all of the correct higher-order digits, which on average increases the confidence of the correct last digit on 5-digit by 5-digit multiplication tasks using Llama 2-13B by over 230{\%} (0.13â0.43) and Mistral-7B by 150{\%} (0.22â0.55).",
}
| The ability (and inability) of large language models (LLMs) to perform arithmetic tasks has been the subject of much theoretical and practical debate. We show that LLMs are frequently able to correctly and confidently predict the first digit of $n$-digit by $m$-digit multiplication tasks without using chain of thought reasoning, even though these tasks require compounding operations to solve. Simultaneously, LLMs in practice often fail to correctly or confidently predict the last digit of an $n$-digit by $m$-digit multiplication, a task equivalent to 1-digit by 1-digit multiplication which can be easily learned or memorized. We show that the latter task can be solved more robustly when the LLM is conditioned on all of the correct higher-order digits, which on average increases the confidence of the correct last digit on 5-digit by 5-digit multiplication tasks using Llama 2-13B by over 230{\%} (0.13→0.43) and Mistral-7B by 150{\%} (0.22→0.55). | [
"Gambardella, Andrew",
"Iwasawa, Yusuke",
"Matsuo, Yutaka"
] | Language Models Do Hard Arithmetic Tasks Easily and Hardly Do Easy Arithmetic Tasks | acl-short.8 | Poster | 2406.02356 | [
""
] | https://huggingface.co/papers/2406.02356 | 0 | 1 | 0 | 3 | https://aclanthology.org/2024.acl-short.8/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-short.9.bib | @inproceedings{lim-etal-2024-simpsons,
title = "Simpson{'}s Paradox and the Accuracy-Fluency Tradeoff in Translation",
author = "Lim, Zheng Wei and
Vylomova, Ekaterina and
Cohn, Trevor and
Kemp, Charles",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.9",
pages = "92--103",
abstract = "A good translation should be faithful to the source and should respect the norms of the target language. We address a theoretical puzzle about the relationship between these objectives. On one hand, intuition and some prior work suggest that accuracy and fluency should trade off against each other, and that capturing every detail of the source can only be achieved at the cost of fluency. On the other hand, quality assessment researchers often suggest that accuracy and fluency are highly correlated and difficult for human raters to distinguish (Callison-Burch et al., 2007). We show that the tension between these views is an instance of Simpson{'}s paradox, and that accuracy and fluency are positively correlated at the level of the corpus but trade off at the level of individual source segments. We further suggest that the relationship between accuracy and fluency is best evaluated at the segment (or sentence) level, and that the trade off between these dimensions has implications both for assessing translation quality and developing improved MT systems.",
}
| A good translation should be faithful to the source and should respect the norms of the target language. We address a theoretical puzzle about the relationship between these objectives. On one hand, intuition and some prior work suggest that accuracy and fluency should trade off against each other, and that capturing every detail of the source can only be achieved at the cost of fluency. On the other hand, quality assessment researchers often suggest that accuracy and fluency are highly correlated and difficult for human raters to distinguish (Callison-Burch et al., 2007). We show that the tension between these views is an instance of Simpson{'}s paradox, and that accuracy and fluency are positively correlated at the level of the corpus but trade off at the level of individual source segments. We further suggest that the relationship between accuracy and fluency is best evaluated at the segment (or sentence) level, and that the trade-off between these dimensions has implications both for assessing translation quality and developing improved MT systems. | [
"Lim, Zheng Wei",
"Vylomova, Ekaterina",
"Cohn, Trevor",
"Kemp, Charles"
] | Simpson's Paradox and the Accuracy-Fluency Tradeoff in Translation | acl-short.9 | Poster | 2402.12690 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.9/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.10.bib | @inproceedings{belcak-wattenhofer-2024-ultrasparsebert,
title = "{U}ltra{S}parse{BERT}: 99{\%} Conditionally Sparse Language Modelling",
author = "Belcak, Peter and
Wattenhofer, Roger",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.10",
pages = "104--108",
abstract = "We present UltraSparseBERT, a BERT variant that uses 0.3{\%} of its neurons during inference while performing on par with similar BERT models. UltraSparseBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by reorganizing feedforward networks into fast feedforward networks (FFFs).To showcase but one benefit of high sparsity, we provide an Intel MKL implementation achieving 78x speedup over the optimized feedforward baseline on CPUs, and an OpenAI Triton implementation performing forward passes 4.1x faster than the corresponding native GPU implementation. The training and benchmarking code is enclosed.",
}
| We present UltraSparseBERT, a BERT variant that uses 0.3{\%} of its neurons during inference while performing on par with similar BERT models. UltraSparseBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by reorganizing feedforward networks into fast feedforward networks (FFFs). To showcase but one benefit of high sparsity, we provide an Intel MKL implementation achieving 78x speedup over the optimized feedforward baseline on CPUs, and an OpenAI Triton implementation performing forward passes 4.1x faster than the corresponding native GPU implementation. The training and benchmarking code is enclosed. | [
"Belcak, Peter",
"Wattenhofer, Roger"
] | UltraSparseBERT: 99% Conditionally Sparse Language Modelling | acl-short.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.10/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.11.bib | @inproceedings{liang-etal-2024-scemqa,
title = "{S}ce{MQA}: A Scientific College Entrance Level Multimodal Question Answering Benchmark",
author = "Liang, Zhenwen and
Guo, Kehan and
Liu, Gang and
Guo, Taicheng and
Zhou, Yujun and
Yang, Tianyu and
Jiao, Jiajun and
Pi, Renjie and
Zhang, Jipeng and
Zhang, Xiangliang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.11",
pages = "109--119",
abstract = "The paper introduces SceMQA, a novel benchmark for scientific multimodal question answering at the college entrance level. It addresses a critical educational phase often overlooked in existing benchmarks, spanning high school to pre-college levels. SceMQA focuses on core science subjects including Mathematics, Physics, Chemistry, and Biology. It features a blend of multiple-choice and free-response formats, ensuring a comprehensive evaluation of AI models{'} abilities. Additionally, our benchmark provides specific knowledge points for each problem and detailed explanations for each answer. SceMQA also uniquely presents problems with identical contexts but varied questions to facilitate a more thorough and accurate assessment of reasoning capabilities. In the experiment, we evaluate both open-source and close-source state-of-the-art Multimodal Large Language Models (MLLMs), across various experimental settings. The results show that further research and development are needed in developing more capable MLLM, as highlighted by only 50{\%} to 60{\%} accuracy achieved by the strongest models.",
}
| The paper introduces SceMQA, a novel benchmark for scientific multimodal question answering at the college entrance level. It addresses a critical educational phase often overlooked in existing benchmarks, spanning high school to pre-college levels. SceMQA focuses on core science subjects including Mathematics, Physics, Chemistry, and Biology. It features a blend of multiple-choice and free-response formats, ensuring a comprehensive evaluation of AI models{'} abilities. Additionally, our benchmark provides specific knowledge points for each problem and detailed explanations for each answer. SceMQA also uniquely presents problems with identical contexts but varied questions to facilitate a more thorough and accurate assessment of reasoning capabilities. In the experiment, we evaluate both open-source and closed-source state-of-the-art Multimodal Large Language Models (MLLMs) across various experimental settings. The results show that further research and development are needed in developing more capable MLLMs, as highlighted by only 50{\%} to 60{\%} accuracy achieved by the strongest models. | [
"Liang, Zhenwen",
"Guo, Kehan",
"Liu, Gang",
"Guo, Taicheng",
"Zhou, Yujun",
"Yang, Tianyu",
"Jiao, Jiajun",
"Pi, Renjie",
"Zhang, Jipeng",
"Zhang, Xiangliang"
] | SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark | acl-short.11 | Poster | 2402.05138 | [
""
] | https://huggingface.co/papers/2402.05138 | 2 | 2 | 0 | 10 | https://aclanthology.org/2024.acl-short.11/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-short.12.bib | @inproceedings{li-etal-2024-role-long,
title = "On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models",
author = "Li, Dongyang and
Yan, Junbing and
Zhang, Taolin and
Wang, Chengyu and
He, Xiaofeng and
Huang, Longtao and
Xue{'}, Hui and
Huang, Jun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.12",
pages = "120--126",
abstract = "Retrieval augmented generation (RAG) exhibits outstanding performance in promoting the knowledge capabilities of large language models (LLMs) with retrieved documents related to user queries. However, RAG only focuses on improving the response quality of LLMs via enhancing queries indiscriminately with retrieved information, paying little attention to what type of knowledge LLMs really need to answer original queries more accurately. In this paper, we suggest that long-tail knowledge is crucial for RAG as LLMs have already remembered common world knowledge during large-scale pre-training. Based on our observation, we propose a simple but effective long-tail knowledge detection method for LLMs. Specifically, the novel Generative Expected Calibration Error (GECE) metric is derived to measure the {``}long-tailness{''} of knowledge based on both statistics and semantics. Hence, we retrieve relevant documents and infuse them into the model for patching knowledge loopholes only when the input query relates to long-tail knowledge. Experiments show that, compared to existing RAG pipelines, our method achieves over 4x speedup in average inference time and consistent performance improvement in downstream tasks.",
}
| Retrieval augmented generation (RAG) exhibits outstanding performance in promoting the knowledge capabilities of large language models (LLMs) with retrieved documents related to user queries. However, RAG only focuses on improving the response quality of LLMs via enhancing queries indiscriminately with retrieved information, paying little attention to what type of knowledge LLMs really need to answer original queries more accurately. In this paper, we suggest that long-tail knowledge is crucial for RAG as LLMs have already remembered common world knowledge during large-scale pre-training. Based on our observation, we propose a simple but effective long-tail knowledge detection method for LLMs. Specifically, the novel Generative Expected Calibration Error (GECE) metric is derived to measure the {``}long-tailness{''} of knowledge based on both statistics and semantics. Hence, we retrieve relevant documents and infuse them into the model for patching knowledge loopholes only when the input query relates to long-tail knowledge. Experiments show that, compared to existing RAG pipelines, our method achieves over 4x speedup in average inference time and consistent performance improvement in downstream tasks. | [
"Li, Dongyang",
"Yan, Junbing",
"Zhang, Taolin",
"Wang, Chengyu",
"He, Xiaofeng",
"Huang, Longtao",
"Xue{'}, Hui",
"Huang, Jun"
] | On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models | acl-short.12 | Poster | 2406.16367 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.12/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.13.bib | @inproceedings{gui-etal-2024-iepile,
title = "{IEP}ile: Unearthing Large Scale Schema-Conditioned Information Extraction Corpus",
author = "Gui, Honghao and
Yuan, Lin and
Ye, Hongbin and
Zhang, Ningyu and
Sun, Mengshu and
Liang, Lei and
Chen, Huajun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.13",
pages = "127--146",
abstract = "Large Language Models (LLMs) demonstrate remarkable potential across various domains; however, they exhibit a significant performance gap in Information Extraction (IE). Note that high-quality instruction data is the vital key for enhancing the specific capabilities of LLMs, while current IE datasets tend to be small in scale, fragmented, and lack standardized schema. To this end, we introduce IEPile, a comprehensive bilingual (English and Chinese) IE instruction corpus, which contains approximately 0.32B tokens. We construct IEPile by collecting and cleaning 33 existing IE datasets, and introduce schema-based instruction generation to unearth a large-scale corpus. Experimentally, IEPile enhance the performance of LLMs for IE, with notable improvements in zero-shot generalization. We open-source the resource and pre-trained models, hoping to provide valuable support to the NLP community.",
}
| Large Language Models (LLMs) demonstrate remarkable potential across various domains; however, they exhibit a significant performance gap in Information Extraction (IE). Note that high-quality instruction data is the vital key for enhancing the specific capabilities of LLMs, while current IE datasets tend to be small in scale, fragmented, and lack standardized schema. To this end, we introduce IEPile, a comprehensive bilingual (English and Chinese) IE instruction corpus, which contains approximately 0.32B tokens. We construct IEPile by collecting and cleaning 33 existing IE datasets, and introduce schema-based instruction generation to unearth a large-scale corpus. Experimentally, IEPile enhances the performance of LLMs for IE, with notable improvements in zero-shot generalization. We open-source the resource and pre-trained models, hoping to provide valuable support to the NLP community. | [
"Gui, Honghao",
"Yuan, Lin",
"Ye, Hongbin",
"Zhang, Ningyu",
"Sun, Mengshu",
"Liang, Lei",
"Chen, Huajun"
] | IEPile: Unearthing Large Scale Schema-Conditioned Information Extraction Corpus | acl-short.13 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.13/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.14.bib | @inproceedings{du-etal-2024-bi,
title = "Bi-Directional Multi-Granularity Generation Framework for Knowledge Graph-to-Text with Large Language Model",
author = "Du, Haowei and
Li, Chen and
Zhang, Dinghao and
Zhao, Dongyan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.14",
pages = "147--152",
abstract = "The knowledge graph-to-text (KG-to-text) generation task aims to synthesize coherent and engaging sentences that accurately convey the complex information derived from an input knowledge graph. Existing methods generate the whole target text based on all KG triples at once and may incorporate incorrect KG triples for each sentence. To this end, we propose the bi-directional multi-granularity generation framework. Instead of generating the whole text at a time, we construct the sentence level generation based on the corresponding triples and generate the graph-level text as a result. Moreover, we design a backward relation extraction task to enhance the correctness of relational information. Our method achieves the new state-of-the-art in benchmark dataset WebNLG and further analysis shows the efficiency of different modules.",
}
| The knowledge graph-to-text (KG-to-text) generation task aims to synthesize coherent and engaging sentences that accurately convey the complex information derived from an input knowledge graph. Existing methods generate the whole target text based on all KG triples at once and may incorporate incorrect KG triples for each sentence. To this end, we propose the bi-directional multi-granularity generation framework. Instead of generating the whole text in one pass, we perform sentence-level generation based on the corresponding triples and then produce the graph-level text as a result. Moreover, we design a backward relation extraction task to enhance the correctness of relational information. Our method achieves a new state-of-the-art on the benchmark dataset WebNLG, and further analysis shows the effectiveness of different modules. | [
"Du, Haowei",
"Li, Chen",
"Zhang, Dinghao",
"Zhao, Dongyan"
] | Bi-Directional Multi-Granularity Generation Framework for Knowledge Graph-to-Text with Large Language Model | acl-short.14 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.14/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.15.bib | @inproceedings{zhu-etal-2024-code,
title = "Code-Switching Can be Better Aligners: Advancing Cross-Lingual {SLU} through Representation-Level and Prediction-Level Alignment",
author = "Zhu, Zhihong and
Cheng, Xuxin and
Chen, Zhanpeng and
Zhuang, Xianwei and
Huang, Zhiqi and
Zou, Yuexian",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.15",
pages = "153--160",
abstract = "Zero-shot cross-lingual spoken language understanding (SLU) can promote the globalization application of dialog systems, which has attracted increasing attention. While current code-switching based cross-lingual SLU frameworks have shown promising results, they (i) predominantly utilize contrastive objectives to model hard alignment, which may disrupt the inherent structure within sentences of each language; and (ii) focus optimization objectives solely on the original sentences, neglecting the relation between original sentences and code-switched sentences, which may hinder contextualized embeddings from further alignment. In this paper, we propose a novel framework dubbed REPE (short for Representation-Level and Prediction-Level Alignment), which leverages both code-switched and original sentences to achieve multi-level alignment. Specifically, REPE introduces optimal transport to facilitate soft alignment between the representations of code-switched and original sentences, thereby preserving structural integrity as much as possible. Moreover, REPE adopts multi-view learning to enforce consistency regularization between the prediction of the two sentences, aligning them into a more refined language-invariant space. Based on this, we further incorporate a self-distillation layer to boost the robustness of REPE. Extensive experiments on two benchmarks across ten languages demonstrate the superiority of the proposed REPE framework.",
}
| Zero-shot cross-lingual spoken language understanding (SLU) can promote the global application of dialog systems and has attracted increasing attention. While current code-switching-based cross-lingual SLU frameworks have shown promising results, they (i) predominantly utilize contrastive objectives to model hard alignment, which may disrupt the inherent structure within sentences of each language; and (ii) focus optimization objectives solely on the original sentences, neglecting the relation between original sentences and code-switched sentences, which may hinder contextualized embeddings from further alignment. In this paper, we propose a novel framework dubbed REPE (short for Representation-Level and Prediction-Level Alignment), which leverages both code-switched and original sentences to achieve multi-level alignment. Specifically, REPE introduces optimal transport to facilitate soft alignment between the representations of code-switched and original sentences, thereby preserving structural integrity as much as possible. Moreover, REPE adopts multi-view learning to enforce consistency regularization between the predictions of the two sentences, aligning them into a more refined language-invariant space. Based on this, we further incorporate a self-distillation layer to boost the robustness of REPE. Extensive experiments on two benchmarks across ten languages demonstrate the superiority of the proposed REPE framework. | [
"Zhu, Zhihong",
"Cheng, Xuxin",
"Chen, Zhanpeng",
"Zhuang, Xianwei",
"Huang, Zhiqi",
"Zou, Yuexian"
] | Code-Switching Can be Better Aligners: Advancing Cross-Lingual SLU through Representation-Level and Prediction-Level Alignment | acl-short.15 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.15/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.16.bib | @inproceedings{liu-etal-2024-aflora,
title = "{AFL}o{RA}: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models",
author = "Liu, Zeyu and
Kundu, Souvik and
Li, Anni and
Wan, Junrui and
Jiang, Lianghao and
Beerel, Peter",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.16",
pages = "161--167",
abstract = "We present a novel Parameter-Efficient Fine-Tuning (PEFT) method, dubbed as $\textit{Adaptive Freezing of Low-Rank Adaptation}$ (AFLoRA). Specifically, for each pre-trained frozen weight tensor, we add a parallel path of trainable low-rank matrices, namely a down-projection and an up-projection matrix, each of which is followed by a feature transformation vector. Based on a novel \textit{freezing score}, we incrementally freeze these projection matrices during fine-tuning to reduce the computation and alleviate over-fitting. Our experimental results demonstrate that we can achieve state-of-the-art performance with an average improvement of up to 0.85{\%} as evaluated on the GLUE benchmark while yielding up to $9.5\times$ fewer average trainable parameters. While compared in terms of runtime, AFLoRA can yield up to $1.86\times$ improvement as opposed to similar PEFT alternatives. Besides the practical utility of our approach, we provide insights on the trainability requirements of LoRA paths at different modules and the freezing schedule for the different projection matrices.",
}
| We present a novel Parameter-Efficient Fine-Tuning (PEFT) method, dubbed $\textit{Adaptive Freezing of Low-Rank Adaptation}$ (AFLoRA). Specifically, for each pre-trained frozen weight tensor, we add a parallel path of trainable low-rank matrices, namely a down-projection and an up-projection matrix, each of which is followed by a feature transformation vector. Based on a novel \textit{freezing score}, we incrementally freeze these projection matrices during fine-tuning to reduce the computation and alleviate over-fitting. Our experimental results demonstrate that we can achieve state-of-the-art performance with an average improvement of up to 0.85{\%} as evaluated on the GLUE benchmark while yielding up to $9.5\times$ fewer average trainable parameters. When compared in terms of runtime, AFLoRA can yield up to $1.86\times$ improvement over similar PEFT alternatives. Besides the practical utility of our approach, we provide insights into the trainability requirements of LoRA paths at different modules and the freezing schedule for the different projection matrices. | [
"Liu, Zeyu",
"Kundu, Souvik",
"Li, Anni",
"Wan, Junrui",
"Jiang, Lianghao",
"Beerel, Peter"
] | AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models | acl-short.16 | Oral | 2403.13269 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.16/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.17.bib | @inproceedings{mu-etal-2024-ddprompt,
title = "{DDP}rompt: Differential Diversity Prompting in Large Language Models",
author = "Mu, Lin and
Zhang, Wenhao and
Zhang, Yiwen and
Jin, Peiquan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.17",
pages = "168--174",
abstract = "Large Language Models (LLMs) have shown that their reasoning ability could be enhanced through approaches like Chain-of-Thought (CoT) prompting. However, these methods use single prompts for different types of questions and do not design appropriate prompts for questions with different characteristics. In this paper, we aim to explore a methodology that generates differentially diverse reasoning paths for different types of questions. To achieve this, we propose a novel prompting strategy called Differential Diversity Prompting (DDPrompt). Firstly, we generate the optimal prompts collection based on question characteristics. Then, we use this optimal prompts collection to generate multiple answers for a question and choose the final answer by voting. We evaluated DDPrompt on twelve reasoning benchmarks and significant improvement in the performance of LLMs on complex reasoning tasks (e.g., GSM8K 75{\%}-{\textgreater}84{\%}, Tracking Shuffled Objects (68.8{\%}-{\textgreater}83.9{\%}))",
}
| Large Language Models (LLMs) have shown that their reasoning ability could be enhanced through approaches like Chain-of-Thought (CoT) prompting. However, these methods use single prompts for different types of questions and do not design appropriate prompts for questions with different characteristics. In this paper, we aim to explore a methodology that generates differentially diverse reasoning paths for different types of questions. To achieve this, we propose a novel prompting strategy called Differential Diversity Prompting (DDPrompt). First, we generate an optimal prompt collection based on question characteristics. Then, we use this optimal prompt collection to generate multiple answers for a question and choose the final answer by voting. We evaluated DDPrompt on twelve reasoning benchmarks and observed significant improvements in the performance of LLMs on complex reasoning tasks (e.g., GSM8K 75{\%}-{\textgreater}84{\%}, Tracking Shuffled Objects 68.8{\%}-{\textgreater}83.9{\%}). | [
"Mu, Lin",
"Zhang, Wenhao",
"Zhang, Yiwen",
"Jin, Peiquan"
] | DDPrompt: Differential Diversity Prompting in Large Language Models | acl-short.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.17/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.18.bib | @inproceedings{heinzerling-inui-2024-monotonic,
title = "Monotonic Representation of Numeric Attributes in Language Models",
author = "Heinzerling, Benjamin and
Inui, Kentaro",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.18",
pages = "175--195",
abstract = "Language models (LMs) can express factual knowledge involving numeric properties such as Karl Popper was born in 1902. However, how this information is encoded in the model{'}s internal representations is not understood well. Here, we introduce a method for finding and editing representations of numeric properties such as an entity{'}s birth year. We find directions that encode numeric properties monotonically, in an interpretable fashion. When editing representations along these directions, LM output changes accordingly. For example, by patching activations along a {``}birthyear{''} direction we can make the LM express an increasingly late birthyear. Property-encoding directions exist across several numeric properties in all models under consideration, suggesting the possibility that monotonic representation of numeric properties consistently emerges during LM pretraining.Code: https://github.com/bheinzerling/numeric-property-reprA long version of this short paper is available at: https://arxiv.org/abs/2403.10381",
}
| Language models (LMs) can express factual knowledge involving numeric properties such as Karl Popper was born in 1902. However, how this information is encoded in the model{'}s internal representations is not well understood. Here, we introduce a method for finding and editing representations of numeric properties such as an entity{'}s birth year. We find directions that encode numeric properties monotonically, in an interpretable fashion. When editing representations along these directions, LM output changes accordingly. For example, by patching activations along a {``}birthyear{''} direction we can make the LM express an increasingly late birthyear. Property-encoding directions exist across several numeric properties in all models under consideration, suggesting the possibility that monotonic representation of numeric properties consistently emerges during LM pretraining. Code: https://github.com/bheinzerling/numeric-property-repr. A long version of this short paper is available at: https://arxiv.org/abs/2403.10381 | [
"Heinzerling, Benjamin",
"Inui, Kentaro"
] | Monotonic Representation of Numeric Attributes in Language Models | acl-short.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.18/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.19.bib | @inproceedings{sun-etal-2024-two,
title = "Two Issues with {C}hinese Spelling Correction and A Refinement Solution",
author = "Sun, Changxuan and
She, Linlin and
Lu, Xuesong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.19",
pages = "196--204",
abstract = "The Chinese Spelling Correction (CSC) task aims to detect and correct misspelled characters in Chinese text, and has received lots of attention in the past few years. Most recent studies adopt a Transformer-based model and leverage different features of characters such as pronunciation, glyph and contextual information to enhance the model{'}s ability to complete the task. Despite their state-of-the-art performance, we observe two issues that should be addressed to further advance the CSC task. First, the widely-used benchmark datasets SIGHAN13, SIGHAN14 and SIGHAN15, contain many mistakes. Hence the performance of existing models is not accurate and should be re-evaluated. Second, existing models seem to have reached a performance bottleneck, where the improvements on the SIGHAN{'}s testing sets are increasingly smaller and unstable. To deal with the two issues, we make two contributions: (1) we manually fix the SIGHAN datasets and re-evaluate four representative CSC models using the fixed datasets; (2) we analyze the new results to identify the spelling errors that none of the four models successfully corrects, based on which we propose a simple yet effective refinement solution. Experimental results show that our solution improves the four models in all metrics by notable margins.",
}
| The Chinese Spelling Correction (CSC) task aims to detect and correct misspelled characters in Chinese text, and has received much attention in the past few years. Most recent studies adopt a Transformer-based model and leverage different features of characters such as pronunciation, glyph, and contextual information to enhance the model{'}s ability to complete the task. Despite their state-of-the-art performance, we observe two issues that should be addressed to further advance the CSC task. First, the widely used benchmark datasets SIGHAN13, SIGHAN14, and SIGHAN15 contain many mistakes. Hence, the reported performance of existing models is not accurate and should be re-evaluated. Second, existing models seem to have reached a performance bottleneck, where the improvements on the SIGHAN testing sets are increasingly smaller and unstable. To deal with the two issues, we make two contributions: (1) we manually fix the SIGHAN datasets and re-evaluate four representative CSC models using the fixed datasets; (2) we analyze the new results to identify the spelling errors that none of the four models successfully corrects, based on which we propose a simple yet effective refinement solution. Experimental results show that our solution improves the four models in all metrics by notable margins. | [
"Sun, Changxuan",
"She, Linlin",
"Lu, Xuesong"
] | Two Issues with Chinese Spelling Correction and A Refinement Solution | acl-short.19 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.19/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.20.bib | @inproceedings{nandi-etal-2024-dynasemble,
title = "{D}yna{S}emble: Dynamic Ensembling of Textual and Structure-Based Models for Knowledge Graph Completion",
author = "Nandi, Ananjan and
Kaur, Navdeep and
Singla, Parag and
., Mausam",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.20",
pages = "205--216",
abstract = "We consider two popular approaches to KnowledgeGraph Completion (KGC): textual modelsthat rely on textual entity descriptions, andstructure-based models that exploit the connectivitystructure of the Knowledge Graph(KG). Preliminary experiments show that theseapproaches have complementary strengths:structure-based models perform exceptionallywell when the gold answer is easily reachablefrom the query head in the KG, while textualmodels exploit descriptions to give goodperformance even when the gold answer isnot easily reachable. In response, we proposeDynaSemble, a novel method for learningquery-dependent ensemble weights to combinethese approaches by using the distributions ofscores assigned by the models in the ensembleto all candidate entities. DynaSemble achievesstate-of-the-art results on three standard KGCdatasets, with up to 6.8 pt MRR and 8.3 ptHits@1 gains over the best baseline model forthe WN18RR dataset.",
}
| We consider two popular approaches to Knowledge Graph Completion (KGC): textual models that rely on textual entity descriptions, and structure-based models that exploit the connectivity structure of the Knowledge Graph (KG). Preliminary experiments show that these approaches have complementary strengths: structure-based models perform exceptionally well when the gold answer is easily reachable from the query head in the KG, while textual models exploit descriptions to give good performance even when the gold answer is not easily reachable. In response, we propose DynaSemble, a novel method for learning query-dependent ensemble weights to combine these approaches by using the distributions of scores assigned by the models in the ensemble to all candidate entities. DynaSemble achieves state-of-the-art results on three standard KGC datasets, with up to 6.8 pt MRR and 8.3 pt Hits@1 gains over the best baseline model for the WN18RR dataset. | [
"N",
"i, Ananjan",
"Kaur, Navdeep",
"Singla, Parag",
"., Mausam"
] | DynaSemble: Dynamic Ensembling of Textual and Structure-Based Models for Knowledge Graph Completion | acl-short.20 | Poster | 2311.03780 | [
"https://github.com/dair-iitd/KGC-Ensemble"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.20/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.21.bib | @inproceedings{deng-etal-2024-fine,
title = "Fine-Tuning Pre-Trained Language Models with Gaze Supervision",
author = {Deng, Shuwen and
Prasse, Paul and
Reich, David and
Scheffer, Tobias and
J{\"a}ger, Lena},
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.21",
pages = "217--224",
abstract = "Human gaze data provide cognitive information that reflect human language comprehension and has been effectively integrated into a variety of natural language processing (NLP) tasks, demonstrating improved performance over corresponding plain text-based models. In this work, we propose to integrate a gaze module into pre-trained language models (LMs) at the fine-tuning stage to improve their capabilities to learn representations that are grounded in human language processing. This is done by extending the conventional purely text-based fine-tuning objective with an auxiliary loss to exploit cognitive signals. The gaze module is only included during training, retaining compatibility with existing pre-trained LM-based pipelines. We evaluate the proposed approach using two distinct pre-trained LMs on the GLUE benchmark and observe that the proposed model improves performance compared to both standard fine-tuning and traditional text augmentation baselines.",
}
| Human gaze data provide cognitive information that reflects human language comprehension and have been effectively integrated into a variety of natural language processing (NLP) tasks, demonstrating improved performance over corresponding plain text-based models. In this work, we propose to integrate a gaze module into pre-trained language models (LMs) at the fine-tuning stage to improve their capabilities to learn representations that are grounded in human language processing. This is done by extending the conventional purely text-based fine-tuning objective with an auxiliary loss to exploit cognitive signals. The gaze module is only included during training, retaining compatibility with existing pre-trained LM-based pipelines. We evaluate the proposed approach using two distinct pre-trained LMs on the GLUE benchmark and observe that the proposed model improves performance compared to both standard fine-tuning and traditional text augmentation baselines. | [
"Deng, Shuwen",
"Prasse, Paul",
"Reich, David",
"Scheffer, Tobias",
"J{\\\"a}ger, Lena"
] | Fine-Tuning Pre-Trained Language Models with Gaze Supervision | acl-short.21 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.21/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.22.bib | @inproceedings{pupier-etal-2024-growing,
title = "Growing Trees on Sounds: Assessing Strategies for End-to-End Dependency Parsing of Speech",
author = "Pupier, Adrien and
Coavoux, Maximin and
Goulian, J{\'e}r{\^o}me and
Lecouteux, Benjamin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.22",
pages = "225--233",
abstract = "Direct dependency parsing of the speech signal {--}as opposed to parsing speech transcriptions{--} has recently been proposed as a task (Pupier et al. 2022), as a way of incorporating prosodic information in the parsing system and bypassing the limitations of a pipeline approach that would consist of using first an Automatic Speech Recognition (ASR) system and then a syntactic parser. In this article, we report on a set of experiments aiming at assessing the performance of two parsing paradigms (graph-based parsing and sequence labeling based parsing) on speech parsing. We perform this evaluation on a large treebank of spoken French, featuring realistic spontaneous conversations. Our findings show that (i) the graph based approach obtain better results across the board (ii) parsing directly from speech outperforms a pipeline approach, despite having 30{\%} fewer parameters.",
}
| Direct dependency parsing of the speech signal {--}as opposed to parsing speech transcriptions{--} has recently been proposed as a task (Pupier et al. 2022), as a way of incorporating prosodic information into the parsing system and bypassing the limitations of a pipeline approach that would consist of first using an Automatic Speech Recognition (ASR) system and then a syntactic parser. In this article, we report on a set of experiments aiming at assessing the performance of two parsing paradigms (graph-based parsing and sequence-labeling-based parsing) on speech parsing. We perform this evaluation on a large treebank of spoken French, featuring realistic spontaneous conversations. Our findings show that (i) the graph-based approach obtains better results across the board, and (ii) parsing directly from speech outperforms a pipeline approach, despite having 30{\%} fewer parameters. | [
"Pupier, Adrien",
"Coavoux, Maximin",
"Goulian, J{\\'e}r{\\^o}me",
"Lecouteux, Benjamin"
] | Growing Trees on Sounds: Assessing Strategies for End-to-End Dependency Parsing of Speech | acl-short.22 | Poster | 2406.12621 | [
"https://github.com/Pupiera/Growing_tree_on_sound"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.22/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.23.bib | @inproceedings{geng-etal-2024-sketch,
title = "Sketch-Guided Constrained Decoding for Boosting Blackbox Large Language Models without Logit Access",
author = {Geng, Saibo and
D{\"o}ner, Berkay and
Wendler, Chris and
Josifoski, Martin and
West, Robert},
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.23",
pages = "234--245",
abstract = "Constrained decoding, a technique for enforcing constraints on language model outputs, offers a way to control text generation without retraining or architectural modifications. Its application is, however, typically restricted to models that give users access to next-token distributions (usually via softmax logits), which poses a limitation with blackbox large language models (LLMs). This paper introduces sketch-guided constrained decoding (SketchGCD), a novel approach to constrained decoding for blackbox LLMs, which operates without access to the logits of the blackbox LLM. SketchGCD utilizes a locally hosted auxiliary model to refine the output of an unconstrained blackbox LLM, effectively treating this initial output as a {``}sketch{''} for further elaboration. This approach is complementary to traditional logit-based techniques and enables the application of constrained decoding in settings where full model transparency is unavailable. We demonstrate the efficacy of SketchGCD through experiments in closed information extraction and constituency parsing, showing how it enhances the utility and flexibility of blackbox LLMs for complex NLP tasks.",
}
| Constrained decoding, a technique for enforcing constraints on language model outputs, offers a way to control text generation without retraining or architectural modifications. Its application is, however, typically restricted to models that give users access to next-token distributions (usually via softmax logits), which poses a limitation with blackbox large language models (LLMs). This paper introduces sketch-guided constrained decoding (SketchGCD), a novel approach to constrained decoding for blackbox LLMs, which operates without access to the logits of the blackbox LLM. SketchGCD utilizes a locally hosted auxiliary model to refine the output of an unconstrained blackbox LLM, effectively treating this initial output as a {``}sketch{''} for further elaboration. This approach is complementary to traditional logit-based techniques and enables the application of constrained decoding in settings where full model transparency is unavailable. We demonstrate the efficacy of SketchGCD through experiments in closed information extraction and constituency parsing, showing how it enhances the utility and flexibility of blackbox LLMs for complex NLP tasks. | [
"Geng, Saibo",
"D{\\\"o}ner, Berkay",
"Wendler, Chris",
"Josifoski, Martin",
"West, Robert"
] | Sketch-Guided Constrained Decoding for Boosting Blackbox Large Language Models without Logit Access | acl-short.23 | Oral | 2401.09967 | [
"https://github.com/epfl-dlab/sketchgcd"
] | https://huggingface.co/papers/2401.09967 | 0 | 1 | 0 | 5 | https://aclanthology.org/2024.acl-short.23/ | [] | [
"Saibo-creator/SketchGCD-KG"
] | [] | 1 |
https://aclanthology.org/2024.acl-short.24.bib | @inproceedings{varshavsky-hassid-etal-2024-semantic,
title = "On the Semantic Latent Space of Diffusion-Based Text-To-Speech Models",
author = "Varshavsky-Hassid, Miri and
Hirsch, Roy and
Cohen, Regev and
Golany, Tomer and
Freedman, Daniel and
Rivlin, Ehud",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.24",
pages = "246--255",
abstract = "The incorporation of Denoising Diffusion Models (DDMs) in the Text-to-Speech (TTS) domain is rising, providing great value in synthesizing high quality speech. Although they exhibit impressive audio quality, the extent of their semantic capabilities is unknown, and controlling their synthesized speech{'}s vocal properties remains a challenge. Inspired by recent advances in image synthesis, we explore the latent space of frozen TTS models, which is composed of the latent bottleneck activations of the DDM{'}s denoiser. We identify that this space contains rich semantic information, and outline several novel methods for finding semantic directions within it, both supervised and unsupervised. We then demonstrate how these enable off-the-shelf audio editing, without any further training, architectural changes or data requirements. We present evidence of the semantic and acoustic qualities of the edited audio, and provide supplemental samples: https://latent-analysis-grad-tts.github.io/speech-samples/.",
}
| The incorporation of Denoising Diffusion Models (DDMs) in the Text-to-Speech (TTS) domain is rising, providing great value in synthesizing high quality speech. Although they exhibit impressive audio quality, the extent of their semantic capabilities is unknown, and controlling their synthesized speech{'}s vocal properties remains a challenge. Inspired by recent advances in image synthesis, we explore the latent space of frozen TTS models, which is composed of the latent bottleneck activations of the DDM{'}s denoiser. We identify that this space contains rich semantic information, and outline several novel methods for finding semantic directions within it, both supervised and unsupervised. We then demonstrate how these enable off-the-shelf audio editing, without any further training, architectural changes or data requirements. We present evidence of the semantic and acoustic qualities of the edited audio, and provide supplemental samples: https://latent-analysis-grad-tts.github.io/speech-samples/. | [
"Varshavsky-Hassid, Miri",
"Hirsch, Roy",
"Cohen, Regev",
"Golany, Tomer",
"Freedman, Daniel",
"Rivlin, Ehud"
] | On the Semantic Latent Space of Diffusion-Based Text-To-Speech Models | acl-short.24 | Poster | 2402.12423 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.24/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.25.bib | @inproceedings{chen-etal-2024-learnable,
title = "Learnable Privacy Neurons Localization in Language Models",
author = "Chen, Ruizhe and
Hu, Tianxiang and
Feng, Yang and
Liu, Zuozhu",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.25",
pages = "256--264",
abstract = "Concerns regarding Large Language Models (LLMs) to memorize and disclose private information, particularly Personally Identifiable Information (PII), become prominent within the community. Many efforts have been made to mitigate the privacy risks.However, the mechanism through which LLMs memorize PII remains poorly understood. To bridge this gap, we introduce a pioneering method for pinpointing PII-sensitive neurons (privacy neurons) within LLMs. Our method employs learnable binary weight masks to localize specific neurons that account for the memorization of PII in LLMs through adversarial training. Our investigations discover that PII is memorized by a small subset of neurons across all layers, which shows the property of PII specificity. Furthermore, we propose to validate the potential in PII risk mitigation by deactivating the localized privacy neurons. Both quantitative and qualitative experiments demonstrate the effectiveness of our neuron localization algorithm.",
}
| Concerns that Large Language Models (LLMs) memorize and disclose private information, particularly Personally Identifiable Information (PII), have become prominent within the community. Many efforts have been made to mitigate these privacy risks. However, the mechanism through which LLMs memorize PII remains poorly understood. To bridge this gap, we introduce a pioneering method for pinpointing PII-sensitive neurons (privacy neurons) within LLMs. Our method employs learnable binary weight masks to localize specific neurons that account for the memorization of PII in LLMs through adversarial training. Our investigations discover that PII is memorized by a small subset of neurons across all layers, which shows the property of PII specificity. Furthermore, we validate the potential for PII risk mitigation by deactivating the localized privacy neurons. Both quantitative and qualitative experiments demonstrate the effectiveness of our neuron localization algorithm. | [
"Chen, Ruizhe",
"Hu, Tianxiang",
"Feng, Yang",
"Liu, Zuozhu"
] | Learnable Privacy Neurons Localization in Language Models | acl-short.25 | Poster | 2405.10989 | [
"https://github.com/richhh520/learnable-privacy-neurons-localization"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.25/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.26.bib | @inproceedings{yerukola-etal-2024-pope,
title = "Is the Pope Catholic? Yes, the Pope is Catholic. Generative Evaluation of Non-Literal Intent Resolution in {LLM}s",
author = "Yerukola, Akhila and
Vaduguru, Saujas and
Fried, Daniel and
Sap, Maarten",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.26",
pages = "265--275",
abstract = "Humans often express their communicative intents indirectly or non-literally, which requires their interlocutors{---}human or AI{---}to understand beyond the literal meaning of words. While most existing work has focused on discriminative evaluations, we present a new approach to generatively evaluate large language models{'} (LLMs{'}) intention understanding by examining their responses to non-literal utterances. Ideally, an LLM should respond in line with the true intention of a non-literal utterance, not its literal interpretation. Our findings show that LLMs struggle to generate contextually relevant responses to non-literal language. We also find that providing oracle intentions substantially improves response appropriateness, but using chain-of-thought to make models spell out intentions before responding improves much less. These findings suggest that LLMs are not yet pragmatic interlocutors, and that explicitly modeling intention could improve LLM responses to non-literal language.",
}
| Humans often express their communicative intents indirectly or non-literally, which requires their interlocutors{---}human or AI{---}to understand beyond the literal meaning of words. While most existing work has focused on discriminative evaluations, we present a new approach to generatively evaluate large language models{'} (LLMs{'}) intention understanding by examining their responses to non-literal utterances. Ideally, an LLM should respond in line with the true intention of a non-literal utterance, not its literal interpretation. Our findings show that LLMs struggle to generate contextually relevant responses to non-literal language. We also find that providing oracle intentions substantially improves response appropriateness, but using chain-of-thought to make models spell out intentions before responding improves much less. These findings suggest that LLMs are not yet pragmatic interlocutors, and that explicitly modeling intention could improve LLM responses to non-literal language. | [
"Yerukola, Akhila",
"Vaduguru, Saujas",
"Fried, Daniel",
"Sap, Maarten"
] | Is the Pope Catholic? Yes, the Pope is Catholic. Generative Evaluation of Non-Literal Intent Resolution in LLMs | acl-short.26 | Poster | 2405.08760 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.26/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-short.27.bib | @inproceedings{ahmed-etal-2024-generating,
title = "Generating Harder Cross-document Event Coreference Resolution Datasets using Metaphoric Paraphrasing",
author = "Ahmed, Shafiuddin Rehan and
Wang, Zhiyong and
Baker, George and
Stowe, Kevin and
Martin, James",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.27",
pages = "276--286",
abstract = "The most popular Cross-Document Event Coreference Resolution (CDEC) datasets fail to convey the true difficulty of the task, due to the lack of lexical diversity between coreferring event triggers (words or phrases that refer to an event). Furthermore, there is a dearth of event datasets for figurative language, limiting a crucial avenue of research in event comprehension. We address these two issues by introducing ECB+META, a lexically rich variant of Event Coref Bank Plus (ECB+) for CDEC on symbolic and metaphoric language. We use ChatGPT as a tool for the metaphoric transformation of sentences in the documents of ECB+, then tag the original event triggers in the transformed sentences in a semi-automated manner. In this way, we avoid the re-annotation of expensive coreference links. We present results that show existing methods that work well on ECB+ struggle with ECB+META, thereby paving the way for CDEC research on a much more challenging dataset. Code/data: https://github.com/ahmeshaf/llms{\_}coref",
}
| The most popular Cross-Document Event Coreference Resolution (CDEC) datasets fail to convey the true difficulty of the task, due to the lack of lexical diversity between coreferring event triggers (words or phrases that refer to an event). Furthermore, there is a dearth of event datasets for figurative language, limiting a crucial avenue of research in event comprehension. We address these two issues by introducing ECB+META, a lexically rich variant of Event Coref Bank Plus (ECB+) for CDEC on symbolic and metaphoric language. We use ChatGPT as a tool for the metaphoric transformation of sentences in the documents of ECB+, then tag the original event triggers in the transformed sentences in a semi-automated manner. In this way, we avoid the re-annotation of expensive coreference links. We present results that show existing methods that work well on ECB+ struggle with ECB+META, thereby paving the way for CDEC research on a much more challenging dataset. Code/data: https://github.com/ahmeshaf/llms{\_}coref | [
"Ahmed, Shafiuddin Rehan",
"Wang, Zhiyong",
"Baker, George",
"Stowe, Kevin",
"Martin, James"
] | Generating Harder Cross-document Event Coreference Resolution Datasets using Metaphoric Paraphrasing | acl-short.27 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.27/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.28.bib | @inproceedings{wang-etal-2024-soft,
title = "Soft Self-Consistency Improves Language Models Agents",
author = "Wang, Han and
Prasad, Archiki and
Stengel-Eskin, Elias and
Bansal, Mohit",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.28",
pages = "287--301",
abstract = "Generations from large language models (LLMs) can be improved by sampling and scoring multiple solutions to select a final answer. Current {``}sample and select{''} methods such as self-consistency (SC) rely on majority voting to score answers. However, when tasks have many distinct and valid answers, selection by voting requires a large number of samples. This makes SC prohibitively expensive for interactive tasks that involve generating multiple actions (answers) sequentially. After establishing that majority voting fails to provide consistent gains on such tasks, we demonstrate how to increase success rates by softening the scoring criterion. We introduce Soft Self-Consistency (SOFT-SC), which replaces SC{'}s discontinuous scoring with a continuous score computed from model likelihoods, allowing for selection even when actions are sparsely distributed. SOFT-SC improves both performance and efficiency on long-horizon interactive tasks, requiring half as many samples as SC for comparable or better performance. For a fixed number of samples, SOFT-SC leads to a 1.3{\%} increase over SC in absolute success rate on writing bash programs, a 6.6{\%} increase on online shopping (WebShop), and a 4.7{\%} increase for an interactive household game (ALFWorld). Finally, we show that SOFT-SC can be applied to both open-source and black-box models.",
}
| Generations from large language models (LLMs) can be improved by sampling and scoring multiple solutions to select a final answer. Current {``}sample and select{''} methods such as self-consistency (SC) rely on majority voting to score answers. However, when tasks have many distinct and valid answers, selection by voting requires a large number of samples. This makes SC prohibitively expensive for interactive tasks that involve generating multiple actions (answers) sequentially. After establishing that majority voting fails to provide consistent gains on such tasks, we demonstrate how to increase success rates by softening the scoring criterion. We introduce Soft Self-Consistency (SOFT-SC), which replaces SC{'}s discontinuous scoring with a continuous score computed from model likelihoods, allowing for selection even when actions are sparsely distributed. SOFT-SC improves both performance and efficiency on long-horizon interactive tasks, requiring half as many samples as SC for comparable or better performance. For a fixed number of samples, SOFT-SC leads to a 1.3{\%} increase over SC in absolute success rate on writing bash programs, a 6.6{\%} increase on online shopping (WebShop), and a 4.7{\%} increase for an interactive household game (ALFWorld). Finally, we show that SOFT-SC can be applied to both open-source and black-box models. | [
"Wang, Han",
"Prasad, Archiki",
"Stengel-Eskin, Elias",
"Bansal, Mohit"
] | Soft Self-Consistency Improves Language Models Agents | acl-short.28 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.28/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-short.29.bib | @inproceedings{ngo-nguyen-2024-recgpt,
title = "{R}ec{GPT}: Generative Pre-training for Text-based Recommendation",
author = "Ngo, Hoang and
Nguyen, Dat Quoc",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.29",
pages = "302--313",
abstract = "We present the first domain-adapted and fully-trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public {``}huggingface{''} links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT",
}
| We present the first domain-adapted and fully-trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public {``}huggingface{''} links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT | [
"Ngo, Hoang",
"Nguyen, Dat Quoc"
] | RecGPT: Generative Pre-training for Text-based Recommendation | acl-short.29 | Poster | 2405.12715 | [
"https://github.com/vinairesearch/recgpt"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-short.29/ | [] | [] | [] | 0 |