bibtex_url | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | proceedings | Models | Datasets | Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.acl-srw.24.bib | @inproceedings{lin-etal-2024-exploring,
title = "Exploring the Effectiveness and Consistency of Task Selection in Intermediate-Task Transfer Learning",
author = "Lin, Pin-Jie and
Zhang, Miaoran and
Mosbach, Marius and
Klakow, Dietrich",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.24",
pages = "264--279",
abstract = "Identifying beneficial tasks to transfer from is a critical step toward successful intermediate-task transfer learning. In this work, we experiment with 130 source-target task combinations and demonstrate that the transfer performance exhibits severe variance across different source tasks and training seeds, highlighting the crucial role of intermediate-task selection in a broader context. We compare four representative task selection methods in a unified setup, focusing on their effectiveness and consistency. Compared to embedding-free methods and text embeddings, task embeddings constructed from fine-tuned weights can better estimate task transferability by improving task prediction scores from 2.59{\%} to 3.96{\%}. Despite their strong performance, we observe that the task embeddings do not consistently demonstrate superiority for tasks requiring reasoning abilities. Furthermore, we introduce a novel method that measures pairwise token similarity using maximum inner product search, leading to the highest performance in task prediction. Our findings suggest that token-wise similarity is better predictive for predicting transferability compared to averaging weights.",
}
| Identifying beneficial tasks to transfer from is a critical step toward successful intermediate-task transfer learning. In this work, we experiment with 130 source-target task combinations and demonstrate that the transfer performance exhibits severe variance across different source tasks and training seeds, highlighting the crucial role of intermediate-task selection in a broader context. We compare four representative task selection methods in a unified setup, focusing on their effectiveness and consistency. Compared to embedding-free methods and text embeddings, task embeddings constructed from fine-tuned weights can better estimate task transferability by improving task prediction scores from 2.59% to 3.96%. Despite their strong performance, we observe that the task embeddings do not consistently demonstrate superiority for tasks requiring reasoning abilities. Furthermore, we introduce a novel method that measures pairwise token similarity using maximum inner product search, leading to the highest performance in task prediction. Our findings suggest that token-wise similarity is more predictive of transferability than averaging weights. | [
"Lin, Pin-Jie",
"Zhang, Miaoran",
"Mosbach, Marius",
"Klakow, Dietrich"
] | Exploring the Effectiveness and Consistency of Task Selection in Intermediate-Task Transfer Learning | acl-srw.24 | Poster | 2407.16245 | [
"https://github.com/uds-lsv/intermediate-task-selection"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.24/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-srw.25.bib | @inproceedings{sauvage-etal-2024-structure,
title = "Does the structure of textual content have an impact on language models for automatic summarization?",
author = "Sauvage, Eve and
Campano, Sabrina and
Ouali, Lydia and
Grouin, Cyril",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.25",
pages = "280--285",
abstract = "The processing of long sequences with models remains a subject in its own right, including automatic summary, despite recent improvements. In this work, we present experiments on the automatic summarization of scientific articles using BART models, taking into account textual information coming from distinct passages from the long texts to be summarized. We demonstrate that taking into account document structure improves the performance of state-of-the-art models and approaches the performance of LongFormer on English.",
}
| The processing of long sequences with models remains a subject in its own right, including automatic summarization, despite recent improvements. In this work, we present experiments on the automatic summarization of scientific articles using BART models, taking into account textual information coming from distinct passages of the long texts to be summarized. We demonstrate that taking into account document structure improves the performance of state-of-the-art models and approaches the performance of LongFormer on English. | [
"Sauvage, Eve",
"Campano, Sabrina",
"Ouali, Lydia",
"Grouin, Cyril"
] | Does the structure of textual content have an impact on language models for automatic summarization? | acl-srw.25 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.25/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.26.bib | @inproceedings{kondapally-etal-2024-action,
title = "Action Inference for Destination Prediction in Vision-and-Language Navigation",
author = "Kondapally, Anirudh and
Yamada, Kentaro and
Yanaka, Hitomi",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.26",
pages = "286--293",
abstract = "Vision-and-Language Navigation (VLN) encompasses interacting with autonomous vehicles using language and visual input from the perspective of mobility.Most of the previous work in this field focuses on spatial reasoning and the semantic grounding of visual information.However, reasoning based on the actions of pedestrians in the scene is not much considered.In this study, we provide a VLN dataset for destination prediction with action inference to investigate the extent to which current VLN models perform action inference.We introduce a crowd-sourcing process to construct a dataset for this task in two steps: (1) collecting beliefs about the next action for a pedestrian and (2) annotating the destination considering the pedestrian{'}s next action.Our benchmarking results of the models on destination prediction lead us to believe that the models can learn to reason about the effect of the action and the next action on the destination to a certain extent.However, there is still much scope for improvement.",
}
| Vision-and-Language Navigation (VLN) encompasses interacting with autonomous vehicles using language and visual input from the perspective of mobility. Most of the previous work in this field focuses on spatial reasoning and the semantic grounding of visual information. However, reasoning based on the actions of pedestrians in the scene is not much considered. In this study, we provide a VLN dataset for destination prediction with action inference to investigate the extent to which current VLN models perform action inference. We introduce a crowd-sourcing process to construct a dataset for this task in two steps: (1) collecting beliefs about the next action for a pedestrian and (2) annotating the destination considering the pedestrian's next action. Our benchmarking results of the models on destination prediction lead us to believe that the models can learn to reason about the effect of the action and the next action on the destination to a certain extent. However, there is still much scope for improvement. | [
"Kondapally, Anirudh",
"Yamada, Kentaro",
"Yanaka, Hitomi"
] | Action Inference for Destination Prediction in Vision-and-Language Navigation | acl-srw.26 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.26/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.27.bib | @inproceedings{zurbuchen-voigt-2024-computational,
title = "A Computational Analysis and Exploration of Linguistic Borrowings in {F}rench Rap Lyrics",
author = "Zurbuchen, Lucas and
Voigt, Rob",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.27",
pages = "294--302",
abstract = "In France, linguistic borrowings in the relatively conservative French language are an important site of cultural debate, and rap in particular is a hotspot for borrowings. In this work, we use computational methods to understand the factors that affect the prominence and prevalence of a borrowing. To do so, we manually annotate a lexicon of over 700 borrowings occurring in this context (including key aspects for each borrowing such as origin and semantic class). We analyze the prevalence of these borrowings in a newly collected corpus of over 8000 French rap song lyrics and find that there are increases in the proportion of linguistic borrowings, interjections, and Niger-Congo borrowings while terms related to the arts are decreasing in prevalence. We release our code and data to facilitate further research in this area and discuss potential future directions.",
}
| In France, linguistic borrowings in the relatively conservative French language are an important site of cultural debate, and rap in particular is a hotspot for borrowings. In this work, we use computational methods to understand the factors that affect the prominence and prevalence of a borrowing. To do so, we manually annotate a lexicon of over 700 borrowings occurring in this context (including key aspects for each borrowing such as origin and semantic class). We analyze the prevalence of these borrowings in a newly collected corpus of over 8000 French rap song lyrics and find that there are increases in the proportion of linguistic borrowings, interjections, and Niger-Congo borrowings while terms related to the arts are decreasing in prevalence. We release our code and data to facilitate further research in this area and discuss potential future directions. | [
"Zurbuchen, Lucas",
"Voigt, Rob"
] | A Computational Analysis and Exploration of Linguistic Borrowings in French Rap Lyrics | acl-srw.27 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.27/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.28.bib | @inproceedings{strich-etal-2024-improving,
title = "On Improving Repository-Level Code {QA} for Large Language Models",
author = "Strich, Jan and
Schneider, Florian and
Nikishina, Irina and
Biemann, Chris",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.28",
pages = "303--338",
abstract = "Large Language Models (LLMs) such as ChatGPT, GitHub Copilot, Llama, or Mistral assist programmers as copilots and knowledge sources to make the coding process faster and more efficient. This paper aims to improve the copilot performance by implementing different self-alignment processes and retrieval-augmented generation (RAG) pipelines, as well as their combination. To test the effectiveness of all approaches, we create a dataset and apply a model-based evaluation, using LLM as a judge. It is designed to check the model{'}s abilities to understand the source code semantics, the dependency between files, and the overall meta-information about the repository. We also compare our approach with other existing solutions, e.g. ChatGPT-3.5, and evaluate on the existing benchmarks. Code and dataset are available online (https://anonymous.4open.science/r/ma{\_}llm-382D).",
}
| Large Language Models (LLMs) such as ChatGPT, GitHub Copilot, Llama, or Mistral assist programmers as copilots and knowledge sources to make the coding process faster and more efficient. This paper aims to improve the copilot performance by implementing different self-alignment processes and retrieval-augmented generation (RAG) pipelines, as well as their combination. To test the effectiveness of all approaches, we create a dataset and apply a model-based evaluation, using LLM as a judge. It is designed to check the model's abilities to understand the source code semantics, the dependency between files, and the overall meta-information about the repository. We also compare our approach with other existing solutions, e.g. ChatGPT-3.5, and evaluate on the existing benchmarks. Code and dataset are available online (https://anonymous.4open.science/r/ma_llm-382D). | [
"Strich, Jan",
"Schneider, Florian",
"Nikishina, Irina",
"Biemann, Chris"
] | On Improving Repository-Level Code QA for Large Language Models | acl-srw.28 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.28/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.29.bib | @inproceedings{pernisi-etal-2024-compromesso,
title = "Compromesso! {I}talian Many-Shot Jailbreaks undermine the safety of Large Language Models",
author = "Pernisi, Fabio and
Hovy, Dirk and
Röttger, Paul",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.29",
pages = "339--345",
abstract = "As diverse linguistic communities and users adopt Large Language Models (LLMs), assessing their safety across languages becomes critical. Despite ongoing efforts to align these models with safe and ethical guidelines, they can still be induced into unsafe behavior with jailbreaking, a technique in which models are prompted to act outside their operational guidelines. What research has been conducted on these vulnerabilities was predominantly on English, limiting the understanding of LLM behavior in other languages. We address this gap by investigating Many-Shot Jailbreaking (MSJ) in Italian, underscoring the importance of understanding LLM behavior in different languages. We base our analysis on a newly created Italian dataset to identify unique safety vulnerabilities in 4 families of open-source LLMs.We find that the models exhibit unsafe behaviors even with minimal exposure to harmful prompts, and{--}more alarmingly{--}this tendency rapidly escalates with more demonstrations.",
}
| As diverse linguistic communities and users adopt Large Language Models (LLMs), assessing their safety across languages becomes critical. Despite ongoing efforts to align these models with safe and ethical guidelines, they can still be induced into unsafe behavior with jailbreaking, a technique in which models are prompted to act outside their operational guidelines. Research on these vulnerabilities has so far been conducted predominantly on English, limiting the understanding of LLM behavior in other languages. We address this gap by investigating Many-Shot Jailbreaking (MSJ) in Italian, underscoring the importance of understanding LLM behavior in different languages. We base our analysis on a newly created Italian dataset to identify unique safety vulnerabilities in 4 families of open-source LLMs. We find that the models exhibit unsafe behaviors even with minimal exposure to harmful prompts, and–more alarmingly–this tendency rapidly escalates with more demonstrations. | [
"Pernisi, Fabio",
"Hovy, Dirk",
"R�ttger, Paul"
] | Compromesso! Italian Many-Shot Jailbreaks undermine the safety of Large Language Models | acl-srw.29 | Poster | 2408.04522 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.29/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-srw.30.bib | @inproceedings{kim-2024-foundation,
title = "Foundation Model for Biomedical Graphs: Integrating Knowledge Graphs and Protein Structures to Large Language Models",
author = "Kim, Yunsoo",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.30",
pages = "346--355",
abstract = "Transformer model has been a de-facto standard in natural language processing. Its adaptations in other fields such as computer vision showed promising results that this architecture is a powerful neural network in representation learning regardless of the data type. This recent success has led to research in multimodal Large Language Model (LLM), which enabled us to new types of tasks and applications with multiple data types. However, multimodal LLM in the biomedical domain is primarily limited to images, text, and/or sequence data. Here I propose to work on multimodal LLM architecture for biomedical graphs such as protein structure and chemical molecules. The research hypothesis is based on the fact that clinicians and researchers in computational biology and clinical research take advantage of various information for their decision-making process. Therefore, an AI model being able to handle multiple data types should boost its ability to use diverse knowledge for improved performances in clinical applications.",
}
| The Transformer model has been a de-facto standard in natural language processing. Its adaptations in other fields such as computer vision showed promising results, suggesting that this architecture is a powerful neural network for representation learning regardless of the data type. This recent success has led to research in multimodal Large Language Models (LLMs), which has enabled new types of tasks and applications with multiple data types. However, multimodal LLMs in the biomedical domain are primarily limited to images, text, and/or sequence data. Here I propose to work on a multimodal LLM architecture for biomedical graphs such as protein structures and chemical molecules. The research hypothesis is based on the fact that clinicians and researchers in computational biology and clinical research take advantage of various information for their decision-making process. Therefore, an AI model being able to handle multiple data types should boost its ability to use diverse knowledge for improved performance in clinical applications. | [
"Kim, Yunsoo"
] | Foundation Model for Biomedical Graphs: Integrating Knowledge Graphs and Protein Structures to Large Language Models | acl-srw.30 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.30/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.31.bib | @inproceedings{tran-etal-2024-vimedaqa,
title = "{V}i{M}ed{AQA}: A {V}ietnamese Medical Abstractive Question-Answering Dataset and Findings of Large Language Model",
author = "Tran, Minh-Nam and
Nguyen, Phu-Vinh and
Nguyen, Long and
Dinh, Dien",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.31",
pages = "356--364",
abstract = "Question answering involves creating answers to questions. With the growth of large language models, the ability of question-answering systems has dramatically improved. However, there is a lack of Vietnamese abstractive question-answering datasets, especially in the medical domain. Therefore, this research aims to mitigate this gap by introducing ViMedAQA. This **Vi**etnamese **Med**ical **A**bstractive **Q**uestion-**A**nswering dataset covers four topics in the Vietnamese medical domain, including body parts, disease, drugs and medicine. Additionally, the empirical results on the proposed dataset examine the capability of the large language models in the Vietnamese medical domain, including reasoning, memorizing and awareness of essential information.",
}
| Question answering involves creating answers to questions. With the growth of large language models, the ability of question-answering systems has dramatically improved. However, there is a lack of Vietnamese abstractive question-answering datasets, especially in the medical domain. Therefore, this research aims to mitigate this gap by introducing ViMedAQA. This **Vi**etnamese **Med**ical **A**bstractive **Q**uestion-**A**nswering dataset covers four topics in the Vietnamese medical domain, including body parts, disease, drugs and medicine. Additionally, the empirical results on the proposed dataset examine the capability of the large language models in the Vietnamese medical domain, including reasoning, memorization, and awareness of essential information. | [
"Tran, Minh-Nam",
"Nguyen, Phu-Vinh",
"Nguyen, Long",
"Dinh, Dien"
] | ViMedAQA: A Vietnamese Medical Abstractive Question-Answering Dataset and Findings of Large Language Model | acl-srw.31 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.31/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.32.bib | @inproceedings{wang-etal-2024-rescue,
title = "Rescue: Ranking {LLM} Responses with Partial Ordering to Improve Response Generation",
author = "Wang, Yikun and
Zheng, Rui and
Li, Haoming and
Zhang, Qi and
Gui, Tao and
Liu, Fei",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.32",
pages = "365--376",
abstract = "Customizing LLMs for a specific task involves separating high-quality responses from lower-quality ones. This skill can be developed using supervised fine-tuning with extensive human preference data. However, obtaining a large volume of expert-annotated data is costly for most tasks. In this paper, we explore a novel method to optimize LLMs using ranking metrics. This method trains the model to prioritize the best responses from a pool of candidates created for a particular task. Rather than a traditional full ordering, we advocate for a partial ordering, as achieving consensus on the perfect order of candidate responses can be challenging. Our partial ordering is more robust, less sensitive to noise, and can be achieved with limited human annotations or through heuristic methods. We test our system{'}s improved response generation ability using benchmark datasets, including textual entailment and multi-document question answering. We conduct ablation studies to understand crucial factors, such as how to gather candidate responses for a specific task, determine their most suitable order, and balance supervised fine-tuning with ranking metrics. Our approach, named RESCUE, offers a promising avenue for enhancing the response generation and task accuracy of LLMs.",
}
| Customizing LLMs for a specific task involves separating high-quality responses from lower-quality ones. This skill can be developed using supervised fine-tuning with extensive human preference data. However, obtaining a large volume of expert-annotated data is costly for most tasks. In this paper, we explore a novel method to optimize LLMs using ranking metrics. This method trains the model to prioritize the best responses from a pool of candidates created for a particular task. Rather than a traditional full ordering, we advocate for a partial ordering, as achieving consensus on the perfect order of candidate responses can be challenging. Our partial ordering is more robust, less sensitive to noise, and can be achieved with limited human annotations or through heuristic methods. We test our system's improved response generation ability using benchmark datasets, including textual entailment and multi-document question answering. We conduct ablation studies to understand crucial factors, such as how to gather candidate responses for a specific task, determine their most suitable order, and balance supervised fine-tuning with ranking metrics. Our approach, named RESCUE, offers a promising avenue for enhancing the response generation and task accuracy of LLMs. | [
"Wang, Yikun",
"Zheng, Rui",
"Li, Haoming",
"Zhang, Qi",
"Gui, Tao",
"Liu, Fei"
] | Rescue: Ranking LLM Responses with Partial Ordering to Improve Response Generation | acl-srw.32 | Poster | 2311.09136 | [
"https://github.com/ekonwang/rrescue"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.32/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-srw.33.bib | @inproceedings{zhou-etal-2024-basreh,
title = "Basreh or Basra? Geoparsing Historical Locations in the Svoboda Diaries",
author = "Zhou, Jolie and
Cole, Camille and
Chen, Annie",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.33",
pages = "377--390",
abstract = "Geoparsing, the task of assigning coordinates to locations extracted from free text, is invaluable in enabling us to place locations in time and space. In the historical domain, many geoparsing corpora are from large news collections. We examine the Svoboda Diaries, a small historical corpus written primarily in English, with many location names in transliterated Arabic. We develop a pipeline employing named entity recognition for geotagging, and a map-based generate-and-rank approach incorporating candidate name augmentation and clustering of location context words for geocoding. Our system outperforms existing map-based geoparsers in terms of accuracy, lowest mean distance error, and number of locations correctly identified. As location names may vary from those in knowledge bases, we find that augmented candidate generation is instrumental in the system{'}s performance. Among our candidate generation methods, the generation of transliterated names contributed the most to increased location matches in the knowledge base. Our main contribution is proposing an integrated pipeline for geoparsing of historical corpora using augmented candidate location name generation and clustering methods {--} an approach that can be generalized to other texts with foreign or non-standard spellings.",
}
| Geoparsing, the task of assigning coordinates to locations extracted from free text, is invaluable in enabling us to place locations in time and space. In the historical domain, many geoparsing corpora are from large news collections. We examine the Svoboda Diaries, a small historical corpus written primarily in English, with many location names in transliterated Arabic. We develop a pipeline employing named entity recognition for geotagging, and a map-based generate-and-rank approach incorporating candidate name augmentation and clustering of location context words for geocoding. Our system outperforms existing map-based geoparsers in terms of accuracy, lowest mean distance error, and number of locations correctly identified. As location names may vary from those in knowledge bases, we find that augmented candidate generation is instrumental in the system's performance. Among our candidate generation methods, the generation of transliterated names contributed the most to increased location matches in the knowledge base. Our main contribution is proposing an integrated pipeline for geoparsing of historical corpora using augmented candidate location name generation and clustering methods – an approach that can be generalized to other texts with foreign or non-standard spellings. | [
"Zhou, Jolie",
"Cole, Camille",
"Chen, Annie"
] | Basreh or Basra? Geoparsing Historical Locations in the Svoboda Diaries | acl-srw.33 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.33/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.34.bib | @inproceedings{wu-etal-2024-homophone2vec,
title = "{H}omophone2{V}ec: Embedding Space Analysis for Empirical Evaluation of Phonological and Semantic Similarity",
author = "Wu, Sophie and
Zheng, Anita and
Chuang, Joey",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.34",
pages = "391--396",
abstract = "This paper introduces a novel method for empirically evaluating the relationship between the phonological and semantic similarity of linguistic units using embedding spaces. Chinese character homophones are used as a proof-of-concept. We employ cosine similarity as a proxy for semantic similarity between characters, and compare relationships between phonologically-related characters and baseline characters (chosen as similar-frequency characters). We show there is a strongly statistically significant positive semantic relationship among different Chinese characters at varying levels of sound-sharing. We also perform some basic probing using t-SNE and UMAP visualizations, and indicate directions for future applications of this method.",
}
| This paper introduces a novel method for empirically evaluating the relationship between the phonological and semantic similarity of linguistic units using embedding spaces. Chinese character homophones are used as a proof-of-concept. We employ cosine similarity as a proxy for semantic similarity between characters, and compare relationships between phonologically-related characters and baseline characters (chosen as similar-frequency characters). We show there is a strongly statistically significant positive semantic relationship among different Chinese characters at varying levels of sound-sharing. We also perform some basic probing using t-SNE and UMAP visualizations, and indicate directions for future applications of this method. | [
"Wu, Sophie",
"Zheng, Anita",
"Chuang, Joey"
] | Homophone2Vec: Embedding Space Analysis for Empirical Evaluation of Phonological and Semantic Similarity | acl-srw.34 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.34/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.35.bib | @inproceedings{mcdonald-emami-2024-trace,
title = "Trace-of-Thought Prompting: Investigating Prompt-Based Knowledge Distillation Through Question Decomposition",
author = "McDonald, Tyler and
Emami, Ali",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.35",
pages = "397--410",
abstract = "Knowledge distillation allows smaller neural networks to emulate the performance of larger, teacher models with reduced computational demands. Traditional methods for Large Language Models (LLMs) often necessitate extensive fine-tuning, which limits their accessibility. To address this, we introduce Trace-of-Thought Prompting, a novel framework designed to distill critical reasoning capabilities from large-scale teacher models (over 8 billion parameters) to small-scale student models (up to 8 billion parameters). This approach leverages problem decomposition to enhance interpretability and facilitate human-in-the-loop interventions. Empirical evaluations on the GSM8K and MATH datasets show that student models achieve accuracy gains of up to 113{\%} on GSM8K and 20{\%} on MATH, with significant improvements particularly notable in smaller models like Llama 2 and Zephyr. Our results suggest a promising pathway for open-source, small-scale models to eventually serve as both students and teachers, potentially reducing our reliance on large-scale, proprietary models. Our code, featuring data analytics and testing scripts, is provided here: https://github.com/traceofthought/trace-of-thought-prompting/tree/main.",
}
| Knowledge distillation allows smaller neural networks to emulate the performance of larger, teacher models with reduced computational demands. Traditional methods for Large Language Models (LLMs) often necessitate extensive fine-tuning, which limits their accessibility. To address this, we introduce Trace-of-Thought Prompting, a novel framework designed to distill critical reasoning capabilities from large-scale teacher models (over 8 billion parameters) to small-scale student models (up to 8 billion parameters). This approach leverages problem decomposition to enhance interpretability and facilitate human-in-the-loop interventions. Empirical evaluations on the GSM8K and MATH datasets show that student models achieve accuracy gains of up to 113% on GSM8K and 20% on MATH, with significant improvements particularly notable in smaller models like Llama 2 and Zephyr. Our results suggest a promising pathway for open-source, small-scale models to eventually serve as both students and teachers, potentially reducing our reliance on large-scale, proprietary models. Our code, featuring data analytics and testing scripts, is provided here: https://github.com/traceofthought/trace-of-thought-prompting/tree/main. | [
"McDonald, Tyler",
"Emami, Ali"
] | Trace-of-Thought Prompting: Investigating Prompt-Based Knowledge Distillation Through Question Decomposition | acl-srw.35 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.35/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.36.bib | @inproceedings{samuel-etal-2024-llms,
title = "Can {LLM}s Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges",
author = "Samuel, Vinay and
Aynaou, Houda and
Chowdhury, Arijit and
Venkat Ramanan, Karthik and
Chadha, Aman",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.36",
pages = "411--421",
abstract = "Large Language Models (LLMs) have demonstrated impressive zero-shot performance on a wide range of NLP tasks, demonstrating the ability to reason and apply common sense. A relevant application is to use them for creating high-quality synthetic datasets for downstream tasks. In this work, we probe whether GPT-4 can be used to augment existing extractive reading comprehension datasets. Automating data annotation processes has the potential to save large amounts of time, money, and effort that goes into manually labeling datasets. In this paper, we evaluate the performance of GPT-4 as a replacement for human annotators for low-resource reading comprehension tasks, by comparing performance after fine-tuning, and the cost associated with annotation. This work serves to be the first analysis of LLMs as synthetic data augmenters for QA systems, highlighting the unique opportunities and challenges. Additionally, we release augmented versions of low-resource datasets, that will allow the research community to create further benchmarks for evaluation of generated datasets. Github available at https://github.com/vsamuel2003/qa-gpt4",
}
| Large Language Models (LLMs) have demonstrated impressive zero-shot performance on a wide range of NLP tasks, demonstrating the ability to reason and apply common sense. A relevant application is to use them for creating high-quality synthetic datasets for downstream tasks. In this work, we probe whether GPT-4 can be used to augment existing extractive reading comprehension datasets. Automating data annotation processes has the potential to save large amounts of time, money, and effort that goes into manually labeling datasets. In this paper, we evaluate the performance of GPT-4 as a replacement for human annotators for low-resource reading comprehension tasks, by comparing performance after fine-tuning, and the cost associated with annotation. This work serves as the first analysis of LLMs as synthetic data augmenters for QA systems, highlighting the unique opportunities and challenges. Additionally, we release augmented versions of low-resource datasets that will allow the research community to create further benchmarks for evaluation of generated datasets. GitHub available at https://github.com/vsamuel2003/qa-gpt4 | [
"Samuel, Vinay",
"Aynaou, Houda",
"Chowdhury, Arijit",
"Venkat Ramanan, Karthik",
"Chadha, Aman"
] | Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges | acl-srw.36 | Poster | 2309.12426 | [
""
] | https://huggingface.co/papers/2309.12426 | 2 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-srw.36/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-srw.37.bib | @inproceedings{bansal-2024-automatic,
title = "Automatic Derivation of Semantic Representations for {T}hai Serial Verb Constructions: A Grammar-Based Approach",
author = "Bansal, Vipasha",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.37",
pages = "422--437",
abstract = "Deep semantic representations are useful for many NLU tasks (Droganova and Zeman 2019; Schuster and Manning-2016). Manual annotation to build these representations is time-consuming, and so automatic approaches are preferred (Droganova and Zeman 2019; Bender et al. 2015). This paper demonstrates how rich semantic representations can be automatically derived for Thai Serial Verb Constructions (SVCs), where the semantic relationship between component verbs is not immediately clear from the surface forms. I present the first fully-implemented HPSG analysis for Thai SVCs, deriving appropriate semantic representations (MRS; Copestake et al. 2005) from syntactic features, implemented within a DELPH-IN computational grammar (Slayden 2009). This analysis increases verified coverage of SVCs by 73{\%} and decreases ambiguity by 46{\%}. The final grammar can be found at: https://github.com/VipashaB94/ThaiGrammar",
}
| Deep semantic representations are useful for many NLU tasks (Droganova and Zeman 2019; Schuster and Manning 2016). Manual annotation to build these representations is time-consuming, and so automatic approaches are preferred (Droganova and Zeman 2019; Bender et al. 2015). This paper demonstrates how rich semantic representations can be automatically derived for Thai Serial Verb Constructions (SVCs), where the semantic relationship between component verbs is not immediately clear from the surface forms. I present the first fully-implemented HPSG analysis for Thai SVCs, deriving appropriate semantic representations (MRS; Copestake et al. 2005) from syntactic features, implemented within a DELPH-IN computational grammar (Slayden 2009). This analysis increases verified coverage of SVCs by 73% and decreases ambiguity by 46%. The final grammar can be found at: https://github.com/VipashaB94/ThaiGrammar | [
"Bansal, Vipasha"
] | Automatic Derivation of Semantic Representations for Thai Serial Verb Constructions: A Grammar-Based Approach | acl-srw.37 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.37/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.38.bib | @inproceedings{pengpun-etal-2024-seed,
title = "Seed-Free Synthetic Data Generation Framework for Instruction-Tuning {LLM}s: A Case Study in {T}hai",
author = "Pengpun, Parinthapat and
Udomcharoenchaikit, Can and
Buaphet, Weerayut and
Limkonchotiwat, Peerat",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.38",
pages = "438--457",
abstract = "We present a synthetic data approach for instruction-tuning large language models (LLMs) for low-resource languages in a data-efficient manner, specifically focusing on Thai. We identify three key properties that contribute to the effectiveness of instruction-tuning datasets: fluency, diversity, and cultural context. We propose a seed-data-free framework for generating synthetic instruction-tuning data that incorporates these essential properties. Our framework employs an LLM to generate diverse topics, retrieve relevant contexts from Wikipedia, and create instructions for various tasks, such as question answering, summarization, and conversation. The experimental results show that our best-performing synthetic dataset, which incorporates all three key properties, achieves competitive performance using only 5,000 instructions when compared to state-of-the-art Thai LLMs trained on hundreds of thousands of instructions. Our code and dataset are publicly available at https://github.com/parinzee/seed-free-synthetic-instruct.",
}
| We present a synthetic data approach for instruction-tuning large language models (LLMs) for low-resource languages in a data-efficient manner, specifically focusing on Thai. We identify three key properties that contribute to the effectiveness of instruction-tuning datasets: fluency, diversity, and cultural context. We propose a seed-data-free framework for generating synthetic instruction-tuning data that incorporates these essential properties. Our framework employs an LLM to generate diverse topics, retrieve relevant contexts from Wikipedia, and create instructions for various tasks, such as question answering, summarization, and conversation. The experimental results show that our best-performing synthetic dataset, which incorporates all three key properties, achieves competitive performance using only 5,000 instructions when compared to state-of-the-art Thai LLMs trained on hundreds of thousands of instructions. Our code and dataset are publicly available at https://github.com/parinzee/seed-free-synthetic-instruct. | [
"Pengpun, Parinthapat",
"Udomcharoenchaikit, Can",
"Buaphet, Weerayut",
"Limkonchotiwat, Peerat"
] | Seed-Free Synthetic Data Generation Framework for Instruction-Tuning LLMs: A Case Study in Thai | acl-srw.38 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.38/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.39.bib | @inproceedings{madine-2024-bridging,
title = "Bridging Distribution Gap via Semantic Rewriting with {LLM}s to Enhance {OOD} Robustness",
author = "Madine, Manas",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.39",
pages = "458--468",
abstract = "This paper investigates the robustness of Large Language Models (LLMs) against Out-Of-Distribution (OOD) data within the context of sentiment analysis. Traditional fine-tuning approaches often fail to generalize effectively across different data distributions, limiting the practical deployment of LLMs in dynamic real-world scenarios. To address this challenge, we introduce a novel method called {``}Semantic Rewriting,{''} which leverages the inherent flexibility of LLMs to align both in-distribution (ID) and OOD data with the LLMs distributions. By semantically transforming sentences to minimize linguistic discrepancies, our approach helps to standardize features across datasets, thus enhancing model robustness. We conduct extensive experiments with several benchmark datasets and LLMs to validate the efficacy of our method. The results demonstrate that Semantic Rewriting significantly improves the performance of models on OOD tasks, outperforming traditional methods in both robustness and generalization capabilities. Our findings suggest that Semantic Rewriting is a promising technique for developing more reliable and versatile NLP systems capable of performing robustly across diverse operational environments.",
}
| This paper investigates the robustness of Large Language Models (LLMs) against Out-Of-Distribution (OOD) data within the context of sentiment analysis. Traditional fine-tuning approaches often fail to generalize effectively across different data distributions, limiting the practical deployment of LLMs in dynamic real-world scenarios. To address this challenge, we introduce a novel method called "Semantic Rewriting," which leverages the inherent flexibility of LLMs to align both in-distribution (ID) and OOD data with the LLMs' distributions. By semantically transforming sentences to minimize linguistic discrepancies, our approach helps to standardize features across datasets, thus enhancing model robustness. We conduct extensive experiments with several benchmark datasets and LLMs to validate the efficacy of our method. The results demonstrate that Semantic Rewriting significantly improves the performance of models on OOD tasks, outperforming traditional methods in both robustness and generalization capabilities. Our findings suggest that Semantic Rewriting is a promising technique for developing more reliable and versatile NLP systems capable of performing robustly across diverse operational environments. | [
"Madine, Manas"
] | Bridging Distribution Gap via Semantic Rewriting with LLMs to Enhance OOD Robustness | acl-srw.39 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.39/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.40.bib | @inproceedings{kang-2024-covoswitch,
title = "{C}o{V}o{S}witch: Machine Translation of Synthetic Code-Switched Text Based on Intonation Units",
author = "Kang, Yeeun",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.40",
pages = "469--481",
abstract = "Multilingual code-switching research is often hindered by the lack and linguistically biased status of available datasets. To expand language representation, we synthesize code-switching data by replacing intonation units detected through PSST, a speech segmentation model fine-tuned from OpenAI{'}s Whisper, using a speech-to-text translation dataset, CoVoST 2. With our dataset, CoVoSwitch, spanning 13 languages, we evaluate the code-switching translation performance of two multilingual translation models, M2M-100 418M and NLLB-200 600M. We reveal that the inclusion of code-switching units results in higher translation performance than monolingual settings and that models are better at code-switching translation into English than non-English. Further, low-resource languages gain most from integration of code-switched units when translating into English but much less when translating into non-English. Translations into low-resource languages also perform worse than even raw code-switched inputs. We find that systems excel at copying English tokens but struggle with non-English tokens, that the off-target problem in monolingual settings is also relevant in code-switching settings, and that models hallucinate in code-switching translation by introducing words absent in both of the original source sentences. CoVoSwitch and code are available at https://github.com/sophiayk20/covoswitch.",
}
| Multilingual code-switching research is often hindered by the lack and linguistically biased status of available datasets. To expand language representation, we synthesize code-switching data by replacing intonation units detected through PSST, a speech segmentation model fine-tuned from OpenAI's Whisper, using a speech-to-text translation dataset, CoVoST 2. With our dataset, CoVoSwitch, spanning 13 languages, we evaluate the code-switching translation performance of two multilingual translation models, M2M-100 418M and NLLB-200 600M. We reveal that the inclusion of code-switching units results in higher translation performance than monolingual settings and that models are better at code-switching translation into English than non-English. Further, low-resource languages gain most from integration of code-switched units when translating into English but much less when translating into non-English. Translations into low-resource languages also perform worse than even raw code-switched inputs. We find that systems excel at copying English tokens but struggle with non-English tokens, that the off-target problem in monolingual settings is also relevant in code-switching settings, and that models hallucinate in code-switching translation by introducing words absent in both of the original source sentences. CoVoSwitch and code are available at https://github.com/sophiayk20/covoswitch. | [
"Kang, Yeeun"
] | CoVoSwitch: Machine Translation of Synthetic Code-Switched Text Based on Intonation Units | acl-srw.40 | Poster | 2407.14295 | [
"https://github.com/sophiayk20/covoswitch"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.40/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-srw.41.bib | @inproceedings{song-lee-2024-analysis,
title = "An Analysis under a Unified Formulation of Learning Algorithms with Output Constraints",
author = "Song, Mooho and
Lee, Jay-Yoon",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.41",
pages = "482--498",
abstract = "Neural networks (NN) perform well in diverse tasks, but sometimes produce nonsensical results to humans. Most NN models {``}solely{''} learn from (input, output) pairs, occasionally conflicting with human knowledge. Many studies indicate injecting human knowledge by reducing output constraints during training can improve model performance and reduce constraint violations.While there have been several attempts to compare different existing algorithms under the same programming framework, nonetheless, there has been no previous work that categorizes learning algorithms with output constraints in a unified manner. Our contributions are as follows: (1) We categorize the previous studies based on three axes: type of constraint loss used (e.g. probabilistic soft logic, REINFORCE), exploration strategy of constraint-violating examples, and integration mechanism of learning signals from main task and constraint.(2) We propose new algorithms to integrate the information of main task and constraint injection, inspired by continual-learning algorithms.(3) Furthermore, we propose the $H\beta$-score as a metric for considering the main task metric and constraint violation simultaneously.To provide a thorough analysis, we examine all the algorithms on three NLP tasks: natural language inference (NLI), synthetic transduction examples (STE), and semantic role labeling (SRL). We explore and reveal the key factors of various algorithms associated with achieving high $H\beta$-scores.",
}
| Neural networks (NN) perform well in diverse tasks, but sometimes produce results that are nonsensical to humans. Most NN models "solely" learn from (input, output) pairs, occasionally conflicting with human knowledge. Many studies indicate that injecting human knowledge by reducing output constraints during training can improve model performance and reduce constraint violations. While there have been several attempts to compare different existing algorithms under the same programming framework, there has been no previous work that categorizes learning algorithms with output constraints in a unified manner. Our contributions are as follows: (1) We categorize the previous studies based on three axes: type of constraint loss used (e.g. probabilistic soft logic, REINFORCE), exploration strategy of constraint-violating examples, and integration mechanism of learning signals from main task and constraint. (2) We propose new algorithms to integrate the information of main task and constraint injection, inspired by continual-learning algorithms. (3) Furthermore, we propose the $H\beta$-score as a metric for considering the main task metric and constraint violation simultaneously. To provide a thorough analysis, we examine all the algorithms on three NLP tasks: natural language inference (NLI), synthetic transduction examples (STE), and semantic role labeling (SRL). We explore and reveal the key factors of various algorithms associated with achieving high $H\beta$-scores. | [
"Song, Mooho",
"Lee, Jay-Yoon"
] | An Analysis under a Unified Formulation of Learning Algorithms with Output Constraints | acl-srw.41 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.41/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.42.bib | @inproceedings{odoherty-etal-2024-beyond,
title = "Beyond Abstracts: A New Dataset, Prompt Design Strategy and Method for Biomedical Synthesis Generation",
author = "O{'}Doherty, James and
Nolan, Cian and
Hou, Yufang and
Belz, Anya",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.42",
pages = "499--518",
abstract = "The biomedical field relies on cost and time intensive systematic reviews of papers to enable practitioners to keep up to date with research. Impressive recent advances in large language models (LLMs) have made the task of automating at least part of the systematic review process feasible, but progress is slow. This paper identifies some factors that may have been holding research back, and proposes a new, enhanced dataset and prompting-based method for automatic synthesis generation, the most challenging step for automation. We test different models and types of information from and about biomedical studies for their usefulness in obtaining high-quality results.We find that, surprisingly, inclusion of paper abstracts can worsens results. Instead, study summary information, and system instructions informed by domain knowledge, are key to producing high-quality syntheses.",
}
| The biomedical field relies on cost- and time-intensive systematic reviews of papers to enable practitioners to keep up to date with research. Impressive recent advances in large language models (LLMs) have made the task of automating at least part of the systematic review process feasible, but progress is slow. This paper identifies some factors that may have been holding research back, and proposes a new, enhanced dataset and prompting-based method for automatic synthesis generation, the most challenging step for automation. We test different models and types of information from and about biomedical studies for their usefulness in obtaining high-quality results. We find that, surprisingly, inclusion of paper abstracts can worsen results. Instead, study summary information, and system instructions informed by domain knowledge, are key to producing high-quality syntheses. | [
"O{'}Doherty, James",
"Nolan, Cian",
"Hou, Yufang",
"Belz, Anya"
] | Beyond Abstracts: A New Dataset, Prompt Design Strategy and Method for Biomedical Synthesis Generation | acl-srw.42 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.42/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.43.bib | @inproceedings{sato-etal-2024-improving,
title = "Improving Sentence Embeddings with Automatic Generation of Training Data Using Few-shot Examples",
author = "Sato, Soma and
Tsukagoshi, Hayato and
Sasano, Ryohei and
Takeda, Koichi",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.43",
pages = "519--530",
abstract = "Decoder-based large language models (LLMs) have shown high performance on many tasks in natural language processing. This is also true for sentence embedding learning, where a decoder-based model, PromptEOL, has achieved the best performance on semantic textual similarity (STS) tasks. However, PromptEOL requires a manually annotated natural language inference (NLI) dataset for fine-tuning.We aim to improve sentence embeddings without using large manually annotated datasets by automatically generating an NLI dataset with an LLM and using it for fine-tuning of PromptEOL. To achieve this, we explore methods of data generation suitable for sentence embedding learning in this study. Specifically, we will focus on automatic dataset generation through few-shot learning and explore the appropriate methods to leverage few-shot examples. Experimental results on the STS tasks demonstrate that our approach outperforms existing models in settings without large manually annotated datasets.",
}
| Decoder-based large language models (LLMs) have shown high performance on many tasks in natural language processing. This is also true for sentence embedding learning, where a decoder-based model, PromptEOL, has achieved the best performance on semantic textual similarity (STS) tasks. However, PromptEOL requires a manually annotated natural language inference (NLI) dataset for fine-tuning. We aim to improve sentence embeddings without using large manually annotated datasets by automatically generating an NLI dataset with an LLM and using it for fine-tuning of PromptEOL. To achieve this, we explore methods of data generation suitable for sentence embedding learning in this study. Specifically, we will focus on automatic dataset generation through few-shot learning and explore the appropriate methods to leverage few-shot examples. Experimental results on the STS tasks demonstrate that our approach outperforms existing models in settings without large manually annotated datasets. | [
"Sato, Soma",
"Tsukagoshi, Hayato",
"Sasano, Ryohei",
"Takeda, Koichi"
] | Improving Sentence Embeddings with Automatic Generation of Training Data Using Few-shot Examples | acl-srw.43 | Poster | 2402.15132 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.43/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-srw.44.bib | @inproceedings{nar-etal-2024-curriculum,
title = "Curriculum Learning for Small Code Language Models",
author = "Na�r, Marwa and
Yamani, Kamel and
Lhadj, Lynda and
Baghdadi, Riyadh",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.44",
pages = "531--542",
abstract = "Code language models have emerged as useful tools for various programming tasks, yet they often struggle when it comes to complex ones. In this paper, we explore the potential of curriculum learning in enhancing the performance of these models. While prior research has suggested that curriculum learning does not necessarily help in improving the performance of language models, our results surprisingly show that this may not be the case for code language models. We demonstrate that a well-designed curriculum learning approach significantly improves the accuracy of small decoder-only code language models on the task of code execution, while its effect on code completion is less significant. To explore the potential of curriculum learning, we train multiple GPT models with 1 million parameters each to predict the next token and evaluate them on code completion and execution tasks. Our contributions include proposing a novel code difficulty assessment metric by combining software code measures, investigating the effectiveness of Curriculum Learning for code language models, and introducing a Novel Curriculum Learning schedule that enhances the performance of small decoder-only language models in code execution tasks. The results of this paper open the door for more research on the use of curriculum learning for code language models.",
}
| Code language models have emerged as useful tools for various programming tasks, yet they often struggle when it comes to complex ones. In this paper, we explore the potential of curriculum learning in enhancing the performance of these models. While prior research has suggested that curriculum learning does not necessarily help in improving the performance of language models, our results surprisingly show that this may not be the case for code language models. We demonstrate that a well-designed curriculum learning approach significantly improves the accuracy of small decoder-only code language models on the task of code execution, while its effect on code completion is less significant. To explore the potential of curriculum learning, we train multiple GPT models with 1 million parameters each to predict the next token and evaluate them on code completion and execution tasks. Our contributions include proposing a novel code difficulty assessment metric by combining software code measures, investigating the effectiveness of curriculum learning for code language models, and introducing a novel curriculum learning schedule that enhances the performance of small decoder-only language models in code execution tasks. The results of this paper open the door for more research on the use of curriculum learning for code language models. | [
"Na�r, Marwa",
"Yamani, Kamel",
"Lhadj, Lynda",
"Baghdadi, Riyadh"
] | Curriculum Learning for Small Code Language Models | acl-srw.44 | Poster | 2407.10194 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.44/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-srw.45.bib | @inproceedings{yugeswardeenoo-etal-2024-question,
title = "Question-Analysis Prompting Improves {LLM} Performance in Reasoning Tasks",
author = "Yugeswardeenoo, Dharunish and
Zhu, Kevin and
O{'}Brien, Sean",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.45",
pages = "543--554",
abstract = "Although LLMs have the potential to transform many fields, they still underperform humans in reasoning tasks. Existing methods induce the model to produce step-by-step calculations, but this research explores the question: Does making the LLM analyze the question improve its performance? We propose a novel prompting strategy called Question Analysis Prompting (QAP), in which the model is prompted to explain the question in {'}n{'} words before solving. The value of {'}n{'} influences the length of response generated by the model. QAP is evaluated on GPT-3.5 Turbo and GPT-4 Turbo on arithmetic datasets GSM8K, AQuA, and SAT and commonsense dataset StrategyQA. QAP is compared with other state-of-the-art prompts including chain-of-thought (CoT), Plan and Solve Prompting (PS+) and Take A Deep Breath (TADB). QAP outperforms all state-of-the-art prompts on AQuA and SAT datasets on both GPT-3.5 and GPT-4. QAP consistently ranks among the top-2 prompts on 75{\%} of the tests. A key factor of QAP performance can be attributed to response length, where detailed responses are beneficial when answering harder questions, but can negatively affect easy questions.",
}
| Although LLMs have the potential to transform many fields, they still underperform humans in reasoning tasks. Existing methods induce the model to produce step-by-step calculations, but this research explores the question: Does making the LLM analyze the question improve its performance? We propose a novel prompting strategy called Question Analysis Prompting (QAP), in which the model is prompted to explain the question in {'}n{'} words before solving. The value of {'}n{'} influences the length of the response generated by the model. QAP is evaluated on GPT-3.5 Turbo and GPT-4 Turbo on the arithmetic datasets GSM8K, AQuA, and SAT and the commonsense dataset StrategyQA. QAP is compared with other state-of-the-art prompts, including chain-of-thought (CoT), Plan and Solve Prompting (PS+), and Take A Deep Breath (TADB). QAP outperforms all state-of-the-art prompts on the AQuA and SAT datasets on both GPT-3.5 and GPT-4. QAP consistently ranks among the top-2 prompts on 75{\%} of the tests. A key factor in QAP's performance is response length: detailed responses are beneficial when answering harder questions, but can negatively affect easy questions. | [
"Yugeswardeenoo, Dharunish",
"Zhu, Kevin",
"O{'}Brien, Sean"
] | Question-Analysis Prompting Improves LLM Performance in Reasoning Tasks | acl-srw.45 | Poster | 2407.03624 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.45/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-srw.46.bib | @inproceedings{hu-collier-2024-individualized,
title = "An Individualized News Affective Response Dataset",
author = "Hu, Tiancheng and
Collier, Nigel",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.46",
pages = "555--563",
abstract = "The rise of sensationalism in news reporting, driven by market saturation and online competition, has compromised news quality and trust. At the core of sensationalism is the evocation of affective responses in the readers. Current NLP approaches to emotion detection often overlook the subjective differences in groups and individuals, relying on aggregation techniques that can obscure nuanced reactions. We introduce a novel large-scale dataset capturing subjective affective responses to news headlines. The dataset includes Facebook post screenshots from popular UK media outlets and uses a comprehensive annotation scheme. Annotators report their affective responses, provide discrete emotion labels, assess relevance to current events, and indicate sharing likelihood. Additionally, we collect demographic, personality, and media consumption data. This ongoing dataset aims to enable more accurate models of affective response by considering individual and contextual factors. This work is ongoing and we highly appreciate any feedback.",
}
| The rise of sensationalism in news reporting, driven by market saturation and online competition, has compromised news quality and trust. At the core of sensationalism is the evocation of affective responses in the readers. Current NLP approaches to emotion detection often overlook the subjective differences in groups and individuals, relying on aggregation techniques that can obscure nuanced reactions. We introduce a novel large-scale dataset capturing subjective affective responses to news headlines. The dataset includes Facebook post screenshots from popular UK media outlets and uses a comprehensive annotation scheme. Annotators report their affective responses, provide discrete emotion labels, assess relevance to current events, and indicate sharing likelihood. Additionally, we collect demographic, personality, and media consumption data. This ongoing dataset aims to enable more accurate models of affective response by considering individual and contextual factors. This work is ongoing and we highly appreciate any feedback. | [
"Hu, Tiancheng",
"Collier, Nigel"
] | An Individualized News Affective Response Dataset | acl-srw.46 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.46/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.47.bib | @inproceedings{yoshida-etal-2024-well,
title = "How Well Do Vision Models Encode Diagram Attributes?",
author = "Yoshida, Haruto and
Kudo, Keito and
Aoki, Yoichi and
Tanaka, Ryota and
Saito, Itsumi and
Sakaguchi, Keisuke and
Inui, Kentaro",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.47",
pages = "564--575",
abstract = "Research on understanding and generating diagrams has used vision models such as CLIP. However, it remains unclear whether these models accurately identify diagram attributes, such as node colors and shapes, along with edge colors and connection patterns. This study evaluates how well vision models recognize the diagram attributes by probing the model and retrieving diagrams using text queries. Experimental results showed that while vision models can recognize differences in node colors, shapes, and edge colors, they struggle to identify differences in edge connection patterns that play a pivotal role in the semantics of diagrams. Moreover, we revealed inadequate alignment between diagram attributes and language representations in the embedding space.",
}
| Research on understanding and generating diagrams has used vision models such as CLIP. However, it remains unclear whether these models accurately identify diagram attributes, such as node colors and shapes, along with edge colors and connection patterns. This study evaluates how well vision models recognize the diagram attributes by probing the model and retrieving diagrams using text queries. Experimental results showed that while vision models can recognize differences in node colors, shapes, and edge colors, they struggle to identify differences in edge connection patterns that play a pivotal role in the semantics of diagrams. Moreover, we revealed inadequate alignment between diagram attributes and language representations in the embedding space. | [
"Yoshida, Haruto",
"Kudo, Keito",
"Aoki, Yoichi",
"Tanaka, Ryota",
"Saito, Itsumi",
"Sakaguchi, Keisuke",
"Inui, Kentaro"
] | How Well Do Vision Models Encode Diagram Attributes? | acl-srw.47 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.47/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.48.bib | @inproceedings{joshi-etal-2024-checkersgpt,
title = "{C}heckers{GPT}: Learning World Models through Language Modeling",
author = "Joshi, Abhinav and
Sharma, Vaibhav and
Modi, Ashutosh",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.48",
pages = "576--588",
abstract = "Although Large Language Models (LLMs) have been trained using just the next token prediction objective, these have shown impressive performance on various tasks. Consequently, it has attracted research interests in this regard. While one line of work in the past has suggested that LLMs learn surface-level statistics from the dataset, another line of work emphasizes that the learned representations are effective for simulating the underlying world model, considering the causal relationship for the next token prediction. This phenomenon is often referred to as the emergence of a world model in sequence prediction tasks. Recent work has demonstrated this phenomenon in a simulated setting of board games like Othello and Chess. In this paper, we analyze the game of Checkers to find out the emergence of a world model in a language model. By training a GPT-style autoregressive language model using only the next character prediction objective, we find that the model does show a hint of learning a world model representation of the board positions. We perform our analysis on two datasets: 1) synthetic dataset, which comes from the checkers game tree, and 2) human gameplay dataset. With multiple models trained with different layer sizes, we find that increasing the parameter size does help learn better world model representation decoded by linear probes.",
}
| Although Large Language Models (LLMs) have been trained using just the next token prediction objective, they have shown impressive performance on various tasks, which has attracted considerable research interest. While one line of past work has suggested that LLMs learn surface-level statistics from the dataset, another line of work emphasizes that the learned representations are effective for simulating the underlying world model, considering the causal relationship for next token prediction. This phenomenon is often referred to as the emergence of a world model in sequence prediction tasks. Recent work has demonstrated this phenomenon in simulated settings of board games like Othello and Chess. In this paper, we analyze the game of Checkers to investigate the emergence of a world model in a language model. By training a GPT-style autoregressive language model using only the next character prediction objective, we find that the model does show a hint of learning a world model representation of the board positions. We perform our analysis on two datasets: 1) a synthetic dataset, which comes from the checkers game tree, and 2) a human gameplay dataset. Training multiple models with different layer sizes, we find that increasing the parameter count does help the model learn a better world model representation, as decoded by linear probes. | [
"Joshi, Abhinav",
"Sharma, Vaibhav",
"Modi, Ashutosh"
] | CheckersGPT: Learning World Models through Language Modeling | acl-srw.48 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.48/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-srw.49.bib | @inproceedings{merler-etal-2024-context,
title = "In-Context Symbolic Regression: Leveraging Large Language Models for Function Discovery",
author = "Merler, Matteo and
Haitsiukevich, Katsiaryna and
Dainese, Nicola and
Marttinen, Pekka",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.49",
pages = "589--606",
abstract = "State of the art Symbolic Regression (SR) methods currently build specialized models, while the application of Large Language Models (LLMs) remains largely unexplored. In this work, we introduce the first comprehensive framework that utilizes LLMs for the task of SR.We propose In-Context Symbolic Regression (ICSR), an SR method which iteratively refines a functional form with an LLM and determines its coefficients with an external optimizer. ICSR leverages LLMs{'} strong mathematical prior both to propose an initial set of possible functions given the observations and to refine them based on their errors.Our findings reveal that LLMs are able to successfully find symbolic equations that fit the given data, matching or outperforming the overall performance of the best SR baselines on four popular benchmarks, while yielding simpler equations with better out of distribution generalization.",
}
| State-of-the-art Symbolic Regression (SR) methods currently build specialized models, while the application of Large Language Models (LLMs) remains largely unexplored. In this work, we introduce the first comprehensive framework that utilizes LLMs for the task of SR. We propose In-Context Symbolic Regression (ICSR), an SR method which iteratively refines a functional form with an LLM and determines its coefficients with an external optimizer. ICSR leverages LLMs{'} strong mathematical prior both to propose an initial set of possible functions given the observations and to refine them based on their errors. Our findings reveal that LLMs are able to successfully find symbolic equations that fit the given data, matching or outperforming the overall performance of the best SR baselines on four popular benchmarks, while yielding simpler equations with better out-of-distribution generalization. | [
"Merler, Matteo",
"Haitsiukevich, Katsiaryna",
"Dainese, Nicola",
"Marttinen, Pekka"
] | In-Context Symbolic Regression: Leveraging Large Language Models for Function Discovery | acl-srw.49 | Poster | 2404.19094 | [
"https://github.com/merlerm/in-context-symbolic-regression"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.49/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-srw.50.bib | @inproceedings{yano-etal-2024-step,
title = "{STEP}: Staged Parameter-Efficient Pre-training for Large Language Models",
author = "Yano, Kazuki and
Ito, Takumi and
Suzuki, Jun",
editor = "Fu, Xiyan and
Fleisig, Eve",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-srw.50",
pages = "607--614",
abstract = "Pre-training large language models faces significant memory challenges due to the large size of model weights.We propose STaged parameter-Efficient Pre-training (STEP), which combines ideas from parameter-efficient tuning and staged training. We conduct experiments on pre-training models of various sizes and demonstrate that STEP can achieve up to a 40.4{\%} reduction in maximum memory requirement compared to vanilla pre-training while maintaining comparable performance.",
}
| Pre-training large language models faces significant memory challenges due to the large size of model weights. We propose STaged parameter-Efficient Pre-training (STEP), which combines ideas from parameter-efficient tuning and staged training. We conduct experiments on pre-training models of various sizes and demonstrate that STEP can achieve up to a 40.4{\%} reduction in maximum memory requirement compared to vanilla pre-training while maintaining comparable performance. | [
"Yano, Kazuki",
"Ito, Takumi",
"Suzuki, Jun"
] | STEP: Staged Parameter-Efficient Pre-training for Large Language Models | acl-srw.50 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-srw.50/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-tutorials.1.bib | @inproceedings{sun-etal-2024-computational,
title = "Computational Linguistics for Brain Encoding and Decoding: Principles, Practices and Beyond",
author = "Sun, Jingyuan and
Wang, Shaonan and
Chen, Zijiao and
Li, Jixing and
Moens, Marie-Francine",
editor = "Chiruzzo, Luis and
Lee, Hung-yi and
Ribeiro, Leonardo",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-tutorials.1",
pages = "1--2",
abstract = "Computational linguistics (CL) has witnessed tremendous advancementsin recent years, with models such as large language models demonstratingexceptional performance in various natural language processing tasks. Theseadvancements highlight their potential to help understand brain languageprocessing, especially through the lens of brain encoding and decoding.Brain encoding involves the mapping of linguistic stimuli to brain activity,while brain decoding is the process of reconstructing linguistic stimulifrom observed brain activities. CL models that excel at capturing andmanipulating linguistic features are crucial for mapping linguistic stimulito brain activities and vice versa. Brain encoding and decoding have vastapplications, from enhancing human-computer interaction to developingassistive technologies for individuals with communication impairments. Thistutorial will focus on elucidating how computational linguistics canfacilitate brain encoding and decoding. We will delve into the principlesand practices of using computational linguistics methods for brain encodingand decoding. We will also discuss the challenges and future directions ofbrain encoding and decoding. Through this tutorial, we aim to provide acomprehensive and informative overview of the intersection betweencomputational linguistics and cognitive neuroscience, inspiring futureresearch in this exciting and rapidly evolving field.",
}
| Computational linguistics (CL) has witnessed tremendous advancements in recent years, with models such as large language models demonstrating exceptional performance in various natural language processing tasks. These advancements highlight their potential to help understand brain language processing, especially through the lens of brain encoding and decoding. Brain encoding involves the mapping of linguistic stimuli to brain activity, while brain decoding is the process of reconstructing linguistic stimuli from observed brain activities. CL models that excel at capturing and manipulating linguistic features are crucial for mapping linguistic stimuli to brain activities and vice versa. Brain encoding and decoding have vast applications, from enhancing human-computer interaction to developing assistive technologies for individuals with communication impairments. This tutorial will focus on elucidating how computational linguistics can facilitate brain encoding and decoding. We will delve into the principles and practices of using computational linguistics methods for brain encoding and decoding. We will also discuss the challenges and future directions of brain encoding and decoding. Through this tutorial, we aim to provide a comprehensive and informative overview of the intersection between computational linguistics and cognitive neuroscience, inspiring future research in this exciting and rapidly evolving field. | [
"Sun, Jingyuan",
"Wang, Shaonan",
"Chen, Zijiao",
"Li, Jixing",
"Moens, Marie-Francine"
] | Computational Linguistics for Brain Encoding and Decoding: Principles, Practices and Beyond | acl-tutorials.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-tutorials.1/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-tutorials.2.bib | @inproceedings{dou-etal-2024-automatic,
title = "Automatic and Human-{AI} Interactive Text Generation (with a focus on Text Simplification and Revision)",
author = "Dou, Yao and
Laban, Philippe and
Gardent, Claire and
Xu, Wei",
editor = "Chiruzzo, Luis and
Lee, Hung-yi and
Ribeiro, Leonardo",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-tutorials.2",
pages = "3--4",
abstract = "In this tutorial, we focus on text-to-text generation, a class ofnatural language generation (NLG) tasks, that takes a piece of text as inputand then generates a revision that is improved according to some specificcriteria (e.g., readability or linguistic styles), while largely retainingthe original meaning and the length of the text. This includes many usefulapplications, such as text simplification, paraphrase generation, styletransfer, etc. In contrast to text summarization and open-ended textcompletion (e.g., story), the text-to-text generation tasks we discuss inthis tutorial are more constrained in terms of semantic consistency andtargeted language styles. This level of control makes these tasks idealtestbeds for studying the ability of models to generate text that is bothsemantically adequate and stylistically appropriate. Moreover, these tasksare interesting from a technical standpoint, as they require complexcombinations of lexical and syntactical transformations, stylistic control,and adherence to factual knowledge, {--} all at once. With a special focus ontext simplification and revision, this tutorial aims to provide an overviewof the state-of-the-art natural language generation research from four majoraspects {--} Data, Models, Human-AI Collaboration, and Evaluation {--} and todiscuss and showcase a few significant and recent advances: (1) the use ofnon-retrogressive approaches; (2) the shift from fine-tuning to promptingwith large language models; (3) the development of new learnable metric andfine-grained human evaluation framework; (4) a growing body of studies anddatasets on non-English languages; (5) the rise of HCI+NLP+Accessibilityinterdisciplinary research to create real-world writing assistant systems.",
}
| In this tutorial, we focus on text-to-text generation, a class of natural language generation (NLG) tasks that take a piece of text as input and then generate a revision that is improved according to some specific criteria (e.g., readability or linguistic styles), while largely retaining the original meaning and the length of the text. This includes many useful applications, such as text simplification, paraphrase generation, style transfer, etc. In contrast to text summarization and open-ended text completion (e.g., story), the text-to-text generation tasks we discuss in this tutorial are more constrained in terms of semantic consistency and targeted language styles. This level of control makes these tasks ideal testbeds for studying the ability of models to generate text that is both semantically adequate and stylistically appropriate. Moreover, these tasks are interesting from a technical standpoint, as they require complex combinations of lexical and syntactical transformations, stylistic control, and adherence to factual knowledge, {--} all at once. With a special focus on text simplification and revision, this tutorial aims to provide an overview of the state-of-the-art natural language generation research from four major aspects {--} Data, Models, Human-AI Collaboration, and Evaluation {--} and to discuss and showcase a few significant and recent advances: (1) the use of non-autoregressive approaches; (2) the shift from fine-tuning to prompting with large language models; (3) the development of new learnable metrics and fine-grained human evaluation frameworks; (4) a growing body of studies and datasets on non-English languages; (5) the rise of HCI+NLP+Accessibility interdisciplinary research to create real-world writing assistant systems. | [
"Dou, Yao",
"Laban, Philippe",
"Gardent, Claire",
"Xu, Wei"
] | Automatic and Human-AI Interactive Text Generation (with a focus on Text Simplification and Revision) | acl-tutorials.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-tutorials.2/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-tutorials.3.bib | @inproceedings{butoi-etal-2024-computational,
title = "Computational Expressivity of Neural Language Models",
author = "Butoi, Alexandra and
Cotterell, Ryan and
Svete, Anej",
editor = "Chiruzzo, Luis and
Lee, Hung-yi and
Ribeiro, Leonardo",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-tutorials.3",
pages = "5--5",
abstract = "Language models (LMs) are currently at the forefront of NLP researchdue to their remarkable versatility across diverse tasks. However, a largegap exists between their observed capabilities and the explanations proposedby established formal machinery. To motivate a better theoreticalcharacterization of LMs{'} abilities and limitations, this tutorial aims toprovide a comprehensive introduction to a specific framework for formalanalysis of modern LMs using tools from formal language theory (FLT). Wepresent how tools from FLT can be useful in understanding the inner workingsand predicting the capabilities of modern neural LM architectures. We willcover recent results using FLT to make precise and practically relevantstatements about LMs based on recurrent neural networks and transformers byrelating them to formal devices such as finite-state automata, Turingmachines, and analog circuits. Altogether, the results covered in thistutorial will allow us to make precise statements and explanations about theobserved as well as predicted behaviors of LMs, as well as providetheoretically motivated suggestions on the aspects of the architectures thatcould be improved.",
}
| Language models (LMs) are currently at the forefront of NLP research due to their remarkable versatility across diverse tasks. However, a large gap exists between their observed capabilities and the explanations proposed by established formal machinery. To motivate a better theoretical characterization of LMs{'} abilities and limitations, this tutorial aims to provide a comprehensive introduction to a specific framework for formal analysis of modern LMs using tools from formal language theory (FLT). We present how tools from FLT can be useful in understanding the inner workings and predicting the capabilities of modern neural LM architectures. We will cover recent results using FLT to make precise and practically relevant statements about LMs based on recurrent neural networks and transformers by relating them to formal devices such as finite-state automata, Turing machines, and analog circuits. Altogether, the results covered in this tutorial will allow us to make precise statements and explanations about the observed as well as predicted behaviors of LMs, as well as provide theoretically motivated suggestions on the aspects of the architectures that could be improved. | [
"Butoi, Alex",
"ra",
"Cotterell, Ryan",
"Svete, Anej"
] | Computational Expressivity of Neural Language Models | acl-tutorials.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-tutorials.3/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-tutorials.4.bib | @inproceedings{karimi-etal-2024-presentation,
title = "Presentation Matters: How to Communicate Science in the {NLP} Venues and in the Wild?",
author = "Karimi, Sarvnaz and
Paris, Cecile and
Haffari, Gholamreza",
editor = "Chiruzzo, Luis and
Lee, Hung-yi and
Ribeiro, Leonardo",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-tutorials.4",
pages = "6--7",
abstract = "Each year a large number of early career researchers join the NLP/Computational Linguistics community, with most starting by presenting their research in the *ACL conferences and workshops. While writing a paper that has made it to these venues is one important step, what comes with communicating the outcome is equally important and sets the path to impact of a research outcome. In addition, not all PhD candidates get the chance of being trained for their presentation skills. Research methods courses are not all of the same quality and may not cover scientific communications, and certainly not all are tailored to the NLP community. We are proposing an introductory tutorial that covers a range of different communication skills, including writing, oral presentation (posters and demos), and social media presence. This is to fill in the gap for the researchers who may not have access to research methods courses or other mentors who could help them acquire such skills. The interactive nature of such a tutorial would allow attendees to ask questions and clarifications which would not be possible from reading materials alone.",
}
| Each year a large number of early career researchers join the NLP/Computational Linguistics community, with most starting by presenting their research in the *ACL conferences and workshops. While writing a paper that makes it into these venues is one important step, communicating the outcome is equally important and sets the path to the impact of the research. In addition, not all PhD candidates get the chance to be trained in presentation skills. Research methods courses are not all of the same quality, may not cover scientific communication, and are certainly not all tailored to the NLP community. We propose an introductory tutorial that covers a range of communication skills, including writing, oral presentation (posters and demos), and social media presence. This fills a gap for researchers who may not have access to research methods courses or to mentors who could help them acquire such skills. The interactive nature of such a tutorial allows attendees to ask questions and request clarifications, which would not be possible from reading materials alone. | [
"Karimi, Sarvnaz",
"Paris, Cecile",
"Haffari, Gholamreza"
] | Presentation Matters: How to Communicate Science in the NLP Venues and in the Wild? | acl-tutorials.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-tutorials.4/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-tutorials.5.bib | @inproceedings{fu-etal-2024-vulnerabilities,
title = "Vulnerabilities of Large Language Models to Adversarial Attacks",
author = "Fu, Yu and
Shayegan, Erfan and
Mamun Al Abdullah, Md. and
Zaree, Pedram and
Abu-Ghazaleh, Nael and
Dong, Yue",
editor = "Chiruzzo, Luis and
Lee, Hung-yi and
Ribeiro, Leonardo",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-tutorials.5",
pages = "8--9",
abstract = "This tutorial serves as a comprehensive guide on the vulnerabilities of Large Language Models (LLMs) to adversarial attacks, an interdisciplinary field that blends perspectives from Natural Language Processing (NLP) and Cybersecurity. As LLMs become more complex and integrated into various systems, understanding their security attributes is crucial. However, current research indicates that even safety-aligned models are not impervious to adversarial attacks that can result in incorrect or harmful outputs. The tutorial first lays the foundation by explaining safety-aligned LLMs and concepts in cybersecurity. It then categorizes existing research based on different types of learning architectures and attack methods. We highlight the existing vulnerabilities of unimodal LLMs, multi-modal LLMs, and systems that integrate LLMs, focusing on adversarial attacks designed to exploit weaknesses and mislead AI systems. Finally, the tutorial delves into the potential causes of these vulnerabilities and discusses potential defense mechanisms.",
}
| This tutorial serves as a comprehensive guide on the vulnerabilities of Large Language Models (LLMs) to adversarial attacks, an interdisciplinary field that blends perspectives from Natural Language Processing (NLP) and Cybersecurity. As LLMs become more complex and integrated into various systems, understanding their security attributes is crucial. However, current research indicates that even safety-aligned models are not impervious to adversarial attacks that can result in incorrect or harmful outputs. The tutorial first lays the foundation by explaining safety-aligned LLMs and concepts in cybersecurity. It then categorizes existing research based on different types of learning architectures and attack methods. We highlight the existing vulnerabilities of unimodal LLMs, multi-modal LLMs, and systems that integrate LLMs, focusing on adversarial attacks designed to exploit weaknesses and mislead AI systems. Finally, the tutorial delves into the potential causes of these vulnerabilities and discusses potential defense mechanisms. | [
"Fu, Yu",
"Shayegan, Erfan",
"Mamun Al Abdullah, Md.",
"Zaree, Pedram",
"Abu-Ghazaleh, Nael",
"Dong, Yue"
] | Vulnerabilities of Large Language Models to Adversarial Attacks | acl-tutorials.5 | Poster | [
""
] | https://huggingface.co/papers/2310.10844 | 1 | 0 | 0 | 6 | https://aclanthology.org/2024.acl-tutorials.5/ | [] | [] | [] | 1 |
|
https://aclanthology.org/2024.acl-tutorials.6.bib | @inproceedings{gao-etal-2024-detecting,
title = "Detecting Machine-Generated Text: Techniques and Challenges",
author = "Gao, Li and
Xiong, Wenhan and
Kim, Taewoo",
editor = "Chiruzzo, Luis and
Lee, Hung-yi and
Ribeiro, Leonardo",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-tutorials.6",
pages = "10--11",
abstract = "As AI-generated text increasingly resembles human-written content, the ability to detect machine-generated text becomes crucial in many applications. This tutorial aims to provide a comprehensive overview of text detection techniques, focusing on machine-generated text and deepfakes. We will discuss various methods for distinguishing between human-written and machine-generated text, including statistical methods, neural network-based techniques, and hybrid approaches. The tutorial will also cover the challenges in the detection process, such as dealing with evolving models and maintaining robustness against adversarial attacks. By the end of the session, attendees will have a solid understanding of current techniques and future directions in the field of text detection.",
}
| As AI-generated text increasingly resembles human-written content, the ability to detect machine-generated text becomes crucial in many applications. This tutorial aims to provide a comprehensive overview of text detection techniques, focusing on machine-generated text and deepfakes. We will discuss various methods for distinguishing between human-written and machine-generated text, including statistical methods, neural network-based techniques, and hybrid approaches. The tutorial will also cover the challenges in the detection process, such as dealing with evolving models and maintaining robustness against adversarial attacks. By the end of the session, attendees will have a solid understanding of current techniques and future directions in the field of text detection. | [
"Gao, Li",
"Xiong, Wenhan",
"Kim, Taewoo"
] | Detecting Machine-Generated Text: Techniques and Challenges | acl-tutorials.6 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-tutorials.6/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.1.bib | @inproceedings{peng-etal-2024-controllable,
title = "Controllable Data Augmentation for Few-Shot Text Mining with Chain-of-Thought Attribute Manipulation",
author = "Peng, Letian and
Zhang, Yuwei and
Shang, Jingbo",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.1",
pages = "1--16",
abstract = "Prompting large language models (LLMs) for data augmentation has recently become a common practice in few-shot NLP tasks. In this paper, we propose Chain-of-Thought Attribute Manipulation (CoTAM), a novel approach that generates new data from existing examples by only tweaking in the user-provided, task-specific attribute, e.g., sentiment polarity or topic in movie reviews. Instead of conventional latent representation controlling, we leverage the chain-of-thought prompting to directly edit the text in three steps, (1) attribute decomposition, (2) manipulation proposal, and (3) sentence reconstruction. Extensive results on various tasks, such as text (pair) classification and aspect-based sentiment analysis, verify the superiority of CoTAM over other LLM-based augmentation methods with the same number of training examples for both fine-tuning and in-context learning. Remarkably, the 2D visualization of the augmented dataset using principle component analysis revealed a human-recognizable decision boundary that is likely hinted by the attribute manipulation, demonstrating the potential of our proposed approach.",
}
| Prompting large language models (LLMs) for data augmentation has recently become a common practice in few-shot NLP tasks. In this paper, we propose Chain-of-Thought Attribute Manipulation (CoTAM), a novel approach that generates new data from existing examples by tweaking only the user-provided, task-specific attribute, e.g., sentiment polarity or topic in movie reviews. Instead of conventional latent representation control, we leverage chain-of-thought prompting to directly edit the text in three steps: (1) attribute decomposition, (2) manipulation proposal, and (3) sentence reconstruction. Extensive results on various tasks, such as text (pair) classification and aspect-based sentiment analysis, verify the superiority of CoTAM over other LLM-based augmentation methods with the same number of training examples for both fine-tuning and in-context learning. Remarkably, the 2D visualization of the augmented dataset using principal component analysis revealed a human-recognizable decision boundary that is likely hinted at by the attribute manipulation, demonstrating the potential of our proposed approach. | [
"Peng, Letian",
"Zhang, Yuwei",
"Shang, Jingbo"
] | Controllable Data Augmentation for Few-Shot Text Mining with Chain-of-Thought Attribute Manipulation | findings-acl.1 | Poster | 2307.07099 | [
"https://github.com/komeijiforce/cotam"
] | https://huggingface.co/papers/2307.07099 | 0 | 1 | 0 | 3 | https://aclanthology.org/2024.findings-acl.1/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.findings-acl.2.bib | @inproceedings{song-etal-2024-match,
title = "Match More, Extract Better! Hybrid Matching Model for Open Domain Web Keyphrase Extraction",
author = "Song, Mingyang and
Jing, Liping and
Feng, Yi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.2",
pages = "17--27",
abstract = "Keyphrase extraction aims to automatically extract salient phrases representing the critical information in the source document. Identifying salient phrases is challenging because there is a lot of noisy information in the document, leading to wrong extraction. To address this issue, in this paper, we propose a hybrid matching model for keyphrase extraction, which combines representation-focused and interaction-based matching modules into a unified framework for improving the performance of the keyphrase extraction task. Specifically, HybridMatch comprises (1) a PLM-based Siamese encoder component that represents both candidate phrases and documents, (2) an interaction-focused matching (IM) component that estimates word matches between candidate phrases and the corresponding document at the word level, and (3) a representation-focused matching (RM) component captures context-aware semantic relatedness of each candidate keyphrase at the phrase level. Extensive experimental results on the OpenKP dataset demonstrate that the performance of the proposed model HybridMatch outperforms the recent state-of-the-art keyphrase extraction baselines. Furthermore, we discuss the performance of large language models in keyphrase extraction based on recent studies and our experiments.",
}
| Keyphrase extraction aims to automatically extract salient phrases representing the critical information in the source document. Identifying salient phrases is challenging because there is a lot of noisy information in the document, leading to incorrect extractions. To address this issue, in this paper, we propose a hybrid matching model for keyphrase extraction, which combines representation-focused and interaction-based matching modules into a unified framework for improving the performance of the keyphrase extraction task. Specifically, HybridMatch comprises (1) a PLM-based Siamese encoder component that represents both candidate phrases and documents, (2) an interaction-focused matching (IM) component that estimates word matches between candidate phrases and the corresponding document at the word level, and (3) a representation-focused matching (RM) component that captures context-aware semantic relatedness of each candidate keyphrase at the phrase level. Extensive experimental results on the OpenKP dataset demonstrate that the proposed HybridMatch model outperforms recent state-of-the-art keyphrase extraction baselines. Furthermore, we discuss the performance of large language models in keyphrase extraction based on recent studies and our experiments. | [
"Song, Mingyang",
"Jing, Liping",
"Feng, Yi"
] | Match More, Extract Better! Hybrid Matching Model for Open Domain Web Keyphrase Extraction | findings-acl.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.2/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.3.bib | @inproceedings{zhang-etal-2024-afpq,
title = "{AFPQ}: Asymmetric Floating Point Quantization for {LLM}s",
author = "Zhang, Yijia and
Zhang, Sicheng and
Cao, Shijie and
Du, DaYou and
Wei, Jianyu and
Cao, Ting and
Xu, Ningyi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.3",
pages = "28--36",
abstract = "Large language models (LLMs) show great performance in various tasks, but face deployment challenges from limited memory capacity and bandwidth.Low-bit weight quantization can save memory and accelerate inference.Although floating-point (FP) formats show good performance in LLM quantization, they tend to perform poorly with small group sizes or sub-4 bits.We find the reason is that the absence of asymmetry in previous FP quantization makes it unsuitable for handling asymmetric value distribution of LLM weight tensors.In this work, we propose asymmetric FP quantization (AFPQ), which sets separate scales for positive and negative values.Our method leads to large accuracy improvements and can be easily plugged into other quantization methods, including GPTQ and AWQ, for better performance.Besides, no additional storage is needed compared with asymmetric integer (INT) quantization.The code is available at https://github.com/zhangsichengsjtu/AFPQ.",
}
| Large language models (LLMs) show great performance in various tasks, but face deployment challenges from limited memory capacity and bandwidth. Low-bit weight quantization can save memory and accelerate inference. Although floating-point (FP) formats show good performance in LLM quantization, they tend to perform poorly with small group sizes or sub-4 bits. We find the reason is that the absence of asymmetry in previous FP quantization makes it unsuitable for handling the asymmetric value distribution of LLM weight tensors. In this work, we propose asymmetric FP quantization (AFPQ), which sets separate scales for positive and negative values. Our method leads to large accuracy improvements and can be easily plugged into other quantization methods, including GPTQ and AWQ, for better performance. Besides, no additional storage is needed compared with asymmetric integer (INT) quantization. The code is available at https://github.com/zhangsichengsjtu/AFPQ. | [
"Zhang, Yijia",
"Zhang, Sicheng",
"Cao, Shijie",
"Du, DaYou",
"Wei, Jianyu",
"Cao, Ting",
"Xu, Ningyi"
] | AFPQ: Asymmetric Floating Point Quantization for LLMs | findings-acl.3 | Poster | 2311.01792 | [
"https://github.com/zhangsichengsjtu/afpq"
] | https://huggingface.co/papers/2311.01792 | 1 | 0 | 0 | 7 | https://aclanthology.org/2024.findings-acl.3/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.findings-acl.4.bib | @inproceedings{jiang-etal-2024-end,
title = "End-to-End Emotion Semantic Parsing",
author = "Jiang, Xiaotong and
Wang, Zhongqing and
Zhou, Guodong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.4",
pages = "37--47",
abstract = "Emotion detection is the task of automatically associating one or more emotions with a text. The emotions are experienced, targeted, and caused by different semantic constituents. Therefore, it is necessary to incorporate these semantic constituents into the process of emotion detection. In this study, we propose a new task called emotion semantic parsing which aims to parse the emotion and semantic constituents into an abstract semantic tree structure. In particular, we design an end-to-end generation model to capture the relations between emotion and all the semantic constituents, and to generate them jointly. Furthermore, we employ a task decomposition strategy to capture the semantic relation among these constituents in a more cognitive and structural way. Experimental results demonstrate the importance of the proposed task, and indicate the proposed model gives superior performance compared to other models.",
}
| Emotion detection is the task of automatically associating one or more emotions with a text. The emotions are experienced, targeted, and caused by different semantic constituents. Therefore, it is necessary to incorporate these semantic constituents into the process of emotion detection. In this study, we propose a new task called emotion semantic parsing which aims to parse the emotion and semantic constituents into an abstract semantic tree structure. In particular, we design an end-to-end generation model to capture the relations between emotion and all the semantic constituents, and to generate them jointly. Furthermore, we employ a task decomposition strategy to capture the semantic relation among these constituents in a more cognitive and structural way. Experimental results demonstrate the importance of the proposed task, and indicate the proposed model gives superior performance compared to other models. | [
"Jiang, Xiaotong",
"Wang, Zhongqing",
"Zhou, Guodong"
] | End-to-End Emotion Semantic Parsing | findings-acl.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.4/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.5.bib | @inproceedings{chen-etal-2024-overcoming,
title = "Overcoming Catastrophic Forgetting by Exemplar Selection in Task-oriented Dialogue System",
author = "Chen, Chen and
Li, Ruizhe and
Hu, Yuchen and
Chen, Yuanyuan and
Qin, Chengwei and
Zhang, Qiang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.5",
pages = "48--61",
abstract = "Intelligent task-oriented dialogue systems (ToDs) are expected to continuously acquire new knowledge, also known as Continual Learning (CL), which is crucial to fit ever-changing user needs. However, catastrophic forgetting dramatically degrades the model performance in face of a long streamed curriculum. In this paper, we aim to overcome the forgetting problem in ToDs and propose a method (HESIT) with hyper-gradient-based exemplar strategy, which samples influential exemplars for periodic retraining. Instead of unilaterally observing data or models, HESIT adopts a profound exemplar selection strategy that considers the general performance of the trained model when selecting exemplars for each task domain. Specifically, HESIT analyzes the training data influence by tracing their hyper-gradient in the optimization process. Furthermore, HESIT avoids estimating Hessian to make it compatible for ToDs with a large pre-trained model. Experimental results show that HESIT effectively alleviates catastrophic forgetting by exemplar selection, and achieves state-of-the-art performance on the largest CL benchmark of ToDs in terms of all metrics.",
}
| Intelligent task-oriented dialogue systems (ToDs) are expected to continuously acquire new knowledge, also known as Continual Learning (CL), which is crucial to fit ever-changing user needs. However, catastrophic forgetting dramatically degrades the model performance in the face of a long streamed curriculum. In this paper, we aim to overcome the forgetting problem in ToDs and propose a method (HESIT) with a hyper-gradient-based exemplar strategy, which samples influential exemplars for periodic retraining. Instead of unilaterally observing data or models, HESIT adopts a profound exemplar selection strategy that considers the general performance of the trained model when selecting exemplars for each task domain. Specifically, HESIT analyzes the training data influence by tracing their hyper-gradient in the optimization process. Furthermore, HESIT avoids estimating the Hessian to make it compatible with ToDs with a large pre-trained model. Experimental results show that HESIT effectively alleviates catastrophic forgetting by exemplar selection, and achieves state-of-the-art performance on the largest CL benchmark of ToDs in terms of all metrics. | [
"Chen, Chen",
"Li, Ruizhe",
"Hu, Yuchen",
"Chen, Yuanyuan",
"Qin, Chengwei",
"Zhang, Qiang"
] | Overcoming Catastrophic Forgetting by Exemplar Selection in Task-oriented Dialogue System | findings-acl.5 | Poster | 2405.10992 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.5/ | [] | [] | [] | 0 |
|
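The HESIT row above rests on scoring training examples by their influence on the trained model while avoiding a Hessian. The sketch below is not the paper's hyper-gradient tracing; it substitutes a simpler first-order, TracIn-style proxy (gradient dot products against a validation batch) that shares the Hessian-free spirit. All names here are illustrative.

```python
import torch

def influence_scores(model, loss_fn, candidates, val_batch):
    """First-order influence proxy: dot product between each candidate's
    gradient and the validation-loss gradient (a stand-in for HESIT's
    hyper-gradient tracing, not the paper's exact procedure)."""
    params = [p for p in model.parameters() if p.requires_grad]
    val_x, val_y = val_batch
    g_val = torch.autograd.grad(loss_fn(model(val_x), val_y), params)
    scores = []
    for x, y in candidates:
        g = torch.autograd.grad(loss_fn(model(x[None]), y[None]), params)
        scores.append(sum((a * b).sum() for a, b in zip(g, g_val)).item())
    return scores

# Toy usage: keep the top-k most influential exemplars for periodic retraining.
torch.manual_seed(0)
model, loss_fn = torch.nn.Linear(4, 2), torch.nn.CrossEntropyLoss()
data = [(torch.randn(4), torch.tensor(i % 2)) for i in range(8)]
val = (torch.randn(3, 4), torch.tensor([0, 1, 0]))
scores = influence_scores(model, loss_fn, data, val)
print(sorted(range(len(data)), key=lambda i: -scores[i])[:3])  # exemplar indices
```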
https://aclanthology.org/2024.findings-acl.6.bib | @inproceedings{cho-2024-unveiling,
title = "Unveiling Imitation Learning: Exploring the impact of Data Falsity to Large Language Model",
author = "Cho, Hyunsoo",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.6",
pages = "62--73",
abstract = "Many recent studies endeavor to improve open-sourced language models through imitation learning, re-training on the synthetic instruction data from state-of-the-art proprietary models like ChatGPT and GPT-4.However, the innate nature of synthetic data inherently contains noisy data, giving rise to a substantial presence of low-quality data replete with misleading queries, erroneous responses, and flawed reasoning.Although we intuitively grasp the potential harm of noisy data, we lack a quantitative understanding of its impact.To this end, this paper explores correlation between the degree of noise and its impact on language models through instruction tuning.We first introduce the Falsity-Controllable () dataset, which comprises pairs of true answers and corresponding reasoning, as well as false pairs to manually control the factuality ratio of the dataset.Through our extensive experiments, we found multiple intriguing findings of the correlation between factuality and instruction tuning. Specifically, factuality can significantly impact various benchmark characteristics especially when benchmarks are related to knowledge domain, and initial data quality plays a critical role, whereas the number of learning steps has a lesser impact.Additionally, we noted that once the language model is trained with a dataset contaminated by noise, restoring its original performance becomes exceptionally challenging, verging on irreversible.",
}
| Many recent studies endeavor to improve open-sourced language models through imitation learning, re-training on the synthetic instruction data from state-of-the-art proprietary models like ChatGPT and GPT-4. However, the innate nature of synthetic data inherently contains noisy data, giving rise to a substantial presence of low-quality data replete with misleading queries, erroneous responses, and flawed reasoning. Although we intuitively grasp the potential harm of noisy data, we lack a quantitative understanding of its impact. To this end, this paper explores the correlation between the degree of noise and its impact on language models through instruction tuning. We first introduce the Falsity-Controllable () dataset, which comprises pairs of true answers and corresponding reasoning, as well as false pairs, to manually control the factuality ratio of the dataset. Through our extensive experiments, we found multiple intriguing findings on the correlation between factuality and instruction tuning. Specifically, factuality can significantly impact various benchmark characteristics, especially when benchmarks are related to the knowledge domain, and initial data quality plays a critical role, whereas the number of learning steps has a lesser impact. Additionally, we noted that once the language model is trained with a dataset contaminated by noise, restoring its original performance becomes exceptionally challenging, verging on irreversible. | [
"Cho, Hyunsoo"
] | Unveiling Imitation Learning: Exploring the impact of Data Falsity to Large Language Model | findings-acl.6 | Poster | 2404.09717 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.6/ | [] | [] | [] | 0 |
|
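A sketch of the one controllable knob the row above describes: mixing true and false (answer, reasoning) pairs at a fixed factuality ratio before instruction tuning. The field names are illustrative, not the dataset's actual schema.

```python
import random

def build_split(true_pairs, false_pairs, factuality_ratio, n, seed=0):
    """Sample n instruction examples with a controlled fraction of factual ones."""
    rng = random.Random(seed)
    n_true = round(n * factuality_ratio)
    mix = rng.sample(true_pairs, n_true) + rng.sample(false_pairs, n - n_true)
    rng.shuffle(mix)
    return mix

true_pairs = [{"q": f"q{i}", "a": "true", "rationale": "valid"} for i in range(100)]
false_pairs = [{"q": f"q{i}", "a": "false", "rationale": "flawed"} for i in range(100)]
for ratio in (1.0, 0.75, 0.5):
    split = build_split(true_pairs, false_pairs, ratio, n=40)
    print(ratio, "->", sum(p["a"] == "true" for p in split) / len(split))
```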
https://aclanthology.org/2024.findings-acl.7.bib | @inproceedings{gu-etal-2024-counterfeit,
title = "The Counterfeit Conundrum: Can Code Language Models Grasp the Nuances of Their Incorrect Generations?",
author = "Gu, Alex and
Li, Wen-Ding and
Jain, Naman and
Olausson, Theo and
Lee, Celine and
Sen, Koushik and
Solar-Lezama, Armando",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.7",
pages = "74--117",
abstract = "While language models are increasingly more proficient at code generation, they still frequently generate incorrect programs. Many of these programs are obviously wrong, but others are more subtle and pass weaker correctness checks such as being able to compile. In this work, we focus on these counterfeit samples: programs sampled from a language model that 1) have a high enough log-probability to be generated at a moderate temperature and 2) pass weak correctness checks. Overall, we discover that most models have a very shallow understanding of counterfeits through three clear failure modes. First, models mistakenly classify them as correct. Second, models are worse at reasoning about the execution behaviour of counterfeits and often predict their execution results as if they were correct. Third, when asking models to fix counterfeits, the likelihood of a model successfully repairing a counterfeit is often even lower than that of sampling a correct program from scratch. Counterfeits also have very unexpected properties: first, counterfeit programs for problems that are easier for a model to solve are not necessarily easier to detect and only slightly easier to execute and repair. Second, counterfeits from a given model are just as confusing to the model itself as they are to other models. Finally, both strong and weak models are able to generate counterfeit samples that equally challenge all models. In light of our findings, we recommend that care and caution be taken when relying on models to understand their own samples, especially when no external feedback is incorporated.",
}
| While language models are increasingly more proficient at code generation, they still frequently generate incorrect programs. Many of these programs are obviously wrong, but others are more subtle and pass weaker correctness checks such as being able to compile. In this work, we focus on these counterfeit samples: programs sampled from a language model that 1) have a high enough log-probability to be generated at a moderate temperature and 2) pass weak correctness checks. Overall, we discover that most models have a very shallow understanding of counterfeits through three clear failure modes. First, models mistakenly classify them as correct. Second, models are worse at reasoning about the execution behaviour of counterfeits and often predict their execution results as if they were correct. Third, when asking models to fix counterfeits, the likelihood of a model successfully repairing a counterfeit is often even lower than that of sampling a correct program from scratch. Counterfeits also have very unexpected properties: first, counterfeit programs for problems that are easier for a model to solve are not necessarily easier to detect and only slightly easier to execute and repair. Second, counterfeits from a given model are just as confusing to the model itself as they are to other models. Finally, both strong and weak models are able to generate counterfeit samples that equally challenge all models. In light of our findings, we recommend that care and caution be taken when relying on models to understand their own samples, especially when no external feedback is incorporated. | [
"Gu, Alex",
"Li, Wen-Ding",
"Jain, Naman",
"Olausson, Theo",
"Lee, Celine",
"Sen, Koushik",
"Solar-Lezama, Arm",
"o"
] | The Counterfeit Conundrum: Can Code Language Models Grasp the Nuances of Their Incorrect Generations? | findings-acl.7 | Poster | 2402.19475 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.7/ | [] | [] | [] | 0 |
|
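The counterfeit definition in the row above is operational, so it translates directly to code: a sample is counterfeit if it is likely enough to be generated, passes a weak check (here, compiling), yet fails the real tests. The `solve` entry-point convention and the log-probability threshold are assumptions of this sketch.

```python
def is_counterfeit(program, tests, logprob, threshold=-50.0):
    """Counterfeit (simplified): plausible under the model, compiles, fails tests."""
    if logprob < threshold:
        return False  # too unlikely to be sampled at moderate temperature
    try:
        code = compile(program, "<candidate>", "exec")  # weak correctness check
    except SyntaxError:
        return False  # obviously wrong, not a counterfeit
    namespace = {}
    exec(code, namespace)  # assumes the candidate defines a `solve` function
    try:
        return any(namespace["solve"](*args) != want for args, want in tests)
    except Exception:
        return True  # crashes on real inputs: still a counterfeit

subtle_bug = "def solve(x):\n    return x + 2  # intended: x + 1\n"
print(is_counterfeit(subtle_bug, tests=[((1,), 2), ((5,), 6)], logprob=-12.0))  # True
```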
https://aclanthology.org/2024.findings-acl.8.bib | @inproceedings{hsu-etal-2024-chime,
title = "{CHIME}: {LLM}-Assisted Hierarchical Organization of Scientific Studies for Literature Review Support",
author = "Hsu, Chao-Chun and
Bransom, Erin and
Sparks, Jenna and
Kuehl, Bailey and
Tan, Chenhao and
Wadden, David and
Wang, Lucy and
Naik, Aakanksha",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.8",
pages = "118--132",
abstract = "Literature review requires researchers to synthesize a large amount of information and is increasingly challenging as the scientific literature expands. In this work, we investigate the potential of LLMs for producing hierarchical organizations of scientific studies to assist researchers with literature review. We define hierarchical organizations as tree structures where nodes refer to topical categories and every node is linked to the studies assigned to that category. Our naive LLM-based pipeline for hierarchy generation from a set of studies produces promising yet imperfect hierarchies, motivating us to collect CHIME, an expert-curated dataset for this task focused on biomedicine. Given the challenging and time-consuming nature of building hierarchies from scratch, we use a human-in-the-loop process in which experts correct errors (both links between categories and study assignment) in LLM-generated hierarchies. CHIME contains 2,174 LLM-generated hierarchies covering 472 topics, and expert-corrected hierarchies for a subset of 100 topics. Expert corrections allow us to quantify LLM performance, and we find that while they are quite good at generating and organizing categories, their assignment of studies to categories could be improved. We attempt to train a corrector model with human feedback which improves study assignment by 12.6 F1 points. We release our dataset and models to encourage research on developing better assistive tools for literature review.",
}
| Literature review requires researchers to synthesize a large amount of information and is increasingly challenging as the scientific literature expands. In this work, we investigate the potential of LLMs for producing hierarchical organizations of scientific studies to assist researchers with literature review. We define hierarchical organizations as tree structures where nodes refer to topical categories and every node is linked to the studies assigned to that category. Our naive LLM-based pipeline for hierarchy generation from a set of studies produces promising yet imperfect hierarchies, motivating us to collect CHIME, an expert-curated dataset for this task focused on biomedicine. Given the challenging and time-consuming nature of building hierarchies from scratch, we use a human-in-the-loop process in which experts correct errors (both links between categories and study assignment) in LLM-generated hierarchies. CHIME contains 2,174 LLM-generated hierarchies covering 472 topics, and expert-corrected hierarchies for a subset of 100 topics. Expert corrections allow us to quantify LLM performance, and we find that while they are quite good at generating and organizing categories, their assignment of studies to categories could be improved. We attempt to train a corrector model with human feedback which improves study assignment by 12.6 F1 points. We release our dataset and models to encourage research on developing better assistive tools for literature review. | [
"Hsu, Chao-Chun",
"Bransom, Erin",
"Sparks, Jenna",
"Kuehl, Bailey",
"Tan, Chenhao",
"Wadden, David",
"Wang, Lucy",
"Naik, Aakanksha"
] | CHIME: LLM-Assisted Hierarchical Organization of Scientific Studies for Literature Review Support | findings-acl.8 | Poster | 2407.16148 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.8/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.9.bib | @inproceedings{li-etal-2024-side,
title = "Which Side Are You On? A Multi-task Dataset for End-to-End Argument Summarisation and Evaluation",
author = "Li, Hao and
Wu, Yuping and
Schlegel, Viktor and
Batista-Navarro, Riza and
Madusanka, Tharindu and
Zahid, Iqra and
Zeng, Jiayan and
Wang, Xiaochi and
He, Xinran and
Li, Yizhi and
Nenadic, Goran",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.9",
pages = "133--150",
abstract = "With the recent advances of large language models (LLMs), it is no longer infeasible to build an automated debate system that helps people to synthesise persuasive arguments. Previous work attempted this task by integrating multiple components. In our work, we introduce an argument mining dataset that captures the end-to-end process of preparing an argumentative essay for a debate, which covers the tasks of claim and evidence identification (Task 1 ED), evidence convincingness ranking (Task 2 ECR), argumentative essay summarisation and human preference ranking (Task 3 ASR) and metric learning for automated evaluation of resulting essays, based on human feedback along argument quality dimensions (Task 4 SQE). Our dataset contains 14k examples of claims that are fully annotated with various properties supporting the aforementioned tasks. We evaluate multiple generative baselines for each of these tasks, including representative LLMs. We find, that while they show promising results on individual tasks in our benchmark, their end-to-end performance on all four tasks in succession deteriorates significantly, both in automated measures as well as in human-centred evaluation. This challenge presented by our proposed dataset motivates future research on end-to-end argument mining and summarisation. The repository of this project is available at https://github.com/HarrywillDr/ArgSum-Datatset.",
}
| With the recent advances of large language models (LLMs), it is no longer infeasible to build an automated debate system that helps people to synthesise persuasive arguments. Previous work attempted this task by integrating multiple components. In our work, we introduce an argument mining dataset that captures the end-to-end process of preparing an argumentative essay for a debate, which covers the tasks of claim and evidence identification (Task 1 ED), evidence convincingness ranking (Task 2 ECR), argumentative essay summarisation and human preference ranking (Task 3 ASR) and metric learning for automated evaluation of resulting essays, based on human feedback along argument quality dimensions (Task 4 SQE). Our dataset contains 14k examples of claims that are fully annotated with various properties supporting the aforementioned tasks. We evaluate multiple generative baselines for each of these tasks, including representative LLMs. We find that, while they show promising results on individual tasks in our benchmark, their end-to-end performance on all four tasks in succession deteriorates significantly, both in automated measures and in human-centred evaluation. This challenge presented by our proposed dataset motivates future research on end-to-end argument mining and summarisation. The repository of this project is available at https://github.com/HarrywillDr/ArgSum-Datatset. | [
"Li, Hao",
"Wu, Yuping",
"Schlegel, Viktor",
"Batista-Navarro, Riza",
"Madusanka, Tharindu",
"Zahid, Iqra",
"Zeng, Jiayan",
"Wang, Xiaochi",
"He, Xinran",
"Li, Yizhi",
"Nenadic, Goran"
] | Which Side Are You On? A Multi-task Dataset for End-to-End Argument Summarisation and Evaluation | findings-acl.9 | Poster | 2406.03151 | [
""
] | https://huggingface.co/papers/2406.03151 | 0 | 0 | 0 | 11 | https://aclanthology.org/2024.findings-acl.9/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.findings-acl.10.bib | @inproceedings{naseem-etal-2024-grounded,
title = "A Grounded Preference Model for {LLM} Alignment",
author = "Naseem, Tahira and
Xu, Guangxuan and
Swaminathan, Sarathkrishna and
Yehudai, Asaf and
Chaudhury, Subhajit and
Florian, Radu and
Astudillo, Ram{\'o}n and
Munawar, Asim",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.10",
pages = "151--162",
abstract = "Despite LLMs{'} recent advancements, they still suffer from factual inconsistency and hallucination. An often-opted remedy is retrieval-augmented generation {--} however, there is no guarantee that the model will strictly adhere to retrieved grounding. Fundamentally, LLMs need to be aligned to be more faithful to grounding, which will require high-quality preference annotations. This paper investigates whether we can create high-quality grounded preference data for model alignment without using annotations from humans or large proprietary models. We experimented with existing entailment data and proposed approaches to generate synthetic grounded preference data, with which we train a Grounded Preference Model(GPM). We demonstrate through Proximal Policy Optimization(PPO) training of Mistral-7B-Instruct that our GPM model can successfully align powerful LLMs to generate much better grounded responses as judged by GPT4. Moreover, we show that our GPM is also a great faithfulness classifier, achieving SoTA in dialogue sub-tasks of the TRUE faithfulness Benchmark. We will release our GPM under the Apache 2.0 license.",
}
| Despite LLMs{'} recent advancements, they still suffer from factual inconsistency and hallucination. An often-opted remedy is retrieval-augmented generation {--} however, there is no guarantee that the model will strictly adhere to retrieved grounding. Fundamentally, LLMs need to be aligned to be more faithful to grounding, which will require high-quality preference annotations. This paper investigates whether we can create high-quality grounded preference data for model alignment without using annotations from humans or large proprietary models. We experimented with existing entailment data and proposed approaches to generate synthetic grounded preference data, with which we train a Grounded Preference Model (GPM). We demonstrate through Proximal Policy Optimization (PPO) training of Mistral-7B-Instruct that our GPM model can successfully align powerful LLMs to generate much better grounded responses as judged by GPT4. Moreover, we show that our GPM is also a great faithfulness classifier, achieving SoTA in dialogue sub-tasks of the TRUE faithfulness Benchmark. We will release our GPM under the Apache 2.0 license. | [
"Naseem, Tahira",
"Xu, Guangxuan",
"Swaminathan, Sarathkrishna",
"Yehudai, Asaf",
"Chaudhury, Subhajit",
"Florian, Radu",
"Astudillo, Ram{\\'o}n",
"Munawar, Asim"
] | A Grounded Preference Model for LLM Alignment | findings-acl.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.10/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.11.bib | @inproceedings{jin-etal-2024-graph,
title = "Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs",
author = "Jin, Bowen and
Xie, Chulin and
Zhang, Jiawei and
Roy, Kashob Kumar and
Zhang, Yu and
Li, Zheng and
Li, Ruirui and
Tang, Xianfeng and
Wang, Suhang and
Meng, Yu and
Han, Jiawei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.11",
pages = "163--184",
abstract = "Large language models (LLMs), while exhibiting exceptional performance, suffer from hallucinations, especially on knowledge-intensive tasks. Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora to alleviate the issue. However, in many domains, texts are interconnected (e.g., academic papers in a bibliographic graph are linked by citations and co-authorships) which form a (text-attributed) graph. The knowledge in such graphs is encoded not only in single texts/nodes but also in their associated connections. To facilitate the research of augmenting LLMs with graphs, we manually construct a Graph Reasoning Benchmark dataset called GRBench, containing 1,740 questions that can be answered with the knowledge from 10 domain graphs. Then, we propose a simple and effective framework called Graph Chain-of-thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively. Each Graph-CoT iteration consists of three sub-steps: LLM reasoning, LLM-graph interaction, and graph execution. We conduct systematic experiments with three LLM backbones on GRBench, where Graph-CoT outperforms the baselines consistently. The code is available at https://github.com/PeterGriffinJin/Graph-CoT/.",
}
| Large language models (LLMs), while exhibiting exceptional performance, suffer from hallucinations, especially on knowledge-intensive tasks. Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora to alleviate the issue. However, in many domains, texts are interconnected (e.g., academic papers in a bibliographic graph are linked by citations and co-authorships) which form a (text-attributed) graph. The knowledge in such graphs is encoded not only in single texts/nodes but also in their associated connections. To facilitate the research of augmenting LLMs with graphs, we manually construct a Graph Reasoning Benchmark dataset called GRBench, containing 1,740 questions that can be answered with the knowledge from 10 domain graphs. Then, we propose a simple and effective framework called Graph Chain-of-thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively. Each Graph-CoT iteration consists of three sub-steps: LLM reasoning, LLM-graph interaction, and graph execution. We conduct systematic experiments with three LLM backbones on GRBench, where Graph-CoT outperforms the baselines consistently. The code is available at https://github.com/PeterGriffinJin/Graph-CoT/. | [
"Jin, Bowen",
"Xie, Chulin",
"Zhang, Jiawei",
"Roy, Kashob Kumar",
"Zhang, Yu",
"Li, Zheng",
"Li, Ruirui",
"Tang, Xianfeng",
"Wang, Suhang",
"Meng, Yu",
"Han, Jiawei"
] | Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs | findings-acl.11 | Poster | 2404.07103 | [
"https://github.com/petergriffinjin/graph-cot"
] | https://huggingface.co/papers/2404.07103 | 2 | 0 | 0 | 8 | https://aclanthology.org/2024.findings-acl.11/ | [] | [
"PeterJinGo/GRBench"
] | [] | 1 |
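The Graph-CoT row above specifies a concrete loop: LLM reasoning, LLM-graph interaction, graph execution, repeated until an answer. A minimal sketch follows; the action grammar (Retrieve/Neighbors/Finish), the networkx-style graph interface, and the scripted `llm` callable are assumptions for illustration, not the released implementation.

```python
import networkx as nx

def graph_cot(question, graph, llm, max_iters=5):
    """One Graph-CoT-style agent loop: reason, pick an action, execute on the graph."""
    context = f"Question: {question}\n"
    for _ in range(max_iters):
        thought = llm(context + "Thought:")                      # 1) LLM reasoning
        action = llm(context + f"Thought: {thought}\nAction:")   # 2) graph interaction
        if action.startswith("Finish:"):
            return action.removeprefix("Finish:").strip()
        if action.startswith("Retrieve:"):                       # 3) graph execution
            node = action.removeprefix("Retrieve:").strip()
            obs = graph.nodes[node].get("text", "") if node in graph else "not found"
        elif action.startswith("Neighbors:"):
            node = action.removeprefix("Neighbors:").strip()
            obs = ", ".join(graph.neighbors(node)) if node in graph else "not found"
        else:
            obs = "unknown action"
        context += f"Thought: {thought}\nAction: {action}\nObservation: {obs}\n"
    return "no answer within budget"

# Tiny graph plus a scripted LLM so the loop runs end to end.
g = nx.Graph()
g.add_node("paper_a", text="Paper A studies retrieval.")
g.add_edge("paper_a", "paper_b")
script = iter(["look up paper_a", "Retrieve: paper_a",
               "enough evidence", "Finish: Paper A studies retrieval."])
print(graph_cot("What does paper_a study?", g, lambda _: next(script)))
```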
https://aclanthology.org/2024.findings-acl.12.bib | @inproceedings{jiao-etal-2024-text2db,
title = "{T}ext2{DB}: Integration-Aware Information Extraction with Large Language Model Agents",
author = "Jiao, Yizhu and
Li, Sha and
Zhou, Sizhe and
Ji, Heng and
Han, Jiawei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.12",
pages = "185--205",
abstract = "The task of information extraction (IE) is to extract structured knowledge from text. However, it is often not straightforward to utilize IE output due to the mismatch between the IE ontology and the downstream application needs. We propose a new formulation of IE, Text2DB, that emphasizes the integration of IE output and the target database (or knowledge base). Given a user instruction, a document set, and a database, our task requires the model to update the database with values from the document set to satisfy the user instruction. This task requires understanding user instructions for \textit{what to extract} and adapting to the given DB/KB schema for \textit{how to extract} on the fly. To evaluate this new task, we introduce a new benchmark featuring common demands such as data infilling, row population, and column addition. In addition, we propose an LLM agent framework OPAL (Observe-Plan-Analyze LLM) which includes an Observer component that interacts with the database, the Planner component that generates a code-based plan with calls to IE models, and the Analyzer component that provides feedback regarding code quality before execution. Experiments show that OPAL can successfully adapt to diverse database schemas by generating different code plans and calling the required IE models. We also highlight difficult cases such as dealing with large databases with complex dependencies and extraction hallucination, which we believe deserve further investigation.",
}
| The task of information extraction (IE) is to extract structured knowledge from text. However, it is often not straightforward to utilize IE output due to the mismatch between the IE ontology and the downstream application needs. We propose a new formulation of IE, Text2DB, that emphasizes the integration of IE output and the target database (or knowledge base). Given a user instruction, a document set, and a database, our task requires the model to update the database with values from the document set to satisfy the user instruction. This task requires understanding user instructions for \textit{what to extract} and adapting to the given DB/KB schema for \textit{how to extract} on the fly. To evaluate this new task, we introduce a new benchmark featuring common demands such as data infilling, row population, and column addition. In addition, we propose an LLM agent framework OPAL (Observe-Plan-Analyze LLM) which includes an Observer component that interacts with the database, the Planner component that generates a code-based plan with calls to IE models, and the Analyzer component that provides feedback regarding code quality before execution. Experiments show that OPAL can successfully adapt to diverse database schemas by generating different code plans and calling the required IE models. We also highlight difficult cases such as dealing with large databases with complex dependencies and extraction hallucination, which we believe deserve further investigation. | [
"Jiao, Yizhu",
"Li, Sha",
"Zhou, Sizhe",
"Ji, Heng",
"Han, Jiawei"
] | Text2DB: Integration-Aware Information Extraction with Large Language Model Agents | findings-acl.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.12/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.13.bib | @inproceedings{liu-etal-2024-important,
title = "How Important is a Language Model for Low-resource {ASR}?",
author = "Liu, Zoey and
Venkateswaran, Nitin and
Le Ferrand, Eric and
Prud{'}hommeaux, Emily",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.13",
pages = "206--213",
abstract = "N-gram language models (LMs) are the innovation that first made large-vocabulary continuous automatic speech recognition (ASR) viable. With neural end-to-end ASR architectures, however, LMs have become an afterthought. While the effect on accuracy may be negligible for English and Mandarin, jettisoning the LM might not make sense for the world{'}s remaining 6000+ languages. In this paper, we investigate the role of the LM in low-resource ASR. First we ask: does using an n-gram LM in decoding in neural architectures help ASR performance? While it may seem obvious that it should, its absence in most implementations suggests otherwise. Second, we ask: when an n-gram LM is used in ASR, is there a relationship between the size of the LM and ASR accuracy? We have discovered that gut feelings on this question vary considerably, but there is little empirical work to support any particular claim. We explore these questions {``}in the wild{''} using a deliberately diverse set of 9 very small ASR corpora. The results show that: (1) decoding with an n-gram LM, regardless of its size, leads to lower word error rates; and (2) increasing the size of the LM appears to yield improvements only when the audio corpus itself is already relatively large. This suggests that collecting additional LM training text may benefit widely-spoken languages which typically have larger audio corpora. In contrast, for endangered languages where data of any kind will always be limited, efforts may be better spent collecting additional transcribed audio.",
}
| N-gram language models (LMs) are the innovation that first made large-vocabulary continuous automatic speech recognition (ASR) viable. With neural end-to-end ASR architectures, however, LMs have become an afterthought. While the effect on accuracy may be negligible for English and Mandarin, jettisoning the LM might not make sense for the world{'}s remaining 6000+ languages. In this paper, we investigate the role of the LM in low-resource ASR. First we ask: does using an n-gram LM in decoding in neural architectures help ASR performance? While it may seem obvious that it should, its absence in most implementations suggests otherwise. Second, we ask: when an n-gram LM is used in ASR, is there a relationship between the size of the LM and ASR accuracy? We have discovered that gut feelings on this question vary considerably, but there is little empirical work to support any particular claim. We explore these questions {``}in the wild{''} using a deliberately diverse set of 9 very small ASR corpora. The results show that: (1) decoding with an n-gram LM, regardless of its size, leads to lower word error rates; and (2) increasing the size of the LM appears to yield improvements only when the audio corpus itself is already relatively large. This suggests that collecting additional LM training text may benefit widely-spoken languages which typically have larger audio corpora. In contrast, for endangered languages where data of any kind will always be limited, efforts may be better spent collecting additional transcribed audio. | [
"Liu, Zoey",
"Venkateswaran, Nitin",
"Le Ferr",
", Eric",
"Prud{'}hommeaux, Emily"
] | How Important is a Language Model for Low-resource ASR? | findings-acl.13 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.13/ | [] | [] | [] | 0 |
||
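The mechanism at issue in the row above, decoding a neural ASR model with an n-gram LM, is typically shallow fusion: rescore each hypothesis with log P_asr + lambda * log P_lm. The sketch below uses a tiny unigram stand-in so it runs without a real n-gram toolkit such as KenLM; the paper's exact decoding setup may differ.

```python
import math

class UnigramLM:
    """Tiny stand-in for an n-gram LM (a real setup would use e.g. KenLM)."""
    def __init__(self, counts, floor=1e-6):
        total = sum(counts.values())
        self.logp = {w: math.log(c / total) for w, c in counts.items()}
        self.floor = math.log(floor)

    def score(self, tokens):
        return sum(self.logp.get(t, self.floor) for t in tokens)

def fused_score(asr_logprob, tokens, lm, lm_weight=0.5):
    """Shallow fusion: combine acoustic-model and LM log-probabilities."""
    return asr_logprob + lm_weight * lm.score(tokens)

lm = UnigramLM({"the": 50, "cat": 5, "sat": 5, "mat": 3})
hyps = [("the cat sat", -4.0), ("the kat sat", -3.8)]  # (hypothesis, ASR log-prob)
best = max(hyps, key=lambda h: fused_score(h[1], h[0].split(), lm))
print(best[0])  # the LM flips the ranking toward the in-vocabulary hypothesis
```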
https://aclanthology.org/2024.findings-acl.14.bib | @inproceedings{thangarasa-etal-2024-mediswift,
title = "{M}edi{S}wift: Efficient Sparse Pre-trained Biomedical Language Models",
author = "Thangarasa, Vithursan and
Salem, Mahmoud and
Saxena, Shreyas and
Leong, Chen-Yu and
Hestness, Joel and
Lie, Sean",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.14",
pages = "214--230",
abstract = "Large language models (LLMs) are typically trained on general source data forvarious domains, but a recent surge in domain-specific LLMs has shown theirpotential to outperform general-purpose models in domain-specific tasks (e.g.,biomedicine). Although domain-specific pre-training enhances efficiency andleads to smaller models, the computational costs of training these LLMs remainhigh, posing budgeting challenges. We introduce MediSwift, a suite of biomedicalLMs that leverage sparse pre-training on domain-specific biomedical text data.By inducing up to 75{\%} weight sparsity during the pre-training phase, MediSwiftachieves a 2-2.5x reduction in training FLOPs. Notably, all sparse pre-trainingwas performed on the Cerebras CS-2 system, which is specifically designed torealize the acceleration benefits from unstructured weight sparsity, therebysignificantly enhancing the efficiency of the MediSwift models. Throughsubsequent dense fine-tuning and strategic soft prompting, MediSwift modelsoutperform existing LLMs up to 7B parameters on biomedical tasks, setting newbenchmarks w.r.t efficiency-accuracy on tasks such as PubMedQA. Our results showthat sparse pre-training, along with dense fine-tuning and soft prompting,offers an effective method for creating high-performing, computationallyefficient models in specialized domains.",
}
| Large language models (LLMs) are typically trained on general source data for various domains, but a recent surge in domain-specific LLMs has shown their potential to outperform general-purpose models in domain-specific tasks (e.g., biomedicine). Although domain-specific pre-training enhances efficiency and leads to smaller models, the computational costs of training these LLMs remain high, posing budgeting challenges. We introduce MediSwift, a suite of biomedical LMs that leverage sparse pre-training on domain-specific biomedical text data. By inducing up to 75{\%} weight sparsity during the pre-training phase, MediSwift achieves a 2-2.5x reduction in training FLOPs. Notably, all sparse pre-training was performed on the Cerebras CS-2 system, which is specifically designed to realize the acceleration benefits from unstructured weight sparsity, thereby significantly enhancing the efficiency of the MediSwift models. Through subsequent dense fine-tuning and strategic soft prompting, MediSwift models outperform existing LLMs up to 7B parameters on biomedical tasks, setting new benchmarks w.r.t. efficiency-accuracy on tasks such as PubMedQA. Our results show that sparse pre-training, along with dense fine-tuning and soft prompting, offers an effective method for creating high-performing, computationally efficient models in specialized domains. | [
"Thangarasa, Vithursan",
"Salem, Mahmoud",
"Saxena, Shreyas",
"Leong, Chen-Yu",
"Hestness, Joel",
"Lie, Sean"
] | MediSwift: Efficient Sparse Pre-trained Biomedical Language Models | findings-acl.14 | Poster | 2403.00952 | [
""
] | https://huggingface.co/papers/2403.00952 | 1 | 0 | 0 | 6 | https://aclanthology.org/2024.findings-acl.14/ | [] | [] | [] | 1 |
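The core lever in the MediSwift row above is unstructured weight sparsity induced during pre-training. The hardware-aware recipe is CS-2-specific, but the basic operation, masking the smallest-magnitude weights, is easy to show; this sketch is that simplification, not the paper's training procedure.

```python
import torch

def magnitude_sparsify(weight, sparsity):
    """Zero the smallest-magnitude fraction of weights (unstructured sparsity)."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

w = torch.randn(256, 256)
w75 = magnitude_sparsify(w, 0.75)  # 75% sparsity, as in the paper's setting
print(f"non-zero fraction: {(w75 != 0).float().mean():.3f}")  # ~0.25
dense_flops, sparse_flops = 2 * w.numel(), 2 * int((w75 != 0).sum())
print(f"ideal FLOP reduction per matmul: {dense_flops / sparse_flops:.1f}x")
```

The ideal per-layer reduction is 4x at 75% sparsity; the paper's reported 2-2.5x end-to-end training figure reflects the dense parts of the pipeline as well.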
https://aclanthology.org/2024.findings-acl.15.bib | @inproceedings{zhuang-etal-2024-lexicon,
title = "Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling",
author = "Zhuang, Chengxu and
Fedorenko, Evelina and
Andreas, Jacob",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.15",
pages = "231--247",
abstract = "Today{'}s most accurate language models are trained on orders of magnitude more language data than human language learners receive{---} but with no supervision from other sensory modalities that play a crucial role in human learning. Can we make LMs{'} representations and predictions more accurate (and more human-like) with more ecologically plausible supervision? This paper describes LexiContrastive Grounding (LCG), a grounded language learning procedure that leverages visual supervision to improve textual representations. LexiContrastive Grounding combines a next-token prediction strategy with a contrastive visual grounding objective, focusing on early-layerrepresentations that encode lexical information. Across multiple word-learning and sentence-understanding benchmarks, LexiContrastiveGrounding not only outperforms standard language-only models in terms of learning efficiency in small and developmentally plausible data regimes, but also improves upon vision-and-language learning procedures including CLIP, GIT, Flamingo, and Vokenization.Moreover, LexiContrastive Grounding improves perplexity by around 5{\%} on multiple language modeling tasks compared to other models trained on the same amount of text data. This work underscores the potential of incorporating visual grounding into language models, aligning more closely with the multimodal nature of human language acquisition.",
}
| Today{'}s most accurate language models are trained on orders of magnitude more language data than human language learners receive{---} but with no supervision from other sensory modalities that play a crucial role in human learning. Can we make LMs{'} representations and predictions more accurate (and more human-like) with more ecologically plausible supervision? This paper describes LexiContrastive Grounding (LCG), a grounded language learning procedure that leverages visual supervision to improve textual representations. LexiContrastive Grounding combines a next-token prediction strategy with a contrastive visual grounding objective, focusing on early-layer representations that encode lexical information. Across multiple word-learning and sentence-understanding benchmarks, LexiContrastive Grounding not only outperforms standard language-only models in terms of learning efficiency in small and developmentally plausible data regimes, but also improves upon vision-and-language learning procedures including CLIP, GIT, Flamingo, and Vokenization. Moreover, LexiContrastive Grounding improves perplexity by around 5{\%} on multiple language modeling tasks compared to other models trained on the same amount of text data. This work underscores the potential of incorporating visual grounding into language models, aligning more closely with the multimodal nature of human language acquisition. | [
"Zhuang, Chengxu",
"Fedorenko, Evelina",
"Andreas, Jacob"
] | Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling | findings-acl.15 | Poster | 2403.14551 | [
""
] | https://huggingface.co/papers/2403.14551 | 0 | 2 | 0 | 3 | https://aclanthology.org/2024.findings-acl.15/ | [] | [] | [] | 1 |
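The LCG row above combines two objectives. Here is a sketch of what that combination can look like, with InfoNCE standing in for the contrastive grounding term over early-layer word representations; the shapes, mixing weight alpha, and temperature tau are assumptions, not the paper's hyperparameters.

```python
import torch
import torch.nn.functional as F

def lcg_loss(token_logits, targets, word_emb, img_emb, alpha=0.5, tau=0.07):
    """Next-token cross-entropy plus a contrastive visual-grounding term,
    in the spirit of LexiContrastive Grounding (an illustrative sketch)."""
    lm_loss = F.cross_entropy(token_logits.flatten(0, 1), targets.flatten())
    w = F.normalize(word_emb, dim=-1)   # (B, D) early-layer lexical reps
    v = F.normalize(img_emb, dim=-1)    # (B, D) paired image reps
    logits = w @ v.t() / tau            # InfoNCE over the batch
    labels = torch.arange(w.size(0))
    contrastive = (F.cross_entropy(logits, labels) +
                   F.cross_entropy(logits.t(), labels)) / 2
    return lm_loss + alpha * contrastive

B, T, V, D = 4, 8, 100, 32
loss = lcg_loss(torch.randn(B, T, V), torch.randint(V, (B, T)),
                torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```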
https://aclanthology.org/2024.findings-acl.16.bib | @inproceedings{yang-etal-2024-p,
title = "{P}-{TA}: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models",
author = "Yang, Shuo and
Yuan, Chenchen and
Rong, Yao and
Steinbauer, Felix and
Kasneci, Gjergji",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.16",
pages = "248--264",
abstract = "A multitude of industries depend on accurate and reasonable tabular data augmentation for their business processes. Contemporary methodologies in generating tabular data revolve around utilizing Generative Adversarial Networks (GAN) or fine-tuning Large Language Models (LLM). However, GAN-based approaches are documented to produce samples with common-sense errors attributed to the absence of external knowledge. On the other hand, LLM-based methods exhibit a limited capacity to capture the disparities between synthesized and actual data distribution due to the absence of feedback from a discriminator during training. Furthermore, the decoding of LLM-based generation introduces gradient breakpoints, impeding the backpropagation of loss from a discriminator, thereby complicating the integration of these two approaches. To solve this challenge, we propose using proximal policy optimization (PPO) to apply GANs, guiding LLMs to enhance the probability distribution of tabular features. This approach enables the utilization of LLMs as generators for GANs in synthesizing tabular data. Our experiments demonstrate that PPO leads to an approximately 4{\%} improvement in the accuracy of models trained on synthetically generated data over state-of-the-art across three real-world datasets.",
}
| A multitude of industries depend on accurate and reasonable tabular data augmentation for their business processes. Contemporary methodologies in generating tabular data revolve around utilizing Generative Adversarial Networks (GAN) or fine-tuning Large Language Models (LLM). However, GAN-based approaches are documented to produce samples with common-sense errors attributed to the absence of external knowledge. On the other hand, LLM-based methods exhibit a limited capacity to capture the disparities between synthesized and actual data distribution due to the absence of feedback from a discriminator during training. Furthermore, the decoding of LLM-based generation introduces gradient breakpoints, impeding the backpropagation of loss from a discriminator, thereby complicating the integration of these two approaches. To solve this challenge, we propose using proximal policy optimization (PPO) to apply GANs, guiding LLMs to enhance the probability distribution of tabular features. This approach enables the utilization of LLMs as generators for GANs in synthesizing tabular data. Our experiments demonstrate that PPO leads to an approximately 4{\%} improvement in the accuracy of models trained on synthetically generated data over state-of-the-art across three real-world datasets. | [
"Yang, Shuo",
"Yuan, Chenchen",
"Rong, Yao",
"Steinbauer, Felix",
"Kasneci, Gjergji"
] | P-TA: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models | findings-acl.16 | Poster | 2406.11391 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.16/ | [] | [] | [] | 0 |
|
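The glue in the P-TA row above is proximal policy optimization; its clipped surrogate objective is standard and compact enough to show. In the paper's setting the advantage signal would come from the GAN discriminator's feedback on generated rows; here the advantages are simply a given tensor, which is an assumption of the sketch.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Standard PPO clipped surrogate:
    L = -E[min(r * A, clip(r, 1 - eps, 1 + eps) * A)], r = exp(logp_new - logp_old)."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()

logp_old = torch.randn(8)
logp_new = (logp_old + 0.1 * torch.randn(8)).requires_grad_()
adv = torch.randn(8)  # stand-in for discriminator-derived advantages
loss = ppo_clip_loss(logp_new, logp_old, adv)
loss.backward()
print(loss.item())
```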
https://aclanthology.org/2024.findings-acl.17.bib | @inproceedings{zhou-ai-2024-teaching,
title = "Teaching-Assistant-in-the-Loop: Improving Knowledge Distillation from Imperfect Teacher Models in Low-Budget Scenarios",
author = "Zhou, Yuhang and
Ai, Wei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.17",
pages = "265--282",
abstract = "There is increasing interest in distilling task-specific knowledge from large language models (LLM) to smaller student models.Nonetheless, LLM distillation presents a dual challenge: 1) there is a high cost associated with querying the teacher LLM, such as GPT-4, for gathering an ample number of demonstrations; 2) the teacher LLM might provide imperfect outputs with a negative impact on the student{'}s learning process. To enhance sample efficiency within resource-constrained, imperfect teacher scenarios, we propose a three-component framework leveraging three signal types. The first signal is the student{'}s self-consistency (consistency of student multiple outputs), which is a proxy of the student{'}s confidence. Specifically, we introduce a {''}teaching assistant{''} (TA) model to assess the uncertainty of both the student{'}s and the teacher{'}s outputs via confidence scoring, which serves as another two signals for student training. Furthermore, we propose a two-stage training schema to first warm up the student with a small proportion of data to better utilize student{'}s signal. Experiments have shown the superiority of our proposed framework for four complex reasoning tasks. On average, our proposed two-stage framework brings a relative improvement of up to 20.79{\%} compared to fine-tuning without any signals across datasets.",
}
| There is increasing interest in distilling task-specific knowledge from large language models (LLM) to smaller student models. Nonetheless, LLM distillation presents a dual challenge: 1) there is a high cost associated with querying the teacher LLM, such as GPT-4, for gathering an ample number of demonstrations; 2) the teacher LLM might provide imperfect outputs with a negative impact on the student{'}s learning process. To enhance sample efficiency within resource-constrained, imperfect teacher scenarios, we propose a three-component framework leveraging three signal types. The first signal is the student{'}s self-consistency (consistency of the student{'}s multiple outputs), which is a proxy of the student{'}s confidence. Specifically, we introduce a {``}teaching assistant{''} (TA) model to assess the uncertainty of both the student{'}s and the teacher{'}s outputs via confidence scoring, which provides the other two signals for student training. Furthermore, we propose a two-stage training schema to first warm up the student with a small proportion of data to better utilize the student{'}s signal. Experiments have shown the superiority of our proposed framework for four complex reasoning tasks. On average, our proposed two-stage framework brings a relative improvement of up to 20.79{\%} compared to fine-tuning without any signals across datasets. | [
"Zhou, Yuhang",
"Ai, Wei"
] | Teaching-Assistant-in-the-Loop: Improving Knowledge Distillation from Imperfect Teacher Models in Low-Budget Scenarios | findings-acl.17 | Poster | 2406.05322 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.17/ | [] | [] | [] | 0 |
|
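The first of the three signals in the row above, student self-consistency, reduces to a few lines: sample several answers and treat agreement with the majority as confidence. A sketch with a toy sampler standing in for a temperature-decoded student model:

```python
import random
from collections import Counter

def self_consistency(sampler, prompt, n=10):
    """Return the majority answer and the fraction of samples agreeing with it."""
    answers = [sampler(prompt) for _ in range(n)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n

rng = random.Random(0)
toy_student = lambda prompt: rng.choice(["42", "42", "42", "41"])  # mostly stable
answer, confidence = self_consistency(toy_student, "What is 6*7?")
print(answer, confidence)  # low agreement would flag the example for the TA model
```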
https://aclanthology.org/2024.findings-acl.18.bib | @inproceedings{xu-etal-2024-small,
title = "Small Models are Valuable Plug-ins for Large Language Models",
author = "Xu, Canwen and
Xu, Yichong and
Wang, Shuohang and
Liu, Yang and
Zhu, Chenguang and
McAuley, Julian",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.18",
pages = "283--294",
abstract = "Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but their weights are often publicly unavailable and their immense sizes make the models difficult to be tuned with common hardware. As a result, effectively tuning these models with large-scale supervised data can be challenging. As an alternative, In-Context Learning (ICL) can only use a small number of supervised examples due to context length limits. In this paper, we propose Super In-Context Learning (SuperICL) which allows black-box LLMs to work with locally fine-tuned smaller models, resulting in superior performance on supervised tasks. Our experiments demonstrate that SuperICL can improve performance beyond state-of-the-art fine-tuned models while addressing the instability problem of in-context learning.",
}
| Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but their weights are often publicly unavailable and their immense sizes make the models difficult to be tuned with common hardware. As a result, effectively tuning these models with large-scale supervised data can be challenging. As an alternative, In-Context Learning (ICL) can only use a small number of supervised examples due to context length limits. In this paper, we propose Super In-Context Learning (SuperICL) which allows black-box LLMs to work with locally fine-tuned smaller models, resulting in superior performance on supervised tasks. Our experiments demonstrate that SuperICL can improve performance beyond state-of-the-art fine-tuned models while addressing the instability problem of in-context learning. | [
"Xu, Canwen",
"Xu, Yichong",
"Wang, Shuohang",
"Liu, Yang",
"Zhu, Chenguang",
"McAuley, Julian"
] | Small Models are Valuable Plug-ins for Large Language Models | findings-acl.18 | Poster | 2305.08848 | [
"https://github.com/JetRunner/SuperICL"
] | https://huggingface.co/papers/2305.08848 | 3 | 3 | 0 | 6 | https://aclanthology.org/2024.findings-acl.18/ | [] | [] | [] | 1 |
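The SuperICL row above describes a prompt-construction trick: let the locally fine-tuned small model annotate every in-context example, and the test input, with its prediction and confidence, then let the black-box LLM decide. The template wording below is an assumption, not the paper's verbatim format.

```python
def build_supericl_prompt(examples, test_input, plugin):
    """Assemble a SuperICL-style prompt from (text, gold-label) examples."""
    parts = []
    for text, gold in examples:
        pred, conf = plugin(text)
        parts.append(f"Input: {text}\nSmall model: {pred} ({conf:.2f})\nLabel: {gold}")
    pred, conf = plugin(test_input)
    parts.append(f"Input: {test_input}\nSmall model: {pred} ({conf:.2f})\nLabel:")
    return "\n\n".join(parts)

# Toy plug-in standing in for a locally fine-tuned classifier.
plugin = lambda t: ("positive", 0.93) if "great" in t else ("negative", 0.71)
prompt = build_supericl_prompt(
    [("great movie", "positive"), ("dull plot", "negative")],
    "a great, moving film", plugin)
print(prompt)  # this string goes to the black-box LLM for the final label
```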
https://aclanthology.org/2024.findings-acl.19.bib | @inproceedings{madsen-etal-2024-self,
title = "Are self-explanations from Large Language Models faithful?",
author = "Madsen, Andreas and
Chandar, Sarath and
Reddy, Siva",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.19",
pages = "295--337",
abstract = "Instruction-tuned Large Language Models (LLMs) excel at many tasks and will even explain their reasoning, so-called self-explanations. However, convincing and wrong self-explanations can lead to unsupported confidence in LLMs, thus increasing risk. Therefore, it{'}s important to measure if self-explanations truly reflect the model{'}s behavior. Such a measure is called interpretability-faithfulness and is challenging to perform since the ground truth is inaccessible, and many LLMs only have an inference API. To address this, we propose employing self-consistency checks to measure faithfulness. For example, if an LLM says a set of words is important for making a prediction, then it should not be able to make its prediction without these words. While self-consistency checks are a common approach to faithfulness, they have not previously been successfully applied to LLM self-explanations for counterfactual, feature attribution, and redaction explanations. Our results demonstrate that faithfulness is explanation, model, and task-dependent, showing self-explanations should not be trusted in general. For example, with sentiment classification, counterfactuals are more faithful for Llama2, feature attribution for Mistral, and redaction for Falcon 40B.",
}
| Instruction-tuned Large Language Models (LLMs) excel at many tasks and will even explain their reasoning, so-called self-explanations. However, convincing and wrong self-explanations can lead to unsupported confidence in LLMs, thus increasing risk. Therefore, it{'}s important to measure if self-explanations truly reflect the model{'}s behavior. Such a measure is called interpretability-faithfulness and is challenging to perform since the ground truth is inaccessible, and many LLMs only have an inference API. To address this, we propose employing self-consistency checks to measure faithfulness. For example, if an LLM says a set of words is important for making a prediction, then it should not be able to make its prediction without these words. While self-consistency checks are a common approach to faithfulness, they have not previously been successfully applied to LLM self-explanations for counterfactual, feature attribution, and redaction explanations. Our results demonstrate that faithfulness is explanation, model, and task-dependent, showing self-explanations should not be trusted in general. For example, with sentiment classification, counterfactuals are more faithful for Llama2, feature attribution for Mistral, and redaction for Falcon 40B. | [
"Madsen, Andreas",
"Ch",
"ar, Sarath",
"Reddy, Siva"
] | Are self-explanations from Large Language Models faithful? | findings-acl.19 | Poster | 2401.07927 | [
"https://github.com/AndreasMadsen/llm-introspection"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.19/ | [] | [] | [] | 0 |
|
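The redaction variant of the paper's self-consistency checks is the easiest to make concrete: ask the model which words mattered, remove them, and see whether the prediction survives. Both callables below stand in for two prompts to the same LLM, and the word-level redaction is a simplification.

```python
def redaction_check(classify, important_words, text):
    """True iff removing the model's self-declared important words changes
    its prediction, i.e. the self-explanation was faithful."""
    before = classify(text)
    marked = set(important_words(text))
    redacted = " ".join("[REDACTED]" if w in marked else w for w in text.split())
    return classify(redacted) != before

# Toy stand-ins for two prompts to one LLM.
classify = lambda t: "positive" if "wonderful" in t else "negative"
important_words = lambda t: ["wonderful"]
print(redaction_check(classify, important_words, "a wonderful film"))  # True
```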
https://aclanthology.org/2024.findings-acl.20.bib | @inproceedings{zou-etal-2024-implicitave,
title = "{I}mplicit{AVE}: An Open-Source Dataset and Multimodal {LLM}s Benchmark for Implicit Attribute Value Extraction",
author = "Zou, Henry and
Samuel, Vinay and
Zhou, Yue and
Zhang, Weizhi and
Fang, Liancheng and
Song, Zihe and
Yu, Philip and
Caragea, Cornelia",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.20",
pages = "338--354",
abstract = "Existing datasets for attribute value extraction (AVE) predominantly focus on explicit attribute values while neglecting the implicit ones, lack product images, are often not publicly available, and lack an in-depth human inspection across diverse domains. To address these limitations, we present ImplicitAVE, the first, publicly available multimodal dataset for implicit attribute value extraction. ImplicitAVE, sourced from the MAVE dataset, is carefully curated and expanded to include implicit AVE and multimodality, resulting in a refined dataset of 68k training and 1.6k testing data across five domains. We also explore the application of multimodal large language models (MLLMs) to implicit AVE, establishing a comprehensive benchmark for MLLMs on the ImplicitAVE dataset. Six recent MLLMs with eleven variants are evaluated across diverse settings, revealing that implicit value extraction remains a challenging task for MLLMs. The contributions of this work include the development and release of ImplicitAVE, and the exploration and benchmarking of various MLLMs for implicit AVE, providing valuable insights and potential future research directions. Dataset and code are available at https://github.com/HenryPengZou/ImplicitAVE.",
}
| Existing datasets for attribute value extraction (AVE) predominantly focus on explicit attribute values while neglecting the implicit ones, lack product images, are often not publicly available, and lack in-depth human inspection across diverse domains. To address these limitations, we present ImplicitAVE, the first publicly available multimodal dataset for implicit attribute value extraction. ImplicitAVE, sourced from the MAVE dataset, is carefully curated and expanded to include implicit AVE and multimodality, resulting in a refined dataset of 68k training and 1.6k testing examples across five domains. We also explore the application of multimodal large language models (MLLMs) to implicit AVE, establishing a comprehensive benchmark for MLLMs on the ImplicitAVE dataset. Six recent MLLMs with eleven variants are evaluated across diverse settings, revealing that implicit value extraction remains a challenging task for MLLMs. The contributions of this work include the development and release of ImplicitAVE, and the exploration and benchmarking of various MLLMs for implicit AVE, providing valuable insights and potential future research directions. Dataset and code are available at https://github.com/HenryPengZou/ImplicitAVE. | [
"Zou, Henry",
"Samuel, Vinay",
"Zhou, Yue",
"Zhang, Weizhi",
"Fang, Liancheng",
"Song, Zihe",
"Yu, Philip",
"Caragea, Cornelia"
] | ImplicitAVE: An Open-Source Dataset and Multimodal LLMs Benchmark for Implicit Attribute Value Extraction | findings-acl.20 | Poster | 2404.15592 | [
"https://github.com/HenryPengZou/ImplicitAVE"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.20/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.21.bib | @inproceedings{ye-etal-2024-prompt,
title = "Prompt Engineering a Prompt Engineer",
author = "Ye, Qinyuan and
Ahmed, Mohamed and
Pryzant, Reid and
Khani, Fereshte",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.21",
pages = "355--385",
abstract = "Prompt engineering is a challenging yet crucial task for optimizing the performance of large language models on customized tasks. It requires complex reasoning to examine the model{'}s errors, hypothesize what is missing or misleading in the current prompt, and communicate the task with clarity. While recent works indicate that large language models can be meta-prompted to perform automatic prompt engineering, we argue that their potential is limited due to insufficient guidance for complex reasoning in the meta-prompt. We fill this gap by infusing into the meta-prompt three key components: detailed descriptions, context specification, and a step-by-step reasoning template. The resulting method, named PE2, showcases remarkable versatility across diverse language tasks. It finds prompts that outperform {``}let{'}s think step by step{''} by 6.3{\%} on MultiArith and 3.1{\%} on GSM8K, and outperforms competitive baselines on counterfactual tasks by 6.9{\%}. Further, we show that PE2 can make targeted prompt edits, rectify erroneous prompts, and induce multi-step plans for complex tasks.",
}
| Prompt engineering is a challenging yet crucial task for optimizing the performance of large language models on customized tasks. It requires complex reasoning to examine the model{'}s errors, hypothesize what is missing or misleading in the current prompt, and communicate the task with clarity. While recent works indicate that large language models can be meta-prompted to perform automatic prompt engineering, we argue that their potential is limited due to insufficient guidance for complex reasoning in the meta-prompt. We fill this gap by infusing into the meta-prompt three key components: detailed descriptions, context specification, and a step-by-step reasoning template. The resulting method, named PE2, showcases remarkable versatility across diverse language tasks. It finds prompts that outperform {``}let{'}s think step by step{''} by 6.3{\%} on MultiArith and 3.1{\%} on GSM8K, and outperforms competitive baselines on counterfactual tasks by 6.9{\%}. Further, we show that PE2 can make targeted prompt edits, rectify erroneous prompts, and induce multi-step plans for complex tasks. | [
"Ye, Qinyuan",
"Ahmed, Mohamed",
"Pryzant, Reid",
"Khani, Fereshte"
] | Prompt Engineering a Prompt Engineer | findings-acl.21 | Poster | 2311.05661 | [
""
] | https://huggingface.co/papers/2311.05661 | 2 | 20 | 1 | 4 | https://aclanthology.org/2024.findings-acl.21/ | [] | [] | [] | 1 |
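To make the three meta-prompt components concrete, here is an illustrative template in the spirit of the abstract; the wording and structure are our own sketch, not the exact PE2 meta-prompt.

```python
# Illustrative meta-prompt with the three components the abstract names: a
# detailed description, context specification, and a step-by-step reasoning
# template. The exact PE2 wording may differ.
META_PROMPT = """\
# Detailed description
You are optimizing a prompt for a language model. The current prompt is shown
below, together with examples the model got wrong. Propose an improved prompt.

Current prompt:
{current_prompt}

Failure cases (input, model output, expected output):
{failure_cases}

# Context specification
The prompt will be prepended to each input; the model sees nothing else.

# Step-by-step reasoning template
1. For each failure case, explain why the current prompt led the model astray.
2. Hypothesize what is missing or misleading in the current prompt.
3. Write a revised prompt that fixes these issues.

Respond with the revised prompt only.
"""

def build_meta_prompt(current_prompt: str, failure_cases: str) -> str:
    return META_PROMPT.format(current_prompt=current_prompt,
                              failure_cases=failure_cases)
```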
https://aclanthology.org/2024.findings-acl.22.bib | @inproceedings{ghosh-etal-2024-aspire,
title = "{ASPIRE}: Language-Guided Data Augmentation for Improving Robustness Against Spurious Correlations",
author = "Ghosh, Sreyan and
Evuru, Chandra Kiran and
Kumar, Sonal and
Tyagi, Utkarsh and
Sakshi, S and
Chowdhury, Sanjoy and
Manocha, Dinesh",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.22",
pages = "386--406",
abstract = "Neural image classifiers can often learn to make predictions by overly relying on non-predictive features that are spuriously correlated with the class labels in the training data. This leads to poor performance in real-world atypical scenarios where such features are absent. This paper presents ASPIRE (Language-guided Data Augmentation for SPurIous correlation REmoval), a simple yet effective solution for supplementing the training dataset with images without spurious features, for robust learning against spurious correlations via better generalization. ASPIRE, guided by language at various steps, can generate non-spurious images without requiring any group labeling or existing non-spurious images in the training set. Precisely, we employ LLMs to first extract foreground and background features from textual descriptions of an image, followed by advanced language-guided image editing to discover the features that are spuriously correlated with the class label. Finally, we personalize a text-to-image generation model using the edited images to generate diverse in-domain images without spurious features. ASPIRE is complementary to all prior robust training methods in literature, and we demonstrate its effectiveness across 4 datasets and 9 baselines and show that ASPIRE improves the worst-group classification accuracy of prior methods by 1{\%} - 38{\%}. We also contribute a novel test set for the challenging Hard ImageNet dataset.",
}
| Neural image classifiers can often learn to make predictions by overly relying on non-predictive features that are spuriously correlated with the class labels in the training data. This leads to poor performance in real-world atypical scenarios where such features are absent. This paper presents ASPIRE (Language-guided Data Augmentation for SPurIous correlation REmoval), a simple yet effective solution for supplementing the training dataset with images without spurious features, for robust learning against spurious correlations via better generalization. ASPIRE, guided by language at various steps, can generate non-spurious images without requiring any group labeling or existing non-spurious images in the training set. Precisely, we employ LLMs to first extract foreground and background features from textual descriptions of an image, followed by advanced language-guided image editing to discover the features that are spuriously correlated with the class label. Finally, we personalize a text-to-image generation model using the edited images to generate diverse in-domain images without spurious features. ASPIRE is complementary to all prior robust training methods in the literature, and we demonstrate its effectiveness across 4 datasets and 9 baselines and show that ASPIRE improves the worst-group classification accuracy of prior methods by 1{\%} - 38{\%}. We also contribute a novel test set for the challenging Hard ImageNet dataset. | [
"Ghosh, Sreyan",
"Evuru, Ch",
"ra Kiran",
"Kumar, Sonal",
"Tyagi, Utkarsh",
"Sakshi, S",
"Chowdhury, Sanjoy",
"Manocha, Dinesh"
] | ASPIRE: Language-Guided Data Augmentation for Improving Robustness Against Spurious Correlations | findings-acl.22 | Poster | 2308.10103 | [
"https://github.com/sreyan88/aspire"
] | https://huggingface.co/papers/2308.10103 | 2 | 0 | 0 | 7 | https://aclanthology.org/2024.findings-acl.22/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.findings-acl.23.bib | @inproceedings{deng-etal-2024-tables,
title = "Tables as Texts or Images: Evaluating the Table Reasoning Ability of {LLM}s and {MLLM}s",
author = "Deng, Naihao and
Sun, Zhenjie and
He, Ruiqi and
Sikka, Aman and
Chen, Yulong and
Ma, Lin and
Zhang, Yue and
Mihalcea, Rada",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.23",
pages = "407--426",
abstract = "Tables contrast with unstructured text data by its structure to organize the information.In this paper, we investigate the efficiency of various LLMs in interpreting tabular data through different prompting strategies and data formats. Our analysis extends across six benchmarks for table-related tasks such as question-answering and fact-checking. We pioneer in the assessment of LLMs{'} performance on image-based table representation. Specifically, we compare five text-based and three image-based table representations, revealing the influence of representation and prompting on LLM performance. We hope our study provides researchers insights into optimizing LLMs{'} application in table-related tasks.",
}
| Tables differ from unstructured text in that their structure organizes the information. In this paper, we investigate how effectively various LLMs interpret tabular data under different prompting strategies and data formats. Our analysis extends across six benchmarks for table-related tasks such as question-answering and fact-checking. We pioneer the assessment of LLMs{'} performance on image-based table representations. Specifically, we compare five text-based and three image-based table representations, revealing the influence of representation and prompting on LLM performance. We hope our study provides researchers with insights into optimizing LLMs{'} application to table-related tasks. | [
"Deng, Naihao",
"Sun, Zhenjie",
"He, Ruiqi",
"Sikka, Aman",
"Chen, Yulong",
"Ma, Lin",
"Zhang, Yue",
"Mihalcea, Rada"
] | Tables as Texts or Images: Evaluating the Table Reasoning Ability of LLMs and MLLMs | findings-acl.23 | Poster | 2402.12424 | [
""
] | https://huggingface.co/papers/2402.12424 | 0 | 0 | 0 | 8 | https://aclanthology.org/2024.findings-acl.23/ | [] | [] | [] | 1 |
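As a toy illustration of what "text-based table representations" can mean, the sketch below serializes the same small table three ways; the paper's exact formats may differ, and the image-based counterpart would render the table as a picture.

```python
# Three common text serializations of one table (illustrative; the paper's
# five text formats may differ from these).
import csv, io, json

header = ["Player", "Team", "Goals"]
rows = [["Messi", "Inter Miami", "11"], ["Haaland", "Man City", "27"]]

def to_markdown(header, rows):
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(r) + " |" for r in rows]
    return "\n".join(lines)

def to_csv(header, rows):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

def to_json(header, rows):
    return json.dumps([dict(zip(header, r)) for r in rows], indent=2)

print(to_markdown(header, rows))
```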
https://aclanthology.org/2024.findings-acl.24.bib | @inproceedings{sheppard-etal-2024-biasly,
title = "Biasly: An Expert-Annotated Dataset for Subtle Misogyny Detection and Mitigation",
author = "Sheppard, Brooklyn and
Richter, Anna and
Cohen, Allison and
Smith, Elizabeth and
Kneese, Tamara and
Pelletier, Carolyne and
Baldini, Ioana and
Dong, Yue",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.24",
pages = "427--452",
abstract = "Using novel approaches to dataset development, the Biasly dataset captures the nuance and subtlety of misogyny in ways that are unique within the literature. Built in collaboration with multi-disciplinary experts and annotators themselves, the dataset contains annotations of movie subtitles, capturing colloquial expressions of misogyny in North American film. The open-source dataset can be used for a range of NLP tasks, including binary and multi-label classification, severity score regression, and text generation for rewrites. In this paper, we discuss the methodology used, analyze the annotations obtained, provide baselines for each task using common NLP algorithms, and furnish error analyses to give insight into model behaviour when fine-tuned on the Biasly dataset.",
}
| Using novel approaches to dataset development, the Biasly dataset captures the nuance and subtlety of misogyny in ways that are unique within the literature. Built in collaboration with multi-disciplinary experts and annotators themselves, the dataset contains annotations of movie subtitles, capturing colloquial expressions of misogyny in North American film. The open-source dataset can be used for a range of NLP tasks, including binary and multi-label classification, severity score regression, and text generation for rewrites. In this paper, we discuss the methodology used, analyze the annotations obtained, provide baselines for each task using common NLP algorithms, and furnish error analyses to give insight into model behaviour when fine-tuned on the Biasly dataset. | [
"Sheppard, Brooklyn",
"Richter, Anna",
"Cohen, Allison",
"Smith, Elizabeth",
"Kneese, Tamara",
"Pelletier, Carolyne",
"Baldini, Ioana",
"Dong, Yue"
] | Biasly: An Expert-Annotated Dataset for Subtle Misogyny Detection and Mitigation | findings-acl.24 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.24/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.25.bib | @inproceedings{glenn-etal-2024-blendsql,
title = "{B}lend{SQL}: A Scalable Dialect for Unifying Hybrid Question Answering in Relational Algebra",
author = "Glenn, Parker and
Dakle, Parag and
Wang, Liang and
Raghavan, Preethi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.25",
pages = "453--466",
abstract = "Many existing end-to-end systems for hybrid question answering tasks can often be boiled down to a {``}prompt-and-pray{''} paradigm, where the user has limited control and insight into the intermediate reasoning steps used to achieve the final result. Additionally, due to the context size limitation of many transformer-based LLMs, it is often not reasonable to expect that the full structured and unstructured context will fit into a given prompt in a zero-shot setting, let alone a few-shot setting. We introduce BlendSQL, a superset of SQLite to act as a unified dialect for orchestrating reasoning across both unstructured and structured data. For hybrid question answering tasks involving multi-hop reasoning, we encode the full decomposed reasoning roadmap into a single interpretable BlendSQL query. Notably, we show that BlendSQL can scale to massive datasets and improve the performance of end-to-end systems while using 35{\%} fewer tokens. Our code is available and installable as a package at https://github.com/parkervg/blendsql.",
}
| Many existing end-to-end systems for hybrid question answering tasks can often be boiled down to a {``}prompt-and-pray{''} paradigm, where the user has limited control and insight into the intermediate reasoning steps used to achieve the final result. Additionally, due to the context size limitation of many transformer-based LLMs, it is often not reasonable to expect that the full structured and unstructured context will fit into a given prompt in a zero-shot setting, let alone a few-shot setting. We introduce BlendSQL, a superset of SQLite to act as a unified dialect for orchestrating reasoning across both unstructured and structured data. For hybrid question answering tasks involving multi-hop reasoning, we encode the full decomposed reasoning roadmap into a single interpretable BlendSQL query. Notably, we show that BlendSQL can scale to massive datasets and improve the performance of end-to-end systems while using 35{\%} fewer tokens. Our code is available and installable as a package at https://github.com/parkervg/blendsql. | [
"Glenn, Parker",
"Dakle, Parag",
"Wang, Liang",
"Raghavan, Preethi"
] | BlendSQL: A Scalable Dialect for Unifying Hybrid Question Answering in Relational Algebra | findings-acl.25 | Poster | 2402.17882 | [
"https://github.com/parkervg/blendsql"
] | https://huggingface.co/papers/2402.17882 | 1 | 0 | 0 | 4 | https://aclanthology.org/2024.findings-acl.25/ | [] | [] | [] | 1 |
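The following hybrid query sketches the idea of embedding an LLM function inside ordinary SQL so that structured filtering and unstructured reasoning live in one interpretable statement. It is written in the spirit of BlendSQL; consult the linked repository for the dialect's actual syntax, which may differ from this sketch.

```python
# Illustrative only: a BlendSQL-style hybrid query. The {{ ... }} ingredient
# is the dialect's extension point; everything else is plain SQLite. Table
# and column names here are hypothetical.
query = """
SELECT merchant, amount FROM transactions
WHERE amount > 100
  AND {{LLMMap('Is this merchant a restaurant?', 'transactions::merchant')}} = TRUE
"""
# Structured predicates (amount > 100) run in the database engine; only the
# rows that survive need the (expensive) LLM call, which is one way such a
# design can cut token usage.
```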
https://aclanthology.org/2024.findings-acl.26.bib | @inproceedings{liu-etal-2024-llm,
title = "{LLM}-{QAT}: Data-Free Quantization Aware Training for Large Language Models",
author = "Liu, Zechun and
Oguz, Barlas and
Zhao, Changsheng and
Chang, Ernie and
Stock, Pierre and
Mehdad, Yashar and
Shi, Yangyang and
Krishnamoorthi, Raghuraman and
Chandra, Vikas",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.26",
pages = "467--484",
abstract = "Several post-training quantization methods have been applied to large language models (LLMs), and have been shown to perform well down to 8-bits. We find that these methods break down at lower bit precision, and investigate quantization-aware training for LLMs (LLM-QAT) to push quantization levels even further. We propose a data-free distillation method that leverages generations produced by the pre-trained model, which better preserves the original output distribution and allows quantizing any generative model independent of its training data, similar to post-training quantization methods. In addition to quantizing weights and activations, we also quantize the KV cache, which is critical for increasing throughput and supporting long sequence dependencies at current model sizes. We experiment with LLaMA models of sizes 7B, 13B, and 30B, at quantization levels down to 4-bits. We observe large improvements over training-free methods, especially in the low-bit settings.",
}
| Several post-training quantization methods have been applied to large language models (LLMs), and have been shown to perform well down to 8 bits. We find that these methods break down at lower bit precision, and investigate quantization-aware training for LLMs (LLM-QAT) to push quantization levels even further. We propose a data-free distillation method that leverages generations produced by the pre-trained model, which better preserves the original output distribution and allows quantizing any generative model independent of its training data, similar to post-training quantization methods. In addition to quantizing weights and activations, we also quantize the KV cache, which is critical for increasing throughput and supporting long sequence dependencies at current model sizes. We experiment with LLaMA models of sizes 7B, 13B, and 30B, at quantization levels down to 4 bits. We observe large improvements over training-free methods, especially in the low-bit settings. | [
"Liu, Zechun",
"Oguz, Barlas",
"Zhao, Changsheng",
"Chang, Ernie",
"Stock, Pierre",
"Mehdad, Yashar",
"Shi, Yangyang",
"Krishnamoorthi, Raghuraman",
"Ch",
"ra, Vikas"
] | LLM-QAT: Data-Free Quantization Aware Training for Large Language Models | findings-acl.26 | Poster | 2305.17888 | [
"https://github.com/facebookresearch/LLM-QAT"
] | https://huggingface.co/papers/2305.17888 | 1 | 1 | 0 | 9 | https://aclanthology.org/2024.findings-acl.26/ | [] | [] | [] | 1 |
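Quantization-aware training of this kind rests on "fake quantization" with a straight-through estimator. The sketch below shows the standard symmetric per-tensor building block; the details (granularity, clipping, KV-cache handling) are ours, not the paper's code.

```python
# Minimal symmetric per-tensor fake quantization with a straight-through
# estimator (STE), the usual building block of quantization-aware training.
import torch

def fake_quantize(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    x_q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    # STE: the forward pass uses the quantized values, while the backward
    # pass treats rounding as the identity so gradients reach x.
    return x + (x_q - x).detach()

w = torch.randn(8, 8, requires_grad=True)
loss = fake_quantize(w, n_bits=4).sum()
loss.backward()                      # gradients flow to w despite the rounding
print(w.grad.abs().sum())
```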
https://aclanthology.org/2024.findings-acl.27.bib | @inproceedings{liu-etal-2024-infimm,
title = "{I}nfi{MM}: Advancing Multimodal Understanding with an Open-Sourced Visual Language Model",
author = "Liu, Haogeng and
You, Quanzeng and
Wang, Yiqi and
Han, Xiaotian and
Zhai, Bohan and
Liu, Yongfei and
Chen, Wentao and
Jian, Yiren and
Tao, Yunzhe and
Yuan, Jianbo and
He, Ran and
Yang, Hongxia",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.27",
pages = "485--492",
abstract = "In this work, we present InfiMM, an advanced Multimodal Large Language Model that adapts to intricate vision-language tasks. InfiMM, inspired by the Flamingo architecture, distinguishes itself through the utilization of large-scale training data, comprehensive training strategies, and diverse large language models. This approach ensures the preservation of Flamingo{'}s foundational strengths while simultaneously introducing augmented capabilities. Empirical evaluations across a variety of benchmarks underscore InfiMM{'}s remarkable capability in multimodal understanding. The code can be found at: https://anonymous.4open.science/r/infimm-zephyr-F60C/.",
}
| In this work, we present InfiMM, an advanced Multimodal Large Language Model that adapts to intricate vision-language tasks. InfiMM, inspired by the Flamingo architecture, distinguishes itself through the utilization of large-scale training data, comprehensive training strategies, and diverse large language models. This approach ensures the preservation of Flamingo{'}s foundational strengths while simultaneously introducing augmented capabilities. Empirical evaluations across a variety of benchmarks underscore InfiMM{'}s remarkable capability in multimodal understanding. The code can be found at: https://anonymous.4open.science/r/infimm-zephyr-F60C/. | [
"Liu, Haogeng",
"You, Quanzeng",
"Wang, Yiqi",
"Han, Xiaotian",
"Zhai, Bohan",
"Liu, Yongfei",
"Chen, Wentao",
"Jian, Yiren",
"Tao, Yunzhe",
"Yuan, Jianbo",
"He, Ran",
"Yang, Hongxia"
] | InfiMM: Advancing Multimodal Understanding with an Open-Sourced Visual Language Model | findings-acl.27 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.27/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.28.bib | @inproceedings{li-etal-2024-towards-verifiable,
title = "Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution",
author = "Li, Xinze and
Cao, Yixin and
Pan, Liangming and
Ma, Yubo and
Sun, Aixin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.28",
pages = "493--516",
abstract = "Although achieving great success, Large Language Models (LLMs) usually suffer from unreliable hallucinations. Although language attribution can be a potential solution, there are no suitable benchmarks and evaluation metrics to attribute LLMs to structured knowledge. In this paper, we define a new task of Knowledge-aware Language Model Attribution (KaLMA) that improves upon three core concerns with conventional attributed LMs. First, we extend attribution source from unstructured texts to Knowledge Graph (KG), whose rich structures benefit both the attribution performance and working scenarios. Second, we propose a new {``}Conscious Incompetence{''} setting considering the incomplete knowledge repository, where the model identifies the need for supporting knowledge beyond the provided KG. Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text citation alignment. To implement the above innovations, we build a dataset in biography domain BioKaLMA via evolutionary question generation strategy, to control the question complexity and necessary knowledge to the answer. For evaluation, we develop a baseline solution and demonstrate the room for improvement in LLMs{'} citation generation, emphasizing the importance of incorporating the {``}Conscious Incompetence{''} setting, and the critical role of retrieval accuracy.",
}
| Despite their great success, Large Language Models (LLMs) usually suffer from unreliable hallucinations. Although language attribution can be a potential solution, there are no suitable benchmarks or evaluation metrics for attributing LLMs to structured knowledge. In this paper, we define a new task, Knowledge-aware Language Model Attribution (KaLMA), that improves upon three core concerns with conventional attributed LMs. First, we extend the attribution source from unstructured texts to Knowledge Graphs (KGs), whose rich structures benefit both attribution performance and working scenarios. Second, we propose a new {``}Conscious Incompetence{''} setting that accounts for an incomplete knowledge repository, where the model identifies the need for supporting knowledge beyond the provided KG. Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text-citation alignment. To implement the above innovations, we build BioKaLMA, a dataset in the biography domain, via an evolutionary question generation strategy that controls the question complexity and the knowledge necessary for the answer. For evaluation, we develop a baseline solution and demonstrate the room for improvement in LLMs{'} citation generation, emphasizing the importance of incorporating the {``}Conscious Incompetence{''} setting and the critical role of retrieval accuracy. | [
"Li, Xinze",
"Cao, Yixin",
"Pan, Liangming",
"Ma, Yubo",
"Sun, Aixin"
] | Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution | findings-acl.28 | Poster | 2310.05634 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.28/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.29.bib | @inproceedings{koo-etal-2024-benchmarking,
title = "Benchmarking Cognitive Biases in Large Language Models as Evaluators",
author = "Koo, Ryan and
Lee, Minhwa and
Raheja, Vipul and
Park, Jong Inn and
Kim, Zae Myung and
Kang, Dongyeop",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.29",
pages = "517--545",
abstract = "Large Language Models (LLMs) have recently been shown to be effective as automatic evaluators with simple prompting and in-context learning. In this work, we assemble 16 LLMs encompassing four different size ranges and evaluate their output responses by preference ranking from the other LLMs as evaluators, such as System Star is better than System Square. We then evaluate the quality of ranking outputs introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLer), a benchmark to measure six different cognitive biases in LLM evaluation outputs, such as the Egocentric bias where a model prefers to rank its own outputs highly in evaluation. We find that LLMs are biased text quality evaluators, exhibiting strong indications on our bias benchmark (40{\%} of comparisons made by all models) within each of their evaluations that question their robustness as evaluators. Furthermore, we examine the correlation between human and machine preferences and calculate the average Rank-Biased Overlap (RBO) score to be 44{\%}, indicating that machine preferences are misaligned with humans. According to our findings, LLMs may still be unable to be utilized for automatic annotation aligned with human preferences.",
}
| Large Language Models (LLMs) have recently been shown to be effective as automatic evaluators with simple prompting and in-context learning. In this work, we assemble 16 LLMs encompassing four different size ranges and evaluate their output responses by preference ranking from the other LLMs as evaluators, such as {``}System Star is better than System Square{''}. We then evaluate the quality of the ranking outputs by introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLer), a benchmark measuring six different cognitive biases in LLM evaluation outputs, such as the egocentric bias, where a model prefers to rank its own outputs highly. We find that LLMs are biased text quality evaluators, exhibiting strong indications of bias on our benchmark (affecting 40{\%} of the comparisons made by all models), which calls their robustness as evaluators into question. Furthermore, we examine the correlation between human and machine preferences and calculate the average Rank-Biased Overlap (RBO) score to be 44{\%}, indicating that machine preferences are misaligned with humans. According to our findings, LLMs may still be unsuitable for automatic annotation aligned with human preferences. | [
"Koo, Ryan",
"Lee, Minhwa",
"Raheja, Vipul",
"Park, Jong Inn",
"Kim, Zae Myung",
"Kang, Dongyeop"
] | Benchmarking Cognitive Biases in Large Language Models as Evaluators | findings-acl.29 | Poster | 2309.17012 | [
"https://github.com/minnesotanlp/cobbler"
] | https://huggingface.co/papers/2309.17012 | 1 | 1 | 0 | 6 | https://aclanthology.org/2024.findings-acl.29/ | [
"SeaLLMs/SeaLLM-13B-Chat"
] | [] | [
"SeaLLMs/SeaLLM-Chat",
"SeaLLMs/SeaLLM-7B-v2.5-simple"
] | 1 |
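The Rank-Biased Overlap score reported above is simple to compute. Here is a compact implementation of the truncated form of RBO (not the extrapolated variant), assuming two ranked lists of system names.

```python
# Truncated Rank-Biased Overlap: top-weighted agreement between two rankings.
# p controls how much the comparison favors the top of the lists.
def rbo(list_a, list_b, p: float = 0.9) -> float:
    k = min(len(list_a), len(list_b))
    seen_a, seen_b, score = set(), set(), 0.0
    for d in range(1, k + 1):
        seen_a.add(list_a[d - 1])
        seen_b.add(list_b[d - 1])
        overlap = len(seen_a & seen_b) / d        # agreement at depth d
        score += (p ** (d - 1)) * overlap
    return (1 - p) * score

human   = ["sys_star", "sys_square", "sys_circle", "sys_triangle"]
machine = ["sys_square", "sys_star", "sys_triangle", "sys_circle"]
print(f"RBO = {rbo(human, machine):.3f}")
```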
https://aclanthology.org/2024.findings-acl.30.bib | @inproceedings{li-etal-2024-x,
title = "{X}-Instruction: Aligning Language Model in Low-resource Languages with Self-curated Cross-lingual Instructions",
author = "Li, Chong and
Yang, Wen and
Zhang, Jiajun and
Lu, Jinliang and
Wang, Shaonan and
Zong, Chengqing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.30",
pages = "546--566",
abstract = "Large language models respond well in high-resource languages like English but struggle in low-resource languages. It may arise from the lack of high-quality instruction following data in these languages. Directly translating English samples into these languages can be a solution but unreliable, leading to responses with translation errors and lacking language-specific or cultural knowledge. To address this issue, we propose a novel method to construct cross-lingual instruction following samples with instruction in English and response in low-resource languages. Specifically, the language model first learns to generate appropriate English instructions according to the natural web texts in other languages as responses. The candidate cross-lingual instruction tuning samples are further refined and diversified. We have employed this method to build a large-scale cross-lingual instruction tuning dataset on 10 languages, namely X-Instruction. The instruction data built using our method incorporate more language-specific knowledge compared with the naive translation method. Experimental results have shown that the response quality of the model tuned on X-Instruction greatly exceeds the model distilled from a powerful teacher model, reaching or even surpassing the ones of ChatGPT. In addition, we find that models tuned on cross-lingual instruction following samples can follow the instruction in the output language without further tuning.",
}
| Large language models respond well in high-resource languages like English but struggle in low-resource languages. This may arise from the lack of high-quality instruction-following data in these languages. Directly translating English samples into these languages can be a solution but is unreliable, leading to responses that contain translation errors and lack language-specific or cultural knowledge. To address this issue, we propose a novel method to construct cross-lingual instruction-following samples with instructions in English and responses in low-resource languages. Specifically, the language model first learns to generate appropriate English instructions for natural web texts in other languages, which serve as responses. The candidate cross-lingual instruction tuning samples are further refined and diversified. We have employed this method to build a large-scale cross-lingual instruction tuning dataset covering 10 languages, namely X-Instruction. The instruction data built using our method incorporate more language-specific knowledge than the naive translation method. Experimental results show that the response quality of the model tuned on X-Instruction greatly exceeds that of a model distilled from a powerful teacher model, reaching or even surpassing that of ChatGPT. In addition, we find that models tuned on cross-lingual instruction-following samples can follow instructions in the output language without further tuning. | [
"Li, Chong",
"Yang, Wen",
"Zhang, Jiajun",
"Lu, Jinliang",
"Wang, Shaonan",
"Zong, Chengqing"
] | X-Instruction: Aligning Language Model in Low-resource Languages with Self-curated Cross-lingual Instructions | findings-acl.30 | Poster | 2405.19744 | [
"https://github.com/znlp/x-instruction"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.30/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.31.bib | @inproceedings{wang-etal-2024-muffin,
title = "Muffin: Mitigating Unhelpfulness in Emotional Support Conversations with Multifaceted {AI} Feedback",
author = "Wang, Jiashuo and
Xu, Chunpu and
Leong, Chak Tou and
Li, Wenjie and
Li, Jing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.31",
pages = "567--585",
abstract = "Emotional support conversation systems are designed to alleviate users{'} emotional distress and assist them in overcoming their challenges. While previous studies have made progress, their models occasionally generate unhelpful responses, which are intended to be supportive but instead have counterproductive effects. Since unhelpful responses can hinder the effectiveness of emotional support, it is crucial to mitigate them within conversations. Our solution is motivated by two principal considerations: (1) multiple facets of emotional support are expected to be considered when developing emotional support conversation models, and (2) directly reducing the probability of generating unhelpful responses can effectively mitigate their occurrence. Accordingly, we introduce a novel $\textbf{model-agnostic}$ framework named $\underline{M}$itigating $\underline{u}$nhelpfulness with multifaceted AI $\underline{f}$eedback for emot$\underline{i}$o$\underline{n}$al support ($\textit{Muffin}$). It first employs a multifaceted AI feedback module designed to assess the helpfulness model responses across various facets of emotional support. Leveraging contrastive learning, Muffin then reduces the unhelpful responses{'} likelihoods. To validate the effectiveness of our proposed framework, we apply Muffin to various previous emotional support generation models, including the state-of-the-art. Experimental results demonstrate that Muffin can significantly mitigate unhelpful response generation while enhancing response fluency and relevance.",
}
| Emotional support conversation systems are designed to alleviate users{'} emotional distress and assist them in overcoming their challenges. While previous studies have made progress, their models occasionally generate unhelpful responses, which are intended to be supportive but instead have counterproductive effects. Since unhelpful responses can hinder the effectiveness of emotional support, it is crucial to mitigate them within conversations. Our solution is motivated by two principal considerations: (1) multiple facets of emotional support are expected to be considered when developing emotional support conversation models, and (2) directly reducing the probability of generating unhelpful responses can effectively mitigate their occurrence. Accordingly, we introduce a novel $\textbf{model-agnostic}$ framework named $\underline{M}$itigating $\underline{u}$nhelpfulness with multifaceted AI $\underline{f}$eedback for emot$\underline{i}$o$\underline{n}$al support ($\textit{Muffin}$). It first employs a multifaceted AI feedback module designed to assess the helpfulness of model responses across various facets of emotional support. Leveraging contrastive learning, Muffin then reduces the likelihood of unhelpful responses. To validate the effectiveness of our proposed framework, we apply Muffin to various previous emotional support generation models, including the state-of-the-art. Experimental results demonstrate that Muffin can significantly mitigate unhelpful response generation while enhancing response fluency and relevance. | [
"Wang, Jiashuo",
"Xu, Chunpu",
"Leong, Chak Tou",
"Li, Wenjie",
"Li, Jing"
] | Muffin: Mitigating Unhelpfulness in Emotional Support Conversations with Multifaceted AI Feedback | findings-acl.31 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.31/ | [] | [] | [] | 0 |
||
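One plausible instantiation of "directly reducing the probability of generating unhelpful responses" is a token-level unlikelihood term next to the usual likelihood term; the sketch below is our reading of the idea, not the paper's exact objective.

```python
# Sketch: likelihood training on responses the AI feedback marks helpful, and
# an unlikelihood penalty pushing down token probabilities of unhelpful ones.
import torch
import torch.nn.functional as F

def muffin_style_loss(logits: torch.Tensor, labels: torch.Tensor,
                      helpful: bool) -> torch.Tensor:
    """logits: (seq, vocab); labels: (seq,) target token ids."""
    log_probs = F.log_softmax(logits, dim=-1)
    token_logp = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    if helpful:
        return -token_logp.mean()                 # standard NLL
    # Unlikelihood: maximize log(1 - p(token)) for unhelpful responses.
    p = token_logp.exp().clamp(max=1 - 1e-6)
    return -torch.log1p(-p).mean()

logits = torch.randn(12, 32000)
labels = torch.randint(0, 32000, (12,))
print(muffin_style_loss(logits, labels, helpful=False))
```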
https://aclanthology.org/2024.findings-acl.32.bib | @inproceedings{wang-etal-2024-resonance,
title = "Resonance {R}o{PE}: Improving Context Length Generalization of Large Language Models",
author = "Wang, Suyuchen and
Kobyzev, Ivan and
Lu, Peng and
Rezagholizadeh, Mehdi and
Liu, Bang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.32",
pages = "586--598",
abstract = "This paper addresses the challenge of train-short-test-long (TSTL) scenarios in Large Language Models (LLMs) equipped with Rotary Position Embedding (RoPE), where models pre-trained on shorter sequences face difficulty with out-of-distribution (OOD) token positions in longer sequences. We introduce Resonance RoPE, a novel approach designed to narrow the generalization gap in TSTL scenarios by refining the interpolation of RoPE features for OOD positions, significantly improving the model performance without additional online computational costs. Furthermore, we present PosGen, a new synthetic benchmark specifically designed for fine-grained behavior analysis in TSTL scenarios, aiming to isolate the constantly increasing difficulty of token generation on long contexts from the challenges of recognizing new token positions. Our experiments on synthetic tasks show that after applying Resonance RoPE, Transformers recognize OOD position better and more robustly. Our extensive LLM experiments also show superior performance after applying Resonance RoPE to the current state-of-the-art RoPE scaling method, YaRN, on both upstream language modeling tasks and a variety of downstream long-text applications.",
}
| This paper addresses the challenge of train-short-test-long (TSTL) scenarios in Large Language Models (LLMs) equipped with Rotary Position Embedding (RoPE), where models pre-trained on shorter sequences face difficulty with out-of-distribution (OOD) token positions in longer sequences. We introduce Resonance RoPE, a novel approach designed to narrow the generalization gap in TSTL scenarios by refining the interpolation of RoPE features for OOD positions, significantly improving the model performance without additional online computational costs. Furthermore, we present PosGen, a new synthetic benchmark specifically designed for fine-grained behavior analysis in TSTL scenarios, aiming to isolate the constantly increasing difficulty of token generation on long contexts from the challenges of recognizing new token positions. Our experiments on synthetic tasks show that after applying Resonance RoPE, Transformers recognize OOD position better and more robustly. Our extensive LLM experiments also show superior performance after applying Resonance RoPE to the current state-of-the-art RoPE scaling method, YaRN, on both upstream language modeling tasks and a variety of downstream long-text applications. | [
"Wang, Suyuchen",
"Kobyzev, Ivan",
"Lu, Peng",
"Rezagholizadeh, Mehdi",
"Liu, Bang"
] | Resonance RoPE: Improving Context Length Generalization of Large Language Models | findings-acl.32 | Poster | 2403.00071 | [
"https://github.com/sheryc/resonance_rope"
] | https://huggingface.co/papers/2403.00071 | 3 | 22 | 1 | 5 | https://aclanthology.org/2024.findings-acl.32/ | [] | [] | [] | 1 |
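As we understand the abstract, the core adjustment snaps each RoPE feature's wavelength to an integer so that positions beyond the training length reuse feature values already seen. A sketch under that assumption:

```python
# Sketch of a Resonance-style RoPE adjustment (our reading of the idea):
# round each feature's wavelength to the nearest integer so every feature
# completes whole periods over integer positions.
import numpy as np

def resonance_inv_freq(dim: int = 128, base: float = 10000.0) -> np.ndarray:
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # standard RoPE thetas
    wavelengths = 2 * np.pi / inv_freq
    rounded = np.maximum(np.round(wavelengths), 1.0)   # snap to integer periods
    return 2 * np.pi / rounded

theta = resonance_inv_freq()
print(theta[:4])
```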
https://aclanthology.org/2024.findings-acl.33.bib | @inproceedings{tang-etal-2024-medagents,
title = "{M}ed{A}gents: Large Language Models as Collaborators for Zero-shot Medical Reasoning",
author = "Tang, Xiangru and
Zou, Anni and
Zhang, Zhuosheng and
Li, Ziming and
Zhao, Yilun and
Zhang, Xingyao and
Cohan, Arman and
Gerstein, Mark",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.33",
pages = "599--621",
abstract = "Large language models (LLMs), despite their remarkable progress across various general domains, encounter significant barriers in medicine and healthcare. This field faces unique challenges such as domain-specific terminologies and reasoning over specialized knowledge. To address these issues, we propose MedAgents, a novel multi-disciplinary collaboration framework for the medical domain. MedAgents leverages LLM-based agents in a role-playing setting that participate in a collaborative multi-round discussion, thereby enhancing LLM proficiency and reasoning capabilities. This training-free framework encompasses five critical steps: gathering domain experts, proposing individual analyses, summarising these analyses into a report, iterating over discussions until a consensus is reached, and ultimately making a decision. Our work focuses on the zero-shot setting, which is applicable in real-world scenarios. Experimental results on nine datasets (MedQA, MedMCQA, PubMedQA, and six subtasks from MMLU) establish that our proposed MedAgents framework excels at mining and harnessing the medical expertise within LLMs, as well as extending its reasoning abilities. Our code can be found at https://github.com/gersteinlab/MedAgents.",
}
| Large language models (LLMs), despite their remarkable progress across various general domains, encounter significant barriers in medicine and healthcare. This field faces unique challenges such as domain-specific terminologies and reasoning over specialized knowledge. To address these issues, we propose MedAgents, a novel multi-disciplinary collaboration framework for the medical domain. MedAgents leverages LLM-based agents in a role-playing setting that participate in a collaborative multi-round discussion, thereby enhancing LLM proficiency and reasoning capabilities. This training-free framework encompasses five critical steps: gathering domain experts, proposing individual analyses, summarising these analyses into a report, iterating over discussions until a consensus is reached, and ultimately making a decision. Our work focuses on the zero-shot setting, which is applicable in real-world scenarios. Experimental results on nine datasets (MedQA, MedMCQA, PubMedQA, and six subtasks from MMLU) establish that our proposed MedAgents framework excels at mining and harnessing the medical expertise within LLMs, as well as extending its reasoning abilities. Our code can be found at https://github.com/gersteinlab/MedAgents. | [
"Tang, Xiangru",
"Zou, Anni",
"Zhang, Zhuosheng",
"Li, Ziming",
"Zhao, Yilun",
"Zhang, Xingyao",
"Cohan, Arman",
"Gerstein, Mark"
] | MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning | findings-acl.33 | Poster | 2311.10537 | [
"https://github.com/gersteinlab/medagents"
] | https://huggingface.co/papers/2311.10537 | 0 | 3 | 0 | 7 | https://aclanthology.org/2024.findings-acl.33/ | [] | [] | [] | 1 |
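The five-step, training-free loop is straightforward to skeletonize. In the sketch below, `ask` is a placeholder for an LLM call with a role-playing system prompt; prompts and the consensus test are simplified for brevity.

```python
# Skeletal version of the five steps: gather experts, collect analyses,
# summarize into a report, iterate until consensus, then decide.
def ask(role: str, prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def medagents(question: str, max_rounds: int = 3) -> str:
    experts = [e.strip() for e in
               ask("moderator", f"List relevant medical specialties for: {question}").split(",")]
    analyses = [ask(e, f"As a {e} expert, analyze: {question}") for e in experts]
    report = ask("moderator",
                 "Summarize these analyses into one report:\n" + "\n".join(analyses))
    for _ in range(max_rounds):                      # iterate toward consensus
        votes = [ask(e, f"Do you agree with this report? Answer yes/no.\n{report}")
                 for e in experts]
        if all(v.strip().lower().startswith("yes") for v in votes):
            break
        report = ask("moderator", f"Revise the report to address disagreements:\n{report}")
    return ask("moderator", f"Based on the report, answer the question.\n{report}\nQ: {question}")
```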
https://aclanthology.org/2024.findings-acl.34.bib | @inproceedings{wang-etal-2024-meta,
title = "Meta-Reasoning: Semantics-Symbol Deconstruction for Large Language Models",
author = "Wang, Yiming and
Zhang, Zhuosheng and
Zhang, Pei and
Yang, Baosong and
Wang, Rui",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.34",
pages = "622--643",
abstract = "Neural-symbolic methods have demonstrated efficiency in enhancing the reasoning abilities of large language models (LLMs). However, existing methods mainly rely on syntactically mapping natural languages to complete formal languages like Python and SQL. Those methods require that reasoning tasks be convertible into programs, which cater to the computer execution mindset and deviate from human reasoning habits. To broaden symbolic methods{'} applicability and adaptability in the real world, we propose Meta-Reasoning from a linguistic perspective. This method empowers LLMs to deconstruct reasoning-independent semantic information into generic symbolic representations, thereby efficiently capturing more generalized reasoning knowledge. We conduct extensive experiments on more than ten datasets encompassing conventional reasoning tasks like arithmetic, symbolic, and logical reasoning, and the more complex interactive reasoning tasks like theory-of-mind reasoning. Experimental results demonstrate that Meta-Reasoning significantly enhances in-context reasoning accuracy, learning efficiency, out-of-domain generalization, and output stability compared to the Chain-of-Thought technique.",
}
| Neural-symbolic methods have demonstrated efficiency in enhancing the reasoning abilities of large language models (LLMs). However, existing methods mainly rely on syntactically mapping natural languages to complete formal languages like Python and SQL. Those methods require that reasoning tasks be convertible into programs, which cater to the computer execution mindset and deviate from human reasoning habits. To broaden symbolic methods{'} applicability and adaptability in the real world, we propose Meta-Reasoning from a linguistic perspective. This method empowers LLMs to deconstruct reasoning-independent semantic information into generic symbolic representations, thereby efficiently capturing more generalized reasoning knowledge. We conduct extensive experiments on more than ten datasets encompassing conventional reasoning tasks like arithmetic, symbolic, and logical reasoning, and the more complex interactive reasoning tasks like theory-of-mind reasoning. Experimental results demonstrate that Meta-Reasoning significantly enhances in-context reasoning accuracy, learning efficiency, out-of-domain generalization, and output stability compared to the Chain-of-Thought technique. | [
"Wang, Yiming",
"Zhang, Zhuosheng",
"Zhang, Pei",
"Yang, Baosong",
"Wang, Rui"
] | Meta-Reasoning: Semantics-Symbol Deconstruction for Large Language Models | findings-acl.34 | Poster | 2306.17820 | [
"https://github.com/alsace08/meta-reasoning"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.34/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.35.bib | @inproceedings{zhou-etal-2024-dpdllm,
title = "{DPDLLM}: A Black-box Framework for Detecting Pre-training Data from Large Language Models",
author = "Zhou, Baohang and
Wang, Zezhong and
Wang, Lingzhi and
Wang, Hongru and
Zhang, Ying and
Song, Kehui and
Sui, Xuhui and
Wong, Kam-Fai",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.35",
pages = "644--653",
abstract = "The success of large language models (LLM) benefits from large-scale model parameters and large amounts of pre-training data. However, the textual data for training LLM can not be confirmed to be legal because they are crawled from different web sites. For example, there are copyrighted articles, personal reviews and information in the pre-training data for LLM which are illegal. To address the above issue and develop legal LLM, we propose to detect the pre-training data from LLM in a pure black-box way because the existing LLM services only return the generated text. The previous most related works are the membership inference attack (MIA) on machine learning models to detect the training data from them. But the existing methods are based on analyzing the output probabilities of models which are unrealistic to LLM services. To tackle the problem, we firstly construct the benchmark datasets by collecting textual data from different domains as the seen and unseen pre-training data for LLMs. Then, we investigate a black-box framework named DPDLLM, with the only access to the generated texts from LLM for detecting textual data whether was used to train it. In the proposed framework, we exploit GPT-2 as the reference model to fit the textual data and feed the generated text from LLM into it to acquire sequence probabilities as the significant feature for detection. The experimental results on the benchmark datasets demonstrate that DPDLLM is effective on different popular LLMs and outperforms the existing methods.",
}
| The success of large language models (LLMs) benefits from large-scale model parameters and large amounts of pre-training data. However, the textual data used to train LLMs cannot be confirmed to be legal because they are crawled from different websites. For example, the pre-training data for LLMs may contain copyrighted articles, personal reviews, and private information that are illegal to use. To address this issue and develop legal LLMs, we propose to detect the pre-training data of an LLM in a purely black-box way, since existing LLM services only return the generated text. The most closely related prior work is the membership inference attack (MIA) on machine learning models, which detects the training data of a model. However, existing MIA methods rely on analyzing the output probabilities of models, which is unrealistic for LLM services. To tackle this problem, we first construct benchmark datasets by collecting textual data from different domains as seen and unseen pre-training data for LLMs. Then, we investigate a black-box framework named DPDLLM that, with access only to the texts generated by an LLM, detects whether a piece of textual data was used to train it. In the proposed framework, we use GPT-2 as a reference model to fit the textual data and feed the text generated by the LLM into it to acquire sequence probabilities as the key feature for detection. Experimental results on the benchmark datasets demonstrate that DPDLLM is effective on different popular LLMs and outperforms existing methods. | [
"Zhou, Baohang",
"Wang, Zezhong",
"Wang, Lingzhi",
"Wang, Hongru",
"Zhang, Ying",
"Song, Kehui",
"Sui, Xuhui",
"Wong, Kam-Fai"
] | DPDLLM: A Black-box Framework for Detecting Pre-training Data from Large Language Models | findings-acl.35 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.35/ | [] | [] | [] | 0 |
||
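The reference-model feature is easy to reproduce with off-the-shelf tools: score the black-box LLM's generations with GPT-2 and use the sequence log-probability (here, mean token negative log-likelihood) as the detection feature. A minimal sketch using the Hugging Face transformers API:

```python
# Score a text with GPT-2; the mean token NLL serves as a detection feature
# (fine-tuning GPT-2 on the candidate domain first is part of the framework
# and is omitted here for brevity).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sequence_nll(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)     # loss is the mean token NLL
    return out.loss.item()

print(sequence_nll("The quick brown fox jumps over the lazy dog."))
```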
https://aclanthology.org/2024.findings-acl.36.bib | @inproceedings{xue-etal-2024-pacit,
title = "{PACIT}: Unlocking the Power of Examples for Better In-Context Instruction Tuning",
author = "Xue, Tianci and
Wang, Ziqi and
Li, Yixia and
Chen, Yun and
Chen, Guanhua",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.36",
pages = "654--665",
abstract = "Instruction tuning enhances the instruction following ability of large language models by finetuning with supervised instruction data. Previous work proposes in-context instruction tuning (ICIT) where specific positive or negative examples are incorporated into the prompt for better performance. In this work, we propose PACIT, a simple and effective in-context instruction tuning method, inspired by the pedagogical concept of desirable difficulty. The PACIT method unlocks the power of examples by encouraging the model to actively learn to grasp the distinctions between the positive and negative examples instead of merely reading. The model is expected to first verify the correctness of the provided example according to the task description, which is then set as the condition for generating a better response to the task instance. Our extensive experiments prove the effectiveness of PACIT, outperforming ICIT baseline on both in-domain and out-domain tasks up to 9.16 and 3.14 average ROUGE-L scores, respectively. Moreover, PACIT can notably enhance the performance of instruction tuning even when all positive and negative examples are generated with a self-instruct method.",
}
| Instruction tuning enhances the instruction-following ability of large language models by finetuning them with supervised instruction data. Previous work proposes in-context instruction tuning (ICIT), where specific positive or negative examples are incorporated into the prompt for better performance. In this work, we propose PACIT, a simple and effective in-context instruction tuning method inspired by the pedagogical concept of desirable difficulty. The PACIT method unlocks the power of examples by encouraging the model to actively learn the distinctions between positive and negative examples instead of merely reading them. The model is expected to first verify the correctness of the provided example according to the task description, and this verification is then set as the condition for generating a better response to the task instance. Our extensive experiments prove the effectiveness of PACIT, which outperforms the ICIT baseline on both in-domain and out-of-domain tasks by up to 9.16 and 3.14 average ROUGE-L points, respectively. Moreover, PACIT can notably enhance the performance of instruction tuning even when all positive and negative examples are generated with a self-instruct method. | [
"Xue, Tianci",
"Wang, Ziqi",
"Li, Yixia",
"Chen, Yun",
"Chen, Guanhua"
] | PACIT: Unlocking the Power of Examples for Better In-Context Instruction Tuning | findings-acl.36 | Poster | 2310.00901 | [
"https://github.com/xuetianci/pacit"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.36/ | [] | [] | [] | 0 |
|
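An illustrative PACIT-style instance, where the model first verifies whether the provided example is positive or negative before answering; the template wording is ours, not necessarily the paper's.

```python
# Illustrative verify-then-respond template in the spirit of PACIT.
PACIT_TEMPLATE = """\
Task description: {task_description}

Example input: {example_input}
Example output: {example_output}

Step 1: Is the example output correct for this task? Answer "positive" or "negative".
Step 2: Using that judgment, respond to the new instance.

New input: {instance_input}
"""

prompt = PACIT_TEMPLATE.format(
    task_description="Translate English to French.",
    example_input="Good morning.",
    example_output="Bonjour.",
    instance_input="Thank you very much.",
)
print(prompt)
```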
https://aclanthology.org/2024.findings-acl.37.bib | @inproceedings{hu-etal-2024-listen,
title = "Listen Again and Choose the Right Answer: A New Paradigm for Automatic Speech Recognition with Large Language Models",
author = "Hu, Yuchen and
Chen, Chen and
Qin, Chengwei and
Zhu, Qiushi and
Chng, EngSiong and
Li, Ruizhe",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.37",
pages = "666--679",
abstract = "Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR), which aims to predict the ground-truth transcription from the decoded N-best hypotheses. Thanks to the strong language generation ability of LLMs and rich information in the N-best list, GER shows great effectiveness in enhancing ASR results. However, it still suffers from two limitations: 1) LLMs are unaware of the source speech during GER, which may lead to results that are grammatically correct but violate the source speech content, 2) N-best hypotheses usually only vary in a few tokens, making it redundant to send all of them for GER, which could confuse LLM about which tokens to focus on and thus lead to increased miscorrection. In this paper, we propose ClozeGER, a new paradigm for ASR generative error correction. First, we introduce a multimodal LLM (i.e., SpeechGPT) to receive source speech as extra input to improve the fidelity of correction output. Then, we reformat GER as a cloze test with logits calibration to remove the input information redundancy and simplify GER with clear instructions. Experiments show that ClozeGER achieves a new breakthrough over vanilla GER on 9 popular ASR datasets.",
}
| Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR), which aims to predict the ground-truth transcription from the decoded N-best hypotheses. Thanks to the strong language generation ability of LLMs and the rich information in the N-best list, GER shows great effectiveness in enhancing ASR results. However, it still suffers from two limitations: 1) LLMs are unaware of the source speech during GER, which may lead to results that are grammatically correct but violate the source speech content; 2) N-best hypotheses usually only vary in a few tokens, making it redundant to send all of them for GER, which could confuse the LLM about which tokens to focus on and thus lead to increased miscorrection. In this paper, we propose ClozeGER, a new paradigm for ASR generative error correction. First, we introduce a multimodal LLM (i.e., SpeechGPT) to receive source speech as extra input to improve the fidelity of the correction output. Then, we reformat GER as a cloze test with logits calibration to remove the input information redundancy and simplify GER with clear instructions. Experiments show that ClozeGER achieves a new breakthrough over vanilla GER on 9 popular ASR datasets. | [
"Hu, Yuchen",
"Chen, Chen",
"Qin, Chengwei",
"Zhu, Qiushi",
"Chng, EngSiong",
"Li, Ruizhe"
] | Listen Again and Choose the Right Answer: A New Paradigm for Automatic Speech Recognition with Large Language Models | findings-acl.37 | Poster | 2405.10025 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.37/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.38.bib | @inproceedings{yue-etal-2024-towards,
title = "Towards Better Graph-based Cross-document Relation Extraction via Non-bridge Entity Enhancement and Prediction Debiasing",
author = "Yue, Hao and
Lai, Shaopeng and
Yang, Chengyi and
Zhang, Liang and
Yao, Junfeng and
Su, Jinsong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.38",
pages = "680--691",
abstract = "Cross-document Relation Extraction aims to predict the relation between target entities located in different documents. In this regard, the dominant models commonly retain useful information for relation prediction via bridge entities, which allows the model to elaborately capture the intrinsic interdependence between target entities. However, these studies ignore the non-bridge entities, each of which co-occurs with only one target entity and offers the semantic association between target entities for relation prediction. Besides, the commonly-used dataset{--}CodRED contains substantial NA instances, leading to the prediction bias during inference. To address these issues, in this paper, we propose a novel graph-based cross-document RE model with non-bridge entity enhancement and prediction debiasing. Specifically, we use a unified entity graph to integrate numerous non-bridge entities with target entities and bridge entities, modeling various associations between them, and then use a graph recurrent network to encode this graph. Finally, we introduce a novel debiasing strategy to calibrate the original prediction distribution. Experimental results on the closed and open settings show that our model significantly outperforms all baselines, including the GPT-3.5-turbo and InstructUIE, achieving state-of-the-art performance. Particularly, our model obtains 66.23{\%} and 55.87{\%} AUC points in the official leaderboard under the two settings, respectively,ranking the first place in all submissions since December 2023. Our code is available at https://github.com/DeepLearnXMU/CoRE-NEPD.",
}
| Cross-document Relation Extraction aims to predict the relation between target entities located in different documents. In this regard, the dominant models commonly retain useful information for relation prediction via bridge entities, which allows the model to elaborately capture the intrinsic interdependence between target entities. However, these studies ignore the non-bridge entities, each of which co-occurs with only one target entity and offers the semantic association between target entities for relation prediction. Besides, the commonly-used dataset{--}CodRED contains substantial NA instances, leading to prediction bias during inference. To address these issues, in this paper, we propose a novel graph-based cross-document RE model with non-bridge entity enhancement and prediction debiasing. Specifically, we use a unified entity graph to integrate numerous non-bridge entities with target entities and bridge entities, modeling various associations between them, and then use a graph recurrent network to encode this graph. Finally, we introduce a novel debiasing strategy to calibrate the original prediction distribution. Experimental results on the closed and open settings show that our model significantly outperforms all baselines, including GPT-3.5-turbo and InstructUIE, achieving state-of-the-art performance. Particularly, our model obtains 66.23{\%} and 55.87{\%} AUC points on the official leaderboard under the two settings, respectively, ranking first among all submissions since December 2023. Our code is available at https://github.com/DeepLearnXMU/CoRE-NEPD. | [
"Yue, Hao",
"Lai, Shaopeng",
"Yang, Chengyi",
"Zhang, Liang",
"Yao, Junfeng",
"Su, Jinsong"
] | Towards Better Graph-based Cross-document Relation Extraction via Non-bridge Entity Enhancement and Prediction Debiasing | findings-acl.38 | Poster | 2406.16529 | [
"https://github.com/deeplearnxmu/core-nepd"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.38/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.39.bib | @inproceedings{lee-etal-2024-large,
title = "Large Language Models can Share Images, Too!",
author = "Lee, Young-Jun and
Lee, Dokyong and
Sung, Joo Won and
Hyeon, Jonghwan and
Choi, Ho-Jin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.39",
pages = "692--713",
abstract = "This paper explores the image-sharing capability of Large Language Models (LLMs), such as GPT-4 and LLaMA 2, in a zero-shot setting. To facilitate a comprehensive evaluation of LLMs, we introduce the photochatplus dataset, which includes enriched annotations (ie intent, triggering sentence, image description, and salient information). Furthermore, we present the gradient-free and extensible Decide, Describe, and Retrieve () framework. With extensive experiments, we unlock the image-sharing capability of equipped with LLMs in zero-shot prompting, with ChatGPT achieving the best performance.Our findings also reveal the emergent image-sharing ability in LLMs under zero-shot conditions, validating the effectiveness of . We use this framework to demonstrate its practicality and effectiveness in two real-world scenarios: (1) human-bot interaction and (2) dataset augmentation. To the best of our knowledge, this is the first study to assess the image-sharing ability of various LLMs in a zero-shot setting. We make our source code and dataset publicly available at https://github.com/passing2961/DribeR.",
}
| This paper explores the image-sharing capability of Large Language Models (LLMs), such as GPT-4 and LLaMA 2, in a zero-shot setting. To facilitate a comprehensive evaluation of LLMs, we introduce the photochatplus dataset, which includes enriched annotations (i.e., intent, triggering sentence, image description, and salient information). Furthermore, we present the gradient-free and extensible Decide, Describe, and Retrieve (DribeR) framework. With extensive experiments, we unlock the image-sharing capability of DribeR equipped with LLMs in zero-shot prompting, with ChatGPT achieving the best performance. Our findings also reveal the emergent image-sharing ability in LLMs under zero-shot conditions, validating the effectiveness of DribeR. We use this framework to demonstrate its practicality and effectiveness in two real-world scenarios: (1) human-bot interaction and (2) dataset augmentation. To the best of our knowledge, this is the first study to assess the image-sharing ability of various LLMs in a zero-shot setting. We make our source code and dataset publicly available at https://github.com/passing2961/DribeR. | [
"Lee, Young-Jun",
"Lee, Dokyong",
"Sung, Joo Won",
"Hyeon, Jonghwan",
"Choi, Ho-Jin"
] | Large Language Models can Share Images, Too! | findings-acl.39 | Poster | 2310.14804 | [
"https://github.com/passing2961/LLM-Share-Image"
] | https://huggingface.co/papers/2310.14804 | 2 | 1 | 0 | 3 | https://aclanthology.org/2024.findings-acl.39/ | [] | [
"passing2961/photochat_plus"
] | [] | 1 |
https://aclanthology.org/2024.findings-acl.40.bib | @inproceedings{zan-etal-2024-codem,
title = "{C}ode{M}: Less Data Yields More Versatility via Ability Matrix",
author = "Zan, Daoguang and
Yu, Ailun and
Liu, Wei and
Shen, Bo and
Lin, Shaoxin and
Gong, Yongshun and
Yao, Yafen and
Liu, Yan and
Guan, Bei and
Luo, Weihua and
Wang, Yongji and
Wang, Qianxiang and
Cui, Lizhen",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.40",
pages = "714--729",
abstract = "In the era of code large language models (code LLMs), data engineering plays a pivotal role during the instruction fine-tuning phase. To train a versatile model, previous efforts devote tremendous efforts into crafting instruction data covering all the downstream scenarios. Nonetheless, this will incur significant expenses in constructing data and training model. Therefore, this paper introduces CodeM, a novel data construction strategy, which can efficiently train a versatile model using less data via our newly proposed ability matrix. CodeM uses ability matrix to decouple code LLMs{'} abilities into two dimensions, constructing a lightweight training corpus that only covers a subset of target scenarios. Extensive experiments on HumanEvalPack and MultiPL-E imply that code LLMs can combine the single-dimensional abilities to master composed abilities, validating the effectiveness of CodeM.",
}
| In the era of code large language models (code LLMs), data engineering plays a pivotal role during the instruction fine-tuning phase. To train a versatile model, previous work devotes tremendous effort to crafting instruction data covering all downstream scenarios. Nonetheless, this incurs significant expenses in constructing data and training the model. Therefore, this paper introduces CodeM, a novel data construction strategy, which can efficiently train a versatile model using less data via our newly proposed ability matrix. CodeM uses the ability matrix to decouple code LLMs{'} abilities into two dimensions, constructing a lightweight training corpus that only covers a subset of target scenarios. Extensive experiments on HumanEvalPack and MultiPL-E imply that code LLMs can combine the single-dimensional abilities to master composed abilities, validating the effectiveness of CodeM. | [
"Zan, Daoguang",
"Yu, Ailun",
"Liu, Wei",
"Shen, Bo",
"Lin, Shaoxin",
"Gong, Yongshun",
"Yao, Yafen",
"Liu, Yan",
"Guan, Bei",
"Luo, Weihua",
"Wang, Yongji",
"Wang, Qianxiang",
"Cui, Lizhen"
] | CodeM: Less Data Yields More Versatility via Ability Matrix | findings-acl.40 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.40/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.41.bib | @inproceedings{huang-etal-2024-lvlms,
title = "Do {LVLM}s Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning",
author = "Huang, Kung-Hsiang and
Zhou, Mingyang and
Chan, Hou Pong and
Fung, Yi and
Wang, Zhenhailong and
Zhang, Lingyu and
Chang, Shih-Fu and
Ji, Heng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.41",
pages = "730--749",
abstract = "Advances in large vision-language models (LVLMs) have led to significant progress in generating natural language descriptions for visual contents. These powerful models are known for producing texts that are factually inconsistent with the visual input. While some efforts mitigate such inconsistencies in natural image captioning, the factuality of generated captions for structured visuals, such as charts, has not received as much scrutiny. This work introduces a comprehensive typology of factual errors in generated chart captions. A large-scale human annotation effort provides insight into the error patterns in captions generated by various models, ultimately forming the foundation of a dataset, CHOCOLATE. Our analysis reveals that even advanced models like GPT-4V frequently produce captions laced with factual inaccuracies. To combat this, we establish the task of Chart Caption Factual Error Correction and introduce CHARTVE, a visual entailment model that outperforms current LVLMs in evaluating caption factuality. Furthermore, we propose C2TFEC, an interpretable two-stage framework that excels at correcting factual errors. This work inaugurates a new domain in factual error correction for chart captions, presenting a novel evaluation metric, and demonstrating an effective approach to ensuring the factuality of generated chart captions. The code and data as well as the continuously updated benchmark can be found at: https://khuangaf.github.io/CHOCOLATE/.",
}
| Advances in large vision-language models (LVLMs) have led to significant progress in generating natural language descriptions for visual content. However, these powerful models are known to produce texts that are factually inconsistent with the visual input. While some efforts mitigate such inconsistencies in natural image captioning, the factuality of generated captions for structured visuals, such as charts, has not received as much scrutiny. This work introduces a comprehensive typology of factual errors in generated chart captions. A large-scale human annotation effort provides insight into the error patterns in captions generated by various models, ultimately forming the foundation of a dataset, CHOCOLATE. Our analysis reveals that even advanced models like GPT-4V frequently produce captions laced with factual inaccuracies. To combat this, we establish the task of Chart Caption Factual Error Correction and introduce CHARTVE, a visual entailment model that outperforms current LVLMs in evaluating caption factuality. Furthermore, we propose C2TFEC, an interpretable two-stage framework that excels at correcting factual errors. This work inaugurates a new domain in factual error correction for chart captions, presenting a novel evaluation metric, and demonstrating an effective approach to ensuring the factuality of generated chart captions. The code and data as well as the continuously updated benchmark can be found at: https://khuangaf.github.io/CHOCOLATE/. | [
"Huang, Kung-Hsiang",
"Zhou, Mingyang",
"Chan, Hou Pong",
"Fung, Yi",
"Wang, Zhenhailong",
"Zhang, Lingyu",
"Chang, Shih-Fu",
"Ji, Heng"
] | Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning | findings-acl.41 | Poster | 2312.10160 | [
"https://github.com/khuangaf/chocolate"
] | https://huggingface.co/papers/2312.10160 | 1 | 1 | 0 | 8 | https://aclanthology.org/2024.findings-acl.41/ | [
"khhuang/chart-to-table",
"khhuang/chartve"
] | [
"khhuang/CHOCOLATE",
"khhuang/chartve_dataset"
] | [] | 1 |
https://aclanthology.org/2024.findings-acl.42.bib | @inproceedings{jin-etal-2024-bider,
title = "{BIDER}: Bridging Knowledge Inconsistency for Efficient Retrieval-Augmented {LLM}s via Key Supporting Evidence",
author = "Jin, Jiajie and
Zhu, Yutao and
Zhou, Yujia and
Dou, Zhicheng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.42",
pages = "750--761",
abstract = "Retrieval-augmented large language models (LLMs) have demonstrated efficacy in knowledge-intensive tasks such as open-domain QA, addressing inherent challenges in knowledge update and factual inadequacy.However, inconsistencies between retrieval knowledge and the necessary knowledge for LLMs, leading to a decline in LLM{'}s answer quality. This paper introduces BIDER, an approach that refines retrieval documents into Key Supporting Evidence (KSE) through knowledge synthesis, supervised fine-tuning (SFT), and preference alignment. We train BIDER by learning from crafting KSE, while maximizing its output to align with LLM{'}s information acquisition preferences through reinforcement learning. Evaluations across five datasets show BIDER boosts LLMs{'} answer quality by 7{\%} while reducing input content length in retrieval documents by 80{\%}, outperforming existing methods. The proposed KSE simulation effectively equips LLMs with essential information for accurate question answering.",
}
| Retrieval-augmented large language models (LLMs) have demonstrated efficacy in knowledge-intensive tasks such as open-domain QA, addressing inherent challenges in knowledge updating and factual inadequacy. However, inconsistencies between the retrieved knowledge and the knowledge LLMs actually need lead to a decline in the LLM{'}s answer quality. This paper introduces BIDER, an approach that refines retrieval documents into Key Supporting Evidence (KSE) through knowledge synthesis, supervised fine-tuning (SFT), and preference alignment. We train BIDER by learning to craft KSE, while maximizing its output to align with the LLM{'}s information acquisition preferences through reinforcement learning. Evaluations across five datasets show BIDER boosts LLMs{'} answer quality by 7{\%} while reducing input content length in retrieval documents by 80{\%}, outperforming existing methods. The proposed KSE simulation effectively equips LLMs with essential information for accurate question answering. | [
"Jin, Jiajie",
"Zhu, Yutao",
"Zhou, Yujia",
"Dou, Zhicheng"
] | BIDER: Bridging Knowledge Inconsistency for Efficient Retrieval-Augmented LLMs via Key Supporting Evidence | findings-acl.42 | Poster | 2402.12174 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.42/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.43.bib | @inproceedings{wang-etal-2024-beyond-literal,
title = "Beyond Literal Descriptions: Understanding and Locating Open-World Objects Aligned with Human Intentions",
author = "Wang, Wenxuan and
Zhang, Yisi and
He, Xingjian and
Yan, Yichen and
Zhao, Zijia and
Wang, Xinlong and
Liu, Jing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.43",
pages = "762--776",
abstract = "Visual grounding (VG) aims at locating the foreground entities that match the given natural language expression. Previous datasets and methods for classic VG task mainly rely on the prior assumption that the given expression must literally refer to the target object, which greatly impedes the practical deployment of agents in real-world scenarios. Since users usually prefer to provide the intention-based expressions for the desired object instead of covering all the details, it is necessary for the agents to interpret the intention-driven instructions. Thus, in this work, we take a step further to the intention-driven visual-language (V-L) understanding. To promote classic VG towards human intention interpretation, we propose a new intention-driven visual grounding (IVG) task and build a largest-scale IVG dataset named IntentionVG with free-form intention expressions. Considering that practical agents need to move and find specific targets among various scenarios to realize the grounding task, our IVG task and IntentionVG dataset have taken the crucial properties of both multi-scenario perception and egocentric view into consideration. Besides, various types of models are set up as the baselines to realize our IVG task. Extensive experiments on our IntentionVG dataset and baselines demonstrate the necessity and efficacy of our method for the V-L field. To foster future research in this direction, our newly built dataset and baselines will be publicly available at https://github.com/Rubics-Xuan/IVG.",
}
| Visual grounding (VG) aims at locating the foreground entities that match the given natural language expression. Previous datasets and methods for the classic VG task mainly rely on the prior assumption that the given expression must literally refer to the target object, which greatly impedes the practical deployment of agents in real-world scenarios. Since users usually prefer to provide intention-based expressions for the desired object instead of covering all the details, it is necessary for agents to interpret intention-driven instructions. Thus, in this work, we take a step further toward intention-driven vision-language (V-L) understanding. To promote classic VG towards human intention interpretation, we propose a new intention-driven visual grounding (IVG) task and build the largest-scale IVG dataset to date, named IntentionVG, with free-form intention expressions. Considering that practical agents need to move and find specific targets among various scenarios to realize the grounding task, our IVG task and IntentionVG dataset take the crucial properties of both multi-scenario perception and egocentric view into consideration. Besides, various types of models are set up as baselines to realize our IVG task. Extensive experiments on our IntentionVG dataset and baselines demonstrate the necessity and efficacy of our method for the V-L field. To foster future research in this direction, our newly built dataset and baselines will be publicly available at https://github.com/Rubics-Xuan/IVG. | [
"Wang, Wenxuan",
"Zhang, Yisi",
"He, Xingjian",
"Yan, Yichen",
"Zhao, Zijia",
"Wang, Xinlong",
"Liu, Jing"
] | Beyond Literal Descriptions: Understanding and Locating Open-World Objects Aligned with Human Intentions | findings-acl.43 | Poster | 2402.11265 | [
"https://github.com/rubics-xuan/ivg"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.43/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.44.bib | @inproceedings{qiu-etal-2024-incremental,
title = "Incremental Sequence Labeling: A Tale of Two Shifts",
author = "Qiu, Shengjie and
Zheng, Junhao and
Liu, Zhen and
Luo, Yicheng and
Ma, Qianli",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.44",
pages = "777--791",
abstract = "The incremental sequence labeling task involves continuously learning new classes over time while retaining knowledge of the previous ones. Our investigation identifies two significant semantic shifts: E2O (where the model mislabels an old entity as a non-entity) and O2E (where the model labels a non-entity or old entity as a new entity). Previous research has predominantly focused on addressing the E2O problem, neglecting the O2E issue. This negligence results in a model bias towards classifying new data samples as belonging to the new class during the learning process. To address these challenges, we propose a novel framework, Incremental Sequential Labeling without Semantic Shifts (IS3). Motivated by the identified semantic shifts (E2O and O2E), IS3 aims to mitigate catastrophic forgetting in models. As for the E2O problem, we use knowledge distillation to maintain the model{'}s discriminative ability for old entities. Simultaneously, to tackle the O2E problem, we alleviate the model{'}s bias towards new entities through debiased loss and optimization levels.Our experimental evaluation, conducted on three datasets with various incremental settings, demonstrates the superior performance of IS3 compared to the previous state-of-the-art method by a significant margin.",
}
| The incremental sequence labeling task involves continuously learning new classes over time while retaining knowledge of the previous ones. Our investigation identifies two significant semantic shifts: E2O (where the model mislabels an old entity as a non-entity) and O2E (where the model labels a non-entity or old entity as a new entity). Previous research has predominantly focused on addressing the E2O problem, neglecting the O2E issue. This negligence results in a model bias towards classifying new data samples as belonging to the new class during the learning process. To address these challenges, we propose a novel framework, Incremental Sequential Labeling without Semantic Shifts (IS3). Motivated by the identified semantic shifts (E2O and O2E), IS3 aims to mitigate catastrophic forgetting in models. As for the E2O problem, we use knowledge distillation to maintain the model{'}s discriminative ability for old entities. Simultaneously, to tackle the O2E problem, we alleviate the model{'}s bias towards new entities through debiasing at the loss and optimization levels. Our experimental evaluation, conducted on three datasets with various incremental settings, demonstrates the superior performance of IS3 compared to the previous state-of-the-art method by a significant margin. | [
"Qiu, Shengjie",
"Zheng, Junhao",
"Liu, Zhen",
"Luo, Yicheng",
"Ma, Qianli"
] | Incremental Sequence Labeling: A Tale of Two Shifts | findings-acl.44 | Poster | 2402.10447 | [
"https://github.com/zzz47zzz/codebase-for-incremental-learning-with-llm"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.44/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.45.bib | @inproceedings{liu-etal-2024-proficient,
title = "How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering",
author = "Liu, Jinxin and
Cao, Shulin and
Shi, Jiaxin and
Zhang, Tingjian and
Nie, Lunyiu and
Hu, Linmei and
Hou, Lei and
Li, Juanzi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.45",
pages = "792--815",
abstract = "Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases. A typical approach to KBQA is semantic parsing, which translates a question into an executable logical form in a formal language. Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance. However, although it is validated that LLMs are capable of solving some KBQA problems, there has been little discussion on the differences in LLMs{'} proficiency in formal languages used in semantic parsing. In this work, we propose to evaluate the understanding and generation ability of LLMs to deal with differently structured logical forms by examining the inter-conversion of natural and formal language through in-context learning of LLMs. Extensive experiments with models of different sizes show that state-of-the-art LLMs can understand formal languages as well as humans, but generating correct logical forms given a few examples remains a challenge. Most importantly, our results also indicate that LLMs exhibit considerable sensitivity. In general, the formal language with a lower formalization level, i.e., the more similar it is to natural language, is more friendly to LLMs. Code and data can be found at https://github.com/Matthewlliu/structure{\_}probe.",
}
| Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases. A typical approach to KBQA is semantic parsing, which translates a question into an executable logical form in a formal language. Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance. However, although it is validated that LLMs are capable of solving some KBQA problems, there has been little discussion on the differences in LLMs{'} proficiency in the formal languages used in semantic parsing. In this work, we propose to evaluate the understanding and generation ability of LLMs to deal with differently structured logical forms by examining the inter-conversion of natural and formal language through in-context learning. Extensive experiments with models of different sizes show that state-of-the-art LLMs can understand formal languages as well as humans, but generating correct logical forms given a few examples remains a challenge. Most importantly, our results also indicate that LLMs exhibit considerable sensitivity to the choice of formal language. In general, a formal language with a lower formalization level, i.e., one more similar to natural language, is friendlier to LLMs. Code and data can be found at https://github.com/Matthewlliu/structure{\_}probe. | [
"Liu, Jinxin",
"Cao, Shulin",
"Shi, Jiaxin",
"Zhang, Tingjian",
"Nie, Lunyiu",
"Hu, Linmei",
"Hou, Lei",
"Li, Juanzi"
] | How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering | findings-acl.45 | Poster | 2401.05777 | [
""
] | https://huggingface.co/papers/2401.05777 | 0 | 0 | 0 | 6 | https://aclanthology.org/2024.findings-acl.45/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.findings-acl.46.bib | @inproceedings{sui-etal-2024-melov,
title = "{MELOV}: Multimodal Entity Linking with Optimized Visual Features in Latent Space",
author = "Sui, Xuhui and
Zhang, Ying and
Zhao, Yu and
Song, Kehui and
Zhou, Baohang and
Yuan, Xiaojie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.46",
pages = "816--826",
abstract = "Multimodal entity linking (MEL), which aligns ambiguous mentions within multimodal contexts to referent entities from multimodal knowledge bases, is essential for many natural language processing applications. Previous MEL methods mainly focus on exploring complex multimodal interaction mechanisms to better capture coherence evidence between mentions and entities by mining complementary information. However, in real-world social media scenarios, vision modality often exhibits low quality, low value, or low relevance to the mention. Integrating such information directly will backfire, leading to a weakened consistency between mentions and their corresponding entities. In this paper, we propose a novel latent space vision feature optimization framework MELOV, which combines inter-modality and intra-modality optimizations to address these challenges. For the inter-modality optimization, we exploit the variational autoencoder to mine shared information and generate text-based visual features. For the intra-modality optimization, we consider the relationships between mentions and build graph convolutional network to aggregate the visual features of semantic similar neighbors. Extensive experiments on three benchmark datasets demonstrate the superiority of our proposed framework.",
}
| Multimodal entity linking (MEL), which aligns ambiguous mentions within multimodal contexts to referent entities from multimodal knowledge bases, is essential for many natural language processing applications. Previous MEL methods mainly focus on exploring complex multimodal interaction mechanisms to better capture coherence evidence between mentions and entities by mining complementary information. However, in real-world social media scenarios, the vision modality often exhibits low quality, low value, or low relevance to the mention. Integrating such information directly will backfire, leading to weakened consistency between mentions and their corresponding entities. In this paper, we propose MELOV, a novel latent-space vision feature optimization framework that combines inter-modality and intra-modality optimizations to address these challenges. For the inter-modality optimization, we exploit a variational autoencoder to mine shared information and generate text-based visual features. For the intra-modality optimization, we consider the relationships between mentions and build a graph convolutional network to aggregate the visual features of semantically similar neighbors. Extensive experiments on three benchmark datasets demonstrate the superiority of our proposed framework. | [
"Sui, Xuhui",
"Zhang, Ying",
"Zhao, Yu",
"Song, Kehui",
"Zhou, Baohang",
"Yuan, Xiaojie"
] | MELOV: Multimodal Entity Linking with Optimized Visual Features in Latent Space | findings-acl.46 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.46/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.47.bib | @inproceedings{qu-etal-2024-unsupervised,
title = "Unsupervised Distractor Generation via Large Language Model Distilling and Counterfactual Contrastive Decoding",
author = "Qu, Fanyi and
Sun, Hao and
Wu, Yunfang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.47",
pages = "827--838",
abstract = "Within the context of reading comprehension, the task of Distractor Generation (DG) aims to generate several incorrect options to confuse readers. In recent years, the emergence of Large Language Models (LLMs) provides a potential for unsupervised DG without expensive human-annotated distractor labels. In this paper, we leverage LLMs as a cost-effective annotator to enhance the DG capability of smaller student models. To perform knowledge distilling, we propose a dual task training framework that integrates pseudo distractors from LLMs and answer information as the objective target with a two-stage training process. Moreover, we devise a counterfactual contrastive decoding mechanism for increasing the distracting capability of the DG model. Experiments show that our unsupervised generation method with Bart-base greatly surpasses GPT-3.5-turbo zero-shot performance with only 200$\times$ fewer model parameters. Our proposed unsupervised DG method offers a cost-effective framework for practical reading comprehension applications, without the need of laborious distractor annotation and costly large-size models.",
}
| Within the context of reading comprehension, the task of Distractor Generation (DG) aims to generate several incorrect options to confuse readers. In recent years, the emergence of Large Language Models (LLMs) offers the potential for unsupervised DG without expensive human-annotated distractor labels. In this paper, we leverage LLMs as a cost-effective annotator to enhance the DG capability of smaller student models. To perform knowledge distillation, we propose a dual-task training framework that integrates pseudo distractors from LLMs and answer information as the objective target with a two-stage training process. Moreover, we devise a counterfactual contrastive decoding mechanism for increasing the distracting capability of the DG model. Experiments show that our unsupervised generation method with Bart-base greatly surpasses GPT-3.5-turbo zero-shot performance while using 200$\times$ fewer model parameters. Our proposed unsupervised DG method offers a cost-effective framework for practical reading comprehension applications, without the need for laborious distractor annotation and costly large-size models. | [
"Qu, Fanyi",
"Sun, Hao",
"Wu, Yunfang"
] | Unsupervised Distractor Generation via Large Language Model Distilling and Counterfactual Contrastive Decoding | findings-acl.47 | Poster | 2406.01306 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.47/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.48.bib | @inproceedings{liu-etal-2024-conversational,
title = "Conversational Question Answering with Language Models Generated Reformulations over Knowledge Graph",
author = "Liu, Lihui and
Hill, Blaine and
Du, Boxin and
Wang, Fei and
Tong, Hanghang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.48",
pages = "839--850",
abstract = "Conversational question answering (ConvQA) over knowledge graphs (KGs) involves answering multi-turn natural language questions about information contained in a KG. State-of-the-art methods of ConvQA often struggle with inexplicit question-answer pairs. These inputs are easy for human beings to understand given a conversation history, but hard for a machine to interpret, which can degrade ConvQA performance. To address this problem, we propose a reinforcement learning (RL) based model, CoRnNet, which utilizes question reformulations generated by large language models (LLMs) to improve ConvQA performance. CoRnNet adopts a teacher-student architecture where a teacher model learns question representations using human writing reformulations, and a student model to mimic the teacher model{'}s output via reformulations generated by LLMs. The learned question representation is then used by a RL model to locate the correct answer in a KG. Extensive experimental results show that CoRnNet outperforms state-of-the-art ConvQA models.",
}
| Conversational question answering (ConvQA) over knowledge graphs (KGs) involves answering multi-turn natural language questions about information contained in a KG. State-of-the-art ConvQA methods often struggle with inexplicit question-answer pairs. These inputs are easy for human beings to understand given a conversation history, but hard for a machine to interpret, which can degrade ConvQA performance. To address this problem, we propose a reinforcement learning (RL) based model, CoRnNet, which utilizes question reformulations generated by large language models (LLMs) to improve ConvQA performance. CoRnNet adopts a teacher-student architecture where a teacher model learns question representations using human-written reformulations, and a student model learns to mimic the teacher model{'}s output via reformulations generated by LLMs. The learned question representation is then used by an RL model to locate the correct answer in a KG. Extensive experimental results show that CoRnNet outperforms state-of-the-art ConvQA models. | [
"Liu, Lihui",
"Hill, Blaine",
"Du, Boxin",
"Wang, Fei",
"Tong, Hanghang"
] | Conversational Question Answering with Language Models Generated Reformulations over Knowledge Graph | findings-acl.48 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.48/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.49.bib | @inproceedings{zhong-etal-2024-debug,
title = "Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step by Step",
author = "Zhong, Li and
Wang, Zilong and
Shang, Jingbo",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.49",
pages = "851--870",
abstract = "Large language models (LLMs) are leading significant progress in code generation. Beyond one-pass code generation, recent works further integrate unit tests and program verifiers into LLMs to iteratively refine the generated programs. However, these works consider the generated programs as an indivisible entity, which falls short for LLMs in debugging the programs, especially when the programs contain complex logic flows and data operations. In contrast, when human developers debug programs, they typically set breakpoints and selectively examine runtime execution information. The execution flow and the intermediate variables play a crucial role in the debugging process, yet they are underutilized in the existing literature on code generation. In this study, we introduce Large Language Model Debugger (LDB), a novel debugging framework that enables LLMs to refine their generated programs with the runtime execution information. Specifically, LDB segments the programs into basic blocks and tracks the values of intermediate variables after each block throughout the runtime execution. This allows LLMs to concentrate on simpler code units within the overall execution flow, verify their correctness against the task description block by block, and efficiently pinpoint any potential errors. Experiments demonstrate that LDB consistently enhances the baseline performance by up to 9.8{\%} across the HumanEval, MBPP, and TransCoder benchmarks, archiving new state-of-the-art performance in code debugging for various LLM selections.",
}
| Large language models (LLMs) are leading significant progress in code generation. Beyond one-pass code generation, recent works further integrate unit tests and program verifiers into LLMs to iteratively refine the generated programs. However, these works consider the generated programs as an indivisible entity, which falls short for LLMs in debugging the programs, especially when the programs contain complex logic flows and data operations. In contrast, when human developers debug programs, they typically set breakpoints and selectively examine runtime execution information. The execution flow and the intermediate variables play a crucial role in the debugging process, yet they are underutilized in the existing literature on code generation. In this study, we introduce Large Language Model Debugger (LDB), a novel debugging framework that enables LLMs to refine their generated programs with the runtime execution information. Specifically, LDB segments the programs into basic blocks and tracks the values of intermediate variables after each block throughout the runtime execution. This allows LLMs to concentrate on simpler code units within the overall execution flow, verify their correctness against the task description block by block, and efficiently pinpoint any potential errors. Experiments demonstrate that LDB consistently enhances the baseline performance by up to 9.8{\%} across the HumanEval, MBPP, and TransCoder benchmarks, achieving new state-of-the-art performance in code debugging for various LLM selections. | [
"Zhong, Li",
"Wang, Zilong",
"Shang, Jingbo"
] | Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step by Step | findings-acl.49 | Poster | [
"https://github.com/floridsleeves/llmdebugger"
] | https://huggingface.co/papers/2402.16906 | 0 | 0 | 0 | 3 | https://aclanthology.org/2024.findings-acl.49/ | [] | [] | [] | 1 |
|
https://aclanthology.org/2024.findings-acl.50.bib | @inproceedings{sun-etal-2024-effective,
title = "Effective In-Context Example Selection through Data Compression",
author = "Sun, ZhongXiang and
Zhang, Kepu and
Wang, Haoyu and
Zhang, Xiao and
Xu, Jun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.50",
pages = "871--877",
abstract = "In-context learning has been extensively validated in large language models. However, the mechanism and selection strategy for in-context example selection, which is a crucial ingredient in this approach, lacks systematic and in-depth research. In this paper, we propose a data compression approach to the selection of in-context examples. We introduce a two-stage method that can effectively choose relevant examples and retain sufficient information about the training dataset within the in-context examples. Our method shows a significant improvement of an average of 5.90{\%} across five different real-world datasets using four language models.",
}
| In-context learning has been extensively validated in large language models. However, the mechanism and selection strategy for in-context example selection, a crucial ingredient in this approach, lack systematic and in-depth research. In this paper, we propose a data compression approach to the selection of in-context examples. We introduce a two-stage method that can effectively choose relevant examples and retain sufficient information about the training dataset within the in-context examples. Our method shows a significant improvement of an average of 5.90{\%} across five different real-world datasets using four language models. | [
"Sun, ZhongXiang",
"Zhang, Kepu",
"Wang, Haoyu",
"Zhang, Xiao",
"Xu, Jun"
] | Effective In-Context Example Selection through Data Compression | findings-acl.50 | Poster | 2405.11465 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.50/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.findings-acl.51.bib | @inproceedings{chen-etal-2024-u,
title = "Are {U} a Joke Master? Pun Generation via Multi-Stage Curriculum Learning towards a Humor {LLM}",
author = "Chen, Yang and
Yang, Chong and
Hu, Tu and
Chen, Xinhao and
Lan, Man and
Cai, Li and
Zhuang, Xinlin and
Lin, Xuan and
Lu, Xin and
Zhou, Aimin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.51",
pages = "878--890",
abstract = "Although large language models (LLMs) acquire extensive world knowledge and some reasoning abilities, their proficiency in generating humorous sentences remains a challenge. Previous research has demonstrated that the humor generation capabilities of ChatGPT are confined to producing merely 25 unique jokes. In this work, we concentrate on endowing LLMs with the ability of generating puns, a particular category of humor by preference learning method. We propose a multi-stage curriculum preference learning framework to optimize both pun structure preferences and humor preferences. Specifically, we improve the Direct Preference Optimization (DPO) algorithm to address the challenge of multi-objective alignment problem. Besides, to facilitate further advancement in this field, we collect a Chinese Pun (ChinesePun) dataset, containing 2.1k puns and corresponding annotations. Experimental results on both Chinese and English benchmark datasets demonstrate that our method significantly outperforms all the baseline models.",
}
| Although large language models (LLMs) acquire extensive world knowledge and some reasoning abilities, their proficiency in generating humorous sentences remains a challenge. Previous research has demonstrated that the humor generation capabilities of ChatGPT are confined to producing merely 25 unique jokes. In this work, we concentrate on endowing LLMs with the ability to generate puns, a particular category of humor, via a preference learning method. We propose a multi-stage curriculum preference learning framework to optimize both pun structure preferences and humor preferences. Specifically, we improve the Direct Preference Optimization (DPO) algorithm to address the multi-objective alignment problem. Besides, to facilitate further advancement in this field, we collect a Chinese Pun (ChinesePun) dataset containing 2.1k puns and corresponding annotations. Experimental results on both Chinese and English benchmark datasets demonstrate that our method significantly outperforms all the baseline models. | [
"Chen, Yang",
"Yang, Chong",
"Hu, Tu",
"Chen, Xinhao",
"Lan, Man",
"Cai, Li",
"Zhuang, Xinlin",
"Lin, Xuan",
"Lu, Xin",
"Zhou, Aimin"
] | Are U a Joke Master? Pun Generation via Multi-Stage Curriculum Learning towards a Humor LLM | findings-acl.51 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.51/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.findings-acl.52.bib | @inproceedings{zhang-etal-2024-knowledgeable,
title = "Knowledgeable Preference Alignment for {LLM}s in Domain-specific Question Answering",
author = "Zhang, Yichi and
Chen, Zhuo and
Fang, Yin and
Lu, Yanxi and
Fangming, Li and
Zhang, Wen and
Chen, Huajun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.52",
pages = "891--904",
abstract = "Deploying large language models (LLMs) to real scenarios for domain-specific question answering (QA) is a key thrust for LLM applications, which poses numerous challenges, especially in ensuring that responses are both accommodating to user requirements and appropriately leveraging domain-specific knowledge bases. They are the two major difficulties for LLM application as vanilla fine-tuning falls short of addressing. Combining these requirements, we conceive of them as the requirement for the model{'}s preference to be harmoniously aligned with humans{'}. Thus, we introduce Knowledgeable Preference AlignmenT (KnowPAT), which constructs two kinds of preference sets to tackle the two issues. Besides, we design a new alignment objective to align the LLM preference with different human preferences uniformly, aiming to optimize LLM performance in real-world, domain-specific QA settings. Adequate experiments and comprehensive comparisons with 15 baseline methods illustrate that our KnowPAT is a superior pipeline for real-scenario domain-specific QA with LLMs.",
}
| Deploying large language models (LLMs) in real scenarios for domain-specific question answering (QA) is a key thrust for LLM applications, and it poses numerous challenges, especially in ensuring that responses both accommodate user requirements and appropriately leverage domain-specific knowledge bases. These are the two major difficulties that vanilla fine-tuning falls short of addressing. Combining these requirements, we conceive of them as the requirement for the model{'}s preference to be harmoniously aligned with humans{'}. Thus, we introduce Knowledgeable Preference AlignmenT (KnowPAT), which constructs two kinds of preference sets to tackle the two issues. Besides, we design a new alignment objective to align the LLM preference with different human preferences uniformly, aiming to optimize LLM performance in real-world, domain-specific QA settings. Thorough experiments and comprehensive comparisons with 15 baseline methods illustrate that our KnowPAT is a superior pipeline for real-scenario domain-specific QA with LLMs. | [
"Zhang, Yichi",
"Chen, Zhuo",
"Fang, Yin",
"Lu, Yanxi",
"Fangming, Li",
"Zhang, Wen",
"Chen, Huajun"
] | Knowledgeable Preference Alignment for LLMs in Domain-specific Question Answering | findings-acl.52 | Poster | 2311.06503 | [
"https://github.com/zjukg/knowpat"
] | https://huggingface.co/papers/2311.06503 | 1 | 0 | 0 | 8 | https://aclanthology.org/2024.findings-acl.52/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.findings-acl.53.bib | @inproceedings{liao-etal-2024-mario,
title = "{MARIO}: {MA}th Reasoning with code Interpreter Output - A Reproducible Pipeline",
author = "Liao, Minpeng and
Li, Chengxi and
Luo, Wei and
Jing, Wu and
Fan, Kai",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.53",
pages = "905--924",
abstract = "Large language models (LLMs) have significantly improved in understanding natural language but still lack in mathematical reasoning, a hurdle on the path to true artificial general intelligence. The training of large language models, based on next-token prediction, struggles to capture the precise nature of mathematical reasoning, presenting both practical and theoretical challenges. In this paper, we address this challenge by enriching the data landscape and introducing a reasonable data format, enhanced the text analysis of the LLM with a capability to utilize a Python code interpreter. This dataset is derived from GSM8K and MATH and has been further refined through a combination of GPT annotations, human review, and self-training processes. Additionally, we propose a tentative, easily replicable protocol for the fine-tuning of math-specific LLMs, which has led to a significant improvement in the performance of a 7B-parameter LLM on the GSM8K and MATH datasets. A solution generator and a value estimator are fine-tuned simultaneously in a multi-task fashion, while an outlier-free value model-based inference method is proposed to further boost the performance. We are committed to advancing the field of mathematical reasoning in LLMs and, to that end, we will make the source code and checkpoints publicly available.",
}
| Large language models (LLMs) have significantly improved in understanding natural language but still fall short in mathematical reasoning, a hurdle on the path to true artificial general intelligence. The training of large language models, based on next-token prediction, struggles to capture the precise nature of mathematical reasoning, presenting both practical and theoretical challenges. In this paper, we address this challenge by enriching the data landscape and introducing a reasonable data format that enhances the textual analysis of the LLM with the capability to utilize a Python code interpreter. This dataset is derived from GSM8K and MATH and has been further refined through a combination of GPT annotations, human review, and self-training processes. Additionally, we propose a tentative, easily replicable protocol for the fine-tuning of math-specific LLMs, which has led to a significant improvement in the performance of a 7B-parameter LLM on the GSM8K and MATH datasets. A solution generator and a value estimator are fine-tuned simultaneously in a multi-task fashion, while an outlier-free value model-based inference method is proposed to further boost the performance. We are committed to advancing the field of mathematical reasoning in LLMs and, to that end, we will make the source code and checkpoints publicly available. | [
"Liao, Minpeng",
"Li, Chengxi",
"Luo, Wei",
"Jing, Wu",
"Fan, Kai"
] | MARIO: MAth Reasoning with code Interpreter Output - A Reproducible Pipeline | findings-acl.53 | Poster | [
"https://github.com/mario-math-reasoning/mario"
] | https://huggingface.co/papers/2401.08190 | 0 | 0 | 0 | 5 | https://aclanthology.org/2024.findings-acl.53/ | [] | [] | [] | 1 |
|
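The MARIO abstract pairs a solution generator with a value estimator at inference time. Below is a minimal sketch of value-model-based candidate selection under stated assumptions: `generate_solutions` and `estimate_value` are hypothetical stand-ins for the fine-tuned models, and the median-based outlier filter is a simple heuristic, not the paper's exact outlier-free method.

```python
import random
import statistics

def generate_solutions(question: str, n: int) -> list[str]:
    # Hypothetical stand-in for the fine-tuned solution generator.
    return [f"solution {i} for: {question}" for i in range(n)]

def estimate_value(question: str, solution: str) -> float:
    # Hypothetical stand-in for the fine-tuned value estimator.
    return random.random()

def select_best(question: str, n: int = 8) -> str:
    candidates = generate_solutions(question, n)
    scores = [estimate_value(question, c) for c in candidates]
    # Drop scores far from the median before taking the argmax -- a toy
    # rendering of "outlier-free" value-based inference.
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores) or 1e-9
    kept = [(s, c) for s, c in zip(scores, candidates) if abs(s - med) <= 3 * mad]
    return max(kept)[1]

print(select_best("What is 12 * 7 - 5?"))
```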
https://aclanthology.org/2024.findings-acl.54.bib | @inproceedings{cheng-li-2024-diffuspoll,
title = "{D}iffus{P}oll: Conditional Text Diffusion Model for Poll Generation",
author = "Cheng, Le and
Li, Shuangyin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.54",
pages = "925--935",
abstract = "Online social media platforms often gather user feedback through polls to enhance user engagement. Automatically generating polls from social media and its context can decrease the labor expenses of media workers and enhance workplace productivity. However, on social media platforms, there are internet water armies that manipulate public opinion through sheer numbers and causing the comments to be biased, drowning out minority views. In such circumstances, polls created based on biased comments often have limited types of options and poor coverage. Therefore, it is crucial to diversify the poll options and try to listen to the voices of the minority. To achieve this, we introduce DiffusPoll, a novel paradigm for poll generation based on a non-autoregressive diffusion model that can generate diversified and high-quality samples. Under the new paradigm, we design a task-specific mask strategy tailored to the inherent logic of polls to optimize controlled generation. Furthermore, we also leverage additional attribute tags from comments to enhance the generation quality. Experimental results indicate that DiffusPoll has achieved state-of-the-art performance in both the quality and diversity of poll generation tasks, and is more likely to hit the voices of minority.",
}
| Online social media platforms often gather user feedback through polls to enhance user engagement. Automatically generating polls from social media and its context can decrease the labor expenses of media workers and enhance workplace productivity. However, on social media platforms, there are internet water armies that manipulate public opinion through sheer numbers, causing the comments to be biased and drowning out minority views. In such circumstances, polls created based on biased comments often have limited types of options and poor coverage. Therefore, it is crucial to diversify the poll options and try to listen to the voices of the minority. To achieve this, we introduce DiffusPoll, a novel paradigm for poll generation based on a non-autoregressive diffusion model that can generate diversified and high-quality samples. Under the new paradigm, we design a task-specific mask strategy tailored to the inherent logic of polls to optimize controlled generation. Furthermore, we also leverage additional attribute tags from comments to enhance the generation quality. Experimental results indicate that DiffusPoll has achieved state-of-the-art performance in both the quality and diversity of poll generation tasks, and is more likely to capture the voices of the minority. | [
"Cheng, Le",
"Li, Shuangyin"
] | DiffusPoll: Conditional Text Diffusion Model for Poll Generation | findings-acl.54 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.54/ | [] | [] | [] | 0 |
||
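The task-specific mask strategy in DiffusPoll can be illustrated with a toy masking function: question tokens and option tokens are masked at different rates, so the denoiser must reconstruct options more aggressively. The rates and the `[MASK]` token below are illustrative assumptions, not the paper's exact configuration.

```python
import random

MASK = "[MASK]"

def mask_poll(question_tokens: list[str], option_tokens: list[list[str]],
              q_rate: float = 0.15, opt_rate: float = 0.5):
    """Mask question tokens lightly and option tokens heavily, pushing the
    diffusion denoiser toward diversifying the options it reconstructs."""
    masked_q = [MASK if random.random() < q_rate else t for t in question_tokens]
    masked_opts = [[MASK if random.random() < opt_rate else t for t in opt]
                   for opt in option_tokens]
    return masked_q, masked_opts

q, opts = mask_poll("which feature should we build next ?".split(),
                    [["dark", "mode"], ["offline", "sync"]])
print(q, opts)
```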
https://aclanthology.org/2024.findings-acl.55.bib | @inproceedings{li-etal-2024-exploring-mathematical,
title = "Exploring Mathematical Extrapolation of Large Language Models with Synthetic Data",
author = "Li, Haolong and
Ma, Yu and
Zhang, Yinqi and
Ye, Chen and
Chen, Jie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.55",
pages = "936--946",
abstract = "While large language models (LLMs) have shown excellent capabilities in language understanding, text generation and many other tasks, they still struggle in complex multi-step reasoning problems such as mathematical reasoning. In this paper, through a newly proposed arithmetical puzzle problem, we show that the model can perform well on multi-step reasoning tasks via fine tuning on high-quality synthetic data. Experiments with the open-llama-3B model on three different test datasets show that not only the model can reach a zero-shot pass@1 at 0.44 on the in-domain dataset, it also demonstrates certain generalization capabilities on the out-of-domain datasets. Specifically, this paper has designed two out-of-domain datasets in the form of extending the numerical range and the composing components of the arithmetical puzzle problem separately. The fine-tuned model have shown encouraging performance on these two far more difficult tasks with the zero-shot pass@1 at 0.33 and 0.35 correspondingly.",
}
| While large language models (LLMs) have shown excellent capabilities in language understanding, text generation and many other tasks, they still struggle with complex multi-step reasoning problems such as mathematical reasoning. In this paper, through a newly proposed arithmetical puzzle problem, we show that the model can perform well on multi-step reasoning tasks via fine-tuning on high-quality synthetic data. Experiments with the open-llama-3B model on three different test datasets show that not only can the model reach a zero-shot pass@1 of 0.44 on the in-domain dataset, it also demonstrates certain generalization capabilities on the out-of-domain datasets. Specifically, this paper designs two out-of-domain datasets by separately extending the numerical range and the composing components of the arithmetical puzzle problem. The fine-tuned model has shown encouraging performance on these two far more difficult tasks, with zero-shot pass@1 of 0.33 and 0.35, respectively. | [
"Li, Haolong",
"Ma, Yu",
"Zhang, Yinqi",
"Ye, Chen",
"Chen, Jie"
] | Exploring Mathematical Extrapolation of Large Language Models with Synthetic Data | findings-acl.55 | Poster | 2406.02100 | [
""
] | https://huggingface.co/papers/2406.02100 | 0 | 0 | 0 | 5 | https://aclanthology.org/2024.findings-acl.55/ | [] | [] | [] | 1 |
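Synthetic arithmetical puzzles of the kind this abstract describes can be generated programmatically, and zero-shot pass@1 is simply the fraction of puzzles solved with a single sample. The sketch below is a guess at the flavor of the task, not the paper's exact template; `model_answer` is a hypothetical stand-in tuned to reproduce the reported 0.44 rate. Widening the `lo`/`hi` bounds mimics the numerical-range out-of-domain split.

```python
import random

def make_puzzle(lo: int = 1, hi: int = 20):
    """Build a random two-operator arithmetic expression and its answer."""
    a, b, c = (random.randint(lo, hi) for _ in range(3))
    op1, op2 = random.sample(["+", "-", "*"], 2)
    expr = f"{a} {op1} {b} {op2} {c}"
    return expr, eval(expr)  # safe here: the expression is fully synthetic

def model_answer(expr: str) -> int:
    # Hypothetical stand-in for one zero-shot sample from the model.
    return eval(expr) if random.random() < 0.44 else 0

puzzles = [make_puzzle() for _ in range(1000)]
pass_at_1 = sum(model_answer(e) == y for e, y in puzzles) / len(puzzles)
print(f"zero-shot pass@1: {pass_at_1:.2f}")
```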
https://aclanthology.org/2024.findings-acl.56.bib | @inproceedings{kang-qian-2024-implanting,
title = "Implanting {LLM}{'}s Knowledge via Reading Comprehension Tree for Toxicity Detection",
author = "Kang, Hankun and
Qian, Tieyun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.56",
pages = "947--962",
abstract = "Toxicity detection plays a crucial role in maintaining the peace of the society. Existing methods can be roughly categorized as small language model (SLM) based and large language model (LLM) based. However, due to the limitation of SLMs on general knowledge and the potential embedded bias in LLMs despite their large amount of knowledge, it is not a good idea to detect toxicity only with either SLM or LLM based method.In this work, we propose to implant LLM{'}s knowledge into SLM based methods such that we can stick to both types of models{'} strengths. To this end, we develop a reading comprehension (RC) tree to transfer knowledge between two models. Specifically, we first construct the RC tree, from an extensive to intensive reading perspective, to capture the local and global information in the text. We then model samples encoded by SLM and knowledge extracted from LLM as two distributions using the constructed RT tree. We finally transfer knowledge via optimal transportation between two distributions. Extensive experiments prove the effectiveness of our method on real-world and machine-generated datasets.",
}
| Toxicity detection plays a crucial role in maintaining the peace of society. Existing methods can be roughly categorized as small language model (SLM) based and large language model (LLM) based. However, due to the limitation of SLMs on general knowledge and the potential embedded bias in LLMs despite their large amount of knowledge, it is not a good idea to detect toxicity with either an SLM or an LLM based method alone. In this work, we propose to implant the LLM{'}s knowledge into SLM based methods such that we can retain the strengths of both types of models. To this end, we develop a reading comprehension (RC) tree to transfer knowledge between the two models. Specifically, we first construct the RC tree, from an extensive to intensive reading perspective, to capture the local and global information in the text. We then model samples encoded by the SLM and knowledge extracted from the LLM as two distributions using the constructed RC tree. We finally transfer knowledge via optimal transportation between the two distributions. Extensive experiments prove the effectiveness of our method on real-world and machine-generated datasets. | [
"Kang, Hankun",
"Qian, Tieyun"
] | Implanting LLM's Knowledge via Reading Comprehension Tree for Toxicity Detection | findings-acl.56 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.56/ | [] | [] | [] | 0 |
||
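The final step of the RC-tree method — optimal transportation between the SLM-encoded sample distribution and the LLM-derived knowledge distribution — can be sketched with standard entropy-regularized Sinkhorn iterations over toy embeddings. The embeddings, uniform marginals, and Euclidean cost are placeholder assumptions; the paper's tree-structured distributions are more elaborate.

```python
import numpy as np

def sinkhorn(cost: np.ndarray, eps: float = 0.1, iters: int = 200) -> np.ndarray:
    """Entropy-regularized OT plan between two uniform distributions."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
slm_emb = rng.normal(size=(5, 16))   # samples encoded by the SLM (toy)
llm_emb = rng.normal(size=(7, 16))   # knowledge pieces from the LLM (toy)
cost = np.linalg.norm(slm_emb[:, None] - llm_emb[None, :], axis=-1)
plan = sinkhorn(cost)
print(plan.shape, plan.sum())  # (5, 7), total mass ~1.0
```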
https://aclanthology.org/2024.findings-acl.57.bib | @inproceedings{pan-etal-2024-llmlingua,
title = "{LLML}ingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression",
author = {Pan, Zhuoshi and
Wu, Qianhui and
Jiang, Huiqiang and
Xia, Menglin and
Luo, Xufang and
Zhang, Jue and
Lin, Qingwei and
R{\"u}hle, Victor and
Yang, Yuqing and
Lin, Chin-Yew and
Zhao, H. Vicky and
Qiu, Lili and
Zhang, Dongmei},
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.57",
pages = "963--981",
abstract = "This paper focuses on task-agnostic prompt compression for better generalizability and efficiency. Considering the redundancy in natural language, existing approaches compress prompts by removing tokens or lexical units according to their information entropy obtained from a causal language model such as LLaMa-7B. The challenge is that information entropy may be a suboptimal compression metric: (i) it only leverages unidirectional context and may fail to capture all essential information needed for prompt compression; (ii) it is not aligned with the prompt compression objective.To address these issues, we propose a data distillation procedure to derive knowledge from an LLM to compress prompts without losing crucial information, and meantime, introduce an extractive text compression dataset. We formulate prompt compression as a token classification problem to guarantee the faithfulness of the compressed prompt to the original one, and use a Transformer encoder as the base architecture to capture all essential information for prompt compression from the full bidirectional context. Our approach leads to lower latency by explicitly learning the compression objective with smaller models such as XLM-RoBERTa-large and mBERT.We evaluate our method on both in-domain and out-of-domain datasets, including MeetingBank, LongBench, ZeroScrolls, GSM8K, and BBH. Despite its small size, our model shows significant performance gains over strong baselines and demonstrates robust generalization ability across different LLMs. Additionally, our model is 3x-6x faster than existing prompt compression methods, while accelerating the end-to-end latency by 1.6x-2.9x with compression ratios of 2x-5x.",
}
| This paper focuses on task-agnostic prompt compression for better generalizability and efficiency. Considering the redundancy in natural language, existing approaches compress prompts by removing tokens or lexical units according to their information entropy obtained from a causal language model such as LLaMa-7B. The challenge is that information entropy may be a suboptimal compression metric: (i) it only leverages unidirectional context and may fail to capture all essential information needed for prompt compression; (ii) it is not aligned with the prompt compression objective. To address these issues, we propose a data distillation procedure to derive knowledge from an LLM to compress prompts without losing crucial information, and, in the meantime, introduce an extractive text compression dataset. We formulate prompt compression as a token classification problem to guarantee the faithfulness of the compressed prompt to the original one, and use a Transformer encoder as the base architecture to capture all essential information for prompt compression from the full bidirectional context. Our approach leads to lower latency by explicitly learning the compression objective with smaller models such as XLM-RoBERTa-large and mBERT. We evaluate our method on both in-domain and out-of-domain datasets, including MeetingBank, LongBench, ZeroScrolls, GSM8K, and BBH. Despite its small size, our model shows significant performance gains over strong baselines and demonstrates robust generalization ability across different LLMs. Additionally, our model is 3x-6x faster than existing prompt compression methods, while accelerating the end-to-end latency by 1.6x-2.9x with compression ratios of 2x-5x. | [
"Pan, Zhuoshi",
"Wu, Qianhui",
"Jiang, Huiqiang",
"Xia, Menglin",
"Luo, Xufang",
"Zhang, Jue",
"Lin, Qingwei",
"R{\\\"u}hle, Victor",
"Yang, Yuqing",
"Lin, Chin-Yew",
"Zhao, H. Vicky",
"Qiu, Lili",
"Zhang, Dongmei"
] | LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression | findings-acl.57 | Poster | 2403.12968 | [
"https://github.com/microsoft/LLMLingua"
] | https://huggingface.co/papers/2403.12968 | 5 | 24 | 4 | 13 | https://aclanthology.org/2024.findings-acl.57/ | [
"microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank",
"microsoft/llmlingua-2-xlm-roberta-large-meetingbank"
] | [
"microsoft/MeetingBank-LLMCompressed",
"microsoft/MeetingBank-QA-Summary"
] | [
"microsoft/LLMLingua",
"microsoft/llmlingua-2",
"datawithsuman/prompt_optimization",
"themanas021/llmlingua-2",
"Arafath10/llmlingua-2",
"qminh369/Compression",
"dryouviavant/llmlingua-2",
"loveitl/Promot-Compress",
"qminh369/Compression_v1",
"Almaatla/llmlingua-2",
"Oluwatoni/Llmingua",
"qminh369/Final_Compression",
"Oluwatoni/llm_lingua",
"cornzz/llmlingua-demo"
] | 1 |
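At inference time, the token-classification formulation of LLMLingua-2 reduces to keeping the tokens whose predicted "preserve" probability ranks highest under a target compression rate. The sketch below stubs the classifier with random scores; in practice the released `microsoft/llmlingua-2-*` checkpoints listed above supply these probabilities.

```python
import random

def compress(tokens: list[str], keep_prob: list[float], rate: float = 0.5) -> list[str]:
    """Keep the top `rate` fraction of tokens by preserve-probability,
    in their original order, as in token-classification compression."""
    k = max(1, int(len(tokens) * rate))
    threshold = sorted(keep_prob, reverse=True)[k - 1]
    return [t for t, p in zip(tokens, keep_prob) if p >= threshold][:k]

tokens = "please summarize the main decisions from the meeting transcript".split()
probs = [random.random() for _ in tokens]  # stand-in for classifier outputs
print(" ".join(compress(tokens, probs, rate=0.4)))
```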
https://aclanthology.org/2024.findings-acl.58.bib | @inproceedings{guo-yang-2024-econnli,
title = "{E}con{NLI}: Evaluating Large Language Models on Economics Reasoning",
author = "Guo, Yue and
Yang, Yi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.58",
pages = "982--994",
abstract = "Large Language Models (LLMs) are widely used for writing economic analysis reports or providing financial advice, but their ability to understand economic knowledge and reason about potential results of specific economic events lacks systematic evaluation. To address this gap, we propose a new dataset, natural language inference on economic events (EconNLI), to evaluate LLMs{'} knowledge and reasoning abilities in the economic domain. We evaluate LLMs on (1) their ability to correctly classify whether a premise event will cause a hypothesis event and (2) their ability to generate reasonable events resulting from a given premise. Our experiments reveal that LLMs are not sophisticated in economic reasoning and may generate wrong or hallucinated answers. Our study raises awareness of the limitations of using LLMs for critical decision-making involving economic reasoning and analysis. The dataset and codes are available at \url{https://github.com/Irenehere/EconNLI}.",
}
| Large Language Models (LLMs) are widely used for writing economic analysis reports or providing financial advice, but their ability to understand economic knowledge and reason about potential results of specific economic events lacks systematic evaluation. To address this gap, we propose a new dataset, natural language inference on economic events (EconNLI), to evaluate LLMs{'} knowledge and reasoning abilities in the economic domain. We evaluate LLMs on (1) their ability to correctly classify whether a premise event will cause a hypothesis event and (2) their ability to generate reasonable events resulting from a given premise. Our experiments reveal that LLMs are not sophisticated in economic reasoning and may generate wrong or hallucinated answers. Our study raises awareness of the limitations of using LLMs for critical decision-making involving economic reasoning and analysis. The dataset and codes are available at \url{https://github.com/Irenehere/EconNLI}. | [
"Guo, Yue",
"Yang, Yi"
] | EconNLI: Evaluating Large Language Models on Economics Reasoning | findings-acl.58 | Poster | 2407.01212 | [
"https://github.com/irenehere/econnli"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.58/ | [] | [] | [] | 0 |
|
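The first EconNLI evaluation setting — classifying whether a premise event causes a hypothesis event — is a straightforward accuracy loop. In the sketch below, `llm_predict` is a hypothetical stand-in for a prompted model, and the two example pairs are invented for illustration rather than drawn from the dataset.

```python
examples = [
    # (premise event, hypothesis event, gold: does premise cause hypothesis?)
    ("the central bank raises interest rates", "borrowing costs increase", True),
    ("the central bank raises interest rates", "inflation accelerates sharply", False),
]

def llm_predict(premise: str, hypothesis: str) -> bool:
    # Hypothetical stand-in: prompt an LLM and parse a yes/no answer.
    return "increase" in hypothesis

correct = sum(llm_predict(p, h) == y for p, h, y in examples)
print(f"accuracy: {correct / len(examples):.2f}")
```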
https://aclanthology.org/2024.findings-acl.59.bib | @inproceedings{li-etal-2024-better,
title = "Better Late Than Never: Model-Agnostic Hallucination Post-Processing Framework Towards Clinical Text Summarization",
author = "Li, Songda and
Zhang, Yunqi and
Deng, Chunyuan and
Niu, Yake and
Zhao, Hui",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.59",
pages = "995--1011",
abstract = "Clinical text summarization has proven successful in generating concise and coherent summaries. However, these summaries may include unintended text with hallucinations, which can mislead clinicians and patients. Existing methods for mitigating hallucinations can be categorized into task-specific and task-agnostic approaches. Task-specific methods lack versatility for real-world applicability. Meanwhile, task-agnostic methods are not model-agnostic, so they require retraining for different models, resulting in considerable computational costs. To address these challenges, we propose MEDAL, a model-agnostic framework designed to post-process medical hallucinations. MEDAL can seamlessly integrate with any medical summarization model, requiring no additional computational overhead. MEDAL comprises a medical infilling model and a hallucination correction model. The infilling model generates non-factual summaries with common errors to train the correction model. The correction model is incorporated with a self-examination mechanism to activate its cognitive capability. We conduct comprehensive experiments using 11 widely accepted metrics on 7 baseline models across 3 medical text summarization tasks. MEDAL demonstrates superior performance in correcting hallucinations when applied to summaries generated by pre-trained language models and large language models.",
}
| Clinical text summarization has proven successful in generating concise and coherent summaries. However, these summaries may include unintended text with hallucinations, which can mislead clinicians and patients. Existing methods for mitigating hallucinations can be categorized into task-specific and task-agnostic approaches. Task-specific methods lack versatility for real-world applicability. Meanwhile, task-agnostic methods are not model-agnostic, so they require retraining for different models, resulting in considerable computational costs. To address these challenges, we propose MEDAL, a model-agnostic framework designed to post-process medical hallucinations. MEDAL can seamlessly integrate with any medical summarization model, requiring no additional computational overhead. MEDAL comprises a medical infilling model and a hallucination correction model. The infilling model generates non-factual summaries with common errors to train the correction model. The correction model is incorporated with a self-examination mechanism to activate its cognitive capability. We conduct comprehensive experiments using 11 widely accepted metrics on 7 baseline models across 3 medical text summarization tasks. MEDAL demonstrates superior performance in correcting hallucinations when applied to summaries generated by pre-trained language models and large language models. | [
"Li, Songda",
"Zhang, Yunqi",
"Deng, Chunyuan",
"Niu, Yake",
"Zhao, Hui"
] | Better Late Than Never: Model-Agnostic Hallucination Post-Processing Framework Towards Clinical Text Summarization | findings-acl.59 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.59/ | [] | [] | [] | 0 |
||
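MEDAL's infilling stage manufactures training data for the corrector by injecting common error types into clean summaries. A toy rule-based version with number-swapping corruption is sketched below — one plausible error type chosen for illustration; the paper's infilling model is learned, not rule-based.

```python
import random
import re

def corrupt_numbers(summary: str) -> str:
    """Swap each number for a nearby wrong one to mimic a factual error."""
    return re.sub(r"\d+",
                  lambda m: str(int(m.group()) + random.choice([-2, -1, 1, 2])),
                  summary)

clean = "Patient received 40 mg of furosemide over 2 days."
pairs = [(corrupt_numbers(clean), clean)]  # (hallucinated input, gold target)
print(pairs[0])
```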
https://aclanthology.org/2024.findings-acl.60.bib | @inproceedings{pan-etal-2024-finding,
title = "Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers",
author = "Pan, Haowen and
Cao, Yixin and
Wang, Xiaozhi and
Yang, Xun and
Wang, Meng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.60",
pages = "1012--1037",
abstract = "Understanding the internal mechanisms by which multi-modal large language models (LLMs) interpret different modalities and integrate cross-modal representations is becoming increasingly critical for continuous improvements in both academia and industry. In this paper, we propose a novel method to identify key neurons for interpretability {---} how multi-modal LLMs bridge visual and textual concepts for captioning. Our method improves conventional works upon efficiency and applied range by removing needs of costly gradient computation. Based on those identified neurons, we further design a multi-modal knowledge editing method, beneficial to mitigate sensitive words or hallucination. For rationale of our design, we provide theoretical assumption. For empirical evaluation, we have conducted extensive quantitative and qualitative experiments. The results not only validate the effectiveness of our methods, but also offer insightful findings that highlight three key properties of multi-modal neurons: sensitivity, specificity and causal-effect, to shed light for future research.",
}
| Understanding the internal mechanisms by which multi-modal large language models (LLMs) interpret different modalities and integrate cross-modal representations is becoming increasingly critical for continuous improvements in both academia and industry. In this paper, we propose a novel method to identify key neurons for interpretability {---} how multi-modal LLMs bridge visual and textual concepts for captioning. Our method improves on conventional works in efficiency and applicable range by removing the need for costly gradient computation. Based on those identified neurons, we further design a multi-modal knowledge editing method, beneficial for mitigating sensitive words or hallucination. For the rationale of our design, we provide a theoretical assumption. For empirical evaluation, we have conducted extensive quantitative and qualitative experiments. The results not only validate the effectiveness of our methods, but also offer insightful findings that highlight three key properties of multi-modal neurons: sensitivity, specificity and causal-effect, to shed light on future research. | [
"Pan, Haowen",
"Cao, Yixin",
"Wang, Xiaozhi",
"Yang, Xun",
"Wang, Meng"
] | Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers | findings-acl.60 | Poster | 2311.07470 | [
"https://github.com/opanhw/MM_Neurons"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.60/ | [] | [] | [] | 0 |
|
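A gradient-free neuron score of the kind this abstract describes can be computed by projecting each FFN neuron's contribution onto the unembedding direction of a target caption token — activation times output-weight projection. The shapes and the exact scoring rule below are simplified assumptions about the general recipe, with random matrices standing in for real model weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ffn, vocab = 64, 256, 1000
W_out = rng.normal(size=(d_ffn, d_model))  # FFN down-projection rows (toy)
E = rng.normal(size=(vocab, d_model))      # unembedding matrix (toy)
act = rng.normal(size=d_ffn)               # neuron activations on one token

target_token = 42  # e.g., the vocabulary id of "dog" in a caption
# Contribution of neuron i to the target token's logit: a_i * (W_out[i] . E[t]).
# No gradients are needed -- only a forward pass and a weight projection.
scores = act * (W_out @ E[target_token])
top_neurons = np.argsort(-np.abs(scores))[:10]
print(top_neurons)
```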
https://aclanthology.org/2024.findings-acl.61.bib | @inproceedings{luong-etal-2024-realistic,
title = "Realistic Evaluation of Toxicity in Large Language Models",
author = "Luong, Tinh and
Le, Thanh-Thien and
Ngo, Linh and
Nguyen, Thien",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.61",
pages = "1038--1047",
abstract = "Large language models (LLMs) have become integral to our professional workflows and daily lives. Nevertheless, these machine companions of ours have a critical flaw: the huge amount of data which endows them with vast and diverse knowledge, also exposes them to the inevitable toxicity and bias. While most LLMs incorporate defense mechanisms to prevent the generation of harmful content, these safeguards can be easily bypassed with minimal prompt engineering. In this paper, we introduce the new Thoroughly Engineered Toxicity (TET) dataset, comprising manually crafted prompts designed to nullify the protective layers of such models. Through extensive evaluations, we demonstrate the pivotal role of TET in providing a rigorous benchmark for evaluation of toxicity awareness in several popular LLMs: it highlights the toxicity in the LLMs that might remain hidden when using normal prompts, thus revealing subtler issues in their behavior.",
}
| Large language models (LLMs) have become integral to our professional workflows and daily lives. Nevertheless, these machine companions of ours have a critical flaw: the huge amount of data which endows them with vast and diverse knowledge also exposes them to inevitable toxicity and bias. While most LLMs incorporate defense mechanisms to prevent the generation of harmful content, these safeguards can be easily bypassed with minimal prompt engineering. In this paper, we introduce the new Thoroughly Engineered Toxicity (TET) dataset, comprising manually crafted prompts designed to nullify the protective layers of such models. Through extensive evaluations, we demonstrate the pivotal role of TET in providing a rigorous benchmark for the evaluation of toxicity awareness in several popular LLMs: it highlights the toxicity in the LLMs that might remain hidden when using normal prompts, thus revealing subtler issues in their behavior. | [
"Luong, Tinh",
"Le, Thanh-Thien",
"Ngo, Linh",
"Nguyen, Thien"
] | Realistic Evaluation of Toxicity in Large Language Models | findings-acl.61 | Poster | 2405.10659 | [
""
] | https://huggingface.co/papers/2405.10659 | 3 | 2 | 0 | 4 | https://aclanthology.org/2024.findings-acl.61/ | [] | [
"convoicon/Thoroughly_Engineered_Toxicity"
] | [] | 1 |
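A minimal evaluation loop over the released TET prompts might look like the following. The dataset repo id is taken from the row above, but the split name, the field-access pattern, and the choice of `unitary/toxic-bert` as the toxicity scorer are all assumptions on my part, not the paper's protocol.

```python
from datasets import load_dataset
from transformers import pipeline

# Repo id from the row above; "train" split is an assumption.
tet = load_dataset("convoicon/Thoroughly_Engineered_Toxicity", split="train")
scorer = pipeline("text-classification", model="unitary/toxic-bert")

def generate(prompt: str) -> str:
    # Hypothetical stand-in for the LLM under evaluation.
    return "model response to: " + prompt

for row in list(tet)[:3]:
    prompt = row[next(iter(row))]  # field name unknown; take the first column
    response = generate(prompt)
    print(scorer(response[:512])[0])  # toxicity label and score per response
```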
https://aclanthology.org/2024.findings-acl.62.bib | @inproceedings{zhang-etal-2024-controllable,
title = "Controllable Text Generation with Residual Memory Transformer",
author = "Zhang, Hanqing and
Sun, Si and
Wu, Haiming and
Song, Dawei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.62",
pages = "1048--1066",
abstract = "Large-scale Causal Language Models (CLMs), e.g., GPT3 and ChatGPT, have brought great success in text generation. However, it is still an open challenge to effectively control the generation process of a CLM while balancing the flexibility, control granularity, and generation efficiency. In this paper, we provide a new alternative for controllable text generation (CTG), by designing a non-intrusive, lightweight control plugin, namely Residual Memory Transformer (RMT), to accompany the generation of CLM at arbitrary time steps. With an encoder-decoder setup, RMT can accept any types of control conditions and cooperate with the base CLM through a residual learning paradigm, to achieve a more flexible, general, and efficient CTG. Extensive experiments are carried out on various control tasks, in the form of both automatic and human evaluations. The results demonstrate the superiority of RMT over a wide range of state-of-the-art CTG approaches. The code implementation of our work is available at: https://github.com/Residual{\_}Memory{\_}Transformer.",
}
| Large-scale Causal Language Models (CLMs), e.g., GPT3 and ChatGPT, have brought great success in text generation. However, it is still an open challenge to effectively control the generation process of a CLM while balancing flexibility, control granularity, and generation efficiency. In this paper, we provide a new alternative for controllable text generation (CTG), by designing a non-intrusive, lightweight control plugin, namely the Residual Memory Transformer (RMT), to accompany the generation of the CLM at arbitrary time steps. With an encoder-decoder setup, RMT can accept any type of control condition and cooperate with the base CLM through a residual learning paradigm, to achieve more flexible, general, and efficient CTG. Extensive experiments are carried out on various control tasks, in the form of both automatic and human evaluations. The results demonstrate the superiority of RMT over a wide range of state-of-the-art CTG approaches. The code implementation of our work is available at: https://github.com/Residual{\_}Memory{\_}Transformer. | [
"Zhang, Hanqing",
"Sun, Si",
"Wu, Haiming",
"Song, Dawei"
] | Controllable Text Generation with Residual Memory Transformer | findings-acl.62 | Poster | 2309.16231 | [
"https://github.com/littlehacker26/discriminator-cooperative-unlikelihood-prompt-tuning"
] | https://huggingface.co/papers/2309.16231 | 0 | 1 | 0 | 4 | https://aclanthology.org/2024.findings-acl.62/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.findings-acl.63.bib | @inproceedings{jie-etal-2024-prompt,
title = "Prompt-Based Length Controlled Generation with Multiple Control Types",
author = "Jie, Renlong and
Meng, Xiaojun and
Shang, Lifeng and
Jiang, Xin and
Liu, Qun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.63",
pages = "1067--1085",
abstract = "Large language models (LLMs) have attracted great attention given their strong performance on a wide range of NLP tasks. In practice, users often expect generated texts to fall within a specific length range, making length controlled generation an important topic, especially for GPT-style models. Existing length control methods mostly focus on a simple control type of {``}equal to{''} a target length. Different from them, we propose a prompt-based method to achieve length controlled generation under different control types with high accuracy. In particular, we adopt reinforcement learning (RL) and sample filtering with the reward signal given by rule-based reward models, which enhances the length control ability of models by rewarding outputs that follow certain control instructions. In addition, we introduce a standard prompt extractor to parse arbitrary users{'} input into standard control instructions. Experiments show that our method significantly improves the accuracy of prompt-based length control on popular summarization datasets like CNNDM and NYT under multiple control types. Moreover, both the standard prompt extractor and RL-tuned model show strong generalization to unseen control prompt templates.",
}
| Large language models (LLMs) have attracted great attention given their strong performance on a wide range of NLP tasks. In practice, users often expect generated texts to fall within a specific length range, making length controlled generation an important topic, especially for GPT-style models. Existing length control methods mostly focus on a simple control type of {``}equal to{''} a target length. Different from them, we propose a prompt-based method to achieve length controlled generation under different control types with high accuracy. In particular, we adopt reinforcement learning (RL) and sample filtering with the reward signal given by rule-based reward models, which enhances the length control ability of models by rewarding outputs that follow certain control instructions. In addition, we introduce a standard prompt extractor to parse arbitrary users{'} input into standard control instructions. Experiments show that our method significantly improves the accuracy of prompt-based length control on popular summarization datasets like CNNDM and NYT under multiple control types. Moreover, both the standard prompt extractor and RL-tuned model show strong generalization to unseen control prompt templates. | [
"Jie, Renlong",
"Meng, Xiaojun",
"Shang, Lifeng",
"Jiang, Xin",
"Liu, Qun"
] | Prompt-Based Length Controlled Generation with Multiple Control Types | findings-acl.63 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.63/ | [] | [] | [] | 0 |
||
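The rule-based reward models this abstract relies on are simple to write down for the control types mentioned. The sketch below scores a generated length against four control types; the binary shaping is an assumption, since the paper's exact reward formulation is not reproduced here.

```python
def length_reward(n_tokens: int, control: str, lo: int, hi: int | None = None) -> float:
    """Binary reward for length-control types; 'between' needs both bounds."""
    if control == "equal":
        return float(n_tokens == lo)
    if control == "at most":
        return float(n_tokens <= lo)
    if control == "at least":
        return float(n_tokens >= lo)
    if control == "between":
        return float(lo <= n_tokens <= (hi if hi is not None else lo))
    raise ValueError(f"unknown control type: {control}")

print(length_reward(48, "between", 40, 60))  # 1.0
```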
https://aclanthology.org/2024.findings-acl.64.bib | @inproceedings{chen-etal-2024-pca,
title = "{PCA}-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain",
author = "Chen, Liang and
Zhang, Yichi and
Ren, Shuhuai and
Zhao, Haozhe and
Cai, Zefan and
Wang, Yuchi and
Wang, Peiyi and
Meng, Xiangdi and
Liu, Tianyu and
Chang, Baobao",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.64",
pages = "1086--1104",
abstract = "We present PCA-Bench, a multimodal decision-making benchmark for evaluating the integrated capabilities of Multimodal Large Language Models (MLLMs). Departing from previous benchmarks focusing on simplistic tasks and individual model capability, PCA-Bench introduces three complex scenarios: autonomous driving, domestic robotics, and open-world games. Given task instructions and diverse contexts, the model is required to seamlessly integrate multiple capabilities of Perception, Cognition, and Action in a reasoning chain to make accurate decisions. Moreover, PCA-Bench features error localization capabilities, scrutinizing model inaccuracies in areas such as perception, knowledge, or reasoning. This enhances the reliability of deploying MLLMs. To balance accuracy and efficiency in evaluation, we propose PCA-Eval, an automatic evaluation protocol, and assess 10 prevalent MLLMs. The results reveal significant performance disparities between open-source models and powerful proprietary models like GPT-4 Vision. To address this, we introduce Embodied-Instruction-Evolution (EIE), an automatic framework for synthesizing instruction tuning examples in multimodal embodied environments. EIE generates 7,510 training examples in PCA-Bench and enhances the performance of open-source MLLMs, occasionally surpassing GPT-4 Vision (+3{\%} in decision accuracy), thereby validating the effectiveness of EIE. Our findings suggest that robust MLLMs like GPT4-Vision show promise for decision-making in embodied agents, opening new avenues for MLLM research. All benchmark data and evaluation code are made public.",
}
| We present PCA-Bench, a multimodal decision-making benchmark for evaluating the integrated capabilities of Multimodal Large Language Models (MLLMs). Departing from previous benchmarks focusing on simplistic tasks and individual model capability, PCA-Bench introduces three complex scenarios: autonomous driving, domestic robotics, and open-world games. Given task instructions and diverse contexts, the model is required to seamlessly integrate multiple capabilities of Perception, Cognition, and Action in a reasoning chain to make accurate decisions. Moreover, PCA-Bench features error localization capabilities, scrutinizing model inaccuracies in areas such as perception, knowledge, or reasoning. This enhances the reliability of deploying MLLMs. To balance accuracy and efficiency in evaluation, we propose PCA-Eval, an automatic evaluation protocol, and assess 10 prevalent MLLMs. The results reveal significant performance disparities between open-source models and powerful proprietary models like GPT-4 Vision. To address this, we introduce Embodied-Instruction-Evolution (EIE), an automatic framework for synthesizing instruction tuning examples in multimodal embodied environments. EIE generates 7,510 training examples in PCA-Bench and enhances the performance of open-source MLLMs, occasionally surpassing GPT-4 Vision (+3{\%} in decision accuracy), thereby validating the effectiveness of EIE. Our findings suggest that robust MLLMs like GPT4-Vision show promise for decision-making in embodied agents, opening new avenues for MLLM research. All benchmark data and evaluation code are made public. | [
"Chen, Liang",
"Zhang, Yichi",
"Ren, Shuhuai",
"Zhao, Haozhe",
"Cai, Zefan",
"Wang, Yuchi",
"Wang, Peiyi",
"Meng, Xiangdi",
"Liu, Tianyu",
"Chang, Baobao"
] | PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain | findings-acl.64 | Poster | 2402.15527 | [
"https://github.com/pkunlp-icler/pca-eval"
] | https://huggingface.co/papers/2402.15527 | 3 | 0 | 1 | 10 | https://aclanthology.org/2024.findings-acl.64/ | [] | [] | [] | 1 |
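PCA-Bench's error localization can be rendered as per-dimension scoring of a structured model answer against gold annotations, with the earliest failing stage pinpointing where the reasoning chain broke. The field names and example below are illustrative assumptions, not the benchmark's actual schema.

```python
def localize_errors(pred: dict, gold: dict) -> dict:
    """Compare perception/cognition/action fields and flag the first failure."""
    stages = ("perception", "cognition", "action")
    report = {s: pred.get(s) == gold.get(s) for s in stages}
    # The earliest failing stage localizes the error in the chain.
    report["first_error"] = next((s for s in stages if not report[s]), None)
    return report

pred = {"perception": "red light", "cognition": "must stop", "action": "accelerate"}
gold = {"perception": "red light", "cognition": "must stop", "action": "brake"}
print(localize_errors(pred, gold))  # action fails; error localized to "action"
```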
https://aclanthology.org/2024.findings-acl.65.bib | @inproceedings{kim-etal-2024-pearl,
title = "Pearl: A Review-driven Persona-Knowledge Grounded Conversational Recommendation Dataset",
author = "Kim, Minjin and
Kim, Minju and
Kim, Hana and
Kwak, Beong-woo and
Kang, SeongKu and
Yu, Youngjae and
Yeo, Jinyoung and
Lee, Dongha",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.65",
pages = "1105--1120",
abstract = "Conversational recommender systems are an emerging area that has garnered increasing interest in the community, especially with the advancements in large language models (LLMs) that enable sophisticated handling of conversational input. Despite the progress, the field still has many aspects left to explore. The currently available public datasets for conversational recommendation lack specific user preferences and explanations for recommendations, hindering high-quality recommendations. To address such challenges, we present a novel conversational recommendation dataset named PEARL, synthesized with persona- and knowledge-augmented LLM simulators. We obtain detailed persona and knowledge from real-world reviews and construct a large-scale dataset with over 57k dialogues. Our experimental results demonstrate that PEARL contains more specific user preferences, show expertise in the target domain, and provides recommendations more relevant to the dialogue context than those in prior datasets. Furthermore, we demonstrate the utility of PEARL by showing that our downstream models outperform baselines in both human and automatic evaluations. We release our dataset and code.",
}
| Conversational recommender systems are an emerging area that has garnered increasing interest in the community, especially with the advancements in large language models (LLMs) that enable sophisticated handling of conversational input. Despite the progress, the field still has many aspects left to explore. The currently available public datasets for conversational recommendation lack specific user preferences and explanations for recommendations, hindering high-quality recommendations. To address such challenges, we present a novel conversational recommendation dataset named PEARL, synthesized with persona- and knowledge-augmented LLM simulators. We obtain detailed persona and knowledge from real-world reviews and construct a large-scale dataset with over 57k dialogues. Our experimental results demonstrate that PEARL contains more specific user preferences, shows expertise in the target domain, and provides recommendations more relevant to the dialogue context than those in prior datasets. Furthermore, we demonstrate the utility of PEARL by showing that our downstream models outperform baselines in both human and automatic evaluations. We release our dataset and code. | [
"Kim, Minjin",
"Kim, Minju",
"Kim, Hana",
"Kwak, Beong-woo",
"Kang, SeongKu",
"Yu, Youngjae",
"Yeo, Jinyoung",
"Lee, Dongha"
] | Pearl: A Review-driven Persona-Knowledge Grounded Conversational Recommendation Dataset | findings-acl.65 | Poster | 2403.04460 | [
"https://github.com/kkmjkim/pearl"
] | https://huggingface.co/papers/2403.04460 | 4 | 0 | 0 | 10 | https://aclanthology.org/2024.findings-acl.65/ | [] | [
"DLI-Lab/pearl"
] | [] | 1 |
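Since the PEARL dataset is on the Hub (see the `DLI-Lab/pearl` entry in the row above), a minimal loading sketch follows; the split name and the idea of inspecting the first record are assumptions about the repo layout.

```python
from datasets import load_dataset

# Repo id from the row above; "train" split is an assumption.
pearl = load_dataset("DLI-Lab/pearl", split="train")
print(pearl)     # inspect the features/columns the repo actually exposes
print(pearl[0])  # one persona-grounded recommendation dialogue
```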
https://aclanthology.org/2024.findings-acl.66.bib | @inproceedings{lee-etal-2024-collavo,
title = "{C}o{LL}a{VO}: Crayon Large Language and Vision m{O}del",
author = "Lee, Byung-Kwan and
Park, Beomchan and
Kim, Chae Won and
Ro, Yong Man",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.66",
pages = "1121--1138",
abstract = "The remarkable success of Large Language Models (LLMs) and instruction tuning drives the evolution of Vision Language Models (VLMs) towards a versatile general-purpose model. Yet, it remains unexplored whether current VLMs genuinely possess quality object-level image understanding capabilities determined from {`}what objects are in the image?{'} or {`}which object corresponds to a specified bounding box?{'}. Our findings reveal that the image understanding capabilities of current VLMs are strongly correlated with their zero-shot performance on vision language (VL) tasks. This suggests that prioritizing basic image understanding is crucial for VLMs to excel at VL tasks. To enhance object-level image understanding, we propose Crayon Large Language and Vision mOdel (CoLLaVO), which incorporates instruction tuning with Crayon Prompt as a new visual prompt tuning scheme based on panoptic color maps. Furthermore, we present a learning strategy of Dual QLoRA to preserve object-level image understanding without forgetting it during visual instruction tuning, thereby achieving a significant leap in numerous VL benchmarks in a zero-shot setting.",
}
| The remarkable success of Large Language Models (LLMs) and instruction tuning drives the evolution of Vision Language Models (VLMs) towards a versatile general-purpose model. Yet, it remains unexplored whether current VLMs genuinely possess quality object-level image understanding capabilities determined from {`}what objects are in the image?{'} or {`}which object corresponds to a specified bounding box?{'}. Our findings reveal that the image understanding capabilities of current VLMs are strongly correlated with their zero-shot performance on vision language (VL) tasks. This suggests that prioritizing basic image understanding is crucial for VLMs to excel at VL tasks. To enhance object-level image understanding, we propose Crayon Large Language and Vision mOdel (CoLLaVO), which incorporates instruction tuning with Crayon Prompt as a new visual prompt tuning scheme based on panoptic color maps. Furthermore, we present a learning strategy of Dual QLoRA to preserve object-level image understanding without forgetting it during visual instruction tuning, thereby achieving a significant leap in numerous VL benchmarks in a zero-shot setting. | [
"Lee, Byung-Kwan",
"Park, Beomchan",
"Kim, Chae Won",
"Ro, Yong Man"
] | CoLLaVO: Crayon Large Language and Vision mOdel | findings-acl.66 | Poster | 2402.11248 | [
"https://github.com/ByungKwanLee/CoLLaVO"
] | https://huggingface.co/papers/2402.11248 | 4 | 18 | 5 | 4 | https://aclanthology.org/2024.findings-acl.66/ | [] | [] | [] | 1 |
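CoLLaVO's Dual QLoRA strategy builds on standard 4-bit QLoRA. A minimal single-adapter configuration is sketched below; the base model id, rank, and target modules are illustrative assumptions, and the paper's dual-adapter schedule (freezing one adapter while training the other to avoid forgetting) is not reproduced here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (QLoRA-style).
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative base model, not CoLLaVO's
    quantization_config=bnb)

# One LoRA adapter; CoLLaVO would pair this with a second, frozen adapter.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```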
https://aclanthology.org/2024.findings-acl.67.bib | @inproceedings{wu-etal-2024-modelling,
title = "Modelling Variability in Human Annotator Simulation",
author = "Wu, Wen and
Chen, Wenlin and
Zhang, Chao and
Woodland, Phil",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.67",
pages = "1139--1157",
abstract = "Human annotator simulation (HAS) serves as a cost-effective substitute for human evaluation tasks such as data annotation and system assessment. It is important to incorporate the variability present in human evaluation into HAS, since it helps capture diverse subjective interpretations and mitigate potential biases and over-representation. This work introduces a novel framework for modelling variability in HAS. Conditional softmax flow (S-CNF) is proposed to model the distribution of subjective human annotations, which leverages diverse human annotations via meta-learning. This enables efficient generation of annotations that exhibit human variability for unlabelled input. In addition, a wide range of evaluation metrics are adopted to assess the capability and efficiency of HAS systems in predicting the aggregated behaviours of human annotators, matching the distribution of human annotations, and simulating the inter-annotator disagreements. Results demonstrate that the proposed method achieves state-of-the-art performance on two real-world human evaluation tasks: emotion recognition and toxic speech detection.",
}
| Human annotator simulation (HAS) serves as a cost-effective substitute for human evaluation tasks such as data annotation and system assessment. It is important to incorporate the variability present in human evaluation into HAS, since it helps capture diverse subjective interpretations and mitigate potential biases and over-representation. This work introduces a novel framework for modelling variability in HAS. Conditional softmax flow (S-CNF) is proposed to model the distribution of subjective human annotations, which leverages diverse human annotations via meta-learning. This enables efficient generation of annotations that exhibit human variability for unlabelled input. In addition, a wide range of evaluation metrics are adopted to assess the capability and efficiency of HAS systems in predicting the aggregated behaviours of human annotators, matching the distribution of human annotations, and simulating the inter-annotator disagreements. Results demonstrate that the proposed method achieves state-of-the-art performance on two real-world human evaluation tasks: emotion recognition and toxic speech detection. | [
"Wu, Wen",
"Chen, Wenlin",
"Zhang, Chao",
"Woodl",
", Phil"
] | Modelling Variability in Human Annotator Simulation | findings-acl.67 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.findings-acl.67/ | [] | [] | [] | 0 |
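Simulating annotators from a predicted label distribution, and measuring the disagreement that variability induces, can be sketched directly. In the snippet below the distribution is hardcoded for one utterance rather than produced by a conditional softmax flow, and the label set is an invented emotion-recognition example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Predicted distribution over emotion labels for one utterance (illustrative).
labels = ["neutral", "happy", "angry", "sad"]
p = np.array([0.5, 0.3, 0.15, 0.05])

# Draw 10 simulated annotators and quantify their disagreement via entropy.
annotations = rng.choice(labels, size=10, p=p)
_, counts = np.unique(annotations, return_counts=True)
freqs = counts / counts.sum()
entropy = -(freqs * np.log(freqs)).sum()
print(list(annotations), f"disagreement entropy: {entropy:.3f}")
```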