AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning
Abstract
Large language models (LLMs) have enabled the creation of multi-modal LLMs that exhibit strong comprehension of visual data such as images and videos. However, these models usually rely on extensive visual tokens from visual encoders, leading to high computational demands, which limits their applicability in resource-constrained environments and for long-context tasks. In this work, we propose a training-free adaptive inference method for multi-modal LLMs that can accommodate a broad range of efficiency requirements with minimal performance degradation. Our method consists of a) iterative token merging based on embedding similarity before the LLM, and b) progressive token pruning within LLM layers based on multi-modal importance. With a minimalist design, our method can be applied to both video and image LLMs. Extensive experiments on diverse video and image benchmarks demonstrate that our method substantially reduces computational load (e.g., a 7-fold reduction in FLOPs) while preserving the performance of video and image LLMs. Furthermore, under a similar computational cost, our method outperforms state-of-the-art methods in long video understanding (e.g., +4.6 on MLVU). Additionally, our in-depth analysis provides insights into token redundancy and LLM layer behaviors, offering guidance for future research in designing efficient multi-modal LLMs. Our code will be available at https://github.com/LaVi-Lab/AIM.
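The similarity-based merging step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the bipartite split, and the averaging rule are assumptions, loosely following bipartite soft matching in the spirit of ToMe, which merges the most similar pairs of visual token embeddings by cosine similarity.

```python
import numpy as np

def merge_most_similar(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge r pairs of the most cosine-similar visual tokens.

    tokens: (N, D) array of visual token embeddings.
    Returns an array with N - r token embeddings.
    Hypothetical sketch; the actual AIM merging rule may differ.
    """
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    # Bipartite split into two alternating sets, as in bipartite soft matching.
    a, b = normed[::2], normed[1::2]
    sim = a @ b.T                       # (|A|, |B|) cosine similarities
    best_b = sim.argmax(axis=1)         # best partner in B for each A-token
    best_sim = sim.max(axis=1)
    merge_idx = np.argsort(-best_sim)[:r]   # A-tokens to merge away

    tok_a, tok_b = tokens[::2].copy(), tokens[1::2].copy()
    counts_b = np.ones(len(tok_b))      # running-average weights per B-token
    for i in merge_idx:
        j = best_b[i]
        tok_b[j] = (tok_b[j] * counts_b[j] + tok_a[i]) / (counts_b[j] + 1)
        counts_b[j] += 1
    keep_a = np.setdiff1d(np.arange(len(tok_a)), merge_idx)
    return np.concatenate([tok_a[keep_a], tok_b], axis=0)
```

Applying such a function repeatedly (an "iterative" schedule) lets the token count, and hence the FLOPs budget, be tuned continuously at inference time without retraining.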
Community
The following related papers were recommended by the Semantic Scholar API:
- ATP-LLaVA: Adaptive Token Pruning for Large Vision Language Models (2024)
- Treat Visual Tokens as Text? But Your MLLM Only Needs Fewer Efforts to See (2024)
- Efficient Multi-modal Large Language Models via Visual Token Grouping (2024)
- [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster (2024)
- FoPru: Focal Pruning for Efficient Large Vision-Language Models (2024)
- PAR: Prompt-Aware Token Reduction Method for Efficient Large Multimodal Models (2024)
- freePruner: A Training-free Approach for Large Multimodal Model Acceleration (2024)