Lumos: Empowering Multimodal LLMs with Scene Text Recognition
Abstract
We introduce Lumos, the first end-to-end multimodal question-answering system with text-understanding capabilities. At the core of Lumos is a Scene Text Recognition (STR) component that extracts text from first-person point-of-view images, the output of which is used to augment the input to a Multimodal Large Language Model (MM-LLM). While building Lumos, we encountered numerous challenges related to STR quality, overall latency, and model inference. In this paper, we delve into these challenges and discuss the system architecture, design choices, and modeling techniques employed to overcome them. We also provide a comprehensive evaluation of each component, demonstrating high quality and efficiency.
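To make the described pipeline concrete, here is a minimal Python sketch of the STR-augmented flow the abstract outlines: recognized scene text is folded into the prompt before the MM-LLM is queried. All function names, the `STRResult` structure, and the prompt template are hypothetical illustrations for exposition; the paper only states that STR output augments the MM-LLM input, not this exact API.

```python
"""Hypothetical sketch of a Lumos-style pipeline:
Scene Text Recognition (STR) output augments the prompt sent to a
Multimodal LLM. Names and formats below are assumptions, not the
paper's actual implementation."""

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class STRResult:
    """One recognized text line with its image-plane bounding box."""
    text: str
    bbox: Tuple[int, int, int, int]  # (x0, y0, x1, y1); layout is assumed


def run_str(image_bytes: bytes) -> List[STRResult]:
    """Placeholder for the paper's STR component (detection + recognition).

    A real system would run a text detector and recognizer over the
    first-person point-of-view image; a canned result is returned here
    so the sketch stays runnable.
    """
    return [STRResult(text="AMOXICILLIN 500 MG", bbox=(120, 40, 480, 90))]


def build_prompt(question: str, str_results: List[STRResult]) -> str:
    """Augment the user question with recognized scene text.

    The prompt format is an assumption for illustration.
    """
    scene_text = "\n".join(r.text for r in str_results)
    return (
        f"Scene text extracted from the image:\n{scene_text}\n\n"
        f"Question: {question}"
    )


def answer(image_bytes: bytes, question: str) -> str:
    """End-to-end flow: STR -> prompt augmentation -> MM-LLM call."""
    str_results = run_str(image_bytes)
    prompt = build_prompt(question, str_results)
    # Placeholder for the MM-LLM; a real system would pass both the
    # image and the augmented prompt to a multimodal model.
    return f"[MM-LLM would answer based on]:\n{prompt}"


if __name__ == "__main__":
    print(answer(b"<image bytes>", "What is the dosage on this label?"))
```

The design point the sketch illustrates is that the MM-LLM receives the recognized text as part of its textual input rather than having to read it from pixels alone, which is how the abstract frames the role of the STR component.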
Community
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot, an automated recommender):
- SwinTextSpotter v2: Towards Better Synergy for Scene Text Spotting (2024)
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding (2024)
- 3DMIT: 3D Multi-modal Instruction Tuning for Scene Understanding (2024)
- COSMO: COntrastive Streamlined MultimOdal Model with Interleaved Pre-Training (2024)
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer (2024)