How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites
Abstract
In this report, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM), to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements: (1) Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model -- InternViT-6B, boosting its visual understanding capabilities and enabling it to be transferred and reused across different LLMs. (2) Dynamic High-Resolution: we divide images into tiles ranging from 1 to 40 of 448×448 pixels according to the aspect ratio and resolution of the input images, which supports up to 4K resolution input. (3) High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset that covers common scenes and document images, annotated with English and Chinese question-answer pairs, significantly enhancing performance on OCR- and Chinese-related tasks. We evaluate InternVL 1.5 through a series of benchmarks and comparative studies. Compared to both open-source and proprietary models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 benchmarks. Code has been released at https://github.com/OpenGVLab/InternVL.
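The dynamic high-resolution scheme described in point (2) can be illustrated with a short sketch: pick a tile grid whose aspect ratio best matches the input image (with at most 40 tiles of 448×448), then resize and crop. This is a minimal, hedged Python example, not the released implementation; the function names (`pick_tile_grid`, `dynamic_tiles`) and the exact grid-selection heuristic are assumptions for illustration.

```python
# Minimal sketch of dynamic high-resolution tiling (illustrative, not the
# authors' exact code): choose a (cols, rows) grid of 448x448 tiles whose
# aspect ratio is closest to the image's, then resize and split the image.
from PIL import Image


def pick_tile_grid(width, height, tile_size=448, max_tiles=40):
    """Choose a (cols, rows) grid whose aspect ratio best matches the image."""
    image_ratio = width / height
    best_grid, best_diff = (1, 1), float("inf")
    for cols in range(1, max_tiles + 1):
        for rows in range(1, max_tiles // cols + 1):  # cols * rows <= max_tiles
            diff = abs(image_ratio - cols / rows)
            if diff < best_diff:
                best_grid, best_diff = (cols, rows), diff
    return best_grid


def dynamic_tiles(image, tile_size=448, max_tiles=40):
    """Resize to the chosen grid and cut into tile_size x tile_size crops."""
    cols, rows = pick_tile_grid(*image.size, tile_size, max_tiles)
    resized = image.resize((cols * tile_size, rows * tile_size))
    tiles = []
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_size, r * tile_size,
                   (c + 1) * tile_size, (r + 1) * tile_size)
            tiles.append(resized.crop(box))
    return tiles  # each tile would be fed to the vision encoder


if __name__ == "__main__":
    # Example: a 4K landscape image yields a wide grid of 448x448 tiles.
    img = Image.new("RGB", (3840, 2160))
    print(len(dynamic_tiles(img)))  # tile count, capped by max_tiles
```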
Community
Plain-English rewrite of the paper: https://www.aimodels.fyi/papers/arxiv/how-far-are-we-to-gpt-4v (feedback welcome!)
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models (2024)
- Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want (2024)
- Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models (2024)
- DeepSeek-VL: Towards Real-World Vision-Language Understanding (2024)
- TextSquare: Scaling up Text-Centric Visual Instruction Tuning (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face checkout this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
Authors here. Thanks for posting.
Code: https://github.com/OpenGVLab/InternVL
Demo: https://internvl.opengvlab.com
Model: https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5
Closing the Gap to GPT-4V: Introducing InternVL 1.5!
Links:
- Subscribe: https://www.youtube.com/@Arxflix
- Twitter: https://x.com/arxflix
- LMNT (Partner): https://lmnt.com/