Do we fully leverage image encoders in vision language models? 👀  
A new paper introduces a dense connector that leverages them better! Let's dig in 🧶 

![image_1](image_1.jpg)

VLMs consist of an image encoder, a projection layer that maps image embeddings into the text embedding space, and a text decoder, connected sequentially 📖  
This [paper](https://t.co/DPQzbj0eWm) explores feeding the decoder the intermediate states of the image encoder instead of only its final output 🤩 

![image_2](image_2.jpg)
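
For orientation, here is a minimal PyTorch-style sketch of that standard pipeline. The class, dimensions, and module names are illustrative assumptions, not the paper's code: a vision encoder produces visual features, a small MLP projects them into the LM embedding space, and the projected tokens are prepended to the text embeddings.

```python
# Minimal sketch of a LLaVA-style VLM forward pass (illustrative, not the paper's code).
import torch
import torch.nn as nn

class TinyVLM(nn.Module):
    def __init__(self, vision_encoder, text_decoder, vision_dim=1024, text_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder          # e.g. a CLIP ViT (assumed to return (B, N, vision_dim))
        self.projector = nn.Sequential(               # maps image features into the LM embedding space
            nn.Linear(vision_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )
        self.text_decoder = text_decoder              # decoder-only LM taking input embeddings

    def forward(self, pixel_values, input_embeds):
        # Standard setup: only the *final* visual features are used.
        visual_feats = self.vision_encoder(pixel_values)       # (B, N, vision_dim)
        visual_tokens = self.projector(visual_feats)           # (B, N, text_dim)
        # Prepend the projected visual tokens to the text embeddings and decode.
        return self.text_decoder(torch.cat([visual_tokens, input_embeds], dim=1))
```

The dense connector keeps this overall layout but changes what is fed into the projector, as described next.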

The authors explore three different ways of instantiating the dense connector: sparse token integration, sparse channel integration, and dense channel integration. Each of them simply takes intermediate outputs of the image encoder and combines them in a different way (see below).  

![image_3](image_3.jpg)
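
Here is a rough sketch of how the three variants combine the intermediate features, based on my reading of the paper's figure. The layer indices, stride, and grouping are illustrative assumptions; the official implementation is in the GitHub repo linked below.

```python
import torch

# hidden_states: list of per-layer visual features from the image encoder,
# each of shape (batch, num_tokens, channels).

def sparse_token_integration(hidden_states, layer_ids, stride=8):
    # STI: concatenate a few selected layers along the *token* axis.
    # Earlier layers are downsampled (a simple stride here, standing in for pooling)
    # so the visual sequence does not grow too long.
    selected = [hidden_states[i][:, ::stride, :] for i in layer_ids[:-1]]
    selected.append(hidden_states[layer_ids[-1]])
    return torch.cat(selected, dim=1)

def sparse_channel_integration(hidden_states, layer_ids):
    # SCI: concatenate a few selected layers along the *channel* axis,
    # keeping the number of visual tokens unchanged.
    return torch.cat([hidden_states[i] for i in layer_ids], dim=-1)

def dense_channel_integration(hidden_states, num_groups=2):
    # DCI: use *all* layers — split them into groups, average within each group,
    # then concatenate the group summaries along the channel axis.
    stacked = torch.stack(hidden_states)                   # (num_layers, B, N, C)
    groups = torch.chunk(stacked, num_groups, dim=0)
    return torch.cat([g.mean(dim=0) for g in groups], dim=-1)
```

Whichever variant is used, the combined features then go through the usual MLP projector (with its input width adjusted accordingly), so nothing changes on the LLM side.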

They integrate all three into LLaVA 1.5 and find that each of the new models outperforms the original LLaVA 1.5.  

![image_4](image_4.jpg)  

I tried the model and it seems to work very well 🥹  
The authors have released various [checkpoints](https://t.co/iF8zM2qvDa) based on different decoders (Vicuna 7/13B and Llama 3-8B). 

![image_5](image_5.jpg)  


> [!TIP]
Resources:  
[Dense Connector for MLLMs](https://arxiv.org/abs/2405.13800) 
by Huanjin Yao, Wenhao Wu, Taojiannan Yang, YuXin Song, Mengxi Zhang, Haocheng Feng, Yifan Sun, Zhiheng Li, Wanli Ouyang, Jingdong Wang (2024) 
[GitHub](https://github.com/HJYao00/DenseConnector) 

> [!NOTE]
[Original tweet](https://twitter.com/mervenoyann/status/1796089181988352216) (May 30, 2024)