## MiniCPM-V 1.0
> Archive at: 2024-05-19
MiniCPM-V 1.0 is an efficient version with promising performance for deployment. The model is built on SigLIP-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of MiniCPM-V 1.0 include:
- ⚡️ **High Efficiency.**
MiniCPM-V 1.0 can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into 64 tokens via a perceiver resampler (see the sketch after this list), significantly fewer than in other LMMs based on MLP architectures (typically > 512 tokens). This allows MiniCPM-V 1.0 to operate with **much lower memory cost and higher speed during inference**.
- 🔥 **Promising Performance.**
MiniCPM-V 1.0 achieves **state-of-the-art performance** among models of comparable size on multiple benchmarks (including MMMU, MME, and MMBench), surpassing existing LMMs built on Phi-2. It even **achieves comparable or better performance than the 9.6B Qwen-VL-Chat**.
- 🙌 **Bilingual Support.**
MiniCPM-V 1.0 is **the first end-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from the ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038).
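To make the token-compression idea concrete, below is a minimal sketch of a perceiver resampler: a small, fixed set of learned queries cross-attends to the vision encoder's patch features, so the LLM always receives 64 visual tokens regardless of the number of input patches. This is an illustration rather than the released implementation; the hidden size (1152) and patch count (729) are assumptions based on SigLIP-400M at 384x384 resolution.

```python
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    """Compress a variable number of image features into a fixed token set."""
    def __init__(self, dim=1152, num_queries=64, num_heads=8):
        super().__init__()
        # 64 learned queries: their count alone determines the output length.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_feats):             # (batch, num_patches, dim)
        q = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        out, _ = self.attn(q, image_feats, image_feats)  # cross-attention
        return self.norm(out)                   # (batch, 64, dim)

feats = torch.randn(1, 729, 1152)    # assumed SigLIP-400M patch features
print(PerceiverResampler()(feats).shape)   # torch.Size([1, 64, 1152])
```

Because the downstream LLM attends over 64 visual tokens instead of 512+ per image, both the KV cache and the attention cost shrink accordingly during inference.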
### Evaluation
| Model | Size | Visual Tokens | MME | MMB dev (en) | MMB dev (zh) | MMMU val | CMMMU val |
|:--------------|:------|:---------------:|:------:|:--------------:|:--------------:|:----------:|:-----------:|
| LLaVA-Phi | 3B | 576 | 1335 | 59.8 | - | - | - |
| MobileVLM | 3B | 144 | 1289 | 59.6 | - | - | - |
| Imp-v1 | 3B | 576 | 1434 | 66.5 | - | - | - |
| Qwen-VL-Chat | 9.6B | 256 | 1487 | 60.6 | 56.7 | 35.9 | 30.7 |
| CogVLM | 17.4B | 1225 | 1438 | 63.7 | 53.8 | 32.1 | - |
| MiniCPM-V 1.0 | 3B | 64 | 1452 | 67.9 | 65.3 | 37.2 | 32.1 |
### Examples
We deploy MiniCPM-V 1.0 on end devices. The demo video is a raw screen recording on a OnePlus 9R, without editing.
## Install
1. Clone this repository and navigate to the source folder
```bash
git clone https://github.com/OpenBMB/OmniLMM.git
cd OmniLMM
```
2. Create conda environment
```shell
conda create -n OmniLMM python=3.10 -y
conda activate OmniLMM
```
3. Install dependencies
```shell
pip install -r requirements.txt
```
## Inference
### Model Zoo
| Model | Description | Download Link |
|:----------------------|:-------------------|:---------------:|
| MiniCPM-V 1.0 | The efficient version for end-device deployment. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V/files) |
### Multi-turn Conversation
Please refer to the following code to run `MiniCPM-V 1.0`.
```python
import json

from chat import OmniLMMChat, img2base64

chat_model = OmniLMMChat('openbmb/MiniCPM-V')

# Encode the image as base64 for the chat interface
im_64 = img2base64('./assets/worldmap_ck.jpg')

# First round chat
msgs = [{"role": "user", "content": "What is interesting about this image?"}]
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
print(answer)

# Second round chat: pass the history of the multi-turn conversation
msgs.append({"role": "assistant", "content": answer})
msgs.append({"role": "user", "content": "Where is China in the image?"})
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
print(answer)
```
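Note that the conversation history is passed as an OpenAI-style list of `{"role": ..., "content": ...}` messages serialized with `json.dumps` into the `question` field; appending the assistant's previous answer to `msgs` before the next user turn is what carries context across rounds.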
### Inference on Mac
MiniCPM-V 1.0 can run on a Mac with MPS (Apple silicon or AMD GPUs). For example:
```python
# test.py
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the model, then move it to the MPS backend in half precision
model = AutoModel.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True, torch_dtype=torch.bfloat16)
model = model.to(device='mps', dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
model.eval()

image = Image.open('./assets/worldmap_ck.jpg').convert('RGB')
question = 'What is interesting about this image?'
msgs = [{'role': 'user', 'content': question}]

answer, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=True
)
print(answer)
```
Run with the command:
```shell
PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
```
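Setting `PYTORCH_ENABLE_MPS_FALLBACK=1` lets PyTorch fall back to the CPU for any operator that is not yet implemented on the MPS backend, avoiding hard failures during inference.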
### Deployment on Mobile Phone
Currently MiniCPM-V 1.0 can be deployed on mobile phones running Android and HarmonyOS. 🚀 Try it out [here](https://github.com/OpenBMB/mlc-MiniCPM).