---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
---
Model description
xGen-MM is a series of the latest foundational Large Multimodal Models (LMMs) developed by Salesforce AI Research. This series advances upon the successful designs of the BLIP series, incorporating fundamental enhancements that ensure a more robust and superior foundation. These models have been trained at scale on high-quality image caption datasets and interleaved image-text data.
In the v1.5 (08/2024) release, we present a series of xGen-MM models, including:
- 🤗 xGen-MM-instruct-interleave (our main instruct model): xgen-mm-phi3-mini-instruct-interleave-r-v1.5
  - This model has higher overall scores than xGen-MM-instruct on both single-image and multi-image benchmarks.
- 🤗 xGen-MM-base: xgen-mm-phi3-mini-base-r-v1.5
- 🤗 xGen-MM-instruct: xgen-mm-phi3-mini-instruct-singleimg-r-v1.5
- 🤗 xGen-MM-instruct-dpo: xgen-mm-phi3-mini-instruct-dpo-r-v1.5
For more details, check out our tech report, fine-tuning code, and project page (coming soon).
Results
Few-shot Evaluation of the Base Model (without instruction tuning)
| Model | Shot | VQAv2 | TextVQA | OKVQA | COCO | NoCaps | TextCaps |
|---|---|---|---|---|---|---|---|
| Flamingo-3B | 0 | 49.2 | 30.1 | 41.2 | 73.0 | - | - |
| Flamingo-3B | 4 | 53.2 | 32.7 | 43.3 | 85.0 | - | - |
| Flamingo-3B | 8 | 55.4 | 32.4 | 44.6 | 90.6 | - | - |
| MM1-3B | 0 | 46.2 | 29.4 | 26.1 | 73.5 | 55.6 | 63.3 |
| MM1-3B | 4 | 57.9 | 45.3 | 44.6 | 112.3 | 99.7 | 84.1 |
| MM1-3B | 8 | 63.6 | 44.6 | 48.4 | 114.6 | 104.7 | 88.8 |
| xGen-MM-base | 0 | 43.1 | 34.0 | 28.0 | 67.2 | 82.6 | 69.5 |
| xGen-MM-base | 4 | 66.3 | 54.2 | 48.9 | 107.6 | 100.8 | 89.9 |
| xGen-MM-base | 8 | 66.9 | 55.3 | 50.1 | 109.8 | 104.6 | 94.0 |
Showcases of In-Context Learning
Below are some qualitative examples of the multi-modal in-context learning capabilities of our base model.
How to use
Please check out our inference notebook for example code to use our model.
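As a minimal loading sketch (assuming the standard Hugging Face `trust_remote_code` path for this model family; the repo id shown is illustrative, and the inference notebook remains the authoritative reference for prompt formatting and generation):

```python
# Minimal sketch (assumption: the model ships custom modeling/processing code
# loaded via trust_remote_code; see the inference notebook for the full
# prompt template and generation loop).
from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoImageProcessor

model_id = "Salesforce/xgen-mm-phi3-mini-instruct-interleave-r-v1.5"  # illustrative repo id

model = AutoModelForVision2Seq.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, use_fast=False, legacy=False)
image_processor = AutoImageProcessor.from_pretrained(model_id, trust_remote_code=True)
```

The notebook covers how images and text are preprocessed into the model's expected prompt format before calling generation.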
Reproducibility:
The pretraining evaluation is implemented based on OpenFlamingo: an open-source framework for training large multimodal models. Few-shot examples are drawn randomly, so results will vary slightly across random seeds.
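For illustration only (the function and names below are hypothetical, not from the OpenFlamingo codebase), pinning the seed used to draw in-context examples is what makes runs repeatable:

```python
# Hypothetical sketch: draw few-shot demonstrations with a fixed seed so that
# repeated evaluation runs sample the same in-context examples.
import random

def sample_few_shot_examples(example_pool, num_shots, seed=42):
    rng = random.Random(seed)                # fixed seed -> reproducible draw
    return rng.sample(example_pool, num_shots)

demos = sample_few_shot_examples(list(range(1000)), num_shots=4, seed=0)
```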
Bias, Risks, Limitations, and Ethical Considerations
The main data sources are from the internet, including webpages, image stock sites, and curated datasets released by the research community. We have excluded certain data, such as LAION, due to known CSAM concerns. The model may be subject to bias from the original data sources, as well as bias from LLMs and commercial APIs. We strongly recommend that users assess safety and fairness before deploying the model in downstream applications.
License
Our code and weights are released under the Apache 2.0 license.
Code acknowledgment
Our training code is based on OpenFlamingo: an open-source framework for training large multimodal models, and part of our data preprocessing code is adapted from LLaVA. Our evaluation code is based on VLMEvalKit: an open-source evaluation toolkit for large vision-language models (LVLMs).
We thank the authors for their open-source implementations.
Citation
@misc{blip3-xgenmm,
author = {Le Xue and Manli Shu and Anas Awadalla and Jun Wang and An Yan and Senthil Purushwalkam and Honglu Zhou and Viraj Prabhu and Yutong Dai and Michael S Ryoo and Shrikant Kendre and Jieyu Zhang and Can Qin and Shu Zhang and Chia-Chih Chen and Ning Yu and Juntao Tan and Tulika Manoj Awalgaonkar and Shelby Heinecke and Huan Wang and Yejin Choi and Ludwig Schmidt and Zeyuan Chen and Silvio Savarese and Juan Carlos Niebles and Caiming Xiong and Ran Xu},
title = {xGen-MM (BLIP-3): A Family of Open Large Multimodal Models},
year = {2024},
eprint = {2408.08872},
archivePrefix = {arXiv},
primaryClass = {cs.CV},
url = {https://arxiv.org/abs/2408.08872},
}
Troubleshoot
- If any packages are missing, consider installing the following:
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
pip install open_clip_torch==2.24.0
pip install einops
pip install einops-exts
pip install transformers==4.41.1