LLaVA-Next Interleave Model Card

Model Details

Model type: LLaVA-Next Interleave is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

Base LLM: Qwen/Qwen1.5-7B-Chat

Model Description

Repository: https://github.com/LLaVA-VL/LLaVA-NeXT

Primary intended uses: The primary use of LLaVA-Next Interleave is research on large multimodal models and chatbots. It is intended for research exploration only; commercial use is prohibited.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

License Notices

This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses of the base language models for checkpoints trained on it (e.g., the Llama community license for LLaMA-2 and Vicuna-v1.5, the Tongyi Qianwen LICENSE AGREEMENT, and the META LLAMA 3 COMMUNITY LICENSE AGREEMENT). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the dataset and checkpoints complies with all applicable laws and regulations.

How to Get Started with the Model

Use the code below to get started with the model.

git clone https://github.com/LLaVA-VL/LLaVA-NeXT
# install llava-next
...
# download the ckpt
...
python playground/demo/interleave_demo.py --model_path path/to/ckpt
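The demo feeds the model an interleaved sequence of images and text, with one image placeholder token per image. The sketch below illustrates that input format with a hypothetical helper that is not part of the repo; the `<|im_start|>`/`<|im_end|>` markers follow the chat template of the Qwen1.5-Chat base LLM, and the placeholder token name is an assumption — the actual demo script constructs prompts internally.

```python
# Hypothetical illustration of an interleaved multimodal prompt.
# IMAGE_TOKEN and the helper below are assumptions for demonstration,
# not the repo's real API; the demo script handles this for you.
IMAGE_TOKEN = "<image>"

def build_interleaved_prompt(segments):
    """segments: list of str (text pieces) or None (an image slot)."""
    body = "".join(IMAGE_TOKEN if s is None else s for s in segments)
    # Qwen1.5-Chat style chat template markers for the base LLM.
    return f"<|im_start|>user\n{body}<|im_end|>\n<|im_start|>assistant\n"

prompt = build_interleaved_prompt([None, " versus ", None, "\nWhat changed?"])
print(prompt.count(IMAGE_TOKEN))  # -> 2, the number of images the model expects
```

The count of placeholder tokens must match the number of images passed alongside the prompt.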

Evaluation

Use the code below to evaluate the model.

First, in scripts/interleave/eval_all.sh, replace /path/to/ckpt with the path to your checkpoint and /path/to/images with the path to the "interleave_data" folder, then run

bash scripts/interleave/eval_all.sh
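If you would rather script the substitution than edit the file by hand, a small helper can fill in the placeholders. This is a hypothetical convenience, not part of the repo; the placeholder strings are taken from the instructions above, and the example paths are made up.

```python
# Hypothetical helper: fill in the placeholder paths in eval_all.sh.
def patch_eval_script(text: str, ckpt_path: str, data_path: str) -> str:
    """Return the script text with the checkpoint and data placeholders replaced."""
    return (text.replace("/path/to/ckpt", ckpt_path)
                .replace("/path/to/images", data_path))

# Example on a single line of the script (paths are illustrative):
line = "MODEL=/path/to/ckpt DATA=/path/to/images"
print(patch_eval_script(line, "/ckpts/interleave-7b-dpo", "/data/interleave_data"))
# -> MODEL=/ckpts/interleave-7b-dpo DATA=/data/interleave_data
```

Apply it to the whole file before invoking `bash scripts/interleave/eval_all.sh`.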

BibTeX citation

@misc{li2024llavanextinterleavetacklingmultiimagevideo,
      title={LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models}, 
      author={Feng Li and Renrui Zhang and Hao Zhang and Yuanhan Zhang and Bo Li and Wei Li and Zejun Ma and Chunyuan Li},
      year={2024},
      eprint={2407.07895},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.07895}, 
}
Model size: 8.14B params (Safetensors, BF16)
