
TinyLLaVA: A Framework of Small-scale Large Multimodal Models


🎉 News

  • [2024.03.10] base recipe out!
  • [2024.03.10] Finetune scripts out!
  • [2024.02.25] Update evaluation scripts and docs!
  • [2024.02.25] Data descriptions out. Release TinyLLaVA-1.5B and TinyLLaVA-2.0B!
  • [2024.02.24] Example code on inference and model loading added!
  • [2024.02.23] Evaluation code and scripts released!
  • [2024.02.21] Creating the TinyLLaVABench repository on GitHub!
  • [2024.02.21] Our paper: TinyLLaVA: A Framework of Small-scale Large Multimodal Models is out!
  • [2024.01.11] Our first model TinyLLaVA-1.4B is out!

⌛ TODO

  • Add support for Ollama and llama.cpp.
  • Developers' guide / How to build demo locally.
  • Training and custom finetuning docs.
  • Model Zoo descriptions.
  • Examples and inference.
  • Release code for training.
  • Add descriptions for evaluation.
  • Add descriptions for data preparation.
  • Release TinyLLaVA-1.5B and TinyLLaVA-2.0B.
  • Release TinyLLaVA-3.1B.
  • Release the evaluation code and weights (2024.2.23).

🔥 High performance, but with fewer parameters

  • Our best model, TinyLLaVA-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.


🔧 Requirements and Installation

We recommend setting up the environment as follows.

  1. Clone this repository and navigate to the TinyLLaVABench folder
git clone https://github.com/DLCV-BUAA/TinyLLaVABench.git
cd TinyLLaVABench
  2. Install the package
conda create -n tinyllava python=3.10 -y
conda activate tinyllava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
  3. Install additional packages for training
pip install -e ".[train]"
pip install flash-attn --no-build-isolation

Upgrade to the latest code base

git pull
pip install -e .

# if you see some import errors when you upgrade, please try running the command below (without #)
# pip install flash-attn --no-build-isolation --no-cache-dir
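After installing or upgrading, a quick import check can confirm the package is available in the activated environment. The snippet below is a minimal sanity check; the import paths are the same ones used in the Quick Start section.

# Minimal installation check (run inside the activated tinyllava conda env).
from tinyllava.model.builder import load_pretrained_model
from tinyllava.mm_utils import get_model_name_from_path

print("tinyllava imports OK")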

🐳 Model Zoo

Legacy Model

Pretrained Models

Model Details

| Name | LLM | Checkpoint | LLaVA-Bench-Wild | MME | MMBench | MM-Vet | SQA-image | VQA-v2 | GQA | TextVQA |
|------|-----|------------|------------------|-----|---------|--------|-----------|--------|-----|---------|
| TinyLLaVA-3.1B | Phi-2 | TinyLLaVA-3.1B | 75.8 | 1464.9 | 66.9 | 32.0 | 69.1 | 79.9 | 62.0 | 59.1 |
| TinyLLaVA-2.0B | StableLM-2-1.6B | TinyLLaVA-2.0B | 66.4 | 1433.8 | 63.3 | 32.6 | 64.7 | 78.9 | 61.9 | 56.4 |
| TinyLLaVA-1.5B | TinyLlama | TinyLLaVA-1.5B | 60.8 | 1276.5 | 55.2 | 25.8 | 60.3 | 76.9 | 60.3 | 51.7 |
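If you want to cache a checkpoint ahead of time rather than letting the loader fetch it on first use, a sketch with the huggingface_hub client is shown below. This is optional and illustrative; the Quick Start loader can also pull weights from the Hub on demand.

# Optional: pre-download a checkpoint from the Hugging Face Hub (illustrative).
from huggingface_hub import snapshot_download

local_dir = snapshot_download("bczhou/TinyLLaVA-3.1B")
print("checkpoint cached at:", local_dir)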

Demo

Gradio Web Demo

Launch a local web demo by running:

python tinyllava/serve/app.py --model-path bczhou/TinyLLaVA-3.1B --model-name TinyLLaVA-3.1B

CLI Inference

We also support running inference from the CLI. To use our model, run:

python -m tinyllava.serve.cli \
    --model-path bczhou/TinyLLaVA-3.1B \
    --image-file "./tinyllava/serve/examples/extreme_ironing.jpg" 

🔧 Quick Start

Load model
from tinyllava.model.builder import load_pretrained_model
from tinyllava.mm_utils import get_model_name_from_path
from tinyllava.eval.run_tiny_llava import eval_model

model_path = "bczhou/TinyLLaVA-3.1B"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path)
)
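As a hedged illustration of what the returned objects give you, the sketch below preprocesses an image with the loaded image_processor. It assumes image_processor follows the standard Hugging Face image-processor interface (callable, returning a dict with "pixel_values"); it is not an official usage example.

# Illustrative only: assumes image_processor follows the standard HF image-processor API.
from PIL import Image
import requests

url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = image_processor(image, return_tensors="pt")["pixel_values"]
print(pixel_values.shape)  # batch of 1; spatial size depends on the vision tower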

🔧 Run Inference

Here's an example of running inference with TinyLLaVA-3.1B:

Run Inference
from tinyllava.model.builder import load_pretrained_model
from tinyllava.mm_utils import get_model_name_from_path
from tinyllava.eval.run_tiny_llava import eval_model

model_path = "bczhou/TinyLLaVA-3.1B"
prompt = "What are the things I should be cautious about when I visit here?"
image_file = "https://llava-vl.github.io/static/images/view.jpg"

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": "phi",
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512
})()

eval_model(args)

Important

We use different conv_mode values for different models. Replace the conv_mode in args according to this table:

| model | conv_mode |
|----------------|-----------|
| TinyLLaVA-3.1B | phi |
| TinyLLaVA-2.0B | phi |
| TinyLLaVA-1.5B | v1 |
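For scripting, the table can be expressed as a small lookup. The snippet below is illustrative only and assumes the name returned by get_model_name_from_path matches the table keys; adjust the fallback to your model's row if it does not.

# Illustrative lookup from model name to conv_mode, mirroring the table above.
from tinyllava.mm_utils import get_model_name_from_path

CONV_MODES = {
    "TinyLLaVA-3.1B": "phi",
    "TinyLLaVA-2.0B": "phi",
    "TinyLLaVA-1.5B": "v1",
}

model_path = "bczhou/TinyLLaVA-3.1B"
conv_mode = CONV_MODES.get(get_model_name_from_path(model_path), "phi")
print(conv_mode)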

Evaluation

To ensure reproducibility, we evaluate the models with greedy decoding.

See Evaluation.md

Data Preparation

In our paper, we used two different datasets: the LLaVA dataset and the ShareGPT4V dataset, and compared their differences. In this section, we provide information on data preparation.

Pretraining Images

  • LLaVA: The pretraining images of LLaVA are from the 558K subset of the LAION-CC-SBU dataset.
  • ShareGPT4V: The pretraining images of ShareGPT4V are a mixture of the 558K LAION-CC-SBU subset, the SAM dataset, and the COCO dataset.

Pretraining Annotations

  • LLaVA: The pretraining annotations of LLaVA are here.
  • ShareGPT4V: The pretraining annotations of ShareGPT4V are here.

SFT Images & Annotations

The two SFT datasets are largely the same, except that the 23K detailed-description data in LLaVA-1.5-SFT is replaced with detailed captions randomly sampled from the 100K ShareGPT4V data.

Download data

  1. Download relevant images
  2. Download relevant annotations

Organize Data

Organize the image files and annotation files as follows in path/to/your/data:

data
├── llava
│   ├── llava_pretrain
│   │   ├── images
│   │   ├── blip_laion_cc_sbu_558k.json
├── coco
│   ├── train2017
├── sam
│   ├── images
├── gqa
│   ├── images
├── ocr_vqa
│   ├── images
├── textvqa
│   ├── train_images
├── vg
│   ├── VG_100K
│   ├── VG_100K_2
├── share_textvqa
│   ├── images
├── web-celebrity
│   ├── images
├── web-landmark
│   ├── images
├── wikiart
│   ├── images
├── text_files
│   ├── llava_v1_5_mix665k.json
│   ├── share-captioner_coco_lcs_sam_1246k_1107.json
│   ├── sharegpt4v_mix665k_cap23k_coco-ap9k_lcs3k_sam9k_div2k.json
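Before training, a quick check like the sketch below can confirm the expected folders and annotation files are in place. It is purely illustrative; set DATA_ROOT to your actual data path.

# Illustrative sanity check of the layout above; set DATA_ROOT to path/to/your/data.
from pathlib import Path

DATA_ROOT = Path("path/to/your/data")
expected = [
    "llava/llava_pretrain/images",
    "llava/llava_pretrain/blip_laion_cc_sbu_558k.json",
    "coco/train2017",
    "sam/images",
    "gqa/images",
    "ocr_vqa/images",
    "textvqa/train_images",
    "vg/VG_100K",
    "vg/VG_100K_2",
    "text_files/llava_v1_5_mix665k.json",
]
missing = [p for p in expected if not (DATA_ROOT / p).exists()]
print("missing entries:", missing or "none")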

Train

In this section, we describe the base recipe.

Hyperparameters

The hyperparameters used in pretraining and finetuning are provided below.

  1. Pretraining

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|----------------|-------------------|---------------|--------|------------|--------------|
| TinyLLaVA-3.1B | 256 | 1e-3 | 1 | 3072 | 0 |

  2. Finetuning

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|----------------|-------------------|---------------|--------|------------|--------------|
| TinyLLaVA-3.1B | 128 | 2e-5 | 1 | 3072 | 0 |
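As a rough illustration of how the pretraining row maps onto standard Hugging Face training arguments: the real entry points are the pretrain.sh and finetune.sh scripts below, and the per-device batch size and GPU count in this sketch are assumptions used only to reach the 256 global batch size.

# Illustrative mapping only; the official recipes live in pretrain.sh / finetune.sh.
from transformers import TrainingArguments

pretrain_args = TrainingArguments(
    output_dir="./checkpoints/tinyllava-3.1b-pretrain",  # hypothetical path
    per_device_train_batch_size=32,   # assumed: 32 x 8 GPUs x 1 accumulation step = 256 global
    gradient_accumulation_steps=1,
    learning_rate=1e-3,
    num_train_epochs=1,
    weight_decay=0.0,
)
# The 3072 max length is applied when tokenizing inputs, not via TrainingArguments.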

Pretrain

Replace the paths with your own paths.

Training script with DeepSpeed ZeRO-2: pretrain.sh.

Finetune

Replace the paths with your own paths.

Training script with DeepSpeed ZeRO-3: finetune.sh.

Custom-Finetune

Check out our custom finetune using LoRA here.
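For orientation, a generic LoRA configuration with the peft library looks like the sketch below. The rank, dropout, and target modules here are placeholders, not the settings from the linked guide.

# Generic peft LoRA sketch; values are placeholders, see the linked guide for the
# actual custom-finetune recipe.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                      # placeholder rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
# Wrap a loaded base model with peft.get_peft_model(model, lora_config) before training.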

✏ Citation

If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:.

@misc{zhou2024tinyllava,
      title={TinyLLaVA: A Framework of Small-scale Large Multimodal Models}, 
      author={Baichuan Zhou and Ying Hu and Xi Weng and Junlong Jia and Jie Luo and Xien Liu and Ji Wu and Lei Huang},
      year={2024},
      eprint={2402.14289},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

❤️ Community efforts

  • Our codebase is built upon the LLaVA project. Great work!
  • Our project uses data from the ShareGPT4V project. Great work!