
LLaVA-JP Model Card

Model details

Model type:

LLaVA-JP is a vision-language model that can converse about input images.
It is an LVLM trained with google/siglip-so400m-patch14-384 as the image encoder and llm-jp/llm-jp-1.3b-v1.0 as the text decoder, and it supports 768 x 768 high-resolution image input via the scaling_on_scales method (a conceptual sketch follows below).
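
For intuition, here is a minimal sketch of the scaling_on_scales idea, assuming only that the image is processed at several scales and the larger scales are tiled into encoder-sized crops whose features are later merged; the function multiscale_crops and the exact tiling are illustrative, not the repository's implementation.

from PIL import Image

def multiscale_crops(image: Image.Image, base_size: int = 384, scales=(1, 2)):
    """Resize the image to each scale and tile it into encoder-sized crops (illustrative sketch)."""
    crops = []
    for s in scales:
        resized = image.resize((base_size * s, base_size * s))
        for y in range(s):
            for x in range(s):
                box = (x * base_size, y * base_size, (x + 1) * base_size, (y + 1) * base_size)
                crops.append(resized.crop(box))
    return crops

# With a 384-pixel encoder and scales (1, 2), the effective input resolution is 384 * 2 = 768,
# which matches the 768 x 768 input size mentioned above.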

Training:

In the first stage, the vision projector was pretrained on LLaVA-Pretrain-JA.
In the second stage, the model was fine-tuned on 10.5k samples from commoncatalog-cc-by-ext.

Resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main

How to use the model

1. Download dependencies

git clone https://github.com/tosiyuki/LLaVA-JP.git
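
If the cloned repository ships a requirements.txt (an assumption here), its Python dependencies such as torch and transformers likely need to be installed, e.g. with pip install -r requirements.txt inside the LLaVA-JP directory, before running the example below.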

2. Inference

import requests
import torch
import transformers
from PIL import Image

from transformers.generation.streamers import TextStreamer
from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.conversation import conv_templates, SeparatorStyle
from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM
from llava.train.dataset import tokenizer_image_token


if __name__ == "__main__":
    model_path = 'toshi456/llava-jp-1.3b-v1.1-commoncatalog-cc-by-ext-10k'
    device = "cuda" if torch.cuda.is_available() else "cpu"
    torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32

    model = LlavaGpt2ForCausalLM.from_pretrained(
        model_path, 
        low_cpu_mem_usage=True,
        use_safetensors=True,
        torch_dtype=torch_dtype,
        device_map=device,
    )
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        model_path,
        model_max_length=1532,
        padding_side="right",
        use_fast=False,
    )
    model.eval()

    conv_mode = "v1"
    conv = conv_templates[conv_mode].copy()

    # image pre-process
    image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg"
    image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB')
    
    # Determine the preprocessing resolution: with scaling_on_scales enabled,
    # it is the encoder's native size times the number of scales (e.g. 384 * 2 = 768).
    image_size = model.get_model().vision_tower.image_processor.size["height"]
    if model.get_model().vision_tower.scales is not None:
        image_size = model.get_model().vision_tower.image_processor.size["height"] * len(model.get_model().vision_tower.scales)
    
    # Preprocess to the target resolution and move the tensor to the model's device and dtype.
    image_tensor = model.get_model().vision_tower.image_processor(
        image,
        return_tensors='pt',
        size={"height": image_size, "width": image_size}
    )['pixel_values'].to(device, torch_dtype)

    # create prompt
    # ユーザー (User): <image>\n{prompt}
    prompt = "画像について説明してください。"  # "Please describe the image."
    inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt
    conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
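    # conv.get_prompt() renders the accumulated messages into the full prompt string defined by the "v1" template.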

    input_ids = tokenizer_image_token(
        prompt, 
        tokenizer, 
        IMAGE_TOKEN_INDEX, 
        return_tensors='pt'
    ).unsqueeze(0)
    if device == "cuda":
        input_ids = input_ids.to(device)

    input_ids = input_ids[:, :-1]  # </sep> is appended at the end of the input, so drop it
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0)

    # predict
    with torch.inference_mode():
        output_id = model.generate(
            inputs=input_ids,
            images=image_tensor,
            do_sample=False,
            temperature=1.0,
            top_p=1.0,
            max_new_tokens=256,
            streamer=streamer,
            use_cache=True,
        )

    """画像には、木製の表面に座っている猫が描かれています。猫は、ラップトップの画面に集中しています。ラップトップは、黒い金属フレームと白いキーボードを持つ、鮮やかなオレンジ色です。猫の目は閉じており、リラックスした状態を示唆しています。背景は、猫のラップトップとその周囲の詳細を強調する灰色のテクスチャーです。画像にはテキストや他のオブジェクトは含まれていません。猫とラップトップの相対的な位置関係は、猫がラップトップの画面に集中していることを示唆しています。画像には他のオブジェクトや行動は含まれていません。<EOD|LLM-jp>"""

Training dataset

Stage1 Pretrain: LLaVA-Pretrain-JA

Stage2 Fine-tuning: commoncatalog-cc-by-ext (10.5k samples)

Acknowledgement

License

CC BY 4.0
