---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-CC3M-Pretrain-595K
pipeline_tag: image-text-to-text
language:
- en
---

> [!IMPORTANT]
> NOTE: This model is not meant to be used alone; you need to either finetune it with this [notebook](https://github.com/qrsch/doubutsu/blob/main/notebooks/finetuning_base.ipynb) or use an existing adapter.

# doubutsu-2b-pt-378

`doubutsu` is a family of smol VLMs meant to be finetuned for your own use case.

Built by [@qtnx_](https://x.com/qtnx_) and [@yeswondwerr](https://x.com/yeswondwerr)

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

model_id = "qresearch/doubutsu-2b-pt-378"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    use_fast=True,
)

# Attach a finetuned LoRA adapter; the base model is not meant to be used bare.
model.load_adapter("qresearch/doubutsu-2b-lora-378-docci")

image = Image.open("IMAGE")

print(
    model.answer_question(
        image, "Describe the image", tokenizer, max_new_tokens=128, temperature=0.1
    ),
)
```

> [!TIP]
> These models work best at low sampling temperatures. We recommend a temperature of 0.1-0.3.

## Evals

TBD

## Acknowledgements

- Liu et al.: [LLaVA](https://arxiv.org/abs/2304.08485)
- Moon et al.: [AnyMAL](https://arxiv.org/abs/2309.16058)
- vikhyatk: moondream codebase
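As a side note on the low-temperature tip above: temperature rescales the model's logits before softmax sampling, so lower values concentrate probability mass on the highest-scoring token. This is a minimal, self-contained sketch of that effect using toy logits (`softmax_with_temperature` is an illustrative helper, not part of this model's API):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before the softmax; lower
    # temperatures sharpen the distribution toward the top token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
for t in (1.0, 0.2):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

At `temperature=0.2` nearly all the probability mass lands on the top token, which is why low settings like 0.1-0.3 keep generations focused.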