
GGUF quantisation of https://huggingface.co/Lin-Chen/ShareGPT4V-7B

You can run the model with the llama.cpp server and then open the server's web page in a browser to upload an image. Example (Windows):

```
.\server.exe -m ".\models\ShareGPT4V-7B_Q5_K_M.gguf" -t 6 -c 4096 -ngl 26 --mmproj ".\models\mmproj-model-f16.gguf"
```
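If you prefer a programmatic workflow, the same GGUF files can also be loaded with llama-cpp-python. The sketch below is an untested example that assumes its LLaVA-1.5 chat handler is compatible with ShareGPT4V's mmproj; the image path and generation settings are placeholders.

```python
# Minimal sketch, assuming llama-cpp-python's LLaVA-1.5 chat handler works
# with ShareGPT4V's projector. Paths and settings are placeholders.
import base64

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler


def image_to_data_uri(path: str) -> str:
    """Encode a local image as a base64 data URI for the chat handler."""
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()


chat_handler = Llava15ChatHandler(clip_model_path="./models/mmproj-model-f16.gguf")
llm = Llama(
    model_path="./models/ShareGPT4V-7B_Q5_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,       # matches -c 4096 from the server example
    n_gpu_layers=26,  # matches -ngl 26
    n_threads=6,      # matches -t 6
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_to_data_uri("example.jpg")}},
                {"type": "text", "text": "Describe this image in detail."},
            ],
        },
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
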
Model details:

- Format: GGUF
- Model size: 6.74B params
- Architecture: llama
- Files: 5-bit (Q5_K_M) language model and 16-bit (F16) mmproj
