---
tags:
- llava
- lmm
- ggml
- llama.cpp
---
# ggml_llava-v1.5-13b
This repo contains GGUF files for running inference on [llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end, without any extra dependency.
**Note**: The `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.
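
For reference, a typical invocation of the llava example shipped with llama.cpp looks roughly like the sketch below. The binary name (`llava-cli`), flag names, and the quantized model filename are assumptions based on the upstream llava example and may differ in current llama.cpp builds, so check the llama.cpp documentation for the exact usage.

```bash
# Minimal sketch: run LLaVA inference with llama.cpp using the GGUF files
# from this repo. Binary name, flags, and the quantized model filename are
# assumptions and may have changed upstream; the image path is a placeholder.
./llava-cli \
    -m ggml-model-q4_k.gguf \
    --mmproj mmproj-model-f16.gguf \
    --image ./some-image.jpg \
    -p "Describe this image in detail."
```

The language model and the `mmproj` projector are loaded as two separate GGUF files, which is why both `-m` and `--mmproj` are passed.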