
A llamafile generated for moondream2

Big thanks to @jartine and @vikhyat for their respective work on llamafile and moondream.

How to Run (on macOS and Linux)

  1. Download moondream2.llamafile
  2. `chmod +x moondream2.llamafile` - make it executable
  3. `./moondream2.llamafile` - run the llama.cpp server (a sketch of querying it follows below)
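
Once the server is running, you can query it over HTTP. Below is a minimal sketch in Python, assuming the defaults: the bundled llama.cpp server listening on http://localhost:8080 and its LLaVA-style `/completion` endpoint accepting base64-encoded `image_data`. The image path, prompt template, and generation settings are illustrative assumptions, not part of this card.

```python
# Sketch: ask the running moondream2 llamafile server to describe an image.
# Assumes the default llama.cpp server address and its /completion endpoint.
import base64
import json
import urllib.request

with open("example.jpg", "rb") as f:  # hypothetical image path
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    # "[img-10]" is the llama.cpp server placeholder for the image with id 10 below
    "prompt": "[img-10]\n\nQuestion: Describe this image.\n\nAnswer:",
    "image_data": [{"data": img_b64, "id": 10}],
    "n_predict": 256,
    "temperature": 0.1,
}

req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```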

Versions

  1. Q5_M
  2. Q8_0

From my brief testing, the Q8_0 quant is noticeably better.

ORIGINAL MODEL CARD

moondream2 is a small vision language model designed to run efficiently on edge devices. Check out the GitHub repository for details, or try it out on the Hugging Face Space!

Benchmarks

| Release | VQAv2 | GQA | TextVQA | TallyQA (simple) | TallyQA (full) |
|---|---|---|---|---|---|
| 2024-03-04 | 74.2 | 58.5 | 36.4 | - | - |
| 2024-03-06 | 75.4 | 59.8 | 43.1 | 79.5 | 73.2 |
| 2024-03-13 | 76.8 | 60.6 | 46.4 | 79.6 | 73.3 |
| 2024-04-02 (latest) | 77.7 | 61.7 | 49.7 | 80.1 | 74.2 |

Usage

pip install transformers einops

from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

# Pin to a specific release; the remote code is updated regularly
model_id = "vikhyatk/moondream2"
revision = "2024-04-02"
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, revision=revision
)
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)

# Encode the image once, then ask questions about it
image = Image.open('<IMAGE_PATH>')
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "Describe this image.", tokenizer))

The model is updated regularly, so we recommend pinning the model version to a specific release as shown above.
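
The releases in the benchmark table above correspond to revision tags on the Hub. Below is a minimal sketch for listing the tags you can pin `revision` to; it assumes the `huggingface_hub` client is installed, which is not mentioned in the original card.

```python
# Sketch: list the revision tags available for vikhyatk/moondream2
# so you can pin `revision=` to a specific release.
# Requires: pip install huggingface_hub
from huggingface_hub import list_repo_refs

refs = list_repo_refs("vikhyatk/moondream2")
for tag in refs.tags:
    print(tag.name)  # e.g. "2024-04-02"
```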
