jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF

This model was converted to GGUF format from the original SummLlama3.2-3B model using llama.cpp via the Convert Model to GGUF space.

Key Features:

  • Quantized for reduced file size (GGUF format)
  • Optimized for use with llama.cpp
  • Compatible with llama-server for efficient serving

Refer to the original model card for more details on the base model.

Usage with llama.cpp

1. Install llama.cpp:

brew install llama.cpp  # For macOS/Linux
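
If Homebrew is not available, llama.cpp can also be built from source (a minimal sketch; see the llama.cpp repository for platform-specific build options):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release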

2. Run Inference:

CLI:

llama-cli --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf -p "Your prompt here"
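
For example, to generate a summary with a bounded response length (a sketch; <text> is a placeholder for your input document, and -n caps the number of generated tokens):

llama-cli --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf -p "Summarize the following text: <text>" -n 256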

Server:

llama-server --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf -c 2048
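
Once the server is running, requests can be sent to its OpenAI-compatible chat endpoint (a minimal sketch assuming the default port 8080; <text> is a placeholder for your input document):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Summarize the following text: <text>"}],
    "max_tokens": 256
  }'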

For more advanced usage, refer to the llama.cpp repository.
