
Jiayi-Pan/Tiny-Vicuna-1B-GGUF

Quantized GGUF model files for Tiny-Vicuna-1B from Jiayi-Pan

| Name | Quant method | Size |
|------|--------------|------|
| tiny-vicuna-1b.q2_k.gguf | q2_k | 482.14 MB |
| tiny-vicuna-1b.q3_k_m.gguf | q3_k_m | 549.85 MB |
| tiny-vicuna-1b.q4_k_m.gguf | q4_k_m | 667.81 MB |
| tiny-vicuna-1b.q5_k_m.gguf | q5_k_m | 782.04 MB |
| tiny-vicuna-1b.q6_k.gguf | q6_k | 903.41 MB |
| tiny-vicuna-1b.q8_0.gguf | q8_0 | 1.17 GB |
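
A minimal sketch of running one of the files above with llama-cpp-python. The package choice, the `huggingface_hub` download step, and the prompt wording are assumptions, not part of this card; any of the quantizations listed can be swapped in for the q4_k_m file.

```python
# Minimal sketch: download a quantized file from this repo and run it with
# llama-cpp-python. Library choice and prompt wording are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# q4_k_m is a common middle ground between file size and output quality.
model_path = hf_hub_download(
    repo_id="afrideva/Tiny-Vicuna-1B-GGUF",
    filename="tiny-vicuna-1b.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

output = llm(
    "USER: What is GGUF quantization?\nASSISTANT:",
    max_tokens=128,
    stop=["USER:"],
)
print(output["choices"][0]["text"])
```

Lower-bit files trade output quality for a smaller download and memory footprint; the q2_k and q3_k_m variants fit tighter budgets, while q8_0 stays closest to the original weights.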

Original Model Card:

Tiny Vicuna 1B

TinyLlama 1.1B fine-tuned on the WizardVicuna dataset. Easy to iterate on for early experiments!
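
The card does not specify a prompt template. Models fine-tuned on WizardVicuna-style data commonly use the Vicuna `USER:`/`ASSISTANT:` format, so a prompt built along these lines is a reasonable starting point. The helper below is hypothetical and the template is an assumption, not something stated in the original card.

```python
# Hypothetical helper: build a Vicuna-style USER:/ASSISTANT: prompt.
# The template itself is an assumption; the original card does not state one.
def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    parts = []
    for user, assistant in history:
        parts.append(f"USER: {user}\nASSISTANT: {assistant}")
    parts.append(f"USER: {user_message}\nASSISTANT:")
    return "\n".join(parts)


if __name__ == "__main__":
    print(build_prompt([("Hi!", "Hello! How can I help?")], "Summarize GGUF in one line."))
```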

Model size: 1.1B params
Architecture: llama


