Llama-3.2-Kapusta-JapanChibi-3B-v1 GGUF Quantizations 🎲
Please listen: I am small and useful!
I love this model; I don't understand Japanese myself, but it also performs well in other languages.
This model was converted to GGUF format using llama.cpp.
For more information about the model, see the original model card: Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1.
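If you want to script the download, the files resolve to predictable Hub URLs. A minimal sketch in Python; the repository id below is an assumption inferred from the model name, so verify it on the Hub first:

```python
# Build a direct-download URL for one of the GGUF files.
# NOTE: repo_id is an assumption inferred from the model name; check the Hub.
repo_id = "Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-GGUF"
filename = "Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf"

# The Hugging Face Hub serves raw files under /resolve/<revision>/.
url = f"https://huggingface.co/{repo_id}/resolve/main/{filename}"
print(url)
```

Once downloaded, the file can be loaded with any GGUF-aware runtime, for example `llama-cli -m <file> -p "..."` from a llama.cpp build.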
Available Quantizations (◕‿◕)
| Type | Quantized GGUF Model | Size |
|---|---|---|
| Q4_0 | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf | 1.99 GiB |
| Q6_K | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q6_K.gguf | 2.76 GiB |
| Q8_0 | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q8_0.gguf | 3.57 GiB |
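The sizes above line up with llama.cpp's quantization block layouts: Q4_0 stores 32 weights in 18 bytes (4.5 bits/weight), Q6_K stores 256 weights in 210 bytes (6.5625 bits/weight), and Q8_0 stores 32 weights in 34 bytes (8.5 bits/weight). A quick sanity check, assuming roughly 3.21 B parameters for a Llama-3.2-3B-class model (an assumption):

```python
# Estimate GGUF file sizes from llama.cpp's quantization block layouts.
# The parameter count is an assumption for a Llama-3.2-3B-class model.
N_PARAMS = 3.21e9

# bits per weight = (bytes per block * 8) / weights per block
BITS_PER_WEIGHT = {
    "Q4_0": 18 * 8 / 32,    # 4.5
    "Q6_K": 210 * 8 / 256,  # 6.5625
    "Q8_0": 34 * 8 / 32,    # 8.5
}

for quant, bpw in BITS_PER_WEIGHT.items():
    gib = N_PARAMS * bpw / 8 / 2**30
    print(f"{quant}: ~{gib:.2f} GiB")
```

This estimate comes out somewhat below the listed file sizes, which is expected: some tensors (such as embeddings) are typically stored at higher precision than the file's nominal quantization type.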
My thanks to the authors of the original models; your work is incredible. Have a good time 🤗