
Llamacpp Quantizations of Hermes-2-Pro-Mistral-10.7B

Using llama.cpp release b2536 for quantization.

Original model: https://huggingface.co/Joseph717171/Hermes-2-Pro-Mistral-10.7B

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| Hermes-2-Pro-Mistral-10.7B-Q8_0.gguf | Q8_0 | 11.40GB | Extremely high quality, generally unneeded but max available quant. |
| Hermes-2-Pro-Mistral-10.7B-Q6_K.gguf | Q6_K | 8.80GB | Very high quality, near perfect, recommended. |
| Hermes-2-Pro-Mistral-10.7B-Q5_K_M.gguf | Q5_K_M | 7.59GB | High quality, very usable. |
| Hermes-2-Pro-Mistral-10.7B-Q5_K_S.gguf | Q5_K_S | 7.39GB | High quality, very usable. |
| Hermes-2-Pro-Mistral-10.7B-Q5_0.gguf | Q5_0 | 7.39GB | High quality, older format, generally not recommended. |
| Hermes-2-Pro-Mistral-10.7B-Q4_K_M.gguf | Q4_K_M | 6.46GB | Good quality, uses about 4.83 bits per weight. |
| Hermes-2-Pro-Mistral-10.7B-Q4_K_S.gguf | Q4_K_S | 6.11GB | Slightly lower quality with small space savings. |
| Hermes-2-Pro-Mistral-10.7B-IQ4_NL.gguf | IQ4_NL | 6.14GB | Decent quality, similar to Q4_K_S, new method of quantization. |
| Hermes-2-Pro-Mistral-10.7B-IQ4_XS.gguf | IQ4_XS | 5.82GB | Decent quality, new method with similar performance to Q4. |
| Hermes-2-Pro-Mistral-10.7B-Q4_0.gguf | Q4_0 | 6.07GB | Decent quality, older format, generally not recommended. |
| Hermes-2-Pro-Mistral-10.7B-Q3_K_L.gguf | Q3_K_L | 5.65GB | Lower quality but usable, good for low RAM availability. |
| Hermes-2-Pro-Mistral-10.7B-Q3_K_M.gguf | Q3_K_M | 5.19GB | Even lower quality. |
| Hermes-2-Pro-Mistral-10.7B-IQ3_M.gguf | IQ3_M | 4.84GB | Medium-low quality, new method with decent performance. |
| Hermes-2-Pro-Mistral-10.7B-IQ3_S.gguf | IQ3_S | 4.69GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| Hermes-2-Pro-Mistral-10.7B-Q3_K_S.gguf | Q3_K_S | 4.66GB | Low quality, not recommended. |
| Hermes-2-Pro-Mistral-10.7B-Q2_K.gguf | Q2_K | 4.00GB | Extremely low quality, not recommended. |
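
If you want to fetch a single quant programmatically rather than through the web UI, here is a minimal sketch using the `huggingface_hub` and `llama-cpp-python` packages. Neither package, the Q4_K_M choice, nor the context size and ChatML chat format below are prescribed by this repo; they are illustrative assumptions.

```python
# Minimal sketch: download one quant file (not the whole repo) and run it.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download just the Q4_K_M file (~6.46GB) from this repo.
model_path = hf_hub_download(
    repo_id="bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF",
    filename="Hermes-2-Pro-Mistral-10.7B-Q4_K_M.gguf",
)

# Load the GGUF; Hermes 2 Pro models use the ChatML prompt format.
llm = Llama(model_path=model_path, n_ctx=4096, chat_format="chatml")

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain GGUF quantization."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Any other filename from the table above works the same way; a reasonable rule of thumb is to pick the largest quant that fits in your available RAM/VRAM.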

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski


