Using llama.cpp release b3982 for quantization.
Original model: knifeayumu/Cydonia-v1.2-Magnum-v4-22B
| Filename | Quant type | File Size |
|---|---|---|
| Cydonia-v1.2-Magnum-v4-22B-F16.gguf | F16 | 44.5 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q8_0.gguf | Q8_0 | 23.6 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q6_K.gguf | Q6_K | 18.3 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q5_K_M.gguf | Q5_K_M | 15.7 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q5_K_S.gguf | Q5_K_S | 15.3 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q4_K_M.gguf | Q4_K_M | 13.3 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q4_K_S.gguf | Q4_K_S | 12.7 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q3_K_L.gguf | Q3_K_L | 11.7 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q3_K_M.gguf | Q3_K_M | 10.8 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q3_K_S.gguf | Q3_K_S | 9.64 GB |
| Cydonia-v1.2-Magnum-v4-22B-Q2_K.gguf | Q2_K | 8.27 GB |
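If you just want to try one of these quants, the file can be pulled straight from the Hub and run with llama-cpp-python. A minimal sketch, assuming the quant files live in a repo named knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF (that repo id is an assumption; adjust it and the filename to whichever quant you want):

```python
# Sketch: download one quant from the table above and load it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF",  # assumed repo id
    filename="Cydonia-v1.2-Magnum-v4-22B-Q4_K_M.gguf",
)

# Context size here is just a placeholder; raise it if you have the memory.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```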
Recipe based on MarsupialAI/Monstral-123B. It should work here too, since it's the same Mistral, TheDrummer, and MarsupialAI combination, right?
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SLERP merge method, with TheDrummer/Cydonia-22B-v1.2 as the base.
The following models were included in the merge:
* anthracite-org/magnum-v4-22b
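Conceptually, SLERP interpolates each pair of matching weight tensors along an arc rather than a straight line, so the blend roughly preserves the magnitude of the parent weights. A rough numpy sketch of the formula (not mergekit's actual implementation, which also handles per-layer blend factors and edge cases):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a, b = v0.ravel(), v1.ravel()
    # Angle between the two tensors, treated as flat vectors.
    cos_omega = np.dot(a, b) / max(np.linalg.norm(a) * np.linalg.norm(b), eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```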
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TheDrummer/Cydonia-22B-v1.2
  - model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.2
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
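The list under `t` is a gradient rather than a single blend ratio: mergekit interpolates these anchor values across the layer stack, so the outer layers stay close to Cydonia (t ≈ 0.1) while the middle layers lean hardest toward magnum-v4 (t ≈ 0.6). A small sketch of how such a schedule could expand, assuming linear interpolation and a 56-layer stack (the layer count is only illustrative):

```python
import numpy as np

anchors = [0.1, 0.3, 0.6, 0.3, 0.1]  # the t gradient from the config above
num_layers = 56                       # assumed layer count, for illustration only

# Spread the anchor points evenly over the depth of the model and interpolate.
anchor_positions = np.linspace(0.0, 1.0, num=len(anchors))
layer_positions = np.linspace(0.0, 1.0, num=num_layers)
per_layer_t = np.interp(layer_positions, anchor_positions, anchors)

for i, t in enumerate(per_layer_t):
    print(f"layer {i:02d}: t = {t:.2f}")  # 0.0 = pure Cydonia, 1.0 = pure magnum-v4
```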