---
tags:
- gguf-my-repo
---
Ignore this repo and get your quants from mradermacher!

Imatrix: https://huggingface.co/mradermacher/Magnum-Picaro-0.7-v2-12b-i1-GGUF

Static: https://huggingface.co/mradermacher/Magnum-Picaro-0.7-v2-12b-GGUF

# Trappu/Magnum-Picaro-0.7-v2-12b-Q5_K_M-GGUF

This model was converted to GGUF format from [`Trappu/Magnum-Picaro-0.7-v2-12b`](https://huggingface.co/Trappu/Magnum-Picaro-0.7-v2-12b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Trappu/Magnum-Picaro-0.7-v2-12b) for more details on the model.
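If you do want to try this quant locally, a minimal sketch with llama.cpp's `llama-cli` looks like the following. The `--hf-file` name below is an assumption based on GGUF-my-repo's usual lowercase naming convention and may differ; check the repo's file list for the actual filename.

```shell
# Install llama.cpp (Homebrew shown; building from source also works)
brew install llama.cpp

# Stream the Q5_K_M quant directly from the Hugging Face repo.
# NOTE: the --hf-file value is assumed, not confirmed -- verify it against
# the "Files and versions" tab of this repo before running.
llama-cli --hf-repo Trappu/Magnum-Picaro-0.7-v2-12b-Q5_K_M-GGUF \
  --hf-file magnum-picaro-0.7-v2-12b-q5_k_m.gguf \
  -p "Once upon a time"
```

The same `--hf-repo`/`--hf-file` pair works with `llama-server` if you want an OpenAI-compatible HTTP endpoint instead of an interactive prompt.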