Update README.md
README.md CHANGED
@@ -42,15 +42,16 @@ quantized_by: TheBloke
This repo contains GGUF format model files for [Mistral AI_'s Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

-I have tested CUDA acceleration and it works great. I have not yet tested other forms of GPU acceleration.
+**MIXTRAL GGUF SUPPORT**
+
+Known to work in:
+
+* llama.cpp as of December 13th
+* KoboldCpp 1.52 and later
+* LM Studio 0.2.9 and later
+
+Support for Mixtral was merged into Llama.cpp on December 13th.
+
+Other clients/libraries, not listed above, may not yet work.

<!-- description end -->
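For context on the llama.cpp entry above, a minimal run of one of these GGUFs might look like the sketch below, assuming a llama.cpp build from December 13th or later. The file name, Q4_K_M quant, context size, and `-ngl` offload value are illustrative assumptions, not values taken from this commit.

```shell
# Sketch only: the file name and Q4_K_M quant are assumptions; adjust -c and
# -ngl (number of layers to offload to the GPU, e.g. via CUDA) to your hardware.
./main -m mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf \
  -c 2048 -ngl 35 --temp 0.7 -n -1 \
  -p "[INST] Write a short story about llamas. [/INST]"
```

If the model fails to load, the most likely cause is a llama.cpp build (or downstream client) that predates the Mixtral merge.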