TheBloke committed on
Commit 90440af
1 Parent(s): f895fc5

Upload README.md

Files changed (1)
  1. README.md +16 -17
README.md CHANGED
@@ -63,18 +63,18 @@ This repo contains GGUF format model files for [Eric Hartford's Dolphin 2.5 Mixt
 
 GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
- Here is an incomplete list of clients and libraries that are known to support GGUF:
-
- * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
- * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
- * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
- * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
- * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
+ ### Mixtral GGUF
+
+ Support for Mixtral was merged into llama.cpp on December 13th.
+
+ These Mixtral GGUFs are known to work in:
+
+ * llama.cpp as of December 13th
+ * KoboldCpp 1.52 and later
+ * LM Studio 0.2.9 and later
+ * llama-cpp-python 0.2.23 and later
+
+ Other clients/libraries not listed above may not yet work.
 
 <!-- README_GGUF.md-about-gguf end -->
 <!-- repositories-available start -->
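
Several of the minimum versions above are easy to verify from Python. A minimal sketch, assuming llama-cpp-python is installed and that the package exposes `__version__`:

```python
# Minimal sketch: confirm the installed llama-cpp-python meets the
# 0.2.23 minimum listed above (assumes the package exposes __version__).
import llama_cpp

print(llama_cpp.__version__)  # Mixtral GGUFs need 0.2.23 or later
```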
@@ -103,9 +103,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- compatibility_gguf start -->
 ## Compatibility
 
- These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
-
- They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
+ These Mixtral GGUFs are compatible with llama.cpp from December 13th onwards. Other clients/libraries may not work yet.
 
 ## Explanation of quantisation methods
 
@@ -221,11 +219,13 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
 
 ## How to run in `text-generation-webui`
 
+ Note that text-generation-webui may not yet be compatible with Mixtral GGUFs. Please check compatibility first.
+
 Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
 
 ## How to run from Python code
 
- You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
+ You can use GGUF models from Python using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) version 0.2.23 and later.
 
 ### How to load this model in Python code, using llama-cpp-python
 
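
As a concrete illustration of the llama-cpp-python route above, here is a minimal sketch; the model filename is illustrative, and the prompt uses the ChatML format that Dolphin models expect:

```python
from llama_cpp import Llama  # requires llama-cpp-python 0.2.23+ for Mixtral

# Model path is illustrative; substitute whichever quant file you downloaded.
llm = Llama(
    model_path="./dolphin-2.5-mixtral-8x7b.Q4_K_M.gguf",
    n_ctx=2048,       # context window
    n_gpu_layers=35,  # set to 0 for CPU-only inference
)

# Dolphin models use the ChatML prompt format.
output = llm(
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about llamas.<|im_end|>\n"
    "<|im_start|>assistant\n",
    max_tokens=128,
    stop=["<|im_end|>"],
)
print(output["choices"][0]["text"])
```

Adjust `n_gpu_layers` to your hardware; Mixtral is large, so partial GPU offload is common.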
@@ -294,7 +294,6 @@ llm.create_chat_completion(
 Here are guides on using llama-cpp-python and ctransformers with LangChain:
 
 * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
- * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
 
 <!-- README_GGUF.md-how-to-run end -->
 
 
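
For the LangChain guide linked above, here is a minimal sketch using the `LlamaCpp` wrapper; the import path reflects LangChain as of late 2023, and the model path is illustrative:

```python
from langchain.llms import LlamaCpp  # import path as of late 2023

# Model path is illustrative; point it at your downloaded GGUF file.
llm = LlamaCpp(
    model_path="./dolphin-2.5-mixtral-8x7b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=35,
    max_tokens=128,
    temperature=0.7,
)

print(llm("Q: What family of animals do llamas belong to? A:"))
```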