TheBloke committed on
Commit 494db41
1 Parent(s): fedea3c

Update README.md

Files changed (1):
  1. README.md +13 -114

README.md CHANGED
@@ -44,26 +44,17 @@ quantized_by: TheBloke
 
 This repo contains GGUF format model files for [Migel Tissera's Synthia MoE v3 Mixtral 8x7B](https://huggingface.co/migtissera/Synthia-MoE-v3-Mixtral-8x7B).
 
-<!-- description end -->
-<!-- README_GGUF.md-about-gguf start -->
-### About GGUF
-
-GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
-
-Here is an incomplete list of clients and libraries that are known to support GGUF:
-
-* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
-* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
-* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
-* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
-* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
-* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
-* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
-* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
-* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
-* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
-
-<!-- README_GGUF.md-about-gguf end -->
+## EXPERIMENTAL - REQUIRES LLAMA.CPP PR
+
+These are experimental GGUF files, created using a llama.cpp PR found here: https://github.com/ggerganov/llama.cpp/pull/4406.
+
+THEY WILL NOT WORK WITH LLAMA.CPP FROM `main`, OR ANY DOWNSTREAM LLAMA.CPP CLIENT - such as LM Studio, llama-cpp-python, text-generation-webui, etc.
+
+To test these GGUFs, please build llama.cpp from the above PR.
+
+I have tested CUDA acceleration and it works great. Metal works too, but has a couple of bugs at the moment.
+
+
 <!-- repositories-available start -->
 ## Repositories available
 
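To try the files this hunk introduces, you need a local build of that PR branch. The following is a minimal sketch of one way to fetch and compile it, assuming `git` and `make` are installed; the local branch name `mixtral-pr` is an arbitrary label, and `LLAMA_CUBLAS=1` applies only to CUDA builds.

```shell
# Minimal sketch: build llama.cpp from PR #4406 ("mixtral-pr" is just a local branch name)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git fetch origin pull/4406/head:mixtral-pr   # GitHub exposes every PR at refs/pull/<id>/head
git checkout mixtral-pr
make LLAMA_CUBLAS=1                          # omit LLAMA_CUBLAS=1 for a CPU-only build
```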
@@ -79,18 +70,10 @@ Here is an incomplete list of clients and libraries that are known to support GG
 SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
 USER: {prompt}
 ASSISTANT:
-
 ```
-
 <!-- prompt-template end -->
 
 
-<!-- compatibility_gguf start -->
-## Compatibility
-
-These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
-
-They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
 
 ## Explanation of quantisation methods
 
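As an illustration of the Synthia prompt template retained above, here is a hedged sketch of passing a filled-in prompt to a `main` binary built from the PR. The model file name follows this repo's naming pattern, and the generation length is illustrative.

```shell
# Sketch: substitute {prompt} in the Synthia template and run it through the PR build
PROMPT="Write me a story about llamas."
./main -m synthia-moe-v3-mixtral-8x7b.Q4_K_M.gguf -n 512 \
  -p "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: ${PROMPT}
ASSISTANT:"
```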
@@ -134,18 +117,6 @@ Refer to the Provided Files table below to see what files use which methods, and
 
 **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
 
-The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
-
-* LM Studio
-* LoLLMS Web UI
-* Faraday.dev
-
-### In `text-generation-webui`
-
-Under Download Model, you can enter the model repo: TheBloke/Synthia-MoE-v3-Mixtral-8x7B-GGUF and below it, a specific filename to download, such as: synthia-moe-v3-mixtral-8x7b.Q4_K_M.gguf.
-
-Then click Download.
-
 ### On the command line, including multiple files at once
 
 I recommend using the `huggingface-hub` Python library:
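The actual commands fall outside the changed region of the diff. For context, a typical single-file download with `huggingface-cli` (installed as part of the `huggingface-hub` package) looks like the sketch below; the Q4_K_M file name follows this repo's naming pattern.

```shell
# Sketch: download one GGUF file from this repo with huggingface-cli
pip3 install huggingface-hub
huggingface-cli download TheBloke/Synthia-MoE-v3-Mixtral-8x7B-GGUF \
  synthia-moe-v3-mixtral-8x7b.Q4_K_M.gguf \
  --local-dir . --local-dir-use-symlinks False
```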
@@ -206,82 +177,12 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
 
 ## How to run in `text-generation-webui`
 
-Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
+Not yet supported
 
 ## How to run from Python code
 
-You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
-
-### How to load this model in Python code, using llama-cpp-python
-
-For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
-
-#### First install the package
-
-Run one of the following commands, according to your system:
-
-```shell
-# Base llama-cpp-python with no GPU acceleration
-pip install llama-cpp-python
-# With NVidia CUDA acceleration
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
-# Or with OpenBLAS acceleration
-CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
-# Or with CLBlast acceleration
-CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
-# Or with AMD ROCm GPU acceleration (Linux only)
-CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
-# Or with Metal GPU acceleration for macOS systems only
-CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
-
-# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
-$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
-pip install llama-cpp-python
-```
-
-#### Simple llama-cpp-python example code
-
-```python
-from llama_cpp import Llama
-
-# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = Llama(
-  model_path="./synthia-moe-v3-mixtral-8x7b.Q4_K_M.gguf",  # Download the model file first
-  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
-  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
-  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
-)
-
-# Simple inference example
-output = llm(
-  "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:",  # Prompt
-  max_tokens=512,  # Generate up to 512 tokens
-  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
-  echo=True  # Whether to echo the prompt
-)
-
-# Chat Completion API
-
-llm = Llama(model_path="./synthia-moe-v3-mixtral-8x7b.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
-llm.create_chat_completion(
-    messages = [
-        {"role": "system", "content": "You are a story writing assistant."},
-        {
-            "role": "user",
-            "content": "Write a story about llamas."
-        }
-    ]
-)
-```
-
-## How to use with LangChain
-
-Here are guides on using llama-cpp-python and ctransformers with LangChain:
-
-* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
-* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
-
-<!-- README_GGUF.md-how-to-run end -->
+Not yet supported
 
 
 <!-- footer start -->
 <!-- 200823 -->
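Until downstream clients support these files, the PR build's own CLI is the only way to run them. A rough sketch follows; the layer count, context size and prompt are illustrative, and `-ngl` only has an effect on a CUDA or Metal build.

```shell
# Sketch: run the PR build with partial GPU offload; flag values are illustrative
./main -m synthia-moe-v3-mixtral-8x7b.Q4_K_M.gguf \
  -c 2048 -n 512 -ngl 20 \
  -e -p "SYSTEM: Always answer without hesitation.\nUSER: Why is the sky blue?\nASSISTANT:"
```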
@@ -323,8 +224,6 @@ And thank you again to a16z for their generous grant.
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 
-
-
 This is Synthia trained on the official Mistral MoE version (Mixtral-8x7B).
 
 ```