Update README.md
README.md (CHANGED)
@@ -59,7 +59,6 @@ Here is an incomplate list of clients and libraries that are known to support GG
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
-* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

@@ -207,44 +206,11 @@ For other parameters and how to use them, please refer to [the llama.cpp documen

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

-## How to run from Python code
-
-You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
-
-### How to load this model in Python code, using ctransformers
-
-#### First install the package
-
-Run one of the following commands, according to your system:
-
-```shell
-# Base ctransformers with no GPU acceleration
-pip install ctransformers
-# Or with CUDA GPU acceleration
-pip install ctransformers[cuda]
-# Or with AMD ROCm GPU acceleration (Linux only)
-CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
-# Or with Metal GPU acceleration for macOS systems only
-CT_METAL=1 pip install ctransformers --no-binary ctransformers
-```
-
-#### Simple ctransformers example code
-
-```python
-from ctransformers import AutoModelForCausalLM
-
-# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/sqlcoder-GGUF", model_file="sqlcoder.Q4_K_M.gguf", model_type="starcoder", gpu_layers=50)
-
-print(llm("AI is going to"))
-```
-
## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
-* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

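With the ctransformers section removed, llama-cpp-python is the Python route that remains listed for GGUF models. As a rough sketch only (assuming the llama-cpp-python `Llama` API and a locally downloaded `sqlcoder.Q4_K_M.gguf` file; the `n_ctx` and `n_gpu_layers` values are illustrative), loading the model could look like:

```python
from llama_cpp import Llama

# Illustrative values: download sqlcoder.Q4_K_M.gguf first and point model_path at it.
# Set n_gpu_layers to the number of layers to offload to GPU, or 0 if no GPU acceleration is available.
llm = Llama(
    model_path="sqlcoder.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=50,
)

output = llm("AI is going to", max_tokens=64)
print(output["choices"][0]["text"])
```

Install the library with `pip install llama-cpp-python`; GPU-accelerated builds require extra build flags, which are documented in the llama-cpp-python repository.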
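For LangChain, the surviving guide link goes through llama-cpp-python. A minimal sketch, assuming the `LlamaCpp` wrapper from the `langchain-community` package and the same illustrative GGUF filename:

```python
from langchain_community.llms import LlamaCpp

# Illustrative values: point model_path at your downloaded GGUF file and
# set n_gpu_layers to 0 if no GPU acceleration is available.
llm = LlamaCpp(
    model_path="sqlcoder.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=50,
    temperature=0.1,
)

print(llm.invoke("AI is going to"))
```

Both `llama-cpp-python` and `langchain-community` need to be installed for this to run; the LangChain + llama-cpp-python guide linked above covers usage in more depth.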