These files are **experimental** GGML format model files for [Eric Hartford's WizardLM Uncensored Falcon 7B](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b).

These GGML files will **not** work in llama.cpp, text-generation-webui or KoboldCpp.

They can be used with:
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui).
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers) (a minimal Python sketch follows this list).
* A new fork of llama.cpp that introduced Falcon GGML support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp).
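
For the ctransformers route, here is a minimal sketch of loading one of these GGML files from Python. The file path and `gpu_layers` value are illustrative assumptions, not confirmed settings for this model; the prompt mirrors the `falcon_main` example further down:

```python
from ctransformers import AutoModelForCausalLM

# Load a local Falcon-architecture GGML file. model_type="falcon" selects
# ctransformers' Falcon GGML backend; gpu_layers > 0 offloads layers to the
# GPU, assuming ctransformers was installed with CUDA support.
llm = AutoModelForCausalLM.from_pretrained(
    "/path/to/wizardlm-uncensored-falcon-7b.ggmlv3.q4_0.bin",  # hypothetical file name
    model_type="falcon",
    gpu_layers=50,
)

# The model object is callable and returns the generated text.
print(llm("What is a falcon?\n### Response:", max_new_tokens=128))
```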
Note: It is not currently possible to use the new k-quant formats with Falcon 7B. This is being worked on.

<!-- compatibility_ggml start -->
## Compatibility
The recommended UI for these GGMLs is [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). Preliminary CUDA GPU acceleration is provided.

For use from Python code, use [ctransformers](https://github.com/marella/ctransformers), again with preliminary CUDA GPU acceleration.
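
Since ctransformers includes LangChain support, these files can also be wrapped as a LangChain LLM. A minimal sketch, assuming LangChain is installed and reusing the same hypothetical file path as the earlier example:

```python
from langchain.llms import CTransformers

# LangChain's CTransformers wrapper forwards generation settings such as
# max_new_tokens and gpu_layers to ctransformers via the `config` dict.
llm = CTransformers(
    model="/path/to/wizardlm-uncensored-falcon-7b.ggmlv3.q4_0.bin",  # hypothetical file name
    model_type="falcon",
    config={"max_new_tokens": 128, "gpu_layers": 50},
)

print(llm("What is a falcon?\n### Response:"))
```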
Or to build cmp-nct's fork of llama.cpp with Falcon 7B support plus preliminary CUDA acceleration, please try the following steps:
```
git clone https://github.com/cmp-nct/ggllm.cpp
...
```

Compiling on Windows: developer cmp-nct notes: 'I personally compile it using VS…'

Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example:
```
bin/falcon_main -t 8 -ngl 100 -b 1 -m falcon7b-instruct.ggmlv3.q4_0.bin -p "What is a falcon?\n### Response:"
```
You can specify `-ngl 100` regardless of your VRAM, as it will automatically detect how much VRAM is available to be used.