FantasiaFoundry committed
Commit e086b09
Parent(s): fc46d14
Update README.md
README.md CHANGED
```diff
@@ -1,7 +1,9 @@
 ---
 license: cc-by-nc-4.0
 ---
-Simple python script to generate various GGUF-Imatrix quantizations from a Hugging Face `author/model` input, for Windows.
+Simple python script to generate various GGUF-Imatrix quantizations from a Hugging Face `author/model` input, for Windows and NVIDIA hardware.
+
+This is set up for a Windows machine with 8GB of VRAM. If you want to change the `-ngl` (number of GPU layers) amount, you can do so at **line 120**. This is only relevant during the `--imatrix` data generation. If you don't have enough VRAM, you can decrease the `-ngl` amount, or set it to 0 to use only your system RAM for all layers instead.
 
 Your `imatrix.txt` is expected to be located inside the `imatrix` folder. The included file is considered a good option; [this discussion](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) is where it came from.
 
```
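For readers wondering what the `-ngl` knob controls in practice, here is a minimal, hypothetical sketch of the imatrix-generation step such a script might run, assuming it shells out to llama.cpp's `imatrix` tool. The paths, variable names, and layer count are illustrative placeholders, not taken from the actual script.

```python
import subprocess

# Number of model layers to offload to the GPU during imatrix generation.
# Tuned here for ~8GB of VRAM; lower it, or set it to 0 to keep every
# layer in system RAM, if you run out of VRAM. (Illustrative value.)
NGL = 8

# Hypothetical paths; the real script derives these from the
# `author/model` input.
model_path = "models/model-f16.gguf"
calibration_file = "imatrix/imatrix.txt"
imatrix_output = "models/imatrix.dat"

# llama.cpp's imatrix tool: -m model, -f calibration data, -o output,
# -ngl number of GPU layers to offload.
subprocess.run(
    ["imatrix", "-m", model_path, "-f", calibration_file,
     "-o", imatrix_output, "-ngl", str(NGL)],
    check=True,
)
```

The only GPU-dependent knob in this step is `-ngl`; with it set to 0, the same command runs entirely on the CPU and system RAM, just more slowly.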