| h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.24 GB | 48.74 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.92 GB | 47.42 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
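
As a rough illustration only - a minimal sketch of GPU offloading, assuming a GPU-enabled build of `llama.cpp` where `-ngl` sets the number of layers placed in VRAM (the layer count and prompt here are placeholders, not a recommendation):

```
# Offload 40 layers to the GPU; the remaining layers stay in system RAM.
./main -m h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_K_M.bin -ngl 40 -p "Your prompt here"
```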

### q6_K and q8_0 files require expansion from archive

**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed; they simply store the .bin file in two parts.
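
For context, a split, uncompressed ZIP like this can be produced with Info-ZIP's split option - a sketch of the general technique, not necessarily the exact command used for these uploads:

```
# -0 stores without compression; -s 48g splits into ~48 GB parts,
# producing a .zip plus a .z01 that together hold the .bin.
zip -0 -s 48g h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.zip h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.bin
```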

### q6_K

Please download:

* `h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.zip`
* `h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.z01`

### q8_0

Please download:

* `h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.zip`
* `h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.z01`

Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:

```
sudo apt update -y && sudo apt install 7zip
7zz x h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.zip
```

Once the `.bin` is extracted you can delete the `.zip` and `.z01` files.
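
The q8_0 archive extracts the same way; for example (a sketch - run from the directory holding the downloaded parts):

```
7zz x h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.zip
rm h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.zip h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.z01
```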

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs: