Update README.md
README.md CHANGED
@@ -22,8 +22,8 @@ The files in this repo were then quantized to 4bit for use with [llama.cpp](http
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | `gpt4-alpaca-lora-30B.GGML.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | Maximum compatibility |
-| `gpt4-alpaca-lora-30B.GGML.q4_2.bin` | q4_2 | 4bit |
-| `gpt4-alpaca-lora-30B.GGML.q4_3.bin` | q4_3 | 4bit |
+| `gpt4-alpaca-lora-30B.GGML.q4_2.bin` | q4_2 | 4bit | 20.3GB | 23GB | Best compromise between resources, speed and quality |
+| `gpt4-alpaca-lora-30B.GGML.q4_3.bin` | q4_3 | 4bit | 24.4GB | 28GB | Maximum quality 4bit, higher RAM requirements and slower inference |
 | `gpt4-alpaca-lora-30B.GGML.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | Brand new 5bit method. Potentially higher quality than 4bit, at cost of slightly higher resources. |
 | `gpt4-alpaca-lora-30B.GGML.q5_1.bin` | q5_1 | 5bit | 24.4GB | 28GB | Brand new 5bit method. Slightly higher resource usage than q5_0. |
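For context on how one of these files would be used: a minimal sketch, not part of this commit, assuming a llama.cpp build from around the time of this change (the `main` example binary); the thread count, context size, token count, and prompt below are placeholder values, and exact flags can vary between llama.cpp versions.

```sh
# Hypothetical invocation; adjust -t to your physical core count.
# -m: model file, -c: context size, -n: tokens to generate, -p: prompt.
./main -t 8 -m gpt4-alpaca-lora-30B.GGML.q4_2.bin --color -c 2048 -n 256 \
  -p "Write a story about llamas"
```

The q4_2 file is used here because the table flags it as the best compromise between resources, speed and quality; switching to the q5_0 or q5_1 file only changes the `-m` argument, at the cost of the higher RAM listed in the table.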