Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ This is a 4-bit GGML version of the [Chansung GPT4 Alpaca 30B LoRA model](https:
 
 It was created by merging the LoRA provided in the above repo with the original Llama 30B model, producing unquantised model [GPT4-Alpaca-LoRA-30B-HF](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30b-HF)
 
-The files in this repo were then quantized to 4bit for use with [llama.cpp](https://github.com/ggerganov/llama.cpp)
+The files in this repo were then quantized to 4bit and 5bit for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
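
As a quick illustration (not part of the README above) of how quantized GGML files like these are commonly loaded, here is a minimal sketch using the llama-cpp-python bindings. The local filename and the Alpaca-style prompt template are assumptions for illustration only; substitute whichever 4-bit or 5-bit file you download from this repo.

```python
# Minimal sketch: load a quantized GGML file with llama-cpp-python.
# The model_path below is hypothetical -- point it at the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./gpt4-alpaca-lora-30b.ggml.q4_0.bin",  # hypothetical local path
    n_ctx=2048,  # context window size
)

# Alpaca-style prompt format (an assumption for this model family).
output = llm(
    "### Instruction:\nWrite a haiku about llamas.\n\n### Response:\n",
    max_tokens=128,
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```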