Update README.md
README.md CHANGED
@@ -16,13 +16,13 @@ I have also made these other Koala models available:
 
 ## Quantization method
 
-This GPTQ model was quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) with the following
+This GPTQ model was quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) with the following commands:
 ```
 python3 llama.py /content/koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-13B-4bit-128g.pt
 python3 llama.py /content/koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors /content/koala-13B-4bit-128g.safetensors
 ```
 
-I
+I used the latest Triton branch of `GPTQ-for-LLaMa`, but the quantized files can also be loaded with the CUDA branch.
 
 ## Provided files
 
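
For readers skimming the diff, here is the second quantization command again with its flags annotated. The descriptions are my own hedged summary of GPTQ-for-LLaMa's options, not text from the README, and should be checked against the branch you actually use:

```
# Annotated version of the second command above (summary only; verify the
# option semantics against your GPTQ-for-LLaMa checkout):
#   /content/koala-13B-HF    path to the fp16 Koala 13B checkpoint in HF format
#   c4                       calibration dataset used during quantization
#   --wbits 4                quantize weights to 4 bits
#   --true-sequential        quantize layers strictly in sequence (accuracy option)
#   --act-order              quantize columns in activation order (accuracy option)
#   --groupsize 128          one set of quantization parameters per group of 128 weights
#   --save_safetensors ...   write the quantized model as a .safetensors file
python3 llama.py /content/koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors /content/koala-13B-4bit-128g.safetensors
```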
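
Since the note about the Triton and CUDA branches concerns inference, here is a minimal loading sketch as well. It assumes GPTQ-for-LLaMa's `llama_inference.py` script and these flag names exist in the branch you check out; it is an illustration, not part of the original README:

```
# Hypothetical inference call (assumes llama_inference.py and these flags
# exist in your GPTQ-for-LLaMa checkout). --wbits and --groupsize must match
# the values used at quantization time, i.e. 4 and 128 here; the HF path is
# still needed for the config and tokenizer, while the weights come from
# the quantized .safetensors file.
python3 llama_inference.py /content/koala-13B-HF \
    --wbits 4 --groupsize 128 \
    --load /content/koala-13B-4bit-128g.safetensors \
    --text "Tell me about koalas."
```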