Update README.md
README.md
4-bit GPTQ quantization of https://huggingface.co/KoboldAI/OPT-13B-Erebus

Using this fork of GPTQ: https://github.com/0cc4m/GPTQ-for-LLaMa

python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.pt
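For reference, `--wbits 4 --groupsize 128` quantizes each weight row in groups of 128 values to 4-bit integers with a per-group scale and zero point, and `c4` selects the C4 dataset for calibration. The sketch below only illustrates that group-wise 4-bit storage format using plain round-to-nearest; the actual GPTQ algorithm in the fork linked above chooses the quantized values with error compensation, so treat this as an illustration rather than the implementation.

```python
# Illustration only (not part of this repo): what "--wbits 4 --groupsize 128"
# means as a storage format. GPTQ itself selects the quantized values with
# error compensation against calibration data; this uses round-to-nearest.
import torch

def quantize_groupwise_4bit(weight: torch.Tensor, group_size: int = 128):
    """Quantize each row of `weight` in groups of `group_size` columns to
    4-bit integers (0..15) with a per-group scale and zero point."""
    rows, cols = weight.shape
    w = weight.reshape(rows, cols // group_size, group_size)
    w_min = w.amin(dim=-1, keepdim=True)
    w_max = w.amax(dim=-1, keepdim=True)
    scale = (w_max - w_min) / 15.0                  # 16 representable levels
    zero = torch.round(-w_min / scale)
    q = torch.clamp(torch.round(w / scale) + zero, 0, 15).to(torch.uint8)
    return q, scale, zero

def dequantize_groupwise_4bit(q, scale, zero):
    """Reconstruct an approximate float weight from the 4-bit representation."""
    return ((q.float() - zero) * scale).reshape(q.shape[0], -1)

# Toy example on a random matrix (real OPT-13B projections are 5120-wide).
w = torch.randn(512, 1024)
q, scale, zero = quantize_groupwise_4bit(w, group_size=128)
w_hat = dequantize_groupwise_4bit(q, scale, zero)
print("mean abs quantization error:", (w - w_hat).abs().mean().item())
```

The .pt file written by `--save` contains the quantized weights together with their per-group scales and zero points, so it needs a GPTQ-aware loader (such as the fork linked above) rather than a plain `transformers` checkpoint load.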