Update README.md
README.md
CHANGED
@@ -192,6 +192,7 @@ It was trained by doing supervised fine-tuning over a mixture of regular instruc
 The current Metharme-13b has been trained as a LoRA, then merged down to the base model for distribution.
 
 It has also been quantized down to 8-bit using the GPTQ library, available here: https://github.com/0cc4m/GPTQ-for-LLaMa
+
 ```
 python llama.py .\TehVenom_Metharme-13b-Merged c4 --wbits 8 --act-order --save_safetensors Metharme-13b-GPTQ-8bit.act-order.safetensors
 ```
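
For reference, the LoRA-merge step mentioned above can be reproduced with the Hugging Face `peft` library. The snippet below is only a minimal sketch under that assumption; the base-model and adapter paths are placeholders, not the exact repositories used for this release.

```
# Minimal sketch: fold a LoRA adapter into its base model for standalone distribution.
# Assumes the adapter was trained with the Hugging Face `peft` library;
# the paths below are placeholders, not the actual repos used for Metharme-13b.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "huggyllama/llama-13b"          # placeholder base model
lora_adapter_path = "path/to/metharme-13b-lora"   # placeholder LoRA adapter
output_dir = "TehVenom_Metharme-13b-Merged"

# Load the base model in fp16 and attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, lora_adapter_path)

# Merge the LoRA weights into the base weights so no adapter is needed at inference time.
merged = model.merge_and_unload()

# Save the merged checkpoint (safetensors) plus the tokenizer alongside it.
merged.save_pretrained(output_dir, safe_serialization=True)
AutoTokenizer.from_pretrained(base_model_path).save_pretrained(output_dir)
```

The resulting merged directory is what a GPTQ command like the one above would then quantize.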