Update README.md
README.md
CHANGED
@@ -40,7 +40,7 @@ Command to convert was:
 
 The files were saved in Safetensors format.
 
-I am uploading this repo because I initially tried to create GPTQs using the [
+I am uploading this repo because I initially tried to create GPTQs using the [Meta Llama 2 70B Chat HF repo](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf), but got strange errors that suggested the weights were not correct. But converting from the PTH files using the latest `convert_llama_weights_to_hf.py` script worked fine.
 
 Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for these quantisations!
 
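The hunk context ("Command to convert was:") and the added line refer to converting the original PTH weights with the transformers `convert_llama_weights_to_hf.py` script. As a rough, hedged illustration only: the exact command is the one recorded earlier in this README, and the paths and flag names below are assumptions about a typical invocation of that script, which may differ between transformers versions.

```bash
# Illustrative sketch only: the actual command is recorded earlier in this README
# ("Command to convert was:"). Paths are placeholders, and the flag names are
# assumptions about the transformers conversion script; they may vary by version.
python convert_llama_weights_to_hf.py \
    --input_dir /path/to/llama-2-70b-chat-pth \
    --model_size 70B \
    --output_dir /path/to/Llama-2-70B-Chat-fp16 \
    --safe_serialization
```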