YokaiKoibito committed
Commit • 50cba23
1 Parent(s): c4276c5
Update README.md
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
 - vicuna
 - llama
 ---
-This is an fp16 copy of [jarradh/llama2_70b_chat_uncensored](https://huggingface.co/jarradh/llama2_70b_chat_uncensored) for faster downloading and less disk space usage than the fp32 original. I simply imported the model to CPU with torch_dtype=torch.float16 and then exported it again. All credit for the model goes to [jarradh](https://huggingface.co/jarradh).
+This is an fp16 copy of [jarradh/llama2_70b_chat_uncensored](https://huggingface.co/jarradh/llama2_70b_chat_uncensored) for faster downloading and less disk space usage than the fp32 original. I simply imported the model to CPU with torch_dtype=torch.float16 and then exported it again. I also added a chat_template entry, derived from the model card, to the tokenizer_config.json file, which previously didn't have one. All credit for the model goes to [jarradh](https://huggingface.co/jarradh).
 
 Arguably a better name for this model would be something like Llama-2-70B_Wizard-Vicuna-Uncensored-fp16, but to avoid confusion I'm sticking with jarradh's naming scheme.
 
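
For readers who want to reproduce the fp16 conversion described in the README text above, here is a minimal sketch. It is not the author's exact script: it assumes enough CPU RAM to hold the full 70B checkpoint, and the output path is a hypothetical placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

source_repo = "jarradh/llama2_70b_chat_uncensored"
output_dir = "./llama2_70b_chat_uncensored-fp16"  # hypothetical local path

# Load the fp32 checkpoint onto CPU, casting the weights to float16 as they load.
model = AutoModelForCausalLM.from_pretrained(
    source_repo,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,  # optional; reduces peak RAM while loading
)
tokenizer = AutoTokenizer.from_pretrained(source_repo)

# Re-export; the half-precision shards take roughly half the disk space of fp32.
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```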
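
The chat_template addition mentioned in the new README line can be reproduced in a similar way. The sketch below is an assumption, not the template shipped in this repo: it uses the `### HUMAN:` / `### RESPONSE:` prompt format described on the original model card and the same hypothetical local path; check this repo's tokenizer_config.json for the authoritative template.

```python
from transformers import AutoTokenizer

local_dir = "./llama2_70b_chat_uncensored-fp16"  # hypothetical local path
tokenizer = AutoTokenizer.from_pretrained(local_dir)

# Jinja template assuming the "### HUMAN:" / "### RESPONSE:" format from the
# original model card; verify against the template actually shipped here.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "### HUMAN:\n{{ message['content'] }}\n\n"
    "{% elif message['role'] == 'assistant' %}"
    "### RESPONSE:\n{{ message['content'] }}\n\n"
    "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}### RESPONSE:\n{% endif %}"
)

# Saving the tokenizer writes the chat_template entry into tokenizer_config.json.
tokenizer.save_pretrained(local_dir)
```

Once saved, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` renders a conversation in that format, ending with the `### RESPONSE:` cue for generation.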