Update README.md
README.md CHANGED
@@ -8,6 +8,13 @@ pipeline_tag: conversational
 
 An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one.
 
+Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix):
+
+- [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp)
+- [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite)
+- [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM)
+- [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI)
+
 # Prompting Format
 
 Both Vicuna and Alpaca will work, but since the initial and final layers belong primarily to Xwin, I expect Vicuna to work the best.
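For reference, the Vicuna format referred to above typically looks like the following. This is a sketch based on the common Vicuna v1.1 template; the exact system line is an assumption, not something this card specifies:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```

And the usual Alpaca layout, for comparison:

```
### Instruction:
{prompt}

### Response:
```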
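As a concrete way to try one of the quants above, here is a minimal sketch using the `llama-cpp-python` bindings for llama.cpp. The package choice, file name, and generation parameters are illustrative assumptions, not part of this card:

```python
# Sketch: load a GGUF quant of Goliath 120B via llama-cpp-python (pip install llama-cpp-python).
# The filename below is hypothetical; pick whichever quant from TheBloke/goliath-120b-GGUF fits your hardware.
from llama_cpp import Llama

llm = Llama(model_path="goliath-120b.Q4_K_M.gguf", n_ctx=4096)

# Vicuna-style prompt, since the initial and final layers come primarily from Xwin.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Tell me about Goliath. ASSISTANT:"
)

out = llm(prompt, max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```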