Update README.md
README.md CHANGED
@@ -30,9 +30,9 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp)
 
 ## Repositories available
 
-* [4-bit GPTQ models for GPU inference](https://huggingface.co/elinas/alpaca-30b-lora-int4)
+* [elinas' 4-bit GPTQ models for GPU inference](https://huggingface.co/elinas/alpaca-30b-lora-int4)
 * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Alpaca-Lora-30B-GGML)
-* [unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chansung/alpaca-lora-30b)
+* [chansung's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chansung/alpaca-lora-30b)
 
 ## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
@@ -87,7 +87,7 @@ Donaters will get priority support on any and all AI/LLM/model questions, plus o
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Patreon special mentions**: Aemon Algiz; Talal Aujan; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad; senxiiz.
+**Patreon special mentions**: Aemon Algiz; Johann-Peter Hartmann; Talal Aujan; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad; senxiiz; Sebastain Graf; Eugene Pentland; Nikolai Manek; Luke Pendergrass.
 
 Thank you to all my generous patrons and donaters.
 <!-- footer end -->
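The main-branch warning in the first hunk means the GGML files need llama.cpp at or after commit 2d5db48 (May 19th 2023), which changed the GGML quantisation format; older builds will not load them. As a minimal sketch of what CPU+GPU inference on these files looks like, here is an example using the llama-cpp-python bindings. The model filename is a hypothetical stand-in for whichever quantisation you download, and note that GGML-era files need a 2023-era build of the bindings, since current llama.cpp releases expect GGUF instead.

```python
# Minimal sketch, not from the README: load a GGML file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./alpaca-lora-30B.ggmlv3.q4_0.bin",  # hypothetical filename
    n_ctx=2048,  # LLaMA-1 models such as this use a 2048-token context
)

# Alpaca models expect the instruction-style prompt template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three fruits.\n\n### Response:\n"
)

output = llm(prompt, max_tokens=64, stop=["###"])
print(output["choices"][0]["text"])
```

The "CPU+GPU inference" mentioned in the repository list is done by passing `n_gpu_layers` to the same constructor when the bindings are built with GPU (e.g. cuBLAS) support, which offloads that many layers to the GPU and runs the rest on the CPU.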