Initial GGUF model commit
README.md
CHANGED
@@ -62,6 +62,7 @@ The clients and libraries below are expecting to add GGUF support shortly:
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kimiko-v2-13B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kimiko-v2-13B-GGUF)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Kimiko-v2-13B-GGML)
+* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Kimiko-v2-13B-fp16)
 * [nRuaif's original LoRA adapter, which can be merged on to the base model.](https://huggingface.co/nRuaif/Kimiko-v2-13B)
 <!-- repositories-available end -->
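The GGUF quants listed above can be run locally with llama.cpp or its bindings. A minimal sketch using the llama-cpp-python package, assuming one of the quant files has already been downloaded; the filename, layer count, and prompt below are illustrative, not taken from this commit:

```python
# Minimal sketch: local inference over a downloaded GGUF quant with
# llama-cpp-python. Filename and parameters are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="kimiko-v2-13b.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=35,  # layers to offload to the GPU; use 0 for CPU-only
    n_ctx=4096,       # context window size
)

out = llm("Write a haiku about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```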
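The final list entry notes that nRuaif's LoRA adapter can be merged on to the base model. A hedged sketch of that merge using transformers and peft; the base-model id and output path are assumptions (check the adapter's model card), not something this commit specifies:

```python
# Sketch: merge the Kimiko v2 LoRA adapter into its base model via peft.
# The base-model id below is an assumption, not confirmed by this commit.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",  # assumed base model
    torch_dtype=torch.float16,
)
merged = PeftModel.from_pretrained(base, "nRuaif/Kimiko-v2-13B").merge_and_unload()
merged.save_pretrained("./kimiko-v2-13b-merged")  # hypothetical output path
```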