TheBloke committed
Commit 07ebb7f
1 Parent(s): b819b84

Initial GGML model commit

Files changed (1): README.md (+1 -0)
README.md CHANGED
@@ -56,6 +56,7 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp)
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kimiko-v2-13B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kimiko-v2-13B-GGUF)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Kimiko-v2-13B-GGML)
+ * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Kimiko-v2-13B-fp16)
  * [nRuaif's original LoRA adapter, which can be merged on to the base model.](https://huggingface.co/nRuaif/Kimiko-v2-13B)

 ## Prompt template: Vicuna
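
The last list item in the diff notes that nRuaif's original LoRA adapter "can be merged on to the base model". Below is a minimal sketch of one way to do that with the `peft` library; the base checkpoint id and the output directory are assumptions, since the commit does not name the base model (Kimiko v2 13B is commonly described as a Llama 2 13B fine-tune).

```python
# Minimal sketch: fold nRuaif's LoRA adapter into a base checkpoint with peft.
# Assumptions (not stated in this commit): the base model id and output path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-13b-hf"  # assumed base checkpoint (gated on HF)
ADAPTER = "nRuaif/Kimiko-v2-13B"    # the LoRA adapter linked in the diff

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER)

# merge_and_unload() bakes the LoRA deltas into the base weights and returns
# a plain transformers model that can then be saved, converted, or quantised.
merged = model.merge_and_unload()
merged.save_pretrained("Kimiko-v2-13B-merged")
AutoTokenizer.from_pretrained(BASE).save_pretrained("Kimiko-v2-13B-merged")
```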
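
The changed section also points readers at the GGUF quants for CPU+GPU inference with llama.cpp, and ends at the "Prompt template: Vicuna" heading. A hedged sketch of that inference path with the `llama-cpp-python` bindings follows; the local filename, the GPU layer split, and the exact Vicuna v1.1 preamble wording are assumptions, not taken from this commit.

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
# Assumptions (not from this commit): the local filename, n_gpu_layers,
# and the exact wording of the Vicuna v1.1 system preamble.
from llama_cpp import Llama

llm = Llama(
    model_path="kimiko-v2-13b.Q4_K_M.gguf",  # hypothetical file from the GGUF repo
    n_ctx=4096,       # context window
    n_gpu_layers=35,  # offload layers to GPU; use 0 for CPU-only
)

# Vicuna-style prompt, matching the "Prompt template: Vicuna" heading.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Write a one-line greeting. ASSISTANT:"
)

out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"].strip())
```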