Jamba GGUF!
Finally, thanks to the awesome work of the brilliant GitHub user compilade (https://github.com/compilade), Jamba is now beginning to be supported in llama.cpp (CPU inference only at the moment). So far I have been able to convert a few different versions: Jamba-Bagel, Jamba-Claude, the 900M Jamba-Small, and a 1B Jamba.
Severian/jamba-gguf-665884eb2ceef24c1a0547e0