Highly experimental, not for general consumption
The code needed to run this model, as well as the base model itself, is not ready yet.
This is uploaded merely to help with testing.
see https://github.com/ggerganov/llama.cpp/pull/7931
see https://github.com/ggerganov/llama.cpp/pull/8151, the continued work by compilade, which provides both 1.625 bpw and 2 bpw quants
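For orientation, here is the arithmetic behind those bit-widths as a minimal sketch (the 13-bits-per-8-weights breakdown is just how 1.625 bpw decomposes, not a claim about the exact block layout used in the PR): ternary weights (-1, 0, +1) need at least log2(3) ≈ 1.585 bits each, so 1.625 bpw sits close to that lower bound, while 2 bpw spends a full 2-bit code per weight.

```python
import math

# Information-theoretic minimum for a ternary weight (-1, 0, +1).
lower_bound = math.log2(3)   # ~1.585 bits per weight

# 1.625 bpw is arithmetically 13 bits per 8 weights
# (equivalently 13 bytes per 64 weights); the actual block
# layout in the PR may differ -- this is only the arithmetic.
bpw_1_625 = 13 / 8           # 1.625

# 2 bpw is a plain 2-bit code per weight.
bpw_2 = 2.0

print(f"lower bound       : {lower_bound:.3f} bpw")
print(f"1.625 bpw overhead: {bpw_1_625 - lower_bound:.3f} bits/weight")
print(f"2 bpw overhead    : {bpw_2 - lower_bound:.3f} bits/weight")
```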
This model is not supported by the new `TQ1_0` and `TQ2_0` quants (it is unfortunately sized inconveniently for them), and the old formats have been / will be removed. see https://huggingface.co/Green-Sky/TriLM_3.9B-GGUF for a more up-to-date model and quants
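Whichever repository is used, loading a GGUF file for a quick test through the llama-cpp-python bindings looks roughly like the sketch below; the file name is a placeholder, and this assumes a llama.cpp build that actually supports the chosen quant type (for this model that means the PR branches above, which the packaged bindings may not include).

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder file name -- substitute the actual quantized file you downloaded.
llm = Llama(model_path="bitnet_b1_58-3B.gguf", n_ctx=2048)

out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```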
Base model: 1bitLLM/bitnet_b1_58-3B