Llama-3 finetune and MoE quant
#28
by raincandy-u - opened
The result I converted using llama.cpp seems to be broken; please help.
[https://huggingface.co/raincandy-u/Llama-3-8b.UNLEASHED]
[https://huggingface.co/raincandy-u/Aplite-Instruct-4x8B-Llama-3]
There might be an issue with llama.cpp itself; there are a number of open PRs, and we might all have to re-do all our quants when it settles.
See e.g.:
I think I will hold off on llama-3 for a while until this is settled.
mradermacher changed discussion status to closed