How to run changed format?

#2
by supercharge19 - opened

It is not gguf, it is ggufm.

It runs with llama.cpp version b1960, which I think came out on 24 January, so any llama.cpp release later than that should run it.

@supercharge19 This is strange! I think in my new llama.cpp the same script quantized to ggufm! I've never seen this format, and I cannot find anything about it. Could you please point me to any link that talks about it?

I found the issue: I just named them wrongly! Those are GGUF models; I added an "m" at the end by mistake. I'll fix them.
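For anyone who already downloaded the files, a minimal sketch of the rename fix (the trailing "m" just needs to be stripped from the extension; file names below are hypothetical examples):

```shell
# Rename any files mistakenly saved with a .ggufm extension back to .gguf.
for f in *.ggufm; do
  [ -e "$f" ] || continue   # skip the loop body if no .ggufm files match
  mv -- "$f" "${f%m}"       # drop the trailing "m": model.ggufm -> model.gguf
done
```

After this, the files load as normal GGUF models in llama.cpp.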
