The Q8_0 has mixed tensor types, e.g. 'output.norm' is of type Q6_K.

#2
by mukel - opened

The regular and instruct models are fine; it only happens with this one (the Python one).
F32 is fine, but not all consumers of the model support the more esoteric k-quant types.
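
For anyone who wants to verify this, here is a minimal sketch that lists the tensor types inside a GGUF file. It assumes the `gguf` Python package (from llama.cpp's gguf-py) and its `GGUFReader`; the file name is only a placeholder:

```python
# Minimal sketch: summarise the quantization type of every tensor in a GGUF file.
# Assumes the `gguf` Python package and its GGUFReader; the file name is a placeholder.
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("codellama-7b-python.Q8_0.gguf")

type_counts = Counter(t.tensor_type.name for t in reader.tensors)
print(type_counts)  # overall breakdown of tensor types

# Print any tensor whose type is neither Q8_0 nor F32 (e.g. a Q6_K tensor)
for t in reader.tensors:
    if t.tensor_type.name not in ("Q8_0", "F32"):
        print(t.name, t.tensor_type.name)
```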

OK, if you think that's a bug you'd need to report it to llama.cpp. I just told it to make Q8_0.

However, I'm not sure what you mean by 'not all consumers' supporting this. This is GGUF, the new format. Currently most third-party libraries do not support GGUF at all, but when they do, they will support everything, the same as llama.cpp. Maybe consumers of GGML don't support mixed tensors in q8_0 (I don't know), but this is not GGML any more.

From this PR: https://github.com/ggerganov/llama.cpp/pull/1684
The description states:
"Not mentioned explicitly above is the fact that with this PR, all quantization variants use 6-bit quantization for the output.weight tensor. This lowers the perplexity of, e.g., Q4_0 by about 0.03 at 7B."
This is a possible explanation for why the conversion mixes tensor types.
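
To make the effect concrete, the rule described in the PR amounts to a per-tensor override at quantization time. The sketch below is illustrative pseudocode of that rule only (the exact condition and the F32 case for norm tensors are assumptions), not the actual llama.cpp code:

```python
# Illustrative only: the per-tensor override described in the PR, not llama.cpp's code.
def pick_tensor_type(tensor_name: str, requested_type: str) -> str:
    if tensor_name == "output.weight":
        return "Q6_K"  # the PR quantizes output.weight with 6 bits for all variants
    if "norm" in tensor_name:
        return "F32"   # assumption: small norm tensors are left unquantized
    return requested_type  # everything else uses the requested type, e.g. Q8_0
```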

OK thanks, good to know. I do agree that it's a bit odd that q8_0 would not be 100% q8_0 in all tensors.

I just re-read your first message - are you saying that the Q8_0 files for the 7B and 7B-Instruct models are different? They don't have q6_k in their q8_0? Because they should all be the same; they were all created in exactly the same way.

The issue is fixed with the latest update; all tensors are now either Q8_0 or F32.
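
For completeness, the same kind of check as above confirms it (again assuming the `gguf` package's `GGUFReader`; the file name is a placeholder):

```python
# Quick check: confirm that only Q8_0 and F32 tensors remain in the file.
from gguf import GGUFReader

reader = GGUFReader("codellama-7b-python.Q8_0.gguf")  # placeholder file name
unexpected = {t.name: t.tensor_type.name
              for t in reader.tensors
              if t.tensor_type.name not in ("Q8_0", "F32")}
print(unexpected or "all tensors are Q8_0 or F32")
```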

mukel changed discussion status to closed
