Possible to rpcal a GGUF version?
Wow, this is fantastic. It's really impressive how the calibration with the dataset improved the outputs compared to the base model. Is it possible to replicate this process for formats other than exl2?
Sadly I don't know much about GGUF, so I'm not sure how the quantization works there :(
All good. I know how to convert base models to GGUF (it's pretty simple), but I don't know how to do it whilst refining it with a dataset.
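For reference, the conversion step I mean is roughly the llama.cpp script, driven here from Python. The script name, flags, and paths are from my own setup and may differ between llama.cpp versions, so treat this as a sketch rather than the exact recipe:

```python
# Rough sketch: convert a HF checkpoint to an unquantized GGUF file with
# llama.cpp's conversion script. Script name, flags, and paths are
# assumptions and may differ between llama.cpp versions.
import subprocess

hf_model_dir = "path/to/hf-model"   # hypothetical local model directory
f16_gguf = "model-f16.gguf"         # unquantized GGUF output

subprocess.run(
    [
        "python", "convert_hf_to_gguf.py",  # lives in the llama.cpp repo
        hf_model_dir,
        "--outfile", f16_gguf,
        "--outtype", "f16",
    ],
    check=True,  # raise if the conversion fails
)
```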
There are some good articles like this one: https://www.substratus.ai/blog/converting-hf-model-gguf-model/
Thanks! I looked into GGUF quanting a little, and a calibration dataset isn't needed, so there wouldn't be an "rpcal" version for GGUF :(
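For anyone curious, the quantization step looks roughly like the sketch below (the binary name and quant type are just examples from llama.cpp and vary between releases). The point is that there's nowhere to pass a dataset at all, just an input file, an output file, and the quant type:

```python
# Rough sketch of the GGUF quantization step with llama.cpp's quantize tool.
# Note there is no calibration-dataset argument anywhere. The binary name is
# an assumption ("quantize" vs "llama-quantize" depending on the release).
import subprocess

subprocess.run(
    ["./llama-quantize", "model-f16.gguf", "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,  # raise if quantization fails
)
```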
Damn, so it'd be a case of further finetuning instead, then. Ah well, cheers anyway 👍