Quantization bit

#7
by ritlab - opened

How many bits should we use for quantization when fine-tuning this model: 4, 8, or 16?
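
For reference, the usual trade-off is memory versus precision: 4-bit (QLoRA-style) fits on smaller GPUs, while 8-bit or bf16 keeps more precision at a higher memory cost. Below is a minimal sketch of a 4-bit QLoRA-style setup using transformers, bitsandbytes, and peft; the model id and LoRA hyperparameters are placeholders, not values confirmed in this thread.

```python
# Minimal 4-bit QLoRA sketch (assumes transformers, bitsandbytes, and peft are installed;
# the model id below is a placeholder for whichever gemma2 checkpoint you are fine-tuning).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "google/gemma-2-9b"  # assumption: replace with your checkpoint

# 4-bit NF4 quantization with bf16 compute, a common QLoRA configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Train only a small LoRA adapter on the attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

With 8-bit, the same code applies with `load_in_8bit=True` instead of the 4-bit options; full bf16 fine-tuning skips the `quantization_config` entirely but needs considerably more GPU memory.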
