Update config.json
#12
opened by ndavidson
This change is intended for anyone trying to use llama.cpp to merge and quantize LoRAs of this model. Thanks for creating such an awesome small model!
- Add optional parameter "rope_pct" (defaults to 1) for compatibility with https://github.com/ggerganov/llama.cpp/blob/master/convert-hf-to-gguf.py, allowing conversion from safetensors to GGUF.
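As an illustrative sketch (the values below are hypothetical, not taken from this PR), the proposed field would sit in config.json alongside the model's existing rotary-embedding setting:

```json
{
  "partial_rotary_factor": 0.25,
  "rope_pct": 0.25
}
```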
ndavidson changed pull request status to closed
- This was incorrect; "partial_rotary_factor" now resolves to the same value.
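A minimal sketch of the resolution described above, assuming a converter that prefers "rope_pct" when present and falls back to "partial_rotary_factor" (the key names come from this discussion; the exact logic in llama.cpp's converter may differ):

```python
import json

def rotary_fraction(hparams: dict) -> float:
    # Prefer "rope_pct" if the config defines it; otherwise fall back to
    # "partial_rotary_factor". Default to 1.0 (full rotary dimensions),
    # matching the default described in this PR.
    return hparams.get("rope_pct", hparams.get("partial_rotary_factor", 1.0))

# Either key resolves to the same fraction of rotary dimensions.
cfg = json.loads('{"partial_rotary_factor": 0.25}')
print(rotary_fraction(cfg))  # 0.25
```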