---
license: apache-2.0
tags:
- fire
- function
- firefunction
- firefunction-v1
- gguf
- GGUF
- firefunction-v1-GGUF
- firefunction-v1-gguf
- 4-bit precision
---
This repo hosts quantized GGUF versions of the following model: https://huggingface.co/fireworks-ai/firefunction-v1
Quantization was done with this script: https://github.com/CharlesMod/quantizeHFmodel
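
Once a GGUF file from this repo has been downloaded, it can be run locally with llama.cpp-compatible tooling. Below is a minimal sketch using llama-cpp-python; the filename is illustrative and may differ from the actual files in this repo.

```python
from llama_cpp import Llama

# Path to a quantized GGUF file downloaded from this repo; the exact
# filename depends on the quantization variant you pick (illustrative here).
MODEL_PATH = "firefunction-v1.Q4_K_M.gguf"

# Load the quantized model with a modest context window.
llm = Llama(model_path=MODEL_PATH, n_ctx=4096)

# Run a simple completion against the quantized model.
result = llm(
    "List three functions a weather API might expose:",
    max_tokens=128,
    temperature=0.2,
)
print(result["choices"][0]["text"])
```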