---
license: apache-2.0
tags:
- fire
- function
- firefunction
- firefunction-v1
- gguf
- GGUF
- firefunction-v1-GGUF
- firefunction-v1-gguf
- 4-bit precision
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/653760343af2f64a0d4b60c7/k72LAqG6svkOCOYm_eDsh.png)
This repo hosts quantized GGUF versions of the following model: https://huggingface.co/fireworks-ai/firefunction-v1
Quantization was done with this script: https://github.com/CharlesMod/quantizeHFmodel |
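As a minimal usage sketch (not part of the original upload), the GGUF files here can be loaded with `llama-cpp-python`; the filename below is a placeholder, so substitute whichever quantized file you download from this repo:

```python
# Sketch: load a GGUF quant with llama-cpp-python and run a chat completion.
# The model_path filename is an assumption -- use the actual file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="firefunction-v1.Q4_K_M.gguf",  # hypothetical 4-bit quant filename
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if llama.cpp was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```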