
GGUF quants of https://huggingface.co/Shakker-Labs/AWPortrait-FL/, made using the instructions from https://github.com/city96/ComfyUI-GGUF/. This is my first time quantizing anything. I find the Q8_0 quant of the fp16 Flux Dev model to be the best one to use, so I wanted to have Q8_0 quants of fine-tuned models available as they come out.
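
If you want to do the same for other fine-tunes, the workflow from the ComfyUI-GGUF tools is roughly: convert the safetensors checkpoint to a full-precision GGUF, then requantize it with llama-quantize built from a patched llama.cpp. The sketch below just expresses that as Python subprocess calls; the script name, the `--src` flag, the patched `llama-quantize` binary, and all file names are assumptions based on my reading of that repo's instructions at the time, so double-check against the upstream README before running anything.

```python
# Rough sketch of the two-step quantization workflow (assumptions, not gospel).
import subprocess

SRC = "AWPortrait-FL.safetensors"     # hypothetical path to the fine-tuned checkpoint
F16_GGUF = "AWPortrait-FL-F16.gguf"   # intermediate full-precision GGUF (name assumed)

# Step 1: convert the safetensors model to a full-precision GGUF
# using the convert.py script from the ComfyUI-GGUF tools.
subprocess.run(["python", "convert.py", "--src", SRC], check=True)

# Step 2: requantize with llama-quantize (built from llama.cpp with the
# repo's patch applied so image-model tensors are handled).
for qtype in ["Q4_0", "Q5_0", "Q8_0"]:
    subprocess.run(
        ["./llama-quantize", F16_GGUF, f"AWPortrait-FL-{qtype}.gguf", qtype],
        check=True,
    )
```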

The K quants do not work with Forge as of 2024/09/06. I used a fork (https://github.com/mhnakif/ComfyUI-GGUF/) to make Q4_0, Q5_0, Q8_0 and F16, which do work in Forge.
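
If you're not sure which quantization types a downloaded file actually contains (for example, to confirm it's one of the legacy types rather than a K quant before loading it in Forge), a quick sketch using the `gguf` Python package is below; the filename is just a placeholder for whichever file you downloaded.

```python
# Minimal sketch: list the quantization types used by the tensors in a GGUF file.
# Requires the `gguf` package (pip install gguf); the path below is hypothetical.
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("AWPortrait-FL-Q8_0.gguf")  # replace with your local file

# Count how many tensors use each quantization type (e.g. Q8_0, F16, Q5_K).
counts = Counter(tensor.tensor_type.name for tensor in reader.tensors)
for qtype, n in sorted(counts.items()):
    print(f"{qtype}: {n} tensors")
```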

Model details: flux architecture, 11.9B params. GGUF quants are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit precision.
