---
base_model: black-forest-labs/FLUX.1-schnell
license: apache-2.0
language:
- en
pipeline_tag: text-to-image
tags:
- text-to-image
- image-generation
- flux
---

Quantized versions of [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell).

Tools used for quantization: a modded [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp) and [LlamaQuantizer](https://github.com/aifoundry-org/LlamaQuantizer).

**Work in progress, use at your own risk.**

## How to [WIP]

1. Download and build [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp).
2. Download one of the models from this repo, plus:
   * Autoencoder: https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors
   * CLIP_L: https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/clip_l.safetensors
   * T5XXL: https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/t5xxl_fp16.safetensors
3. Enter your `stable-diffusion.cpp` directory.
4. Run the following command:

```
./build/bin/sd --diffusion-model [path to gguf] --vae [path to ae.safetensors] --clip_l [path to clip_l.safetensors] --t5xxl [path to t5xxl_fp16.safetensors] -p "a frog holding a sign saying 'hi'" -o ../frog.png -v --cfg-scale 1.0 --sampling-method euler --seed 42 --steps 4
```

## Results
| Quant type | Size | Result (x0.5) | Download link |
|---|---|---|---|
| default | 23.8 GB | flux_frog_default.png | flux1-schnell.safetensors.gguf |
| FP16 | 23.8 GB | flux_frog_F16.png | flux1-schnell-F16.gguf |
| Q8_0 | 12.6 GB | flux_frog_Q8_0.png | flux1-schnell-Q8_0.gguf |
| Q5_0 | 8.18 GB | flux_frog_Q5_0.png | flux1-schnell-Q5_0.gguf |
| Q5_1 | 8.92 GB | flux_frog_Q5_1.png | flux1-schnell-Q5_1.gguf |
| Q4_0 | 6.69 GB | flux_frog_Q4_0.png | flux1-schnell-Q4_0.gguf |
| Q4_1 | 7.43 GB | flux_frog_Q4_1.png | flux1-schnell-Q4_1.gguf |
| Q4_K | 6.69 GB | flux_frog_Q4_K.png | flux1-schnell-Q4_K.gguf |
| Q2_K | 3.9 GB | flux_frog_Q2_K.png | flux1-schnell-Q2_K.gguf |
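As a sanity check on the table above, the file sizes line up with the nominal bits-per-weight of the GGUF block quants (e.g. Q4_0 stores 4-bit weights plus one scale per 32-weight block, about 4.5 bpw). A quick back-of-the-envelope check, treating the 23.8 GB F16 file as 16 bits per weight:

```python
# Effective bits per weight, estimated from file size relative to the F16 model.
# Sizes in GB are taken from the results table above; F16 is treated as 16 bpw.
F16_GB = 23.8

sizes_gb = {
    "Q8_0": 12.6,
    "Q5_1": 8.92,
    "Q5_0": 8.18,
    "Q4_1": 7.43,
    "Q4_0": 6.69,
    "Q4_K": 6.69,
    "Q2_K": 3.9,
}

bpw = {quant: gb / F16_GB * 16 for quant, gb in sizes_gb.items()}
for quant, bits in bpw.items():
    print(f"{quant}: ~{bits:.2f} bits/weight")
```

The estimates land close to the nominal values for these quant types (Q8_0 ≈ 8.5, Q5_1 ≈ 6.0, Q5_0 ≈ 5.5, Q4_0 ≈ 4.5), which suggests essentially all of the diffusion model's tensors were quantized.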
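To make step 4 of the how-to concrete, here is a small sketch that assembles the `sd` invocation from shell variables. All paths are hypothetical placeholders; adjust them to wherever you actually saved the model files.

```shell
#!/usr/bin/env sh
# Hypothetical local paths -- adjust to your own download locations.
MODEL=./models/flux1-schnell-Q8_0.gguf
VAE=./models/ae.safetensors
CLIP_L=./models/clip_l.safetensors
T5XXL=./models/t5xxl_fp16.safetensors

# Assemble the stable-diffusion.cpp invocation from the paths above.
cmd="./build/bin/sd --diffusion-model $MODEL --vae $VAE --clip_l $CLIP_L --t5xxl $T5XXL \
 -p \"a frog holding a sign saying 'hi'\" -o ../frog.png -v --cfg-scale 1.0 \
 --sampling-method euler --seed 42 --steps 4"

# Inspect the command before running it, e.g. with: eval "$cmd"
echo "$cmd"
```

Keeping the paths in variables makes it easy to swap in a different quant (e.g. `flux1-schnell-Q4_0.gguf`) without retyping the rest of the command.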