fp16?

#1
by tintwotin - opened

Would it be possible to include an fp16 file in the UNet folder, so I'll be able to run it on 6 GB of VRAM?

Project Fluently org

To run SDXL models on such a small amount of VRAM, you need to use a distilled model like SSD-1B, as you will get very low performance from regular SDXL models. Thank you for your interest in our models!
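For reference, a minimal diffusers sketch of that suggestion, assuming the segmind/SSD-1B checkpoint (the prompt and memory-saving calls are illustrative, not specific to this repo):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# SSD-1B is a distilled SDXL-architecture model, so it loads with the
# same StableDiffusionXLPipeline; fp16 weights roughly halve VRAM use.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/SSD-1B",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```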

ehristoforu changed discussion status to closed

I run most SDXL models just fine, as long as they're also shared in fp16.
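For context, the usual diffusers-side way to load a single-file SDXL checkpoint in fp16 with the standard low-VRAM options (a sketch; the filename here is a placeholder, not an actual file in this repo):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Casting to fp16 at load time halves the weight footprint compared
# to fp32, which is what makes 6 GB cards viable at all.
pipe = StableDiffusionXLPipeline.from_single_file(
    "FluentlyXL.safetensors",    # hypothetical local checkpoint path
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # move idle submodules to system RAM
pipe.enable_vae_slicing()        # decode latents in slices to cut peak VRAM

image = pipe("a lighthouse at dusk, photorealistic").images[0]
```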

ehristoforu changed discussion status to open
Project Fluently org

Good afternoon! If you work through AUTOMATIC1111, find "Model Converter" in the Extensions tab and install it, download the checkpoint from this repo, then in the extension select that checkpoint and choose fp16 or fp32 as the conversion format. Have a good day!
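The same conversion can be done by hand with the safetensors library; a minimal sketch of that step (the filenames are placeholders):

```python
import torch
from safetensors.torch import load_file, save_file

# Equivalent of the "Model Converter" extension's fp16 option: load
# the full-precision checkpoint, downcast fp32 tensors, save it back.
src = "FluentlyXL.safetensors"       # hypothetical downloaded checkpoint
dst = "FluentlyXL-fp16.safetensors"

state_dict = load_file(src)
state_dict = {
    k: v.to(torch.float16) if v.dtype == torch.float32 else v
    for k, v in state_dict.items()
}
save_file(state_dict, dst)
```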
