# stable-diffusion-v-1-4-GGUF

## Original Model

CompVis/stable-diffusion-v-1-4-original

## Run with sd-api-server

Go to the sd-api-server repository for more information.
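Once the server is running, it can be queried over HTTP. The sketch below is an assumption, not taken from the sd-api-server documentation: it presumes the server listens on `localhost:8080` and exposes an OpenAI-style `/v1/images/generations` endpoint returning base64-encoded images. Check the sd-api-server repository for the actual startup command, port, and request schema.

```python
# Minimal sketch: POST a prompt to a locally running sd-api-server instance.
# Endpoint path, port, and response shape are assumptions; adjust to match
# the sd-api-server documentation.
import base64
import json
import urllib.request


def generate_image(prompt: str,
                   url: str = "http://localhost:8080/v1/images/generations") -> None:
    payload = json.dumps({"prompt": prompt, "n": 1}).encode("utf-8")
    request = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # Assumes an OpenAI-style response: a "data" list whose items carry
    # base64-encoded image bytes under "b64_json".
    image_bytes = base64.b64decode(result["data"][0]["b64_json"])
    with open("output.png", "wb") as f:
        f.write(image_bytes)


if __name__ == "__main__":
    generate_image("a photo of an astronaut riding a horse on mars")
```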

## Quantized GGUF Models

The quantization formats trade file size against output quality: lower-precision formats produce smaller files but may yield lower-quality images. The following precisions are available:

- f32
- f16
- q8_0
- q5_0
- q5_1
- q4_0
- q4_1
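To fetch one of these variants locally, the `huggingface_hub` client can be used. This is a minimal sketch: the exact GGUF filenames in the repository are not listed here, so it enumerates the repo's files and downloads whichever GGUF matches the requested quantization tag.

```python
# Sketch: download a GGUF variant from this repository by quantization tag.
# Filenames are discovered at runtime rather than hard-coded, since the exact
# names are not given in this card.
from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "second-state/stable-diffusion-v-1-4-GGUF"


def download_quant(quant: str = "q4_0") -> str:
    """Download the GGUF file whose name contains the given quantization tag."""
    gguf_files = [f for f in list_repo_files(REPO_ID) if f.endswith(".gguf")]
    matches = [f for f in gguf_files if quant.lower() in f.lower()]
    if not matches:
        raise ValueError(
            f"No GGUF file matching '{quant}' found; available: {gguf_files}"
        )
    # Returns the local path of the cached file.
    return hf_hub_download(repo_id=REPO_ID, filename=matches[0])


if __name__ == "__main__":
    print(download_quant("q4_0"))
```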
