
KOALA-700M Model Card


Abstract

TL;DR

We propose a fast text-to-image model, called KOALA, built by compressing SDXL's U-Net and distilling knowledge from SDXL into the compressed model. KOALA-700M can generate a 1024x1024 image in less than 1.5 seconds on an NVIDIA 4090 GPU, more than 2x faster than SDXL. KOALA-700M can serve as a decent middle ground between SDM and SDXL in resource-limited environments.

Full abstract: Stable Diffusion is the mainstay of text-to-image (T2I) synthesis in the community due to its generation performance and open-source nature. Recently, Stable Diffusion XL (SDXL), the successor of Stable Diffusion, has received a lot of attention due to its significant performance improvement, with a higher resolution of 1024x1024 and a larger model. However, its increased computation cost and model size require higher-end hardware (e.g., a GPU with more VRAM) for end users, incurring higher operating costs. To address this problem, we propose an efficient latent diffusion model for text-to-image synthesis obtained by distilling the knowledge of SDXL. To this end, we first perform an in-depth analysis of the denoising U-Net in SDXL, which is the main bottleneck of the model, and then design a more efficient U-Net based on the analysis. Secondly, we explore how to effectively distill the generation capability of SDXL into an efficient U-Net and eventually identify four essential factors, the core of which is that self-attention is the most important part. With our efficient U-Net and self-attention-based knowledge distillation strategy, we build our efficient T2I models, called KOALA-1B and KOALA-700M, while reducing the model size by up to 54% and 69% relative to the original SDXL model. In particular, KOALA-700M is more than twice as fast as SDXL while still retaining decent generation quality. We hope that, due to its balanced speed-performance tradeoff, our KOALA models can serve as a cost-effective alternative to SDXL in resource-constrained environments.

These 1024x1024 samples are generated by KOALA-700M with 25 denoising steps.

Architecture

There are two types of compressed U-Net, KOALA-1B and KOALA-700M, which are realized by reducing residual blocks and transformer blocks.

U-Net comparison

U-Net     | SDM-v2.0     | SDXL-Base-1.0 | KOALA-1B  | KOALA-700M
----------|--------------|---------------|-----------|-----------
Param.    | 865M         | 2,567M        | 1,161M    | 782M
CKPT size | 3.46GB       | 10.3GB        | 4.4GB     | 3.0GB
Tx blocks | [1, 1, 1, 1] | [0, 2, 10]    | [0, 2, 6] | [0, 2, 5]
Mid block | ✓            | ✓             | ✗         | ✗
Latency   | 1.131s       | 3.133s        | 1.604s    | 1.257s
  • Tx means transformer block and CKPT means the trained checkpoint file.
  • We measured latency with FP16 precision and 25 denoising steps on an NVIDIA 4090 GPU (24GB).
  • SDM-v2.0 uses 768x768 resolution, while the SDXL and KOALA models use 1024x1024 resolution.
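
The parameter counts above can be sanity-checked by loading only the U-Net with 🤗Diffusers. A minimal sketch, assuming the repository follows the standard diffusers layout with a unet subfolder:

import torch
from diffusers import UNet2DConditionModel

# Load only the compressed U-Net from the KOALA-700M repository.
unet = UNet2DConditionModel.from_pretrained(
    "etri-vilab/koala-700m", subfolder="unet", torch_dtype=torch.float16
)

# Should print roughly 782M for KOALA-700M.
num_params = sum(p.numel() for p in unet.parameters())
print(f"U-Net parameters: {num_params / 1e6:.0f}M")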

Latency and memory usage comparison on different GPUs

We measure the inference time of SDM-v2.0 at 768x768 resolution and the other models at 1024x1024, using a variety of consumer-grade GPUs: NVIDIA 3060Ti (8GB), 2080Ti (11GB), and 4090 (24GB). We use 25 denoising steps and FP16/FP32 precision. OOM means Out-of-Memory. Note that SDXL-Base cannot run on the 8GB GPU.
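
For a rough reproduction of such latency numbers, a timing setup like the sketch below (warm-up pass, CUDA synchronization around the timed call) is the usual approach; the prompt and exact protocol here are illustrative assumptions, not the authors' benchmark script.

import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "etri-vilab/koala-700m", torch_dtype=torch.float16
).to("cuda")

prompt = "A cinematic photo of a koala in a eucalyptus forest"

# Warm-up pass so CUDA kernels and allocator caches are initialized.
pipe(prompt=prompt, num_inference_steps=25)
torch.cuda.reset_peak_memory_stats()

torch.cuda.synchronize()
start = time.perf_counter()
pipe(prompt=prompt, num_inference_steps=25)
torch.cuda.synchronize()

print(f"Latency: {time.perf_counter() - start:.3f}s")
print(f"Peak VRAM: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")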

Key Features

  • Efficient U-Net Architecture: KOALA models use a simplified U-Net that reduces the model size by 54% (KOALA-1B) and 69% (KOALA-700M) compared to the predecessor, Stable Diffusion XL (SDXL).
  • Self-Attention-Based Knowledge Distillation: The core technique in KOALA is distilling self-attention features from the teacher, which proves crucial for maintaining image generation quality (see the sketch after this list).
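
As an illustration of the second point, a feature-level distillation loss over self-attention outputs might look like the sketch below. This is a hypothetical reconstruction, not the authors' training code: the hook placement, feature shapes, and loss weighting are assumptions.

import torch
import torch.nn.functional as F

def self_attn_distill_loss(student_feats, teacher_feats):
    """Match the student's self-attention outputs to the teacher's.

    Both arguments are lists of (batch, tokens, channels) tensors
    captured (e.g., via forward hooks) at corresponding U-Net blocks.
    """
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        # A projection/interpolation step would be needed here if the
        # student and teacher feature shapes differ.
        loss = loss + F.mse_loss(s, t.detach())
    return loss / len(student_feats)

# Dummy example: one matched block with batch 2, 1024 tokens, 640 channels.
s = [torch.randn(2, 1024, 640, requires_grad=True)]
t = [torch.randn(2, 1024, 640)]
print(self_attn_distill_loss(s, t))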


Usage with 🤗Diffusers library

Inference code with 25 denoising steps:

import torch
from diffusers import StableDiffusionXLPipeline

# Load KOALA-700M in half precision and move it to the GPU.
pipe = StableDiffusionXLPipeline.from_pretrained("etri-vilab/koala-700m", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "A portrait painting of a Golden Retriever like Leonardo da Vinci"
negative = "worst quality, low quality, illustration, low resolution"

# Generate a 1024x1024 image with 25 denoising steps.
image = pipe(prompt=prompt, negative_prompt=negative, num_inference_steps=25).images[0]
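
On low-VRAM GPUs such as the 8GB 3060Ti mentioned above, the pipeline can additionally use 🤗Diffusers' generic memory savers. A minimal sketch (these are standard diffusers features, not KOALA-specific; enable_model_cpu_offload requires the accelerate package):

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained("etri-vilab/koala-700m", torch_dtype=torch.float16)

# Use CPU offload INSTEAD of pipe.to("cuda"): submodules are moved to the
# GPU only while they run, lowering peak VRAM at some speed cost.
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()  # decode latents in slices to reduce peak memory

image = pipe(prompt="A watercolor painting of a koala", num_inference_steps=25).images[0]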

Uses

Direct Use

The model is intended for research purposes only. Possible research areas and tasks include

  • Generation of artworks and use in design and other artistic processes.
  • Applications in educational or creative tools.
  • Research on generative models.
  • Safe deployment of models which have the potential to generate harmful content.
  • Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.

Out-of-Scope Use

The model was not trained to produce factual or true representations of people or events; therefore, using the model to generate such content is out of scope for its abilities.

Limitations and Bias

  • Text Rendering: The models face challenges in rendering long, legible text within images.
  • Complex Prompts: KOALA sometimes struggles with complex prompts involving multiple attributes.
  • Dataset Dependencies: The current limitations are partially attributed to the characteristics of the training dataset (LAION-aesthetics-V2 6+).

Citation

@misc{lee2023koala,
    title={KOALA: Self-Attention Matters in Knowledge Distillation of Latent Diffusion Models for Memory-Efficient and Fast Image Synthesis}, 
    author={Youngwan Lee and Kwanyong Park and Yoorhim Cho and Yong-Ju Lee and Sung Ju Hwang},
    year={2023},
    eprint={2312.04005},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}