
TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps

📃 Paper • 🤗 Checkpoints • 📰 GitHub

Our method accelerates latent diffusion models (LDMs) via data-free multistep latent consistency distillation (MLCD), and a data-free latent consistency distillation stage is then applied to efficiently guarantee consistency across the segments produced by MLCD.
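To make the idea concrete, below is a minimal conceptual sketch of one MLCD-style training step. This is not the authors' implementation; the `student` and `teacher_ode_step` signatures are hypothetical, and the details (segment conditioning, solver, weighting) are simplified.

```python
# Conceptual sketch of multistep latent consistency distillation (MLCD):
# the ODE trajectory is split into segments, and the student is trained so
# that predictions from adjacent timesteps within a segment agree.
# All function signatures here are assumptions for illustration.
import torch

def mlcd_loss(student, teacher_ode_step, z_t, t, t_prev, segment_end):
    # Student maps the noisy latent at t directly to the segment endpoint.
    pred_from_t = student(z_t, t, segment_end)
    # Target branch: one teacher ODE step toward t_prev, then the student's
    # prediction from that earlier point; gradients are stopped on the target.
    with torch.no_grad():
        z_prev = teacher_ode_step(z_t, t, t_prev)
        target = student(z_prev, t_prev, segment_end)
    # Enforce self-consistency within the segment.
    return torch.mean((pred_from_t - target) ** 2)
```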

Furthermore, we introduce a bag of techniques, e.g., distribution matching, adversarial learning, and preference learning, to enhance TLCM's performance at few-step inference without any real data.

TLCM is highly flexible: the number of sampling steps can be adjusted anywhere from 2 to 8 while still producing outputs competitive with full-step approaches.

Details are presented in the paper and the GitHub repository.
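For reference, here is a minimal few-step inference sketch using the diffusers library, assuming the checkpoint is released as LCM-style LoRA weights; the repository id and base model below are placeholders, so consult the GitHub page for the actual checkpoints and loading code.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Base model and LoRA repo id are placeholders, not confirmed names.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# An LCM-style scheduler enables very-few-step sampling.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("TLCM/TLCM")  # placeholder repo id

image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=4,  # TLCM supports 2-8 steps
    guidance_scale=1.0,     # distilled models typically use low or no CFG
).images[0]
image.save("sample.png")
```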

Art Gallery

Here we present example generations at different numbers of sampling steps.

[Sample images]

2-Step Sampling.

[Sample images]

3-Step Sampling.

[Sample images]

4-Step Sampling.

[Sample images]

8-Step Sampling.

Citation

