---
license: apache-2.0
---

# TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps

📃 [Paper](https://arxiv.org/html/2406.05768v5) • 🤗 Checkpoints • 📰 [Github](https://github.com/OPPO-Mente-Lab/TLCM)

Our method accelerates latent diffusion models (LDMs) via data-free multistep latent consistency distillation (MLCD), and a data-free latent consistency distillation is further proposed to efficiently guarantee inter-segment consistency in MLCD. We also introduce a bag of techniques, e.g., distribution matching, adversarial learning, and preference learning, to enhance TLCM's performance at few-step inference without any real data. TLCM is highly flexible: the number of sampling steps can be adjusted from 2 to 8 while still producing outputs competitive with full-step approaches. Details are presented in the [paper](https://arxiv.org/html/2406.05768v5) and on [Github](https://github.com/OPPO-Mente-Lab/TLCM).

## Art Gallery

Here we present some examples with different sampling steps.
*(Sample images)*

2-Step Sampling

*(Sample images)*

3-Step Sampling

*(Sample images)*

4-Step Sampling

*(Sample images)*

8-Step Sampling
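## Usage

The snippet below is a minimal, unofficial sketch of few-step sampling with TLCM. It assumes the released checkpoint is an SDXL-compatible LoRA that can be paired with diffusers' `LCMScheduler`; the file name `tlcm_lora.safetensors` is a placeholder. Please follow the [Github](https://github.com/OPPO-Mente-Lab/TLCM) repository for the exact loading code and checkpoint names.

```python
# Minimal sketch, not the official TLCM pipeline.
# Assumptions: the TLCM checkpoint is a LoRA on top of SDXL and
# few-step sampling is done with diffusers' LCMScheduler.
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed base model
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Consistency-style few-step sampling uses the LCM scheduler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Hypothetical file name; replace with the actual TLCM checkpoint.
pipe.load_lora_weights("path/to/tlcm_lora.safetensors")

image = pipe(
    prompt="a photo of an astronaut riding a horse on the moon",
    num_inference_steps=4,  # TLCM supports 2-8 steps
    guidance_scale=1.0,     # distilled models typically use little or no CFG
).images[0]
image.save("tlcm_4step.png")
```

Lowering `num_inference_steps` toward 2 trades some detail for speed, while 8 steps gives the highest quality within TLCM's supported range.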

## Citation

```
```