TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps
📃 Paper • 🤗 Checkpoints • 📰 Github
Our method accelerates latent diffusion models (LDMs) via data-free multistep latent consistency distillation (MLCD), and further proposes data-free latent consistency distillation to efficiently guarantee inter-segment consistency in MLCD.
Furthermore, we introduce a bag of techniques, e.g., distribution matching, adversarial learning, and preference learning, to enhance TLCM's performance at few-step inference without any real data.
TLCM is highly flexible: the number of sampling steps can be adjusted anywhere from 2 to 8 while still producing outputs competitive with full-step approaches.
Details are presented in the paper and Github.
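For a sense of what few-step inference looks like in practice, here is a minimal sketch using the diffusers library. The base model ID, LoRA path, and scheduler choice below are illustrative assumptions rather than this repository's exact API; consult the Github repo for the supported usage.

```python
# Minimal sketch of few-step sampling with a consistency-distilled LoRA.
# The base model ID, LoRA path, and scheduler here are illustrative
# assumptions, not TLCM's exact API; see the Github repo for real usage.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed base LDM
    torch_dtype=torch.float16,
).to("cuda")

# Swap in a consistency-style scheduler suited to few-step inference.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Hypothetical path to the distilled TLCM weights.
pipe.load_lora_weights("path/to/tlcm_lora.safetensors")

image = pipe(
    prompt="a cinematic photo of a red fox in the snow",
    num_inference_steps=4,  # adjustable from 2 to 8
    guidance_scale=1.0,     # low CFG is typical for consistency models
).images[0]
image.save("tlcm_4step.png")
```

Consistency-distilled models typically bake guidance into the weights, so a low guidance scale tends to work best at 2-8 steps.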
Art Gallery
Here we present some examples generated with different numbers of sampling steps.
2-Step Sampling.
3-Step Sampling.
4-Step Sampling.
8-Step Sampling.
Citation