SimianLuo and patrickvonplaten committed on
Commit c4c7950
1 Parent(s): 4c51dea

Deprecate old usage (#10)


- Deprecate old usage (b62943da15bcc829961b8eae396e3cdbd4446964)


Co-authored-by: Patrick von Platen <[email protected]>

Files changed (1)
  1. README.md +27 -1
README.md CHANGED
@@ -37,6 +37,32 @@ You can try out Latent Consistency Models directly on:
 [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model)
 
 To run the model yourself, you can leverage the 🧨 Diffusers library:
+1. Install the library:
+```
+pip install git+https://github.com/huggingface/diffusers.git
+pip install transformers accelerate
+```
+
+2. Run the model:
+```py
+from diffusers import DiffusionPipeline
+import torch
+
+pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
+
+# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
+pipe.to(torch_device="cuda", torch_dtype=torch.float32)
+
+prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
+
+# Can be set to 1~50 steps. LCM supports fast inference even at <= 4 steps. Recommended: 1~8 steps.
+num_inference_steps = 4
+
+images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
+```
+
+## Usage (Deprecated)
+
 1. Install the library:
 ```
 pip install diffusers transformers accelerate
@@ -47,7 +73,7 @@ pip install diffusers transformers accelerate
 from diffusers import DiffusionPipeline
 import torch
 
-pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main")
+pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main", revision="fb9c5d")
 
 # To save GPU memory, torch.float16 can be used, but it may compromise image quality.
 pipe.to(torch_device="cuda", torch_dtype=torch.float32)
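
For reference, the new (non-deprecated) usage documented by this commit can be exercised end to end roughly as follows. This is a minimal sketch based on the README snippet added above; the switch to `torch.float16` and the output filename are illustrative choices, not part of the commit.

```py
# Minimal sketch of the new usage documented by this commit (not part of the diff).
# Assumes a CUDA GPU and diffusers installed from source as shown above.
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")

# torch.float16 saves GPU memory but may compromise image quality (per the README);
# use torch.float32 to match the README example exactly.
pipe.to(torch_device="cuda", torch_dtype=torch.float16)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# LCM needs very few denoising steps; 4 is the value used in the README example.
images = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=8.0,
    lcm_origin_steps=50,
    output_type="pil",
).images

# output_type="pil" returns PIL images, so the result can be saved directly.
images[0].save("lcm_sample.png")  # illustrative filename
```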