AYYasaswini committed
Commit 8f873c3 • Parent(s): 71472f9
Update app.py

app.py CHANGED
@@ -40,13 +40,10 @@ scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedu
 vae = vae.to(torch_device)
 text_encoder = text_encoder.to(torch_device)
 unet = unet.to(torch_device)
+token_emb_layer = text_encoder.text_model.embeddings.token_embedding
+pos_emb_layer = text_encoder.text_model.embeddings.position_embedding
+position_ids = text_encoder.text_model.embeddings.position_ids[:, :77]
 
-"""## A diffusion loop
-
-If all you want is to make a picture from some text, you can ignore this notebook and use one of the existing tools (such as [DreamStudio](https://beta.dreamstudio.ai/)) or the simplified pipeline from Hugging Face, as documented [here](https://huggingface.co/blog/stable_diffusion).
-
-What we want to do in this notebook is dig a little deeper into how this works, so we'll start by checking that the example code runs. Again, this is adapted from the [HF notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) and looks very similar to what you'll find if you inspect [the `__call__()` method of the stable diffusion pipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L200).
-"""
 
 
 # Prep Scheduler
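The three `+` lines pull the token and position embedding layers out of the CLIP text encoder; `position_ids` is just the tensor `[[0, 1, ..., 76]]`. As a rough sketch of how these pieces combine (this mirrors the deep-dive notebook the app is adapted from; `tokenizer` and `torch_device` are assumed to be defined earlier in `app.py`):

```python
# Sketch only: build CLIP input embeddings by hand from the layers grabbed above.
# Assumes `tokenizer` (a CLIPTokenizer) and `torch_device` exist earlier in app.py.
text_input = tokenizer("A picture of a puppy", padding="max_length",
                       max_length=tokenizer.model_max_length,
                       truncation=True, return_tensors="pt")
input_ids = text_input.input_ids.to(torch_device)            # shape (1, 77)

token_embeddings = token_emb_layer(input_ids)                # (1, 77, 768)
position_embeddings = pos_emb_layer(position_ids)            # (1, 77, 768)

# CLIP's text model simply sums the two before its transformer blocks
input_embeddings = token_embeddings + position_embeddings    # (1, 77, 768)
```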
@@ -64,15 +61,7 @@ latents = latents.to(torch_device)
 latents = latents * scheduler.init_noise_sigma  # Scaling (previous versions did latents = latents * self.scheduler.sigmas[0])
 
 # Loop
-with autocast("cuda"):  # will fall back to CPU if no CUDA; no autocast for MPS
-    for i, t in tqdm(enumerate(scheduler.timesteps), total=len(scheduler.timesteps)):
 
-"""It's working, but that's quite a bit of code! Let's look at the components one by one.
-
-## The Autoencoder (AE)
-
-The AE can 'encode' an image into a latent representation and decode it back into an image. I've wrapped the code for this into a couple of functions here so we can see what it looks like in action:
-"""
 
 def pil_to_latent(input_im):
     # Single image -> single latent in a batch (so size 1, 4, 64, 64)
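The two removed lines above are only the shell of the sampling loop; its body never appears in this hunk. For reference, a minimal sketch of the loop as it appears in the HF notebook this file is adapted from (assuming `text_embeddings`, `guidance_scale`, `latents`, `scheduler` and `unet` are set up earlier in `app.py`):

```python
for i, t in tqdm(enumerate(scheduler.timesteps), total=len(scheduler.timesteps)):
    # Two copies of the latents: one for the unconditional and one for the
    # text-conditioned prediction
    latent_model_input = torch.cat([latents] * 2)
    latent_model_input = scheduler.scale_model_input(latent_model_input, t)

    # Predict the noise residual
    with torch.no_grad():
        noise_pred = unet(latent_model_input, t,
                          encoder_hidden_states=text_embeddings).sample

    # Classifier-free guidance
    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

    # Step the scheduler: x_t -> x_{t-1}
    latents = scheduler.step(noise_pred, t, latents).prev_sample
```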
@@ -92,46 +81,8 @@ def latents_to_pil(latents):
     return pil_images
 
 
-"""What does this look like at different timesteps? Experiment and see for yourself!
-
-If you uncomment the cell below you'll see that in this case the `scheduler.add_noise` function literally just adds noise scaled by sigma: `noisy_samples = original_samples + noise * sigmas`
-"""
-#encoded = pil_to_latent(input_image)
-#encoded.shape
-#decoded = latents_to_pil(encoded)[0]
-#decoded
-# ??scheduler.add_noise
-
-"""Other diffusion models may be trained with different noising and scheduling approaches. Some keep the variance fairly constant across noise levels ('variance preserving') with different scaling and mixing tricks, rather than having noisy latents with higher and higher variance as more noise is added ('variance exploding').
-
-If we want to start from random noise instead of a noised image, we need to scale it by the largest sigma value used during training, ~14 in this case. And before these noisy latents are fed to the model, they are scaled again in the so-called pre-conditioning step:
-`latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)` (now handled by `latent_model_input = scheduler.scale_model_input(latent_model_input, t)`).
 
-
-
-## Loop starting from a noised version of the input (AKA image2image)
-
-Let's see what happens when we use our image as a starting point, adding some noise and then doing the final few denoising steps in the loop with a new prompt.
-
-We'll use a similar loop to the first demo, but skip the first `start_step` steps.
-
-To noise our image we'll use code like that shown above, using the scheduler to noise it to a level equivalent to step 10 (`start_step`).
-"""
-
-# Settings (same as before except for the new prompt)
-
-"""You can see that some colours and structure from the image are kept, but we now have a new picture! The more noise you add and the more steps you do, the further it gets from the input image.
-
-This is how the popular img2img pipeline works. Again, if this is your end goal, there are tools to make it easy!
-
-But you can see that under the hood it's the same as the generation loop, just skipping the first few steps and starting from a noised image rather than pure noise.
-
-Explore changing how many steps are skipped and see how this affects how much the image changes from the input.
-
-## Exploring the text -> embedding pipeline
-
-We use a text encoder model to turn our text into a set of 'embeddings' which are fed to the diffusion model as conditioning. Let's follow a piece of text through this process and see how it works.
-"""
+#
 
 # Our text prompt
 prompt = 'A picture of a puppy'
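The removed docstrings describe the img2img idea: encode the input image, noise it to the level of step `start_step` with `scheduler.add_noise`, then run only the remaining denoising steps. A sketch under those assumptions (`input_image` and `num_inference_steps` are presumed defined elsewhere in `app.py`):

```python
start_step = 10
scheduler.set_timesteps(num_inference_steps)

encoded = pil_to_latent(input_image)
noise = torch.randn_like(encoded)
# For this scheduler, add_noise is literally: noisy = original + noise * sigma
timestep = torch.tensor([scheduler.timesteps[start_step]])
latents = scheduler.add_noise(encoded, noise, timestep).to(torch_device).float()

for i, t in tqdm(enumerate(scheduler.timesteps), total=len(scheduler.timesteps)):
    if i >= start_step:  # skip the first, highest-noise steps
        ...              # same body as the text-to-image loop sketched above
```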
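`pil_to_latent` and `latents_to_pil` themselves only appear truncated in the hunks. For completeness, a reconstruction along the lines of the notebook this app is adapted from (assuming `torchvision.transforms` is imported as `tfms` and `PIL.Image` as `Image`, as in that notebook; 0.18215 is the latent scaling factor used by SD v1):

```python
def pil_to_latent(input_im):
    # Single image -> single latent in a batch (so size 1, 4, 64, 64)
    with torch.no_grad():
        latent = vae.encode(tfms.ToTensor()(input_im).unsqueeze(0).to(torch_device) * 2 - 1)
    return 0.18215 * latent.latent_dist.sample()

def latents_to_pil(latents):
    # Batch of latents -> list of PIL images
    latents = (1 / 0.18215) * latents
    with torch.no_grad():
        image = vae.decode(latents).sample
    image = (image / 2 + 0.5).clamp(0, 1)
    image = image.detach().cpu().permute(0, 2, 3, 1).numpy()
    images = (image * 255).round().astype("uint8")
    pil_images = [Image.fromarray(im) for im in images]
    return pil_images
```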
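Finally, the "text -> embedding" section that `prompt` feeds into: in the standard path the tokenized prompt goes through the whole text encoder rather than through the individual embedding layers. A minimal sketch, assuming the `tokenizer`/`text_encoder` pair loaded at the top of `app.py`:

```python
text_input = tokenizer(prompt, padding="max_length",
                       max_length=tokenizer.model_max_length,
                       truncation=True, return_tensors="pt")
with torch.no_grad():
    text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0]
print(text_embeddings.shape)  # torch.Size([1, 77, 768]) for SD v1.x
```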