The GaudiStableDiffusionPipeline class enables text-to-image generation on HPUs. It inherits from the GaudiDiffusionPipeline class, which is the parent of all Gaudi diffusion pipelines. To get the most out of it, it should be paired with a scheduler optimized for HPUs such as GaudiDDIMScheduler.
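As a quick illustration, a minimal usage sketch (the checkpoint name and Gaudi configuration below are example choices, and running this requires an HPU machine with the optimum-habana package installed):

```python
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "runwayml/stable-diffusion-v1-5"  # example checkpoint

# Use the HPU-optimized DDIM scheduler together with the pipeline
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,  # run on Gaudi rather than CPU
    use_hpu_graphs=True,  # capture HPU graphs for faster execution
    gaudi_config="Habana/stable-diffusion",  # example Gaudi config from the Hub
)

outputs = pipeline(
    prompt=["An image of a squirrel in Picasso style"],
    num_images_per_prompt=4,
    batch_size=2,
)
images = outputs.images
```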
( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True use_habana: bool = False use_hpu_graphs: bool = False gaudi_config: typing.Union[str, optimum.habana.transformers.gaudi_configuration.GaudiConfig] = None bf16_full_eval: bool = False )
Parameters
vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text encoder used to encode the prompt.
tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) —
A UNet2DConditionModel to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
safety_checker (StableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model’s potential harms.
feature_extractor (CLIPImageProcessor) —
A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.
requires_safety_checker (bool, optional, defaults to True) —
Whether the pipeline requires a safety checker component.
use_habana (bool, defaults to False) —
Whether to use Gaudi (True) or CPU (False).
use_hpu_graphs (bool, defaults to False) —
Whether to use HPU graphs or not.
gaudi_config (str or GaudiConfig, defaults to None) —
Gaudi configuration to use. Can be a string to download it from the Hub, or a previously initialized config can be passed.
bf16_full_eval (bool, defaults to False) —
Whether to use full bfloat16 evaluation instead of 32-bit. This will be faster and save memory compared to fp32/mixed precision, but can degrade the quality of the generated images.
Extends the StableDiffusionPipeline class: mark_step() calls were added to add support for lazy mode.
(
prompt: typing.Union[str, typing.List[str]] = None
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[typing.List[str], str, NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
batch_size: int = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
guidance_rescale: float = 0.0
)
→
GaudiStableDiffusionPipelineOutput
or tuple
Parameters
prompt (str or List[str], optional) —
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated images.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated images.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
batch_size (int, optional, defaults to 1) —
The number of images in a batch.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a GaudiStableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
guidance_rescale (float, optional, defaults to 0.0) —
Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are Flawed. Guidance rescale should fix overexposure when using zero terminal SNR.
Returns
GaudiStableDiffusionPipelineOutput or tuple
If return_dict is True, GaudiStableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.
The call function to the pipeline for generation.
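How guidance_scale and guidance_rescale combine the unconditional and text-conditioned noise predictions can be sketched in NumPy. Note that apply_guidance is a hypothetical helper mirroring the logic inside the denoising loop, not part of the pipeline API:

```python
import numpy as np

def apply_guidance(noise_uncond, noise_text, guidance_scale, guidance_rescale=0.0):
    # Classifier-free guidance: push the prediction away from the
    # unconditional output and toward the text-conditioned one.
    noise_pred = noise_uncond + guidance_scale * (noise_text - noise_uncond)
    if guidance_rescale > 0.0:
        # Rescale toward the std of the text prediction to fix overexposure
        # ("Common Diffusion Noise Schedules and Sample Steps are Flawed").
        rescaled = noise_pred * (noise_text.std() / noise_pred.std())
        noise_pred = guidance_rescale * rescaled + (1.0 - guidance_rescale) * noise_pred
    return noise_pred

rng = np.random.default_rng(0)
uncond = rng.standard_normal((4, 64))
text = rng.standard_normal((4, 64))

# guidance_scale == 1 reproduces the text-conditioned prediction exactly
assert np.allclose(apply_guidance(uncond, text, 1.0), text)
guided = apply_guidance(uncond, text, guidance_scale=7.5, guidance_rescale=0.7)
```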
( use_habana: bool = False use_hpu_graphs: bool = False gaudi_config: typing.Union[str, optimum.habana.transformers.gaudi_configuration.GaudiConfig] = None bf16_full_eval: bool = False )
Parameters
use_habana (bool, defaults to False) —
Whether to use Gaudi (True) or CPU (False).
use_hpu_graphs (bool, defaults to False) —
Whether to use HPU graphs or not.
gaudi_config (str or GaudiConfig, defaults to None) —
Gaudi configuration to use. Can be a string to download it from the Hub, or a previously initialized config can be passed.
bf16_full_eval (bool, defaults to False) —
Whether to use full bfloat16 evaluation instead of 32-bit. This will be faster and save memory compared to fp32/mixed precision, but can degrade the quality of the generated images.
Extends the DiffusionPipeline class: the pipeline is initialized on Gaudi if use_habana=True.
( pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] **kwargs )
More information here.
( save_directory: typing.Union[str, os.PathLike] safe_serialization: bool = True variant: typing.Optional[str] = None push_to_hub: bool = False **kwargs )
Parameters
save_directory (str or os.PathLike) —
Directory to which to save. Will be created if it doesn’t exist.
safe_serialization (bool, optional, defaults to True) —
Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle).
variant (str, optional) —
If specified, weights are saved in the format pytorch_model.<variant>.bin.
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub method.
Save the pipeline and Gaudi configurations. More information here.
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False )
Parameters
num_train_timesteps (int, defaults to 1000) —
The number of diffusion steps to train the model.
beta_start (float, defaults to 0.0001) —
The starting beta value of inference.
beta_end (float, defaults to 0.02) —
The final beta value.
beta_schedule (str, defaults to "linear") —
The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
trained_betas (np.ndarray, optional) —
Pass an array of betas directly to the constructor to bypass beta_start and beta_end.
clip_sample (bool, defaults to True) —
Clip the predicted sample for numerical stability.
clip_sample_range (float, defaults to 1.0) —
The maximum magnitude for sample clipping. Valid only when clip_sample=True.
set_alpha_to_one (bool, defaults to True) —
Each diffusion step uses the alphas product value at that step and at the previous one. For the final step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, otherwise it uses the alpha value at step 0.
steps_offset (int, defaults to 0) —
An offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as in Stable Diffusion.
prediction_type (str, defaults to epsilon, optional) —
Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), sample (directly predicts the noisy sample), or v_prediction (see section 2.4 of the Imagen Video paper).
thresholding (bool, defaults to False) —
Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such as Stable Diffusion.
dynamic_thresholding_ratio (float, defaults to 0.995) —
The ratio for the dynamic thresholding method. Valid only when thresholding=True.
sample_max_value (float, defaults to 1.0) —
The threshold value for dynamic thresholding. Valid only when thresholding=True.
timestep_spacing (str, defaults to "leading") —
The way the timesteps should be scaled. Refer to Table 2 of Common Diffusion Noise Schedules and Sample Steps are Flawed for more information.
rescale_betas_zero_snr (bool, defaults to False) —
Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and dark samples instead of limiting it to samples with medium brightness. Loosely related to --offset_noise.
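The three beta_schedule options can be sketched as follows. Here make_betas is a hypothetical helper mirroring the standard DDIMScheduler construction logic, not part of the scheduler's API:

```python
import numpy as np

def make_betas(schedule, beta_start=0.0001, beta_end=0.02, num_train_timesteps=1000):
    if schedule == "linear":
        return np.linspace(beta_start, beta_end, num_train_timesteps)
    if schedule == "scaled_linear":
        # Linear in sqrt(beta), the schedule used by Stable Diffusion.
        return np.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps) ** 2
    if schedule == "squaredcos_cap_v2":
        # Cosine schedule, with betas capped at 0.999.
        def alpha_bar(s):
            return np.cos((s + 0.008) / 1.008 * np.pi / 2) ** 2
        t = np.arange(num_train_timesteps)
        betas = 1 - alpha_bar((t + 1) / num_train_timesteps) / alpha_bar(t / num_train_timesteps)
        return np.minimum(betas, 0.999)
    raise ValueError(f"unknown schedule: {schedule}")

betas = make_betas("scaled_linear")
alphas_cumprod = np.cumprod(1.0 - betas)  # the cumulative alpha products used by DDIM
```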
Extends Diffusers’ DDIMScheduler to run optimally on Gaudi.
(
model_output: FloatTensor
sample: FloatTensor
eta: float = 0.0
use_clipped_model_output: bool = False
generator = None
variance_noise: typing.Optional[torch.FloatTensor] = None
return_dict: bool = True
)
→
diffusers.schedulers.scheduling_utils.DDIMSchedulerOutput
or tuple
Parameters
model_output (torch.FloatTensor) —
The direct output from the learned diffusion model.
timestep (int) —
The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) —
A current instance of a sample created by the diffusion process.
eta (float) —
The weight of noise for added noise in the diffusion step.
use_clipped_model_output (bool, defaults to False) —
If True, computes “corrected” model_output from the clipped predicted original sample. Necessary because the predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no clipping has happened, “corrected” model_output would coincide with the one provided as input and use_clipped_model_output has no effect.
generator (torch.Generator, optional) —
A random number generator.
variance_noise (torch.FloatTensor, optional) —
Alternative to generating noise with generator by directly providing the noise for the variance itself. Useful for methods such as CycleDiffusion.
return_dict (bool, optional, defaults to True) —
Whether or not to return a DDIMSchedulerOutput or tuple.
Returns
DDIMSchedulerOutput or tuple
If return_dict is True, DDIMSchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
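The deterministic (eta=0) update that the step function performs, assuming epsilon prediction, can be written out in a small NumPy sketch. Here ddim_step is a hypothetical helper illustrating the math, not the scheduler's actual implementation:

```python
import numpy as np

def ddim_step(model_output, sample, alpha_prod_t, alpha_prod_t_prev):
    # 1. Recover the predicted original sample x0 from the noise prediction:
    #    x0 = (x_t - sqrt(1 - a_t) * eps) / sqrt(a_t)
    pred_original_sample = (sample - (1 - alpha_prod_t) ** 0.5 * model_output) / alpha_prod_t**0.5
    # 2. Direction pointing toward x_t, then combine for the previous sample:
    #    x_{t-1} = sqrt(a_prev) * x0 + sqrt(1 - a_prev) * eps
    pred_sample_direction = (1 - alpha_prod_t_prev) ** 0.5 * model_output
    return alpha_prod_t_prev**0.5 * pred_original_sample + pred_sample_direction

rng = np.random.default_rng(0)
sample = rng.standard_normal((4, 4))
eps = rng.standard_normal((4, 4))  # stands in for the model's noise prediction
prev_sample = ddim_step(eps, sample, alpha_prod_t=0.5, alpha_prod_t_prev=0.7)
```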