---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: other
instance_prompt: an icon of trpfrog
widget:
- text: an icon of trpfrog eating ramen
  output:
    url: image_1.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_2.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_3.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_4.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_5.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_7.png
tags:
- text-to-image
- diffusers-training
- diffusers
- sdxl
- sdxl-diffusers
datasets:
- trpfrog/trpfrog-icons
- Prgckwb/trpfrog-icons-dreambooth
---

# SDXL DreamBooth - Prgckwb/trpfrog-sdxl

## Model description

!! This is the same as Prgckwb/trpfrog-sdxl-lora !!

These are Prgckwb/trpfrog-sdxl DreamBooth weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was not enabled.

## Trigger words

You should use `an icon of trpfrog` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the DreamBooth weights and move the pipeline to the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'Prgckwb/trpfrog-sdxl', torch_dtype=torch.float16
).to('cuda')

# The prompt must contain the trigger phrase "an icon of trpfrog".
image = pipeline('an icon of trpfrog').images[0]
image.save('trpfrog.png')
```
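
If you want reproducible outputs or different quality/speed trade-offs, the standard diffusers generation arguments apply. The sketch below is illustrative: the prompt, step count, guidance scale, and seed are example values, not settings taken from the original training run.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'Prgckwb/trpfrog-sdxl', torch_dtype=torch.float16
).to('cuda')

# Fix the random seed so the same prompt yields the same image.
generator = torch.Generator(device='cuda').manual_seed(0)

image = pipeline(
    'an icon of trpfrog eating ramen',  # prompt must include the trigger phrase
    num_inference_steps=30,             # illustrative value
    guidance_scale=7.0,                 # illustrative value
    generator=generator,
).images[0]
image.save('trpfrog_ramen.png')
```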