---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- sd3
- sd3-diffusers
- template:sd-lora
base_model: stabilityai/stable-diffusion-3-medium-diffusers
instance_prompt: a photo of Medium Thick knit pullover
widget:
- text: A photo of Medium Thick knit pullover on a mannequin or torso
output:
url: image_0.png
- text: A photo of Medium Thick knit pullover on a mannequin or torso
output:
url: image_1.png
- text: A photo of Medium Thick knit pullover on a mannequin or torso
output:
url: image_2.png
- text: A photo of Medium Thick knit pullover on a mannequin or torso
output:
url: image_3.png
---
# SD3 DreamBooth LoRA - TE2G/MediumThick
<Gallery />
## Model description
These are TE2G/MediumThick DreamBooth LoRA weights for [stabilityai/stable-diffusion-3-medium-diffusers](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers).
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
## Trigger words
You should use `a photo of Medium Thick knit pullover` to trigger the image generation.
## Download model
[Download them](https://huggingface.co/TE2G/MediumThick/tree/main) from the Files & versions tab.
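The repository can also be fetched programmatically with the `huggingface_hub` client. This is a minimal sketch; it downloads to the default Hugging Face cache directory.

```python
from huggingface_hub import snapshot_download

# Download the full LoRA repository (weights plus this card) to the local HF cache.
local_dir = snapshot_download(repo_id="TE2G/MediumThick")
print(local_dir)
```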
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).
## Intended uses & limitations
#### How to use
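Below is a minimal sketch of loading these LoRA weights with the 🧨 diffusers `StableDiffusion3Pipeline`. It assumes access to the gated base checkpoint, a CUDA device with enough memory for SD3 Medium in fp16, and that the LoRA was saved in the standard diffusers format; the inference settings (28 steps, guidance scale 7.0) are common SD3 defaults, not values confirmed by this training run.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the base SD3 Medium model (requires accepting the license on the Hub).
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)

# Load the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("TE2G/MediumThick")
pipe.to("cuda")

# Use the trigger phrase from the "Trigger words" section in the prompt.
image = pipe(
    "A photo of Medium Thick knit pullover on a mannequin",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("medium_thick_pullover.png")
```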
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]