add optimum examples

README.md CHANGED

@@ -88,6 +88,57 @@ instead of `.to("cuda")`:
```

### Optimum

[Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).

#### OpenVINO

To install Optimum with the dependencies required for OpenVINO:

```bash
pip install optimum[openvino]
```

To load an OpenVINO model and run inference with OpenVINO Runtime, replace `StableDiffusionXLPipeline` with the Optimum `OVStableDiffusionXLPipeline`. If you instead want to load a PyTorch model and convert it to the OpenVINO format on the fly, set `export=True`.

```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
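
To avoid re-exporting the model on every run, the converted pipeline can be saved and reloaded; a minimal sketch (the output directory name is just for illustration):

```python
from optimum.intel import OVStableDiffusionXLPipeline

# Load the PyTorch weights and convert them to the OpenVINO format on the fly.
pipeline = OVStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", export=True
)

# Persist the converted model so subsequent loads can skip the export step.
pipeline.save_pretrained("./sdxl-openvino")

# Later: load the already-converted model directly, without export=True.
pipeline = OVStableDiffusionXLPipeline.from_pretrained("./sdxl-openvino")
image = pipeline("A majestic lion jumping from a big stone at night").images[0]
```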

You can find more examples (such as static reshaping and model compilation) in the Optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).
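
As one illustration, static reshaping and compilation look roughly like this; a sketch based on the `reshape()` and `compile()` methods described in that documentation, with illustrative shapes:

```python
from optimum.intel import OVStableDiffusionXLPipeline

pipeline = OVStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", export=True
)

# Bind the input shapes statically so OpenVINO can optimize for them;
# after reshaping, the pipeline only accepts these exact shapes.
pipeline.reshape(batch_size=1, height=1024, width=1024, num_images_per_prompt=1)
pipeline.compile()

image = pipeline("A majestic lion jumping from a big stone at night").images[0]
```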

#### ONNX

To install Optimum with the dependencies required for ONNX Runtime inference:

```bash
pip install optimum[onnxruntime]
```

To load an ONNX model and run inference with ONNX Runtime, replace `StableDiffusionXLPipeline` with the Optimum `ORTStableDiffusionXLPipeline`. If you instead want to load a PyTorch model and convert it to the ONNX format on the fly, set `export=True`.

```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.onnxruntime import ORTStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```

You can find more examples in the Optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl).
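
As a sketch of the export-and-reuse pattern with ONNX Runtime (the directory name is illustrative, and the `provider` argument, which selects the execution provider, is an assumption based on the Optimum ONNX Runtime API):

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Convert the PyTorch weights to ONNX on the fly and persist the result
# so later runs can skip the export.
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", export=True
)
pipeline.save_pretrained("./sdxl-onnx")

# Reload the exported model, here on GPU via the CUDA execution provider.
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(
    "./sdxl-onnx", provider="CUDAExecutionProvider"
)
image = pipeline("A majestic lion jumping from a big stone at night").images[0]
```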

## Uses

### Direct Use

@@ -117,4 +168,4 @@ The model was not trained to be factual or true representations of people or eve
- The autoencoding part of the model is lossy.

### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.