SDXL #104
by JazzyCucumber - opened

README.md CHANGED
@@ -146,12 +146,12 @@ pip install optimum[openvino]
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. If you want to load a PyTorch model and convert it to the OpenVINO format on the fly, you can set `export=True`.

```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.intel import OVStableDiffusionXLPipeline

  model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)

  prompt = "A majestic lion jumping from a big stone at night"
  image = pipeline(prompt).images[0]
```

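The paragraph above mentions `export=True` for on-the-fly conversion, but the diff does not show it. A minimal sketch of that flow, including saving the converted model so later loads can skip the conversion (the local output path is illustrative):

```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"

# export=True converts the PyTorch checkpoint to the OpenVINO format on the fly
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)

# Save the converted model; "./sdxl-openvino" is an illustrative path
pipeline.save_pretrained("./sdxl-openvino")

# Subsequent runs can load the OpenVINO model directly, without export=True
pipeline = OVStableDiffusionXLPipeline.from_pretrained("./sdxl-openvino")
```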
@@ -170,12 +170,12 @@ pip install optimum[onnxruntime]
To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. If you want to load a PyTorch model and convert it to the ONNX format on the fly, you can set `export=True`.

```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.onnxruntime import ORTStableDiffusionXLPipeline

  model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)

  prompt = "A majestic lion jumping from a big stone at night"
  image = pipeline(prompt).images[0]
```
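As with OpenVINO, the `export=True` flag mentioned above converts the PyTorch checkpoint on the fly. A minimal sketch, with the converted model saved for reuse (the local output path is illustrative):

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"

# export=True converts the PyTorch checkpoint to the ONNX format on the fly
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id, export=True)

# Save the converted model; "./sdxl-onnx" is an illustrative path
pipeline.save_pretrained("./sdxl-onnx")

# Subsequent runs can load the ONNX model directly, without export=True
pipeline = ORTStableDiffusionXLPipeline.from_pretrained("./sdxl-onnx")
```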