Optimum Intel can be used to load optimized models from the Hugging Face Hub and create pipelines to run inference with OpenVINO Runtime without rewriting your APIs.
You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors (see the full list of supported devices).
For that, just replace the `AutoModelForXxx` class with the corresponding `OVModelForXxx` class. To load a Transformers model and convert it to the OpenVINO format on-the-fly, you can set `export=True` when loading your model.
Here is an example of how to perform inference with OpenVINO Runtime for a text classification task:
```diff
- from transformers import AutoModelForSequenceClassification
+ from optimum.intel import OVModelForSequenceClassification
  from transformers import AutoTokenizer, pipeline

  model_id = "distilbert-base-uncased-finetuned-sst-2-english"
- model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  cls_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
  outputs = cls_pipe("He's a dreadful magician.")

  [{'label': 'NEGATIVE', 'score': 0.9919503927230835}]
```
To easily save the resulting model, you can use the `save_pretrained()` method, which will save both the XML file describing the graph and the BIN file containing the weights.
```python
# Save the exported model
save_directory = "openvino_distilbert"
model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
```
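The exported model can later be reloaded from this directory. A minimal sketch, reusing the `save_directory` from above (`export=True` is no longer needed since the directory already contains the OpenVINO IR files):

```python
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer

# Load the already-exported OpenVINO model, skipping the conversion step
model = OVModelForSequenceClassification.from_pretrained(save_directory)
tokenizer = AutoTokenizer.from_pretrained(save_directory)
```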
By default, the `OVModelForXxx` classes support dynamic shapes, enabling inputs of any shape. To speed up inference, static shapes can be enabled by giving the desired input shapes.
```python
# Fix the batch size to 1 and the sequence length to 9
model.reshape(1, 9)
# Compile the model before the first inference
model.compile()
```
Currently, OpenVINO only supports static shapes when running inference on Intel GPUs. FP16 precision can also be enabled in order to further decrease latency.
```python
# Fix the batch size to 1 and the sequence length to 9
model.reshape(1, 9)
# Enable FP16 precision
model.half()
model.to("gpu")
# Compile the model before the first inference
model.compile()
```
When fixing the shapes with the `reshape()` method, inference cannot be performed with an input of a different shape. When instantiating your pipeline, you can specify the maximum total input sequence length after tokenization so that shorter sequences are padded and longer sequences are truncated.
```python
from datasets import load_dataset
from transformers import AutoTokenizer, pipeline
from evaluate import evaluator
from optimum.intel import OVModelForQuestionAnswering

model_id = "distilbert-base-cased-distilled-squad"
model = OVModelForQuestionAnswering.from_pretrained(model_id, export=True)
model.reshape(1, 384)
tokenizer = AutoTokenizer.from_pretrained(model_id)
eval_dataset = load_dataset("squad", split="validation").select(range(50))
task_evaluator = evaluator("question-answering")
qa_pipe = pipeline(
    "question-answering",
    model=model,
    tokenizer=tokenizer,
    max_seq_len=384,
    padding="max_length",
    truncation=True,
)
metric = task_evaluator.compute(model_or_pipeline=qa_pipe, data=eval_dataset, metric="squad")
```
By default, the model will be compiled when instantiating an `OVModel`. If the model is then reshaped, moved to another device, or switched to FP16 precision, it will need to be recompiled, which happens by default before the first inference (thus inflating the latency of the first inference). To avoid an unnecessary compilation, you can disable the first compilation by setting `compile=False` when loading the model. The model can then be compiled before the first inference with `model.compile()`.
```python
from optimum.intel import OVModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# Load the model and disable the model compilation
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True, compile=False)
model.half()
# Compile the model before the first inference
model.compile()
```
Sequence-to-sequence (Seq2Seq) models, which generate a new sequence from an input, can also be used when running inference with OpenVINO. When Seq2Seq models are exported to the OpenVINO IR, they are decomposed into two parts: the encoder and the “decoder” (which actually consists of the decoder with the language modeling head), that are later combined during inference.
To leverage the pre-computed key/values hidden-states to speed up sequential decoding, simply pass `use_cache=True` to the `from_pretrained()` method. An additional model component will be exported: the “decoder” with pre-computed key/values as one of its inputs. This specific export comes from the fact that during the first pass, the decoder has no pre-computed key/values hidden-states, while during the rest of the generation past key/values will be used to speed up sequential decoding.
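A minimal sketch of passing this flag at loading time (the `t5-small` checkpoint is the one used in the example below):

```python
from optimum.intel import OVModelForSeq2SeqLM

# Export both the decoder and the decoder-with-past components
model = OVModelForSeq2SeqLM.from_pretrained("t5-small", export=True, use_cache=True)
```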
Here is an example of how you can run inference for a translation task using a T5 model, exporting it to the OpenVINO IR on-the-fly:
```python
from transformers import AutoTokenizer, pipeline
from optimum.intel import OVModelForSeq2SeqLM

model_id = "t5-small"
model = OVModelForSeq2SeqLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
translation_pipe = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer)
text = "He never went out without a book under his arm, and he often came back with two."
result = translation_pipe(text)
# [{'translation_text': "Il n'est jamais sorti sans un livre sous son bras, et il est souvent revenu avec deux."}]

# Save the exported model
save_directory = "openvino_t5"
model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
```
Stable Diffusion models can also be used when running inference with OpenVINO. When Stable Diffusion models are exported to the OpenVINO format, they are decomposed into three components that are later combined during inference: the text encoder, the U-Net, and the VAE decoder.
Make sure you have 🤗 Diffusers installed. To install `diffusers`:

```bash
pip install optimum[diffusers]
```
```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "echarlaix/stable-diffusion-v1-5-openvino"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Rembrandt"
images = pipeline(prompt).images
```
To load your PyTorch model and convert it to OpenVINO on-the-fly, you can set `export=True`.
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
# Don't forget to save the exported model
pipeline.save_pretrained("openvino-sd-v1-5")
To further speed up inference, the model can be statically reshaped:
```python
# Define the shapes related to the inputs and desired outputs
batch_size = 1
num_images_per_prompt = 1
height = 512
width = 512

# Statically reshape the model
pipeline.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt)
# Compile the model before the first inference
pipeline.compile()

# Run inference
images = pipeline(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images
```
If you want to change any parameters, such as the output height or width, you'll need to statically reshape your model once again, as sketched below.
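For instance, a minimal sketch of reshaping the same pipeline for a different output resolution (768x768 is an arbitrary choice for illustration):

```python
# Reshape and recompile for a new output resolution
height, width = 768, 768
pipeline.reshape(batch_size=1, height=height, width=width, num_images_per_prompt=1)
pipeline.compile()

# Inference must now use the new static shapes
images = pipeline(prompt, height=height, width=width).images
```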
Image-to-image generation is supported through the `OVStableDiffusionImg2ImgPipeline`:

```python
import requests
from PIL import Image
from io import BytesIO
from optimum.intel import OVStableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionImg2ImgPipeline.from_pretrained(model_id, export=True)

# Download the initial image to condition the generation on
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))
prompt = "A fantasy landscape, trending on artstation"
image = pipeline(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images[0]
image.save("fantasy_landscape.png")
```
Before using `OVStableDiffusionXLPipeline`, make sure to have `diffusers` and `invisible_watermark` installed. You can install the libraries as follows:
```bash
pip install diffusers
pip install invisible-watermark>=2.0
```
Here is an example of how you can load a PyTorch SDXL model, convert it to the OpenVINO format on-the-fly and run inference with OpenVINO Runtime:
```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
base = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)
prompt = "train station by Caspar David Friedrich"
image = base(prompt).images[0]
# Don't forget to save your OpenVINO model
base.save_pretrained("openvino-sd-xl-base-1.0")
```
You can use SDXL as follows for image-to-image:
```python
from optimum.intel import OVStableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

model_id = "stabilityai/stable-diffusion-xl-refiner-1.0"
pipeline = OVStableDiffusionXLImg2ImgPipeline.from_pretrained(model_id, export=True)

url = "https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/sd_xl/castle_friedrich.png"
image = load_image(url).convert("RGB")
prompt = "medieval castle by Caspar David Friedrich"
image = pipeline(prompt, image=image).images[0]
image.save("medieval_castle.png")
```
The image can be refined by making use of a model like stabilityai/stable-diffusion-xl-refiner-1.0. In this case, you only have to output the latents from the base model.
```python
from optimum.intel import OVStableDiffusionXLImg2ImgPipeline

model_id = "stabilityai/stable-diffusion-xl-refiner-1.0"
refiner = OVStableDiffusionXLImg2ImgPipeline.from_pretrained(model_id, export=True)

# `base` and `prompt` come from the SDXL text-to-image example above
image = base(prompt=prompt, output_type="latent").images[0]
image = refiner(prompt=prompt, image=image[None, :]).images[0]
```
As shown in the table below, each task is associated with a class that enables loading your model automatically.
| Task | Auto Class |
|---|---|
| `text-classification` | `OVModelForSequenceClassification` |
| `token-classification` | `OVModelForTokenClassification` |
| `question-answering` | `OVModelForQuestionAnswering` |
| `audio-classification` | `OVModelForAudioClassification` |
| `image-classification` | `OVModelForImageClassification` |
| `feature-extraction` | `OVModelForFeatureExtraction` |
| `fill-mask` | `OVModelForMaskedLM` |
| `text-generation` | `OVModelForCausalLM` |
| `text2text-generation` | `OVModelForSeq2SeqLM` |
| `text-to-image` | `OVStableDiffusionPipeline` |
| `text-to-image` | `OVStableDiffusionXLPipeline` |
| `image-to-image` | `OVStableDiffusionImg2ImgPipeline` |
| `image-to-image` | `OVStableDiffusionXLImg2ImgPipeline` |
| `inpaint` | `OVStableDiffusionInpaintPipeline` |
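For instance, the `text-generation` task maps to `OVModelForCausalLM`. A minimal sketch (the `gpt2` checkpoint is an arbitrary choice for illustration):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

# "gpt2" is just an example checkpoint; any causal LM supported by the export works
model_id = "gpt2"
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

gen_pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(gen_pipe("OpenVINO Runtime makes inference")[0]["generated_text"])
```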