You can export a PyTorch model to Neuron with 🤗 Optimum to run inference on AWS Inferentia 1 and Inferentia 2.

There is an export function for each generation of the Inferentia accelerator: `export_neuron` for INF1 and `export_neuronx` for INF2. In practice, you can simply call the generic `export` function, which selects the proper exporting function according to the environment.

In addition, you can check that the exported model is valid via `validate_model_outputs`, which compares the compiled model's output on Neuron devices to the PyTorch model's output on CPU.
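To make this concrete, here is a minimal sketch of an export followed by output validation, assuming an Inferentia instance with `optimum-neuron` installed. The import paths and keyword arguments shown (for example `output`, `neuron_model_path`, `neuron_named_outputs`, and the static shape arguments of the configuration) are assumptions that may differ slightly between `optimum-neuron` releases, so double-check them against the API reference of `export` and `validate_model_outputs`.

```python
from pathlib import Path

from transformers import AutoModel

from optimum.exporters.neuron import export, validate_model_outputs
from optimum.exporters.neuron.model_configs import BertNeuronConfig

# Load the PyTorch model to compile.
model = AutoModel.from_pretrained("bert-base-uncased")

# Neuron export configuration (covered in the next section). The static shapes
# (batch_size, sequence_length) passed here are illustrative assumptions.
neuron_config = BertNeuronConfig(
    model.config, task="feature-extraction", batch_size=1, sequence_length=128
)

# Compile the model. `export` dispatches to `export_neuron` (INF1) or
# `export_neuronx` (INF2) depending on the installed Neuron SDK.
output_path = Path("bert_neuron/model.neuron")
output_path.parent.mkdir(parents=True, exist_ok=True)
_, neuron_outputs = export(model=model, config=neuron_config, output=output_path)

# Compare the compiled model's outputs on Neuron with the CPU PyTorch outputs.
validate_model_outputs(
    config=neuron_config,
    reference_model=model,
    neuron_model_path=output_path,
    neuron_named_outputs=neuron_outputs,
)
```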
Exporting a PyTorch model to a Neuron compiled model involves specifying:

1. The input names.
2. The output names.
3. The dummy inputs used to trace the model. The Neuron compiler needs them to record the computational graph and convert it to a TorchScript module.
4. The compilation arguments, which control the trade-off between hardware efficiency (latency, throughput) and accuracy.
Depending on the choice of model and task, we represent the data above with configuration classes. Each configuration class is associated with a specific model architecture and follows the naming convention `ArchitectureNameNeuronConfig`. For instance, the configuration which specifies the Neuron export of BERT models is `BertNeuronConfig`.
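As an illustration of this naming convention, the sketch below resolves the configuration class registered for a given architecture and task. It assumes the Neuron configurations are registered with 🤗 Optimum's `TasksManager` under a `"neuron"` exporter key; if that assumption does not hold for your version, you can instantiate a class such as `BertNeuronConfig` directly.

```python
from transformers import AutoConfig

from optimum.exporters.tasks import TasksManager

# Look up the export configuration registered for an architecture/task pair.
# Assumption: the Neuron configs are registered with TasksManager under the
# "neuron" exporter key, following the <ArchitectureName>NeuronConfig convention.
model_config = AutoConfig.from_pretrained("bert-base-uncased")
neuron_config_constructor = TasksManager.get_exporter_config_constructor(
    exporter="neuron",
    model_type=model_config.model_type,  # "bert"
    task="text-classification",
)
print(neuron_config_constructor)  # expected to point at BertNeuronConfig
```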
Since many architectures share similar properties for their Neuron configuration, 🤗 Optimum adopts a 3-level class hierarchy:

1. Abstract and generic base classes. These handle all the fundamental features, while being agnostic to the modality (text, image, audio, etc.).
2. Middle-end classes. These are aware of the modality, and several can exist for the same modality depending on the inputs they support. They specify which input generators should be used for the dummy inputs, but remain model-agnostic.
3. Model-specific classes like the `BertNeuronConfig` mentioned above. These are the ones actually used to export models.

The following architectures and tasks are currently supported:

Architecture | Task |
---|---|
ALBERT | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
BERT | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
CamemBERT | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
ConvBERT | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
DeBERTa (INF2 only) | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
DeBERTa-v2 (INF2 only) | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
DistilBERT | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
ELECTRA | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
FlauBERT | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
GPT2 | text-generation |
MobileBERT | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
MPNet | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
RoBERTa | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
RoFormer | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
XLM | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
XLM-RoBERTa | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification |
Stable Diffusion | text-to-image, image-to-image, inpaint |
Stable Diffusion XL | text-to-image |
You can find more details on checking supported tasks here.
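If you prefer to check supported tasks programmatically, a possible starting point is 🤗 Optimum's `TasksManager`. The `exporter="neuron"` argument is again an assumption tied to how `optimum-neuron` registers its configurations, so verify it against your installed version.

```python
from optimum.exporters.tasks import TasksManager

# List the tasks registered for a given model type under the Neuron exporter
# (assumes the Neuron configs are registered with TasksManager).
supported_tasks = TasksManager.get_supported_tasks_for_model_type(
    "distilbert", exporter="neuron"
)
print(list(supported_tasks.keys()))
```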
More architectures coming soon, stay tuned! 🚀