Exporting a model from one framework to a given format (also called a backend here) involves specifying the input and output information that the export function needs. `optimum.exporters` is structured as follows for each backend:
The role of the `TasksManager` is to be the main entry point for loading a model given a name and a task, and for getting the proper configuration for a given (architecture, backend) pair. It provides a centralized place to register the `task -> model class` and `(architecture, backend) -> configuration` mappings, which the export functions can use while relying on the various checks it provides.
The supported tasks may depend on the backend; the tables below give the mappings between a task name and the corresponding auto class for both PyTorch and TensorFlow.
You can find out which tasks are supported for a model on a given backend as follows:
```python
>>> from optimum.exporters.tasks import TasksManager

>>> model_type = "distilbert"
>>> # For instance, for the ONNX export.
>>> backend = "onnx"
>>> distilbert_tasks = list(TasksManager.get_supported_tasks_for_model_type(model_type, backend).keys())
>>> print(distilbert_tasks)
['default', 'fill-mask', 'text-classification', 'multiple-choice', 'token-classification', 'question-answering']
```
PyTorch:

| Task | Auto Class |
|---|---|
| `text-generation`, `text-generation-with-past` | `AutoModelForCausalLM` |
| `feature-extraction`, `feature-extraction-with-past` | `AutoModel` |
| `fill-mask` | `AutoModelForMaskedLM` |
| `question-answering` | `AutoModelForQuestionAnswering` |
| `text2text-generation`, `text2text-generation-with-past` | `AutoModelForSeq2SeqLM` |
| `text-classification` | `AutoModelForSequenceClassification` |
| `token-classification` | `AutoModelForTokenClassification` |
| `multiple-choice` | `AutoModelForMultipleChoice` |
| `image-classification` | `AutoModelForImageClassification` |
| `object-detection` | `AutoModelForObjectDetection` |
| `image-segmentation` | `AutoModelForImageSegmentation` |
| `masked-im` | `AutoModelForMaskedImageModeling` |
| `semantic-segmentation` | `AutoModelForSemanticSegmentation` |
| `automatic-speech-recognition` | `AutoModelForSpeechSeq2Seq` |
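Conceptually, the PyTorch mapping above amounts to a dictionary from task names to auto class names, with the `-with-past` variants resolving to the same class as their base task. A minimal illustrative sketch (a plain dictionary for clarity, not the internal data structure used by `optimum.exporters`; only a subset of tasks is shown):

```python
# Illustrative subset of the task -> auto class mapping shown above.
# This is a sketch, not optimum's internal structure.
PT_TASK_TO_AUTO_CLASS = {
    "text-generation": "AutoModelForCausalLM",
    "feature-extraction": "AutoModel",
    "fill-mask": "AutoModelForMaskedLM",
    "question-answering": "AutoModelForQuestionAnswering",
    "text2text-generation": "AutoModelForSeq2SeqLM",
    "text-classification": "AutoModelForSequenceClassification",
}

def auto_class_for(task: str) -> str:
    """Resolve a task name to its auto class name, treating '-with-past'
    variants the same as their base task."""
    base_task = task.replace("-with-past", "")
    return PT_TASK_TO_AUTO_CLASS[base_task]

print(auto_class_for("text-generation-with-past"))  # AutoModelForCausalLM
```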
TensorFlow:

| Task | Auto Class |
|---|---|
| `text-generation`, `text-generation-with-past` | `TFAutoModelForCausalLM` |
| `default`, `default-with-past` | `TFAutoModel` |
| `fill-mask` | `TFAutoModelForMaskedLM` |
| `question-answering` | `TFAutoModelForQuestionAnswering` |
| `text2text-generation`, `text2text-generation-with-past` | `TFAutoModelForSeq2SeqLM` |
| `text-classification` | `TFAutoModelForSequenceClassification` |
| `token-classification` | `TFAutoModelForTokenClassification` |
| `multiple-choice` | `TFAutoModelForMultipleChoice` |
| `semantic-segmentation` | `TFAutoModelForSemanticSegmentation` |
Handles the `task name -> model class` and `architecture -> configuration` mappings.
( backend: str, overwrite_existing: bool = False ) → `Callable[[str, Tuple[str, ...]], Callable[[Type], Type]]`

Parameters:
- backend (`str`) — The name of the backend that the register function will handle.
- overwrite_existing (`bool`, defaults to `False`) — Whether or not the register function is allowed to overwrite an already existing config.

Returns: `Callable[[str, Tuple[str, ...]], Callable[[Type], Type]]` — A decorator taking the model type and a tuple of supported tasks.

Creates a register function for the specified backend.
( model_name_or_path: Union[str, Path], subfolder: str = '', framework: Optional[str] = None, cache_dir: str = '/root/.cache/huggingface/hub' ) → `str`

Parameters:
- model_name_or_path (`Union[str, Path]`) — Can be either the model id of a model repo on the Hugging Face Hub, or a path to a local directory containing a model.
- subfolder (`str`, optional, defaults to `""`) — In case the model files are located inside a subfolder of the model directory / repo on the Hugging Face Hub, you can specify the subfolder name here.
- framework (`Optional[str]`, optional) — The framework to use for the export. See below for the priority if none is provided.

Returns: `str` — The framework to use for the export.
Determines the framework to use for the export.

The priority is in the following order:
1. User input via the `framework` argument.
2. If a local checkpoint is provided, the framework it contains (when only one is present).
3. The framework available in the environment, with priority given to PyTorch.
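Assuming the typical priority chain (an explicit `framework` argument wins, then the contents of a local checkpoint, then whichever framework is installed, PyTorch first), the selection logic can be sketched as follows. This is a simplified illustration, not optimum's actual implementation, and the weight file names checked are assumptions:

```python
import importlib.util
from pathlib import Path
from typing import Optional

def determine_framework_sketch(model_dir: Path, framework: Optional[str] = None) -> str:
    """Illustrative priority chain for picking 'pt' or 'tf' (a sketch)."""
    # 1. Explicit user input wins.
    if framework is not None:
        return framework
    # 2. Inspect the local checkpoint: which weight files are present?
    has_pt = (model_dir / "pytorch_model.bin").exists() or (model_dir / "model.safetensors").exists()
    has_tf = (model_dir / "tf_model.h5").exists()
    if has_pt and not has_tf:
        return "pt"
    if has_tf and not has_pt:
        return "tf"
    # 3. Fall back to what is installed, with priority given to PyTorch.
    if importlib.util.find_spec("torch") is not None:
        return "pt"
    if importlib.util.find_spec("tensorflow") is not None:
        return "tf"
    raise EnvironmentError("Neither PyTorch nor TensorFlow is available.")
```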
Retrieves all the possible tasks.
( exporter: str, model: Optional[Union["PreTrainedModel", "TFPreTrainedModel"]] = None, task: str = 'feature-extraction', model_type: Optional[str] = None, model_name: Optional[str] = None, exporter_config_kwargs: Optional[Dict[str, Any]] = None ) → `ExportConfigConstructor`

Parameters:
- exporter (`str`) — The exporter to use.
- model (`Optional[Union[PreTrainedModel, TFPreTrainedModel]]`, defaults to `None`) — The instance of the model.
- task (`str`, defaults to `"feature-extraction"`) — The task to retrieve the config for.
- model_type (`Optional[str]`, defaults to `None`) — The model type to retrieve the config for.
- model_name (`Optional[str]`, defaults to `None`) — The name attribute of the model object, only used for the exception message.
- exporter_config_kwargs (`Optional[Dict[str, Any]]`, defaults to `None`) — Arguments that will be passed to the exporter config class when building the config constructor.

Returns: `ExportConfigConstructor` — The `ExportConfig` constructor for the requested backend.

Gets the `ExportConfigConstructor` for a model (or alternatively for a model type) and task combination.
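The returned constructor is essentially the configuration class with task-specific arguments already bound, similar in spirit to a `functools.partial`. An illustrative sketch (the class and argument names below are hypothetical, not the actual optimum code):

```python
from functools import partial

# Hypothetical config class standing in for an actual ExportConfig subclass.
class DummyOnnxConfig:
    def __init__(self, config, task="feature-extraction"):
        self.config = config
        self.task = task

# A constructor with the task pre-bound, as get_exporter_config_constructor
# conceptually returns: callers only supply the model config.
constructor = partial(DummyOnnxConfig, task="text-classification")
export_config = constructor(config={"hidden_size": 768})
print(export_config.task)  # text-classification
```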
( task: str, framework: str = 'pt', model_type: Optional[str] = None, model_class_name: Optional[str] = None )

Parameters:
- task (`str`) — The task required.
- framework (`str`, defaults to `"pt"`) — The framework to use for the export.
- model_type (`Optional[str]`, defaults to `None`) — The model type to retrieve the model class for. Some architectures need a custom class to be loaded, and cannot be loaded from the auto class.
- model_class_name (`Optional[str]`, defaults to `None`) — A model class name, allowing to override the default class that would otherwise be detected for the task. This parameter is useful, for example, for "automatic-speech-recognition", which may map to `AutoModelForSpeechSeq2Seq` or to `AutoModelForCTC`.

Attempts to retrieve an AutoModel class from a task name.
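The override behaviour described for the model class name can be sketched as: use the explicit class name when given, otherwise fall back to the task's default auto class. An illustrative sketch (the mapping below is a hypothetical subset, not optimum's actual lookup):

```python
from typing import Optional

# Hypothetical defaults; "automatic-speech-recognition" is ambiguous because
# it may map to AutoModelForSpeechSeq2Seq or to AutoModelForCTC.
_DEFAULT_CLASS_FOR_TASK = {
    "text-classification": "AutoModelForSequenceClassification",
    "automatic-speech-recognition": "AutoModelForSpeechSeq2Seq",
}

def model_class_name_for_task(task: str, model_class_name: Optional[str] = None) -> str:
    """Return the class name to load, honoring an explicit override (sketch)."""
    if model_class_name is not None:
        return model_class_name
    return _DEFAULT_CLASS_FOR_TASK[task]

print(model_class_name_for_task("automatic-speech-recognition", "AutoModelForCTC"))
# AutoModelForCTC
```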
( task: str, model_name_or_path: Union[str, Path], subfolder: str = '', revision: Optional[str] = None, framework: Optional[str] = None, cache_dir: Optional[str] = None, torch_dtype: Optional["torch.dtype"] = None, device: Optional[Union["torch.device", str]] = None, **model_kwargs )

Parameters:
- task (`str`) — The task required.
- model_name_or_path (`Union[str, Path]`) — Can be either the model id of a model repo on the Hugging Face Hub, or a path to a local directory containing a model.
- subfolder (`str`, optional, defaults to `""`) — In case the model files are located inside a subfolder of the model directory / repo on the Hugging Face Hub, you can specify the subfolder name here.
- revision (`Optional[str]`, optional) — Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id.
- framework (`Optional[str]`, optional) — The framework to use for the export. See `TasksManager.determine_framework` for the priority if none is provided.
- cache_dir (`Optional[str]`, optional) — Path to a directory in which the downloaded pretrained model weights have been cached if the standard cache should not be used.
- torch_dtype (`Optional[torch.dtype]`, defaults to `None`) — Data type to load the model with. PyTorch-only argument.
- device (`Optional[Union[torch.device, str]]`, defaults to `None`) — Device to initialize the model on. PyTorch-only argument; for PyTorch, defaults to "cpu".
- model_kwargs (`Dict[str, Any]`, optional) — Keyword arguments to pass to the model's `.from_pretrained()` method.

Retrieves a model from its name and the task to be enabled.
Returns the list of architectures supported by the exporter for a given task.
( model_type: str, exporter: str, model_name: Optional[str] = None ) → `TaskNameToExportConfigDict`

Parameters:
- model_type (`str`) — The model type to retrieve the supported tasks for.
- exporter (`str`) — The name of the exporter.
- model_name (`Optional[str]`, defaults to `None`) — The name attribute of the model object, only used for the exception message.

Returns: `TaskNameToExportConfigDict` — The dictionary mapping each task to a corresponding `ExportConfig` constructor.

Retrieves the `task -> exporter backend config constructors` map from the model type.
( model: Union[str, "PreTrainedModel", "TFPreTrainedModel", Type], subfolder: str = '', revision: Optional[str] = None ) → `str`

Parameters:
- model (`Union[str, PreTrainedModel, TFPreTrainedModel, Type]`) — The model to infer the task from. This can either be the name of a repo on the Hugging Face Hub, an instance of a model, or a model class.
- subfolder (`str`, optional, defaults to `""`) — In case the model files are located inside a subfolder of the model directory / repo on the Hugging Face Hub, you can specify the subfolder name here.
- revision (`Optional[str]`, optional) — Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id.

Returns: `str` — The task name automatically detected from the model repo.

Infers the task from the model repo.
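Task inference typically relies on the architecture name recorded in the model's config (for example `DistilBertForSequenceClassification`), mapping its suffix back to a task name. A simplified, illustrative sketch of this idea (the suffix table below is a hypothetical subset, not the actual heuristics used by optimum):

```python
# Hypothetical suffix -> task mapping for illustration.
_ARCHITECTURE_SUFFIX_TO_TASK = {
    "ForSequenceClassification": "text-classification",
    "ForQuestionAnswering": "question-answering",
    "ForMaskedLM": "fill-mask",
    "ForCausalLM": "text-generation",
}

def infer_task_from_architecture(architecture: str) -> str:
    """Map an architecture class name to a task by its suffix (sketch)."""
    for suffix, task in _ARCHITECTURE_SUFFIX_TO_TASK.items():
        if architecture.endswith(suffix):
            return task
    return "feature-extraction"  # sensible default when nothing matches

print(infer_task_from_architecture("DistilBertForSequenceClassification"))
# text-classification
```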