( model: TFPreTrainedModel config: TFLiteConfig output: Path task: Optional[str] = None preprocessor: Optional[Union[PreTrainedTokenizerBase, BaseImageProcessor]] = None quantization_config: Optional[TFLiteQuantizationConfig] = None ) → Tuple[List[str], List[str]]
Parameters

model (TFPreTrainedModel) — The model to export.
config (TFLiteConfig) — The TFLite configuration associated with the exported model.
output (Path) — Directory to store the exported TFLite model.
task (Optional[str], defaults to None) — The task of the model. If left unspecified, the task will be inferred automatically. Only needed for static quantization.
preprocessor (Optional[Preprocessor], defaults to None) — The preprocessor associated with the model, used to preprocess the dataset before feeding data to the model during calibration.
quantization_config (Optional[TFLiteQuantizationConfig], defaults to None) — The dataclass containing all the information needed to perform quantization.
Returns
Tuple[List[str], List[str]]
A tuple with an ordered list of the model’s inputs, and the named outputs from the TFLite configuration.
Exports a TensorFlow model to a TensorFlow Lite model.
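A minimal usage sketch follows, assuming this function is importable as `optimum.exporters.tflite.export` and that a model-specific configuration class such as `BertTFLiteConfig` is available from `optimum.exporters.tflite.model_configs`; the exact constructor arguments of the configuration class are assumptions here, not confirmed by this page.

```python
from pathlib import Path

from transformers import TFAutoModel
from optimum.exporters.tflite import export
from optimum.exporters.tflite.model_configs import BertTFLiteConfig

# Load the TensorFlow model to export.
model = TFAutoModel.from_pretrained("bert-base-uncased")

# TFLite requires static input shapes, so the configuration fixes them up
# front (the batch_size/sequence_length arguments are assumed here).
tflite_config = BertTFLiteConfig(
    model.config,
    task="feature-extraction",
    batch_size=1,
    sequence_length=128,
)

# Export the model. The return value is the ordered list of model inputs
# and the named outputs from the TFLite configuration.
input_names, output_names = export(
    model=model,
    config=tflite_config,
    output=Path("model.tflite"),  # destination of the exported model
)
```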
( config: TFLiteConfig reference_model: TFPreTrainedModel tflite_model_path: Path tflite_named_outputs: List[str] atol: Optional[float] = None )
Parameters

config (TFLiteConfig) — The TFLite configuration used for the export.
reference_model (TFPreTrainedModel) — The model used for the export.
tflite_model_path (Path) — The path to the exported model.
tflite_named_outputs (List[str]) — The names of the outputs to check.
atol (Optional[float], defaults to None) — The absolute tolerance allowed for the difference between the outputs of the reference and the exported model.
Raises

ValueError — If the output shapes or values do not match between the reference and the exported model.

Validates the export by checking that the outputs from both the reference and the exported model match.
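Continuing the sketch above, a hedged example of the validation step, assuming the function is importable as `optimum.exporters.tflite.validate_model_outputs`; the tolerance value is arbitrary, not a recommended setting.

```python
from pathlib import Path

from optimum.exporters.tflite import validate_model_outputs

# Reuses model, tflite_config and output_names from the export sketch above.
# Raises ValueError if output shapes or values diverge beyond the tolerance.
validate_model_outputs(
    config=tflite_config,
    reference_model=model,
    tflite_model_path=Path("model.tflite"),
    tflite_named_outputs=output_names,
    atol=1e-5,  # arbitrary tolerance for this sketch
)
```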