RyzenAIOnnxQuantizer

( onnx_model_path: Path, config: Optional = None )

Handles the RyzenAI quantization process for models shared on huggingface.co/models.
from_pretrained

( model_or_path: Union, file_name: Optional = None )

Instantiates a RyzenAIOnnxQuantizer from an ONNX model file.

Parameters

model_or_path (Union[str, Path]) —
Can be either a path to an ONNX model file, or a path to a directory containing one.
file_name (Optional[str], defaults to None) —
Overwrites the default model file name from "model.onnx" to file_name. This allows you to load different model files from the same repository or directory.
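The effect of the file_name override can be pictured with a small, purely illustrative helper (this is not the library's actual lookup logic, and resolve_model_file is a hypothetical name):

```python
from pathlib import Path
from typing import Optional

# Hypothetical helper illustrating the file_name parameter: when file_name
# is None, the default "model.onnx" is looked up in the given directory.
def resolve_model_file(model_dir: str, file_name: Optional[str] = None) -> Path:
    return Path(model_dir) / (file_name or "model.onnx")
```

With this, resolve_model_file("my_repo") points at my_repo/model.onnx, while resolve_model_file("my_repo", "decoder_model.onnx") selects a different file from the same directory.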
get_calibration_dataset

( dataset_name: str, num_samples: int = 100, dataset_config_name: Optional = None, dataset_split: Optional = None, preprocess_function: Optional = None, preprocess_batch: bool = True, seed: Optional = 2016, token: bool = None, streaming: bool = False )

Creates the calibration datasets.Dataset to use for the post-training static quantization calibration step.

Parameters

dataset_name (str) —
The dataset repository name on the Hugging Face Hub, or a path to a local directory containing data files, to use for the calibration step.
num_samples (int, defaults to 100) —
The maximum number of samples composing the calibration dataset.
dataset_config_name (Optional[str], defaults to None) —
The name of the dataset configuration.
dataset_split (Optional[str], defaults to None) —
Which split of the dataset to use to perform the calibration step.
preprocess_function (Optional[Callable], defaults to None) —
Processing function to apply to each example after loading the dataset.
preprocess_batch (bool, defaults to True) —
Whether the preprocess_function should be batched.
seed (int, defaults to 2016) —
The random seed to use when shuffling the calibration dataset.
token (bool, defaults to False) —
Whether to use the token generated when running transformers-cli login (necessary for some datasets like ImageNet).
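The preprocess_batch flag changes the shape of what preprocess_function receives. A minimal sketch, assuming an image dataset with an "image" column and a model expecting a "pixel_values" input (both column names are illustrative, not fixed by the API):

```python
# Illustrative calibration preprocessing; the "image" and "pixel_values"
# column names are assumptions for the sake of the example.
def normalize(values, scale=255.0):
    """Scale raw pixel values into [0, 1]."""
    return [v / scale for v in values]

def preprocess_single(example):
    # preprocess_batch=False: the function receives one example (dict of values).
    return {"pixel_values": normalize(example["image"])}

def preprocess_batched(batch):
    # preprocess_batch=True (the default): the function receives a dict of
    # columns, each holding a list of values for the whole batch.
    return {"pixel_values": [normalize(img) for img in batch["image"]]}
```

The batched form is usually faster because the function is called once per batch rather than once per example.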
quantize

( quantization_config: QuantizationConfig, dataset: Dataset, save_dir: Union, batch_size: int = 1, file_suffix: Optional = 'quantized' )

Quantizes a model given the optimization specifications defined in quantization_config.

Parameters

quantization_config (QuantizationConfig) —
The configuration containing the parameters related to quantization.
save_dir (Union[str, Path]) —
The directory where the quantized model should be saved.
file_suffix (Optional[str], defaults to "quantized") —
The file_suffix used to save the quantized model.
(Optional[Dict[str, Tuple[float, float]]], defaults to None) —
The dictionary mapping the node names to their quantization ranges, used and required only when applying static quantization.
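Putting the three methods together, a typical flow is from_pretrained → get_calibration_dataset → quantize. The sketch below is a hedged illustration, not a verbatim recipe: it assumes the optimum-amd package is installed, and the model directory, dataset name, and the AutoQuantizationConfig preset used to build the QuantizationConfig are placeholders to adapt. Imports are kept inside the function so the sketch can be defined even without the library present.

```python
# Hedged end-to-end sketch (assumes optimum-amd and a supported model;
# the names below are illustrative placeholders, not guaranteed values).
def quantize_model_sketch(onnx_model_dir: str, save_dir: str):
    from optimum.amd.ryzenai import AutoQuantizationConfig, RyzenAIOnnxQuantizer

    # 1. Instantiate the quantizer from an exported ONNX model.
    quantizer = RyzenAIOnnxQuantizer.from_pretrained(onnx_model_dir)

    # 2. Build a QuantizationConfig (ipu_cnn_config is one assumed preset).
    quantization_config = AutoQuantizationConfig.ipu_cnn_config()

    # 3. Create the calibration dataset (dataset name is a placeholder).
    calibration_dataset = quantizer.get_calibration_dataset(
        "imagenet-1k",
        num_samples=100,
        dataset_split="train",
    )

    # 4. Run post-training static quantization and save the result.
    quantizer.quantize(
        quantization_config=quantization_config,
        dataset=calibration_dataset,
        save_dir=save_dir,
    )
```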