Optimum documentation

Models

Generic model classes

The following Furiosa classes are available for instantiating a base model class without a specific head.

FuriosaAIModel

class optimum.furiosa.FuriosaAIModel

( model, config: PretrainedConfig = None, compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict]] = None, label_names: typing.Optional[typing.List[str]] = None, **kwargs )

evaluation_loop

( dataset: Dataset )

Parameters

  • dataset (datasets.Dataset) — Dataset to use for the evaluation step.

Runs evaluation and returns metrics and predictions.
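
A minimal sketch of wiring a compute_metrics callable into a model and running the evaluation loop. The checkpoint, the export arguments and the assumption that from_pretrained forwards compute_metrics to the constructor are illustrative, and the dataset is assumed to already be preprocessed into the model's input columns:

>>> import evaluate
>>> from optimum.furiosa import FuriosaAIModelForImageClassification

>>> accuracy = evaluate.load("accuracy")

>>> def compute_metrics(eval_pred):
...     # eval_pred is a transformers.trainer_utils.EvalPrediction: (predictions, label_ids)
...     predictions = eval_pred.predictions.argmax(axis=-1)
...     return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)

>>> model = FuriosaAIModelForImageClassification.from_pretrained(
...     "microsoft/resnet-50",
...     export=True,
...     input_shape_dict={"pixel_values": [1, 3, 224, 224]},
...     output_shape_dict={"logits": [1, 1000]},
...     compute_metrics=compute_metrics,  # assumed to be forwarded to the FuriosaAIModel constructor
... )
>>> eval_dataset = ...  # a datasets.Dataset already preprocessed into the model's input columns (e.g. pixel_values, labels)
>>> outputs = model.evaluation_loop(eval_dataset)  # returns metrics and predictions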

to

( device: str )

Use the specified device for inference, for example "cpu" or "gpu"; the device name is case-insensitive. To speed up the first inference, call .compile() after .to().
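
For example, a model could be moved to a device and compiled ahead of the first request as follows; the checkpoint and export arguments are only illustrative, reusing the image-classification example further below:

>>> from optimum.furiosa import FuriosaAIModelForImageClassification

>>> model = FuriosaAIModelForImageClassification.from_pretrained(
...     "microsoft/resnet-50",
...     export=True,
...     input_shape_dict={"pixel_values": [1, 3, 224, 224]},
...     output_shape_dict={"logits": [1, 1000]},
... )
>>> model.to("gpu")  # the device string is case-insensitive, so "GPU" works as well
>>> model.compile()  # compiling right after .to() avoids paying the compilation cost on the first inference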

Computer vision

The following classes are available for computer vision tasks.

FuriosaAIModelForImageClassification

class optimum.furiosa.FuriosaAIModelForImageClassification

( model = None, config = None, **kwargs )

Parameters

  • model (furiosa.runtime.model) — The main class used to run inference.
  • config (transformers.PretrainedConfig) — The model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the ~furiosa.modeling.FuriosaAIBaseModel.from_pretrained method to load the model weights.
  • device (str, defaults to "CPU") — The device type for which the model will be optimized. The resulting compiled model will contain nodes specific to this device.
  • furiosa_config (Optional[Dict], defaults to None) — The dictionary containing the information related to the model compilation.
  • compile (bool, defaults to True) — Whether to compile the model during the loading step. Set to False to disable compilation.

FuriosaAI Model with an ImageClassifierOutput for image classification tasks.

This model inherits from optimum.furiosa.FuriosaAIBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
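
As a sketch of how the device and compile parameters might be combined at loading time; whether from_pretrained forwards them to the constructor is an assumption here, not documented behaviour:

>>> from optimum.furiosa import FuriosaAIModelForImageClassification

>>> # Skip compilation at load time, then choose the device and compile explicitly.
>>> model = FuriosaAIModelForImageClassification.from_pretrained(
...     "microsoft/resnet-50",
...     export=True,
...     input_shape_dict={"pixel_values": [1, 3, 224, 224]},
...     output_shape_dict={"logits": [1, 1000]},
...     compile=False,  # see the `compile` parameter above; assumed to be accepted by from_pretrained
... )
>>> model.to("cpu")
>>> model.compile()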

forward

( pixel_values: typing.Union[torch.Tensor, numpy.ndarray], **kwargs )

Parameters

  • pixel_values (torch.Tensor) — Pixel values corresponding to the images in the current batch. Pixel values can be obtained from encoded images using AutoFeatureExtractor.

The FuriosaAIModelForImageClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of image classification using transformers.pipelines:

>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.furiosa import FuriosaAIModelForImageClassification

>>> preprocessor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
>>> model = FuriosaAIModelForImageClassification.from_pretrained(
...     "microsoft/resnet-50",
...     export=True,
...     input_shape_dict={"pixel_values": [1, 3, 224, 224]},
...     output_shape_dict={"logits": [1, 1000]},
... )
>>> pipe = pipeline("image-classification", model=model, feature_extractor=preprocessor)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> outputs = pipe(url)
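
The model can also be called directly with preprocessed pixel values instead of going through a pipeline. A minimal sketch, reusing the preprocessor, model and url from the example above; the numpy return type matches the forward signature, and the .logits attribute follows the ImageClassifierOutput mentioned above:

>>> import requests
>>> from PIL import Image

>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = preprocessor(images=image, return_tensors="np")
>>> outputs = model(**inputs)  # calling the instance runs the pre- and post-processing around forward
>>> predicted_label_id = outputs.logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label_id])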