Models
Generic model classes
The following FuriosaAI classes are available for instantiating a base model class without a specific head.
FuriosaAIModel
class optimum.furiosa.FuriosaAIModel
( model, config: PretrainedConfig = None, compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict]] = None, label_names: typing.Optional[typing.List[str]] = None, **kwargs )
evaluation_loop
( dataset: Dataset )
Runs evaluation and returns metrics and predictions.
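As a rough illustration, the sketch below wires an accuracy metric into evaluation_loop. The metric choice, the forwarding of compute_metrics through from_pretrained, and the eval_dataset placeholder are assumptions made for illustration, not part of the documented API:
>>> import evaluate
>>> import numpy as np
>>> from transformers import EvalPrediction
>>> from optimum.furiosa import FuriosaAIModelForImageClassification
>>> accuracy = evaluate.load("accuracy")
>>> def compute_metrics(p: EvalPrediction):
...     # Reduce logits to predicted class ids before scoring
...     return accuracy.compute(predictions=np.argmax(p.predictions, axis=1), references=p.label_ids)
>>> model = FuriosaAIModelForImageClassification.from_pretrained(
...     "microsoft/resnet-50",
...     export=True,
...     input_shape_dict={"pixel_values": [1, 3, 224, 224]},
...     output_shape_dict={"logits": [1, 1000]},
...     compute_metrics=compute_metrics,  # assumption: forwarded to the FuriosaAIModel constructor
... )
>>> # eval_dataset is assumed to be a datasets.Dataset already preprocessed into model inputs and labels
>>> # eval_output = model.evaluation_loop(eval_dataset)
>>> # eval_output.metrics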
to
( device: str )
Use the specified device for inference, for example "cpu" or "gpu". device can be in upper or lower case. To speed up the first inference, call .compile() after .to().
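For instance, reusing the image classification model from the example further down this page (the "gpu" string is only an illustration of a device name):
>>> from optimum.furiosa import FuriosaAIModelForImageClassification
>>> model = FuriosaAIModelForImageClassification.from_pretrained("microsoft/resnet-50", export=True, input_shape_dict={"pixel_values": [1, 3, 224, 224]}, output_shape_dict={"logits": [1, 1000]})
>>> model.to("gpu")  # device strings are case-insensitive
>>> model.compile()  # optional: compile now so the first inference request is not slowed down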
Computer vision
The following classes are available for computer vision tasks.
FuriosaAIModelForImageClassification
class optimum.furiosa.FuriosaAIModelForImageClassification
( model = None, config = None, **kwargs )
Parameters
- model (furiosa.runtime.model) — the main class used to run inference.
- config (transformers.PretrainedConfig) — PretrainedConfig is the model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the ~furiosa.modeling.FuriosaAIBaseModel.from_pretrained method to load the model weights.
- device (str, defaults to "CPU") — The device type for which the model will be optimized. The resulting compiled model will contain nodes specific to this device.
- furiosa_config (Optional[Dict], defaults to None) — The dictionary containing the information related to the model compilation.
- compile (bool, defaults to True) — Disable the model compilation during the loading step when set to False (see the sketch right after this parameter list).
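A minimal sketch of the deferred-compilation flow that compile=False enables, with the device name chosen only for illustration:
>>> from optimum.furiosa import FuriosaAIModelForImageClassification
>>> model = FuriosaAIModelForImageClassification.from_pretrained(
...     "microsoft/resnet-50",
...     export=True,
...     input_shape_dict={"pixel_values": [1, 3, 224, 224]},
...     output_shape_dict={"logits": [1, 1000]},
...     compile=False,  # skip compilation at load time
... )
>>> model.to("gpu")  # pick the target device first
>>> model.compile()  # then compile for that device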
FuriosaAI model with an ImageClassifierOutput for image classification tasks.
This model inherits from optimum.furiosa.FuriosaAIBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
forward
( pixel_values: typing.Union[torch.Tensor, numpy.ndarray], **kwargs )
Parameters
- pixel_values (torch.Tensor) — Pixel values corresponding to the images in the current batch. Pixel values can be obtained from encoded images using AutoFeatureExtractor.
The FuriosaAIModelForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of image classification using transformers.pipelines:
>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.furiosa import FuriosaAIModelForImageClassification
>>> preprocessor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
>>> model = FuriosaAIModelForImageClassification.from_pretrained("microsoft/resnet-50", export=True, input_shape_dict={"pixel_values": [1, 3, 224, 224]}, output_shape_dict={"logits": [1, 1000]})
>>> pipe = pipeline("image-classification", model=model, feature_extractor=preprocessor)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> outputs = pipe(url)
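The model can also be called directly on preprocessed inputs. The following is a sketch that assumes the returned ImageClassifierOutput exposes logits and that the loaded model keeps its config (for id2label):
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor
>>> from optimum.furiosa import FuriosaAIModelForImageClassification
>>> preprocessor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
>>> model = FuriosaAIModelForImageClassification.from_pretrained("microsoft/resnet-50", export=True, input_shape_dict={"pixel_values": [1, 3, 224, 224]}, output_shape_dict={"logits": [1, 1000]})
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = preprocessor(images=image, return_tensors="np")
>>> outputs = model(**inputs)  # __call__ runs the forward pass defined above
>>> predicted_class_id = int(outputs.logits.argmax(-1).item())
>>> print(model.config.id2label[predicted_class_id])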