Add custom Dependencies

Inference Endpoints’ base image includes all required libraries to run inference on Transformers models, but it also supports custom dependencies. This is useful if you want to:

  • customize your inference pipeline and need additional Python dependencies (see the handler sketch after this list)
  • run a model that requires special dependencies, such as the newest or a pinned version of a library (for example, TAPAS requires torch-scatter)
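
For the first case, a custom handler.py shipped next to the requirements.txt is the usual route. The sketch below is a minimal example, not a definitive implementation: it assumes the repository contains a sequence classification model already exported to ONNX, and it uses optimum[onnxruntime], the dependency pinned in the requirements.txt example further below. The EndpointHandler class name and its __call__ signature follow the custom handler convention of Inference Endpoints.

# handler.py: a minimal sketch of a custom handler that relies on an extra
# Python dependency (optimum[onnxruntime]). Assumes the model repository
# contains an ONNX export of a sequence classification model.
from typing import Any

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline


class EndpointHandler:
    def __init__(self, path: str = "") -> None:
        # `path` points to the model repository files inside the container
        model = ORTModelForSequenceClassification.from_pretrained(path)
        tokenizer = AutoTokenizer.from_pretrained(path)
        self.pipeline = pipeline(
            "text-classification", model=model, tokenizer=tokenizer
        )

    def __call__(self, data: dict[str, Any]) -> list[dict[str, Any]]:
        # Inference Endpoints passes the request payload as a dict
        # with an "inputs" key
        return self.pipeline(data["inputs"])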

To add custom dependencies, add a requirements.txt file to your model repository on the Hugging Face Hub, listing the Python dependencies you want to install. When your Endpoint and its image artifacts are created, Inference Endpoints checks whether the model repository contains a requirements.txt file and installs the dependencies listed in it. For example:

optimum[onnxruntime]==1.2.3
mkl-include
mkl
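
If you prefer to push the requirements.txt programmatically instead of through the web UI, a short huggingface_hub sketch does the job; the repo_id below is a placeholder:

# Upload a local requirements.txt to the root of a model repository so
# Inference Endpoints picks it up when the image is built.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="requirements.txt",  # local file to upload
    path_in_repo="requirements.txt",     # must live at the repository root
    repo_id="your-username/your-model",  # placeholder: replace with your repo
)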

For more examples, check out the requirements.txt files of existing model repositories on the Hub.

For more information, take a look at how you can create and install dependencies when you use your own custom container for inference.
