How to run an AI model: local vs remote

In this game, we want to run a sentence similarity model; I’m going to use all-MiniLM-L6-v2.

It’s a BERT-based Transformer model. It’s already trained, so we can use it directly.
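
To give an intuition of what a sentence similarity model does: it turns each sentence into an embedding vector, and two sentences are "similar" when their vectors point in the same direction, usually measured with cosine similarity. Here's a minimal sketch in plain Python, using toy 3-dimensional vectors (the names and values are made up for illustration; the real model outputs much higher-dimensional embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three player commands (hypothetical values)
emb_open_door = [0.9, 0.1, 0.0]
emb_unlock_door = [0.8, 0.2, 0.1]
emb_eat_apple = [0.0, 0.1, 0.9]

print(cosine_similarity(emb_open_door, emb_unlock_door))  # high: similar meaning
print(cosine_similarity(emb_open_door, emb_eat_apple))    # low: unrelated meaning
```

In a game, this lets you match a free-text player command against a list of known actions by picking the action with the highest similarity score.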

But here, I have two solutions to run it. I can:

  1. Run the model remotely, on a server, and call it from the game through an API.
  2. Run the model locally, on the player’s machine, embedded in the game.

Both are valid solutions, but they have advantages and disadvantages.

Running the model remotely

I run the model on a remote server, and send API calls from the game. I can use an API service to help deploy the model.

*Running the AI model remotely*

For instance, Hugging Face provides an API service called Inference API (free for prototyping and experimentation) that allows you to use AI models via simple API calls. Hugging Face also provides a Unity plugin to access and use Hugging Face AI models from within Unity projects.
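
As a sketch of what such an API call looks like, here is a minimal Python example of querying the Inference API for the sentence-similarity task. The payload shape (a source sentence plus a list of sentences to compare against) follows the Inference API's sentence-similarity format; the sentences themselves and the `query` helper are illustrative, and you should check the current API documentation before relying on this:

```python
import json
import urllib.request

# Inference API endpoint for the model used in this game
API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/all-MiniLM-L6-v2"

def build_payload(source_sentence, sentences):
    # The sentence-similarity task compares one source sentence
    # against a list of candidate sentences.
    return {"inputs": {"source_sentence": source_sentence, "sentences": sentences}}

def query(payload, api_token):
    # Illustrative helper: POSTs the payload and returns the similarity scores.
    # Requires a valid Hugging Face API token and a network connection.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_payload("Open the door", ["Unlock the door", "Eat an apple"])
print(json.dumps(payload))
```

In Unity, the plugin wraps this kind of HTTP call in C#, but the request/response shape is the same.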

Advantages

  * The game can use very large models that could never run on a player’s machine.
  * The game build stays light: no model weights to ship or load on the player’s device.

Disadvantages

  * Every call requires an internet connection and adds network latency.
  * Running the model on a server has a cost that grows with the number of players.

Usually, you use an API when the model is too big to run on a player’s machine, for instance, a large model like Llama 2.

Running the model locally

I run the model locally, on the player’s machine. To do that, I use two libraries:

  1. Unity Sentis: the neural network inference library that allows us to run our AI model directly inside our game.

  2. The Hugging Face Sharp Transformers library: a Unity plugin providing utilities to run Transformer 🤗 models in Unity games.
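
Under the hood, a local sentence-similarity pipeline like this tokenizes the text, runs the Transformer to get one embedding per token, then pools those into a single sentence vector. all-MiniLM-L6-v2 uses mean pooling over the tokens, weighted by the attention mask so padding is ignored. In Unity these steps happen in C# via Sentis and Sharp Transformers; here is a plain-Python sketch of just the mean-pooling step (the toy 2-dimensional token embeddings are made up for illustration):

```python
def mean_pool(token_embeddings, attention_mask):
    """Average the token embeddings, skipping padding tokens (mask == 0)."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask == 1:
            count += 1
            for i in range(dim):
                sums[i] += vec[i]
    return [s / count for s in sums]

# Three token embeddings; the last one is padding and should be ignored
tokens = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 3.0]
```

The resulting sentence vector is what gets compared with cosine similarity against the embeddings of your game’s known actions.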

*Running the AI model locally*

Advantages

  * No server costs and no per-call API fees: inference runs on the player’s machine.
  * No network latency, and the game works offline.

Disadvantages

  * The model is limited by the player’s hardware, so only small models are practical.
  * The model weights must be shipped with (or downloaded by) the game.

Since the sentence similarity model we’re going to use is small, we decided to run it locally.
