Deploying a llama.cpp Container

You can deploy any llama.cpp-compatible GGUF model on Hugging Face Inference Endpoints. When you create an endpoint with a GGUF model, a llama.cpp container is automatically selected, using the latest image built from the master branch of the llama.cpp repository. Upon successful deployment, a server with an OpenAI-compatible endpoint becomes available.
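Once the endpoint is running, you can query it with any OpenAI-compatible client. The sketch below uses the openai Python package; the base URL, token variable, and model name are placeholders to replace with your own values.

```python
import os

from openai import OpenAI

# Placeholders: replace the base URL with your endpoint URL and make sure
# HF_TOKEN holds a Hugging Face token that can access the endpoint.
client = OpenAI(
    base_url="https://<your-endpoint>.endpoints.huggingface.cloud/v1",
    api_key=os.environ["HF_TOKEN"],
)

response = client.chat.completions.create(
    model="gguf",  # llama.cpp serves a single model, so the name is informational
    messages=[{"role": "user", "content": "Briefly explain what a GGUF file is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```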

The llama.cpp server supports several endpoints, such as /tokenize, /health, /embedding, and more. For a comprehensive list of available endpoints, please refer to the API documentation.
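As an illustration, the sketch below calls two of these endpoints directly with the requests library. The URL and token are placeholders, and the request body for /tokenize follows the llama.cpp server documentation.

```python
import os

import requests

BASE_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
HEADERS = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

# /health reports whether the server is up and the model has finished loading.
health = requests.get(f"{BASE_URL}/health", headers=HEADERS)
print(health.status_code, health.text)

# /tokenize returns the token ids the loaded model assigns to a string.
tokens = requests.post(
    f"{BASE_URL}/tokenize",
    headers=HEADERS,
    json={"content": "Hello from llama.cpp!"},
)
print(tokens.json())
```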

Deployment Steps

To deploy an endpoint with a llama.cpp container, follow these steps:

  1. Create a new endpoint and select a repository containing a GGUF model. The llama.cpp container will be selected automatically.
  2. Choose the desired GGUF file, noting that memory requirements vary depending on the selected file. For example, an F16 model requires more memory than a Q4_K_M model (see the rough estimate after this list).
  3. Select your desired hardware configuration.
  4. Optionally, customize the container’s configuration settings, such as Max Tokens and Number of Concurrent Requests. For more information on these, refer to the Configurations section below.
  5. Click the Create Endpoint button to complete the deployment.
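As a rough illustration of why the quantization level matters in step 2, the sketch below estimates the memory needed just for the model weights from the parameter count and an approximate bits-per-weight figure. Actual usage is higher once the context size, KV cache, and runtime overhead are included, and the bits-per-weight values shown are typical approximations rather than exact numbers.

```python
def estimate_weight_memory_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory required to hold the model weights alone, in GiB."""
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

# Approximate bits per weight for a few common GGUF quantization types.
for name, bpw in [("F16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.8)]:
    print(f"7B model, {name}: ~{estimate_weight_memory_gib(7, bpw):.1f} GiB of weights")
```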

Alternatively, you can follow the video tutorial below for a step-by-step guide on deploying an endpoint with a llama.cpp container.

Configurations

The llama.cpp container offers several configuration options that can be adjusted. After deployment, you can modify these settings by accessing the Settings tab on the endpoint details page.

Basic Configurations

Advanced Configurations

In addition to the basic configurations, you can also modify specific settings by setting environment variables. A list of available environment variables can be found in the API documentation.

Please note that the following environment variables are reserved by the system and cannot be modified:

Troubleshooting

If the deployment fails, check the log output for error messages.

You can access the logs by clicking on the Logs tab on the endpoint details page. To learn more, refer to the Logs documentation.
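If you also want to check the endpoint's state from code while troubleshooting, the sketch below uses the huggingface_hub client to read its status; the endpoint name is a placeholder, and the logs themselves are still read from the Logs tab.

```python
from huggingface_hub import get_inference_endpoint

# "my-llama-cpp-endpoint" is a placeholder; use the name shown on the endpoint page.
endpoint = get_inference_endpoint("my-llama-cpp-endpoint")

# status is a string such as "initializing", "running", or "failed".
print(endpoint.status, endpoint.url)
```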
