You can deploy any llama.cpp-compatible GGUF model on Hugging Face Inference Endpoints. When you create an endpoint with a GGUF model, a llama.cpp container is automatically selected, using the latest image built from the master branch of the llama.cpp repository. Once the deployment succeeds, a server with an OpenAI-compatible endpoint becomes available.
Llama.cpp supports multiple endpoints, such as /tokenize, /health, /embedding, and many more. For a comprehensive list of available endpoints, please refer to the API documentation.
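Since the server is OpenAI-compatible, you can talk to it with any OpenAI-style client or a plain HTTP request. The sketch below builds a chat completion request using only the standard library; the endpoint URL and token are placeholders you would replace with your own.

```python
import json

# Placeholder values: replace with your endpoint URL and Hugging Face token.
ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"
HF_TOKEN = "hf_..."

def build_chat_request(prompt: str, max_tokens: int = 128):
    """Assemble an OpenAI-compatible chat completion request."""
    url = f"{ENDPOINT_URL}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": False,
    }).encode()
    return url, headers, body

# To actually send the request (requires a running endpoint):
# import urllib.request
# url, headers, body = build_chat_request("Hello!")
# req = urllib.request.Request(url, data=body, headers=headers)
# print(urllib.request.urlopen(req).read().decode())
```

The same pattern works for the other routes, such as /tokenize or /health, by swapping the path and payload.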
To deploy an endpoint with a llama.cpp container, follow these steps:
Optionally, you can customize the container’s configuration settings, such as Max Tokens and Number of Concurrent Requests. For more information on these, please refer to the Configurations section below.
Click the Create Endpoint button to complete the deployment.
Alternatively, you can follow the video tutorial below for a step-by-step guide on deploying an endpoint with a llama.cpp container:
The llama.cpp container offers several configuration options that can be adjusted. After deployment, you can modify these settings by accessing the Settings tab on the endpoint details page.
In addition to the basic configurations, you can also modify specific settings by setting environment variables. A list of available environment variables can be found in the API documentation.
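As an illustration, a deployment might set environment variables like the following. These specific variable names are assumptions based on llama.cpp’s server argument naming and should be verified against the API documentation before use:

```shell
# Hypothetical example: verify these names against the llama.cpp server docs.
LLAMA_ARG_FLASH_ATTN=1        # enable flash attention, if the hardware supports it
LLAMA_ARG_CACHE_TYPE_K=q8_0   # quantize the KV cache keys to save memory
LLAMA_ARG_CACHE_TYPE_V=q8_0   # quantize the KV cache values to save memory
```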
Please note that the following environment variables are reserved by the system and cannot be modified:
LLAMA_ARG_MODEL
LLAMA_ARG_HTTP_THREADS
LLAMA_ARG_N_GPU_LAYERS
LLAMA_ARG_EMBEDDINGS
LLAMA_ARG_HOST
LLAMA_ARG_PORT
LLAMA_ARG_NO_MMAP
LLAMA_ARG_CTX_SIZE
LLAMA_ARG_N_PARALLEL
LLAMA_ARG_ENDPOINT_METRICS
If the deployment fails, check the log output for error messages. You can access the logs by clicking the Logs tab on the endpoint details page. To learn more, refer to the Logs documentation.
Malloc failed: out of memory
If you see this error message in the log:
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 67200.00 MiB on device 0: cuda
Malloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
...
This means the selected hardware configuration does not have enough memory to accommodate the selected GGUF model. You can try selecting a smaller model or a larger hardware configuration.
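When sizing hardware, note that total memory use is roughly the model file size plus the self-attention KV cache plus some runtime overhead. A rough sketch of the KV cache estimate, using illustrative numbers for a 7B-class model without grouped-query attention (32 layers, 4096 KV embedding dimension), with an f16 cache (2 bytes per element):

```python
def kv_cache_mib(n_layers: int, n_ctx: int, n_embd_kv: int, bytes_per_elt: int = 2) -> float:
    """Rough self-attention KV cache size in MiB: keys and values (hence the
    factor of 2), one entry per layer, context position, and KV embedding dim."""
    return 2 * n_layers * n_ctx * n_embd_kv * bytes_per_elt / (1024 ** 2)

# Illustrative 7B-class model, 4096-token context, f16 cache:
print(kv_cache_mib(n_layers=32, n_ctx=4096, n_embd_kv=4096))  # 2048.0 (MiB)
```

Doubling the context size doubles the KV cache, which is why a model that fits at a small context can still run out of memory at a larger one.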
Workload evicted, storage limit exceeded
This error message indicates that the hardware has insufficient storage to hold the selected GGUF model. Try selecting a smaller model or a larger hardware configuration.
Other problems
For other problems, please refer to the llama.cpp issues page. If you open a new issue, please include the full log output in your bug report.