Deploy vLLM to easily run inference on self-hosted AI models
vLLM is a Python library that functions as a hosted LLM inference platform. It can download models from Hugging Face and run them seamlessly on local GPUs. vLLM can run batched offline inference on datasets, provides an OpenAI-compatible API to respond to client requests, and can be tuned according to the hardware available and the performance characteristics you require.
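To sketch what a client request to the OpenAI-compatible endpoint looks like, the following builds a chat-completion request using only Python's standard library. The base URL and model name are placeholders; substitute your own deployment's public URL and model:

```python
# Sketch: building a request for a vLLM server's OpenAI-compatible
# /v1/chat/completions endpoint. The URL and model below are
# placeholders, not values from any real deployment.
import json
import urllib.request


def build_chat_request(base_url, model, prompt, api_key=None):
    """Build an OpenAI-style chat-completion request for a vLLM server."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        # When VLLM_API_KEY is set, the key must be sent as a bearer token.
        headers["Authorization"] = f"Bearer {api_key}"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers=headers,
        method="POST",
    )


# Example (not executed here): send the request and print the completion.
# req = build_chat_request("https://<your-app>.koyeb.app", "google/gemma-2b", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```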
This Starter deploys vLLM to Koyeb in one click. By default, it deploys on an Nvidia RTX 4000 SFF Ada GPU Instance using Google's Gemma 2B model. You can change the model during deployment by modifying the Command args in the Deployment section.
You must run vLLM on a GPU Instance type.
During initialization, vLLM will download the specified model from Hugging Face. In the Health checks section of the configuration page, set the Grace period to 300 seconds to allow time to download large models.
To change the deployed model, in the Deployment section, modify the selected model in the Command args field.
When deploying vLLM on Koyeb, the following environment variables are used for configuration. Be sure to set the required variables to appropriate values:
HF_TOKEN
: An API token used to authenticate to Hugging Face. This app only requires a read-only token, which is used to verify that you have accepted the model's usage license.

VLLM_API_KEY (Optional)
: An API key you can set to limit access to the server. When an API key is set, every request must provide it as a bearer token in the Authorization header.

VLLM_DO_NOT_TRACK (Optional)
: Set to "1" to disable sending usage statistics to the vLLM project.
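To illustrate the access control that VLLM_API_KEY enables, here is a minimal, hypothetical sketch of the bearer-token contract a client must satisfy (this is not vLLM's actual implementation):

```python
# Hypothetical sketch of the check implied by VLLM_API_KEY. Not vLLM's
# actual code -- it only illustrates what the Authorization header must
# contain for a request to be accepted.
def is_authorized(auth_header, expected_key):
    """Accept a request when no key is configured, or when the
    Authorization header carries the configured key as a bearer token."""
    if not expected_key:
        return True  # no VLLM_API_KEY set: the server is open
    return auth_header == f"Bearer {expected_key}"


# A client talking to a server with VLLM_API_KEY="secret" must send:
#   Authorization: Bearer secret
```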