Koyeb Launch Week: Round 2
Welcome to our second launch week! After the success of our very first Launch Week in June, we’re back with another batch of announcements!
Last Launch Week, we released:
- Autoscaling in GA
- GPUs in public preview & access to H100, A100, and more
- Volumes in technical preview
- AWS Regions on Koyeb
- Koyeb Startup Program
We had such a blast rolling out so many new features and announcements at once that we decided to do it again! So here we are, gearing up for our second launch week just three months later.
If you follow our changelog updates on the Koyeb Community, you’ve seen what we’ve been working on and delivering since then. You might even have a few good guesses about what we have in store for this second edition of launch week.
We can’t wait to share everything the team has been working on with you next week! We’ll be updating this post with a recap of everything we share during Launch Week #2.
Monday: New Dashboard - Build, Run, and Scale Apps in Minutes
To kick off Launch Week Round 2, we announced our new dashboard!
The new dashboard makes it easier than ever to build, run, and scale your apps in minutes with a simple and elegant interface. The new dashboard is designed to help you get started quickly and easily, so you can focus on building your app, not managing infrastructure.
Want to put the new control panel to the test? Check out our latest tutorial showcasing how to deploy Portkey Gateway to Koyeb and start streamlining requests to 200+ LLMs.
Waiting for Tuesday: Deploy Ollama and Open WebUI to run a private ChatGPT
Ollama is a self-hosted solution for running open-source LLMs on your own infrastructure. Open WebUI is a feature-rich, user-friendly web UI for LLMs.
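To give you a feel for what your private ChatGPT looks like under the hood once it's deployed: Ollama exposes a local REST API that anything, including Open WebUI, can talk to. Here is a minimal sketch using only the Python standard library. It assumes an Ollama server running on its default port (11434); `llama3` is just an example model name.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_prompt_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its answer."""
    body = json.dumps(build_prompt_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a server up, `ask("llama3", "Why is the sky blue?")` returns the model's reply as plain text; Open WebUI gives you the same loop with a chat interface on top.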
Special Event #1 - Intel AI Summit - September 17 - Paris
If you’re in Paris next Tuesday, join us for Intel AI Summit: Bringing AI Everywhere. 🇫🇷
We’ll be there discussing how we’re bringing AI everywhere with high-performance infrastructure. Hope to see you there!
Waiting for Wednesday: Generate high-quality AI images
Can’t wait to see what we have in store? Hang tight and get comfy… ComfyUI that is!
Learn how to deploy ComfyUI, ComfyUI Manager, and Flux to generate high-quality images. Flux is just one of many advanced image generation AI models you can use in your workflow with ComfyUI.
Waiting for Thursday: Run inference on self-hosted AI models
Looking for a simpler way to run inference on your self-hosted AI models? Deploy vLLM in one click. vLLM is a Python library and inference engine for serving LLMs at high throughput. With vLLM, you can download models from Hugging Face and run them on your own infrastructure.
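As a sketch of what talking to a vLLM deployment looks like: vLLM can serve models behind an OpenAI-compatible HTTP API (by default on port 8000 when started with `vllm serve`, or via its OpenAI-compatible API server entrypoint). The snippet below builds and sends a completion request using only the Python standard library; the URL and model name are illustrative placeholders for your own deployment.

```python
import json
import urllib.request

# vLLM's OpenAI-compatible completions endpoint (default local port 8000).
VLLM_URL = "http://localhost:8000/v1/completions"

def build_completion_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build the JSON body for an OpenAI-compatible /v1/completions call."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def complete(model: str, prompt: str) -> str:
    """Send a completion request to a running vLLM server and return the text."""
    body = json.dumps(build_completion_request(model, prompt)).encode()
    req = urllib.request.Request(
        VLLM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

Because the API is OpenAI-compatible, existing OpenAI client libraries also work against a vLLM endpoint by pointing their base URL at your deployment.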
Special Event #2 - AI Camp in Paris w/ Koyeb and Weaviate - September 19
If you’re in Paris next Thursday, join us for Building with AI: Navigate Scaling 🇫🇷, an AI Camp meetup we are co-organizing with Weaviate.
Waiting for Friday: Build production-ready LLM applications
LlamaIndex is a data framework that makes it simple to build production-ready applications from your data using LLMs. Providing an entire suite of packages and classes for loading, indexing, querying, and evaluating data, LlamaIndex specializes in context augmentation so that you can safely and reliably optimize your queries with custom data.
Our one-click application lets you deploy LlamaIndex on high-performance infrastructure in seconds.
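To make "context augmentation" concrete: at its core, it means retrieving the pieces of your data most relevant to a query and prepending them to the model's prompt. The toy function below illustrates the idea in plain Python with naive keyword overlap. It is not LlamaIndex's actual API; LlamaIndex replaces this ranking with real embeddings, indexes, and query engines.

```python
def augment_prompt(question: str, documents: list[str], top_k: int = 2) -> str:
    """Naive context augmentation: rank documents by keyword overlap with
    the question and prepend the best matches to the prompt. A framework
    like LlamaIndex does this with embeddings and vector indexes instead.
    """
    words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The augmented prompt, not the bare question, is what gets sent to the LLM, which is how custom data grounds the model's answers.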
What’s next?
Our launch week is just a few days away! We can’t wait to update you with all the exciting news that we’ve been working on behind the scenes!
If you want to know what’s up ahead, our roadmap is full of exciting features, new locations, and more. By the way, if there’s a feature you’d like to see on the platform, submit it as a feature request and vote for it to track its progress.
Follow us on X @gokoyeb and Koyeb's LinkedIn to stay tuned for more updates, content, and announcements about your favorite serverless platform! 🚀