Welcome to Kedify!
Kedify autoscales any cluster workload to optimize performance and reduce cost by 20% or more.
Getting Started in <2 Minutes
- 🚀 Complete the Kedify quickstart to install KEDA + Kedify in your cluster
- 📈 Learn how to scale AI Inference Workloads to Reduce Cost and Complexity
- 🧪 Try in-browser tutorials to explore key features such as gRPC autoscaling and Gateway API integration
- 📝 Sign up for a free trial of Kedify’s dashboard and managed service offering
For a complete and detailed dashboard-based installation guide, please visit the Installation page.
Kedify Overview
Kedify is a managed service, powered by the open source KEDA project and Kubernetes’ built-in Horizontal Pod Autoscaler, that scales cluster resources up or down (including down to zero) based on predefined external events. This gives you the flexibility to reduce cost and optimize performance for any type of workload.
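As a concrete illustration, event-driven scaling with scale-to-zero is expressed through a KEDA ScaledObject. The following is a minimal sketch, not Kedify-specific configuration; the Deployment name, queue name, and environment variable are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler               # hypothetical name
spec:
  scaleTargetRef:
    name: worker                    # hypothetical Deployment to scale
  minReplicaCount: 0                # allow scale-to-zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq                # one of many KEDA event sources
      metadata:
        queueName: tasks            # placeholder queue name
        mode: QueueLength
        value: "20"                 # target messages per replica
        hostFromEnv: RABBITMQ_HOST  # connection string read from the pod environment
```

KEDA translates the trigger into metrics for the Horizontal Pod Autoscaler and handles the zero-to-one and one-to-zero transitions itself, which the HPA alone cannot do.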
You can use Kedify to:
- Scale your workloads based on HTTP requests, messages in a queue (as in the sketch above), or any other event source
- Securely install KEDA with no CVEs in less than 90 seconds
- Manage KEDA across multiple clusters and types of workloads
- Monitor and visualize your workload autoscaling
- Get resource and configuration recommendations
- Dynamically adjust autoscaling policies based on a cron schedule (see the sketch after this list)
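To illustrate the schedule-based item above: KEDA provides a cron trigger that holds a desired replica count during a recurring window. A minimal sketch, with a placeholder Deployment name and an assumed business-hours window:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: business-hours-scaler  # hypothetical name
spec:
  scaleTargetRef:
    name: web-frontend         # hypothetical Deployment to scale
  triggers:
    - type: cron
      metadata:
        timezone: America/New_York
        start: 0 8 * * *       # window opens at 08:00
        end: 0 18 * * *        # window closes at 18:00
        desiredReplicas: "5"   # replica count held during the window
```

Outside the window, any other triggers (or the minimum replica count) determine how many replicas run.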
Kedify is made up of three separate but interrelated components:
- Kedify Agent: a secure gRPC-based agent service that manages KEDA, provides telemetry and maintains security settings
- Kedify Custom Resource Definitions: YAML-based custom resource definitions that define event sources and describe how and when to scale deployments (an HTTP-based example appears after this list)
- Kedify Dashboard: an intuitive user interface for monitoring resources and autoscaling activity and for managing KEDA installations across clusters
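For example, HTTP-based scaling (the first capability listed earlier) is configured through the same ScaledObject resource. The sketch below assumes Kedify’s HTTP scaler trigger type and uses illustrative field names and values; the exact schema may differ, so consult the Installation page before use:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: web-scaler               # hypothetical name
spec:
  scaleTargetRef:
    name: web                    # hypothetical Deployment to scale
  minReplicaCount: 0             # idle services can scale to zero
  maxReplicaCount: 20
  triggers:
    - type: kedify-http          # Kedify’s HTTP scaler (fields illustrative)
      metadata:
        hosts: demo.example.com  # placeholder hostname whose traffic is measured
        service: web             # placeholder Service receiving the traffic
        port: "8080"
        scalingMetric: requestRate
        targetValue: "100"       # assumed target request rate per replica
```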