KEDA 2.0 scales up the scale of its Kubernetes scaling-up

A year after the release of version 1.0 of the KEDA Kubernetes scaling manager, version 2.0 is now generally available. KEDA 2.0 continues its predecessor’s central role of autoscaling Kubernetes deployments through a greater variety of triggers than native Kubernetes supports, adding many new modes, capabilities, compatibilities and options.

KEDA 2.0 now autoscales both deployment and job workloads, each through its own dedicated resource rather than a single type covering both. It now accepts multiple triggers for autoscaling, scaling to meet the greatest demand any trigger reports, and can scale StatefulSets or anything else that implements the /scale subresource, such as Argo Rollouts – KEDA 2.0 is no longer limited to Kubernetes Deployments.
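A multi-trigger setup targeting a StatefulSet might look like the following sketch of a KEDA 2.0 ScaledObject; the `worker` StatefulSet name and the RabbitMQ and cron trigger values are illustrative assumptions, not values from the release:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet      # any resource implementing /scale, not just a Deployment
    name: worker           # hypothetical workload name
  triggers:                # multiple triggers; scale follows the highest demand reported
    - type: rabbitmq
      metadata:
        host: amqp://guest:guest@rabbitmq:5672/
        queueName: tasks
        queueLength: "20"
    - type: cron
      metadata:
        timezone: Europe/London
        start: "0 8 * * *"
        end: "0 18 * * *"
        desiredReplicas: "5"
```

Because the scale target is declared by `kind` rather than hard-wired to Deployments, the same resource shape applies to Argo Rollouts or any other /scale-capable resource.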

It also adds concepts such as separate scaling strategies for job-type workloads, and much finer control of scaling behaviour overall. New CPU and memory scalers bring those capabilities within KEDA itself instead of needing to be mixed with Kubernetes-native scaling.
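The new CPU scaler lets CPU-based scaling live alongside event-driven triggers in one resource, instead of running a native HorizontalPodAutoscaler in parallel. A minimal sketch, assuming a Deployment named `api-server` and an illustrative utilisation target:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-server-scaler
spec:
  scaleTargetRef:
    name: api-server       # hypothetical Deployment name
  triggers:
    - type: cpu
      metadata:
        type: Utilization  # target average CPU utilisation, as with a native HPA
        value: "60"
```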

A new Metrics API scaler can accept autoscaling information from in-house or other metric sources, while KEDA’s own metrics now back liveness and readiness probes on both the Operator and Metrics Server pods. The KEDA Metrics Server also exposes Prometheus metrics about each scaler in use, although metrics for ScaledJobs won’t be served until a future release.
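Feeding in-house metrics to KEDA via the Metrics API scaler amounts to pointing a trigger at an HTTP endpoint that returns JSON. The trigger fragment below, which would sit in a ScaledObject’s `triggers` list, is a sketch with an assumed internal URL and JSON path:

```yaml
triggers:
  - type: metrics-api
    metadata:
      targetValue: "100"
      url: http://metrics.internal/api/v1/queue   # hypothetical in-house endpoint
      valueLocation: "queue.length"               # JSON path to the metric value
```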

Started by Microsoft and Red Hat, KEDA has since been accepted as a Cloud Native Computing Foundation (CNCF) Sandbox project to underline its intended vendor neutrality and commitment to industry-wide open principles. KEDA says it hopes to graduate to CNCF Incubation status later this year or early next, a progression that signifies a higher level of maturity and suitability for use at greater scale.