Kubernetes is hot in the enterprise, yet received little attention in the keynotes at the AWS re:Invent conference last week where the focus was on AI. However, Kubernetes is where a lot of that AI processing actually runs, says Barry Cooks, AWS VP of Kubernetes and a governing board member at CNCF (Cloud Native Computing Foundation).
“We’ve had a great re:Invent,” he tells Dev Class. “We’ve launched quite a bit more than last year. We’ve had demand for individual briefings and discussions, more than my entire staff can handle, on the roadmap and things.”
Has the dominance of AI in the keynotes squeezed out other important things, such as Kubernetes? “It may,” says Cooks. “The interesting thing about AI is, it’s probably the fastest growing Kubernetes workload we have, by a good margin … we’ve learned a lot about how people scale and perform operations for AI training.
“It’s not just us as a vendor touting AI capabilities … our customers, especially in the EKS world, that is where they’re spending a lot of time as well, they all want to innovate in this space.”
One newish feature (previewed in October) is extended support for Kubernetes versions, which means that starting with Kubernetes 1.23, a version is supported for up to 26 months from when it was first available in EKS. In the case of version 1.23, that was August 2022, suggesting support until October 2024. The support table is here.
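The date arithmetic above is simple to check. A minimal sketch, assuming a flat 26-month window counted from the month the version first became available in EKS (the function name and signature here are illustrative, not an AWS API):

```python
# Sketch: end of EKS extended support, assuming a flat 26-month
# window from the version's EKS availability month.
def end_of_support(year: int, month: int, window_months: int = 26):
    """Add window_months to a (year, month) pair and return (year, month)."""
    total = year * 12 + (month - 1) + window_months
    return total // 12, total % 12 + 1

# Kubernetes 1.23 reached EKS in August 2022:
print(end_of_support(2022, 8))  # → (2024, 10), i.e. October 2024
```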
“Kubernetes has had massive adoption in the enterprise, and the enterprise loves to upgrade pretty much never,” Cooks tells us. “Our goal is always to ship versions as quick after the community launches as possible, so that our bleeding-edge customers can take advantage. But other customers are saying that three times a year, which is the community rate, is too fast for an enterprise. So we have launched an additional 12 months of full support. And that comes with CVE patching, because one of the challenges in the Kubernetes world is that if you fall off the back end of the release train, you’re exposed from a security perspective.”
The aim is to get customers onto “a once a year cycle, which is pretty reasonable,” Cooks adds. The new support cycle is “in preview today, it will go live in February.”
What are the challenges around upgrading? “The complexity comes down to two things,” he says. “One is in the large enterprise space, it’s internal teams leveraging specific aspects of Kubernetes versions that are changing over time. They have to talk to those teams. The second challenge is they’ll have various third-party dependencies, either open source or paid third-party components that they put into their stack.” Each of these needs to be validated on the new stack.
“We’re just in the process of launching upgrade readiness checks, which provides a report from the version you’re on to every future supported version, which shows what is not going to work or can give you a clean bill of health,” Cooks says.
A distinctive feature of EKS, as one might expect, is that it hooks easily to other services. “We have ACK, Amazon Controllers for Kubernetes. We now have 23 generally available services in there. It allows you to use familiar Kubernetes patterns and APIs to talk to managed AWS services,” he adds.
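The ACK pattern Cooks describes means an AWS resource is declared as a Kubernetes custom resource and reconciled by a controller. A minimal sketch, assuming the ACK S3 controller is installed in the cluster; the bucket name is a placeholder:

```yaml
# Declares an S3 bucket via the ACK S3 controller's Bucket CRD.
# The controller reconciles this manifest into a real AWS bucket.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-ack-bucket        # placeholder name
spec:
  name: my-ack-bucket        # the S3 bucket name to create
```

Applying this with `kubectl apply` lets teams manage the bucket through the same GitOps workflows they use for their workloads, rather than through a separate provisioning tool.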
What has been the impact of EKS Anywhere, which allows EKS to be used on-premises? “It was designed to help customers on a modernization journey,” says Cooks. “The most common thing we’ve seen is customers trying to figure out if they should lift and shift to the cloud, or modernize in place and then move.
“We also have use cases where they want to have AWS support backing their Kubernetes efforts in a fully air-gapped environment that’s never going to move to the cloud. The primary driver of EKS Anywhere initially was air-gapped environments.
“The other place we’ve seen success is with EKS Anywhere and telcos, where they are trying to deploy lots of small clusters. In the EKS cloud you’ll sometimes have 10 or 15 thousand-node clusters. In these environments you’ll have two-node clusters, but a lot of them,” Cooks tells us.
AWS has just donated the Karpenter project, its open source Kubernetes node auto-scaling tool, to the CNCF. “Karpenter is going to bin pack your workloads on the optimized sized instances to support you. For example, if you have a set of Kubernetes pods and you have more capacity deployed in EC2 than you require, Karpenter can turn off instances, move pods onto other instances, and pack things tighter, and you’ve now turned several EC2 instances off,” Cooks says. According to Cooks, Anthropic, which does large model training on EKS, used Karpenter to “drop their usage by 40 percent.”
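The consolidation behaviour Cooks describes is configured declaratively. A minimal sketch of a Karpenter NodePool (v1beta1 API, current at the time of writing) with consolidation enabled; the pool and node-class names are placeholders:

```yaml
# Karpenter NodePool with consolidation: underutilized nodes are
# drained and removed, and pods are repacked onto fewer instances.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default                          # placeholder name
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized  # enables bin-packing behaviour
  template:
    spec:
      nodeClassRef:
        name: default                    # references an EC2NodeClass
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]  # let Karpenter pick cheaper capacity
```

With this in place, Karpenter continuously evaluates whether the same pods would fit on a smaller or cheaper set of instances and replaces nodes accordingly, which is the mechanism behind the usage reduction Cooks cites.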
Google and Microsoft push the idea of managing Kubernetes across different clouds, but on their cloud. Does AWS promote anything similar? “I’m a little sceptical,” says Cooks. “Managing let’s say, something in GKE from AWS. Am I going to have the level of depth and knowledge of GKE to provide you the best possible management experience? I’m not.”
“We have things like the EKS Connector which will connect to any Kubernetes cluster anywhere in the world, on premises, in another cloud. It will give you the data, it will give you the basic views of what’s going on in that cluster. There are things like that, that we think are totally valuable and useful and relatively sane for us to implement … who’s going to provide a better experience in GKE, me, or GKE? It’s going to be GKE … our focus is that we want to be the place you want to run your Kubernetes applications and we put every ounce of effort into that,” he says.