Interview: DevClass spoke to Deepak Singh, VP Developers, Events, Containers and Serverless at AWS.
How many ways are there to run a container on AWS? “If I had my way, they’d be infinite,” Singh tells us.
“The container is a standard packaging format, and the reason I say infinite is the assumption that everything will use it as the way to package code and take it from point A to point B.
“The runtimes will be more disparate and some may be container based and others may not.”
The reason for the question is that the choice between EKS (Elastic Kubernetes Service), ECS (Elastic Container Service), App Runner, or even Lambda, can be confusing to navigate. A further choice is whether to use Fargate, serverless compute for either EKS or ECS.
“We have two core container services that are general purpose,” says Singh, “which are EKS and ECS.”
The way Singh puts it, EKS is for “a Kubernetes customer” while “65 percent of new customers running containers start on ECS because they just want to run a container and not learn a lot of new stuff … ECS uses a lot of AWS semantics, like EC2 (Elastic Compute Cloud) APIs and IAM (Identity and Access Management), so they just start there.
“Most ECS customers, if they are starting today, start with Fargate, and even people who had started with ECS on EC2 are migrating to Fargate.”
So where does running a container on Lambda fit in? The fact that Lambda can run a container does not make it a container platform, Singh suggests; it is better to think of it the other way round, that code to run on Lambda can be packaged in the container format. “All the Lambda thing does is that instead of putting a zip file in S3 (Simple Storage Service), you are putting the same function and software in a container image in ECR (Elastic Container Registry). People have lots of good continuous deployment tooling around that.
“The actual execution is very different. It has nothing to do with containers the way the container world thinks about it,” Singh tells DevClass.
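The point about packaging is that the function code itself does not change; only the artefact around it does. A minimal sketch of a Lambda-style handler (the handler name and payload fields here are our own illustration, not AWS-specified):

```python
import json

def handler(event, context):
    """A minimal Lambda-style handler.

    The same file can be zipped and uploaded to S3, or copied into a
    container image built on an AWS Lambda base image and pushed to ECR;
    as Singh notes, the execution underneath is the same either way.
    """
    # 'event' is the JSON payload Lambda passes to the function.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

In the container workflow, the image would typically start from an AWS-provided Lambda base image with this handler set as the entry point, so the continuous deployment tooling teams already have for images applies unchanged.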
In support of the notion of packaging code in a container, AWS last week introduced Finch, an open source tool for building and running containers, initially for macOS only. “It’s actually built on a number of open source projects like nerdctl and containerd. It is a way for you to go, ‘container build’ on a Mac. The artefact that you get out is an OCI (Open Container Initiative) compatible container image.”
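Finch exposes a Docker-style CLI, so a container build on a Mac looks much as it would with other tooling. A minimal sketch, assuming Finch is installed and using `myapp` as a placeholder image name:

```shell
# One-time setup on macOS: create and start the Finch virtual machine.
finch vm init

# Build an OCI-compatible image from the Dockerfile in the current directory.
finch build --tag myapp:latest .

# Run the image locally to check it works.
finch run --rm myapp:latest
```

Because the artefact is an OCI-compatible image, it can then be pushed to any OCI registry, including ECR.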
What does AWS think about the notion of Kubernetes as standardised infrastructure? AWS does not build its own services on it, Singh says.
“It has properties that would make it hard for us to do it. It makes complete sense for customers to do it on top of AWS, so that gives them that common [platform], especially if they are also running on premise. A study done by Accenture recently looked at the complexity of moving containers on ECS versus EKS and they found there’s no difference in time, but from a management perspective EKS does provide benefits because you’re using the same tooling. Is it standardised infrastructure for customers to build applications on AWS? It’s the best one that they have. Is it standardised infrastructure for all of us to build on, [such as for] a cloud? No, it’s not even close to that.”
We also asked Singh about the new SnapStart feature in Lambda, which caches the runtime environment for a Java function to speed up cold starts. Developers are asking: will this be available for other runtimes such as .NET?
“Java was a clear place to start,” says Singh, “because the cold start problem with Java functions is significantly more pronounced than with other ones, but SnapStart is not a Java-specific technology. Now that it’s out there, once we get a better understanding of how people are using it, we’ll find out which is the next runtime they want us to add support for and we’ll do that.
“We want to get to a world where you don’t have to think about cold starts.”
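SnapStart is enabled per function and takes effect on published versions of that function. A minimal sketch using the AWS CLI, where `my-java-fn` is a hypothetical function name:

```shell
# Turn on SnapStart for new published versions of the function.
aws lambda update-function-configuration \
    --function-name my-java-fn \
    --snap-start ApplyOn=PublishedVersions

# Publish a version; Lambda initialises the runtime once, snapshots it,
# and resumes from that snapshot on subsequent cold starts.
aws lambda publish-version --function-name my-java-fn
```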