Containers for all – are we there yet?

If you’re trying to catch up with the modern software development life cycle, container technology almost seems a given. The reality, however, looks slightly different: most businesses still have to evaluate containerization for their purposes and are migrating slowly, if at all. Many companies are still busy setting up (partly) automated, continuous workflows.

Everyone has to start somewhere, so there’s nothing wrong with taking your time to look into your organisation’s needs before deciding on a new technology or tool. And when the marketing of certain projects does such a good job that people want to adopt things they might not actually need, just to feel part of a larger community, taking that time is more important than ever.

Sebastian Scheele is CEO of Loodse, a German company specialising in container and cloud native solutions, and knows the problem well. “It is really important to get a clear picture of your project’s requirements and to think about the problem you want to solve. I feel like we’re again at a point where people try to solve problems with a tool, while the problems aren’t rooted in the technology but in a process or in a lack of inclusion of all members of a team. If you start using Kubernetes, for example, but don’t update your processes, things often get worse instead of better.”

“Organisations have to be very aware of the fact that they’ll need new processes for development and operations and that everybody has to be on the same page. For example, people will have to learn that they can’t just connect to a container in production and run commands on it, because once the container has been restarted, those changes will be gone.”

Containerization also often means getting used to an infrastructure as code approach, which requires skills not everyone has yet. “Old-school system administrators, who are used to doing everything manually, will have to look into programming and automating most things, which can be quite challenging.”
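
This shift can be illustrated with a small sketch: instead of configuring a server by hand, the desired state is described in code and handed to the cluster – here using the official Kubernetes Python client. The deployment name “web”, the image nginx:1.25 and the replica count are purely illustrative assumptions, not a prescription.

```python
# A minimal infrastructure-as-code sketch: the desired state is described in
# code and handed to Kubernetes, which reconciles the cluster towards it.
# Requires the official client: pip install kubernetes
# The name "web" and the image "nginx:1.25" are illustrative placeholders.
from kubernetes import client, config


def create_web_deployment(namespace: str = "default") -> None:
    config.load_kube_config()  # reads the local kubeconfig, e.g. ~/.kube/config

    container = client.V1Container(
        name="web",
        image="nginx:1.25",
        ports=[client.V1ContainerPort(container_port=80)],
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    # Declare the deployment; Kubernetes takes care of creating the pods.
    client.AppsV1Api().create_namespaced_deployment(
        namespace=namespace, body=deployment
    )


if __name__ == "__main__":
    create_web_deployment()
```

Versioning a script like this alongside the application is what replaces the manual steps an administrator would otherwise perform on each machine.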

Quicker than anyone thought

Despite those issues, the number of organisations willing to dip their toes in has grown quite quickly – something not many would have predicted. “I think the most surprising thing wasn’t technological, but simply how fast everyone got on board in the last couple of years. If events are any indicator, a look at the attendance figures of the Linux Foundation’s KubeCon is enough to see the immense growth of interest in the topic.”

And it wasn’t only because of the tech kids from the Valley either: “Europe got in on it pretty early as well. It doesn’t matter where you are, everyone talks about Docker, containerization, and lately Kubernetes as well. It isn’t even associated with any particular industry; all larger organisations are jumping on this train. But so are start-ups. I for one wouldn’t have thought the adoption of a new technology would spread that fast.”

In the public perception, Kubernetes has outstripped Docker as THE container hype topic, which leads many to believe it’s the new cure-all. Scheele comes across this pretty often. “Sometimes people don’t see the complexity of Kubernetes, and that it would be easier to just start a few containers to solve their problems than to spin up a whole cluster.”

Alongside Kubernetes, service meshes are gaining traction at the moment, which doesn’t help the matter, if you ask Scheele. “Clients tend to ask about service meshes pretty early in the conversation nowadays. But once we get into the details of what they think they need one for, it turns out it’s more that they hear everyone else talking about it. They want to be part of that discussion, which is understandable.”

But it’s not as simple as that, as anyone who has tried setting up a Kubernetes cluster will know. “Service meshes add a whole other layer of complexity to any project. Kubernetes itself is pretty complicated already, so it can only be recommended to understand how it works first before tackling anything else in that context.”

Once you’ve gotten over this initial hurdle, however, mesh technology might come in handy – depending on your use case. “In general, service meshes are nice for tasks that need a certain level of workload control, A/B testing for example. Of course those things can be done with Kubernetes as well, but depending on the number of containers running, it can be helpful to get a mesh into the mix. Having an API and a layer that can extract information passing through the network out of the box also helps with things like end-to-end encryption.”
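
To make the A/B testing point a bit more concrete, the sketch below hand-rolls the kind of weighted traffic split that a service mesh lets you declare instead of implement – exactly the routing logic a mesh moves out of the application and into its proxies. The upstream URLs and the 90/10 split are made-up examples.

```python
# A hand-rolled weighted traffic split, to illustrate the kind of routing a
# service mesh (Istio, for example) handles declaratively on your behalf.
# The upstream URLs and the 90/10 weighting are illustrative assumptions.
import random
from collections import Counter

UPSTREAMS = [
    ("http://reviews-v1.internal", 0.9),  # current version receives 90% of calls
    ("http://reviews-v2.internal", 0.1),  # candidate version receives 10%
]


def pick_upstream() -> str:
    """Choose a backend according to the configured weights."""
    urls, weights = zip(*UPSTREAMS)
    return random.choices(urls, weights=weights, k=1)[0]


if __name__ == "__main__":
    # Rough sanity check that the split behaves roughly as configured.
    print(Counter(pick_upstream() for _ in range(10_000)))
```

With a mesh, this weighting lives in configuration next to the deployment instead of inside every service that calls another.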

According to Scheele, you don’t have to rush, though. “I also think service meshes still need a bit more time to be really useful. Istio, for example, only reached version 1.0 earlier this year, and the number of people who felt the need to integrate it even before that level of stability was reached is staggering. For most use cases there are simpler alternatives, so you don’t have to take on that level of complexity.”

Still some way to go

A whole list of problems yet to be solved when it comes to containers was discussed at this year’s KubeCon Europe – continuous integration, for example. “I guess the interesting thing is that most CI/CD systems aren’t designed in a cloud native way. People are only now starting to understand how Kubernetes clusters and containers in general can be used in production.”

“If you compare it to three years ago, most users thought they’d only have three to four big Kubernetes clusters. Now you generally work with many more, smaller clusters – the number might even be in the hundreds – which changes the requirements you set for your CI pipeline. It’s one thing to have to populate one cluster, but if there are several that need to host a large variety of applications, it really is a whole new ball game.”
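
A toy sketch of what that change means for a pipeline: instead of a single deployment step, the same manifests have to be fanned out across every cluster, for example by looping over kubeconfig contexts. The context names and manifest path below are placeholders, and a real pipeline would add retries, ordering and per-cluster configuration on top.

```python
# A toy CI fan-out step: apply the same manifests to many clusters by
# iterating over kubeconfig contexts. Context names and the manifest path
# are placeholders for illustration only.
import subprocess

CLUSTER_CONTEXTS = ["prod-eu-1", "prod-us-1", "staging-1"]  # hypothetical names
MANIFEST_DIR = "deploy/"


def deploy_everywhere() -> None:
    for ctx in CLUSTER_CONTEXTS:
        print(f"Applying {MANIFEST_DIR} to cluster context {ctx}")
        subprocess.run(
            ["kubectl", "--context", ctx, "apply", "-f", MANIFEST_DIR],
            check=True,  # stop the pipeline if any cluster rejects the rollout
        )


if __name__ == "__main__":
    deploy_everywhere()
```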

Monitoring, logging, and tracing are also topics that still need some discussion, since they tend to get rather hairy with distributed architectures. “Some are not aware that containers are just another kind of distributed architecture, and that those in turn make quite a few things more complicated rather than easier. At the early stage we’re in at the moment, most aren’t even at a point where they have sound monitoring in place.”

“The kind of architecture Kubernetes champions is quite complicated: containers can get deleted, which makes looking into logs afterwards tricky at best, and if you use Istio on top, a lot of logic is moved out into the network layer. People will therefore have to think about proper ways to log and monitor their applications first, so they have a way of finding out what happened if anything goes awry.”
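
One common answer to the “containers get deleted” problem is to stop keeping logs inside the container at all and instead emit structured events on stdout, where the platform’s log collector can ship them to central storage before the container disappears. A minimal sketch, using only the Python standard library and an example field schema:

```python
# Minimal structured logging to stdout: because containers are ephemeral,
# anything written inside them vanishes with the container, so the usual
# pattern is to print machine-readable lines and let the cluster's log
# collector forward them. The field names below are just an example schema.
import json
import sys
import time


def log_event(level: str, message: str, **fields) -> None:
    record = {"ts": time.time(), "level": level, "message": message, **fields}
    print(json.dumps(record), file=sys.stdout, flush=True)


if __name__ == "__main__":
    log_event("info", "order processed", order_id="A-1234", duration_ms=42)
```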

Boring should become the new thing

With all that in mind, it’s no wonder that Scheele only really wants one thing from Kubernetes right now. “What I really hope for is for Kubernetes to become more ‘boring’. I think most of the functionality is in place now. If you look at other large open source projects – take the Linux kernel, for example – there are regular releases, but no one goes ‘oh, this is just the killer feature I was looking for’ anymore. It still evolves continuously, but on a more stable basis. That’s what I wish for in Kubernetes: it should become something for others to build upon, and maybe offer new features as add-ons and not necessarily within Kubernetes itself. That way, everyone would know the base is extremely stable and ‘just works’, without having to fear breaking changes in new releases.”

This wouldn’t necessarily mean stagnation. “Development on the Linux kernel hasn’t ceased either; there are still regular improvements. But end users don’t have to worry anymore, because the userspace API doesn’t change. They only have to act when driver updates or something similar become necessary. I’d like to see Kubernetes get to a similar point, with tools on top to make life easier. That would also show that we’ve reached a certain understanding of Kubernetes.”

Getting to such a place is something many developers struggle with, since most of them are there to implement functionality, with no time to play around with something that is more about infrastructure and architecture. This brings managers and team leaders into the picture.

“I think support from the management side is extremely important. It has to be clearly expressed that people are welcome to try new things and experiment. But managers also have to acknowledge that experiments can fail – and that this, too, is a way of learning, if you look into what went wrong and how it can be avoided.”

Mere lip service won’t do. “People also need time to experiment, so it’s necessary to either give them some percentage of their time during the week to do that or to have regular hackathons to try things out and look into what could work for them. During those reserved slots, developers should be excused from day-to-day business, so that they don’t have to think about implementing a certain feature but are free to test something new.”

“It is also helpful to encourage them to try things that aren’t part of their usual scope – maybe looking into new developments within their field, or going all out and having a go at something from another area that has always seemed interesting to them.” And as always, leaders have to keep everyone on board. “You just have to make sure that it’s not only the loud ones who get to choose what to experiment with, but that quieter types also get heard, to have them fully involved.”

From one hype to the next

But what if they just want to use that time to look into another hype topic – serverless, for example? “I really like the idea of serverless, but I believe it’s still very much in its early stages. In my opinion, it should go hand in hand with container technologies, e.g. serverless frameworks that run on top of containers.”

It could help with all that pesky complexity after all: “Used that way, serverless would help get rid of one abstraction layer. One project that does this is Knative, which was introduced by Google earlier this year. Underneath there is container technology, but developers don’t have to care about it anymore and can concentrate on their applications. We’re still at the beginning of that evolution, though.”
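
What such a container-backed “function” looks like from the developer’s side can be sketched very simply: a plain HTTP handler, packaged into a container image, that the platform scales up and down on demand. The only assumption made below is that the platform passes the listening port via a PORT environment variable, as platforms like Knative typically do; everything else is standard-library Python.

```python
# A minimal HTTP "function" of the kind a container-based serverless platform
# such as Knative can scale from zero and back. It assumes the platform
# injects the listening port via the PORT environment variable (8080 is used
# as a fallback); the greeting itself is just a placeholder.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a containerised function\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```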

“There certainly are use cases that are simply about calling an API and would therefore be a good fit for serverless, but the technology comes with its own set of challenges.” One of them is CI/CD – the same thing that troubles the cloud native world. “Faced with an application that uses maybe 500 serverless functions, there’ll always be the question of how to deploy, manage and orchestrate those. There are still components missing to handle these sorts of things easily and work with serverless only.”

Those shortcomings might not yet be apparent to everyone, since the examples shown at the moment are mostly rather simple. “Use cases that involve converting a picture from A to B of course don’t need a server of their own. For those you can build a serverless function, call it to do the conversion, and write the picture somewhere. But I’m curious to see how this will get established for more complex applications.”

“We’re only at the beginning, so it will be exciting to see how things have evolved in two or three years’ time. On top of it all, distributed systems will become even more complex, so we will be back at the questions of logging and tracing, which might be even more critical given the high number of functions compared to the number of containers today.”