It wasn’t that long ago that desktop applications were where everybody was spending their resources. With the development of cloud services at an all-time high, however, companies that go down the route of local deployment have become, if not the minority, then at least a lot less visible.
And with one of your subscribed offerings down again, you might start to wonder how we got here in the first place.
Tim Hockin, principal software engineer at Google, whom some might know as one of the first people on the Kubernetes team, has spent quite some time pondering the same question. Sitting down with Devclass at KubeCon Europe, he offered an explanation: “I think the delivery is just so much easier. I have worked on boxed software and it is a totally different experience in terms of software development, testing, QA, and updates. It was never pleasant, but it was the thing we had to do, because it was the way it was done.”
While those developers sighing at the thought of the good old times, when software could simply be thrown over the proverbial wall and be done with, might beg to differ, Hockin enjoys the new-found satisfaction of a continuous approach. “Now that everything is live and updated on the fly, it’s just so much more satisfactory. The happiness that people get from just updating – ‘I’ve got a new version, cool’ – it’s so much better. There are still things that are hard to do in this world, but look at Office 365 and Google Docs and now the Adobe Creative Suite as a SaaS application – that’s intense.”
To Hockin, one of the reasons big companies opt for a service approach is clearly control. “If you’re Adobe, the biggest problem with boxed software is ‘what version are you using’ or ‘did you update it to this patch release’, you know, ‘what environment are you running it in’ – all of these things are really impossible to debug. The consistency you have of ‘I control everything and all you get is a web interface’ is just awesome. I think that that becomes so overwhelmingly attractive and the fact that it is possible because of bandwidth and browsers and [because] embedded languages are able to do these things is just too good not to do.”
How do you know what’s impossible?
Come to think of it, some of the architectural change we see today is also down to Hockin and Kubernetes – at least to an extent. “I was talking to someone about the concept of the adjacency of the possible the other day – you don’t really know what’s possible until you crossed the next step and the next step. Kubernetes is in the same lane! People are doing things now that they didn’t think were possible before because they didn’t see the intermediate steps.”
As good as this may sound, with all of those people turning to Kubernetes comes a certain level of responsibility. Listening to their wants and needs has pushed stability, a less steep onramp, better documentation and a more gradual introduction of concepts high up Hockin’s priority list.
Being that receptive, however, also comes with the awareness that intent and use are quite different stories. “Little things make a huge difference,” Hockin muses. “Just that things work the way people expect them to, which is surprising sometimes. As an engineer I intended Kubernetes to do this, but people are using it to do that, because that is what they really want.” Reaching for a real-world analogy, he adds, “If I built a hammer and everyone wants to pound screws, then I better make my hammer good at pounding screws.”
If that sounds like you could just throw anything in the general direction of the Kubernetes team and wait for your wish to be granted, you might be in for a disappointment. One example that keeps coming up is stable pod IPs – IP addresses that persist and travel with a pod wherever it is rescheduled.
Though Hockin is keen to listen to what people may need it for, to him it’s also “antithetical to the idea we want to push”.
“It doesn’t map to what Kubernetes is at the bottom. Technically it would be hard to deliver, the platforms couldn’t do it well, it would just be a bad user experience. I understand why people would want it and I’d rather solve the whys than the hows.”
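The model Hockin is defending treats pod IPs as ephemeral: the stable name belongs to a Service, not to any individual pod. A minimal sketch of that pattern, with illustrative names, might look like this:

```yaml
# The Service is the stable address; the pods behind it are
# ephemeral, and their IPs can change on every reschedule.
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  selector:
    app: web           # traffic goes to whichever pods carry this label
  ports:
    - port: 80         # stable port clients connect to
      targetPort: 8080 # port the pod containers actually listen on
```

Clients address `web` through the Service (typically via cluster DNS) rather than any pod IP, which is why an IP that follows a single pod around cuts against the grain of the design.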
Easy, ey? Wrong! “We’re always wrong; in software, people are always wrong,” Hockin claims. “Whenever we think about how people are doing things, we’re always off by a little bit. Or we think, oh, they won’t need that, but they will eventually. And whenever I’m thinking forward I’ll probably be wrong about it.” Which is why having a strategy, leaving room in the API for expansion, and making sure that it can be grown is really important. And really hard, adds Hockin, which is why “we don’t always get that right.”
To him, a prominent example of not getting it exactly right is some of Kubernetes’ networking. “They are simultaneously too hard to use and not feature-full enough,” Hockin feels. “People are trying to express things that don’t map well. Those use cases are coming to the front now.”
But since networking is “fundamental to what people are doing with Kubernetes”, at least some of the upcoming rework will be focused on exactly that. “I think that some of the primitives we have are wrong or are not appropriate anymore,” Hockin emphasises, “and we can do better. So we need to make that better API, we have to make it more composable, more orthogonal and basically build down.”
Another thing that didn’t pan out as planned is the Ingress API. According to Hockin, the problem here was that it ended up as an abstraction over an abstraction, making it a lowest-common-denominator API, which isn’t exactly ideal.
To tackle that, the Kubernetes team has put out a proposal based on ideas found in projects such as ingressroute and a couple of other APIs, which will hopefully point the way towards an Ingress 2.0 of sorts.
“Everybody who uses Ingress is doing something that is specific to their implementation – I want to be careful that service meshes don’t end up in the same space or dumbing it down so that it isn’t useful.”
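The lowest-common-denominator problem Hockin describes is easy to spot in practice: the portable Ingress fields cover little more than host and path routing, so implementation-specific behaviour leaks into annotations that only one controller understands. A sketch of the pattern, with illustrative names and an annotation specific to the nginx ingress controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web            # illustrative name
  annotations:
    # Implementation-specific: only the nginx controller acts on this;
    # other controllers silently ignore it.
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Everything expressed through annotations like this one sits outside the portable spec, which is exactly the kind of drift a successor API would aim to absorb into first-class, composable fields.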
Filling in holes in mesh
To help them succeed and impact something that “users are really experiencing”, as Hockin puts it, the Kubernetes team is working to get rid of some problems service mesh projects have run into recently. Once that’s done, service meshes are meant to be “more awesome, easier, streamlined to use”, making it easier to “build up again to multi-cluster or multi-tenancy, cluster sharing and isolation”.
“The reality is that anyone who is using Kubernetes for real is going to have more than one cluster,” says Hockin. Those clusters have to find each other, interact and properly integrate, which Hockin finds challenging, to say the least. “Multi-cluster is probably not part of Kubernetes but it’s part of the ecosystem and I think Kubernetes can do things to make problems like that easier to solve – without actually solving the problem. We provide the primitives to make the higher-level building awesome.”
The last year also saw a surge in security-related initiatives, which can be seen as an indication of Kubernetes slowly making its way beyond the hype and into regular enterprises. But as with quite a few projects, security wasn’t the top concern for Hockin and co when Kubernetes was started, which means they have to work harder to retrofit it now.
Having a whole community to help with that comes in handy. “We have amazing security people who are doing incredible work in Google and the community thinking about how to make it more secure and easier to use – which are not always the same thing – and getting rid of old bad ideas and deprecating and end-of-life-ing less secure versions of things. So I think we’re making real progress.”
Progress is also being made in consistency and portability – luckily, one could say, given that more and more products claim to support Kubernetes – sometimes without the user experience to back it up. “I think we’re trying to strike a balance between ‘let’s write a full abstract specification that anybody can go off and implement’ and ‘let’s instead look at what we’re building and define what is the expected portable subset – in terms of tests and conformance suites’.”
This may sound like a backwards take on classic standardisation, but having a robust verification platform in place is essential for the orchestrator. Or to put it more bluntly, “If your users have to know the difference between their Amazon cluster and their Google cluster, we have failed. […] In the past, it has been a little bit generous, it’s been easier to pass than it should have been, but we’re going to continue to iterate and make it more specific to the features that we really want.”
Yes, there is lots of responsibility. But despite the last five years of that, there’s still excitement in it for Hockin. “I don’t have the sort of wanderlust some people have. I enjoy finishing things and putting in the details.” Which doesn’t mean there’s an end in sight when it comes to Kubernetes, he laughingly admits.
“I can always put a little bit of shine on it or one little extra detail. I am a painter by education, so one thing I had to learn about painting is when to stop. Finding the point where you don’t make it better anymore but make it worse. I’m trying to pay attention to that with Kubernetes. I don’t think we’re there yet.”