I node what you’ll do next summer: Red Hat drops veil on OpenShift 4.5, pushing virtualization and edge use-cases


Just in time for KubeCon Europe, Red Hat has decided to update its Kubernetes distribution OpenShift to v4.5, pushing three-node clusters and OpenShift Virtualization from preview to general availability. 

It also introduces a read-only operator API, a new descheduler strategy, and some service monitoring improvements to the platform.

The latter made their way into OpenShift 4.5 as technology previews, so while they might not be suitable for production use yet, they are well worth a look, if only to know what’s coming and maybe help them mature a little faster. 

Most experimental features seem to be connected to pod handling, such as the vertical pod autoscaler (VPA) operator or the new descheduler strategy. The strategy is meant to ensure that pods that have been restarted too often are removed from a node, while the VPA looks into “historic and current CPU and memory resources for containers in Pods” and updates resource limits and requests accordingly. 
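For illustration, the upstream VPA is configured through a VerticalPodAutoscaler object that points at a workload; the sketch below uses the upstream API shape, with a placeholder Deployment name, and the OpenShift packaging of the operator may differ in detail.

```yaml
# Sketch of a VerticalPodAutoscaler object (upstream API shape);
# "my-app" is a hypothetical Deployment, not one from the release notes.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"   # let the VPA apply its recommendations to the pods
```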

Also in preview is an operator API, which is supposed to make discovering and managing the operators in an OpenShift cluster easier once it’s completely finished. Currently it only seems able to list previously installed operators via the CLI and has to be enabled manually, but it helps to convey the idea of users being able to interact with operators as they would with other regular API objects.
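Once the preview API is enabled, interacting with operators like any other API object would presumably come down to the usual CLI verbs; the call below is an assumption based on that model and needs a running cluster with the feature switched on.

```shell
# Hypothetical usage: list the operators installed in the cluster
# via the preview operator API (requires the feature to be enabled).
oc get operators
```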

In OpenShift 4.5, the monitoring options for a user’s own services have been extended to provide multi-tenancy support for the Alertmanager API, introspection for Thanos Stores using the Thanos Querier, and ways of deploying user recording and alerting rules with higher availability. Metrics collected for the services can now also be accessed through the web console. 
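Monitoring one’s own services in this setup is generally driven by the same ServiceMonitor objects the Prometheus Operator uses elsewhere; a minimal sketch, assuming a Service in a user namespace exposes a metrics endpoint on a named port (all names here are placeholders):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service-monitor
  namespace: my-namespace    # a user namespace, not an openshift-* one
spec:
  selector:
    matchLabels:
      app: my-service        # must match the labels on the Service
  endpoints:
  - port: web                # named Service port serving /metrics
    interval: 30s
```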

One of the more highlighted features of the release is the option to run a three-node cluster on OpenShift. 

This is mainly down to the growing interest in edge computing, as Joe Fernandes, VP and GM of Core Cloud Platforms at Red Hat, confirmed to DevClass in a call. “A standard cluster is typically [made up of] five nodes: three [for the] control plane and then at least two worker nodes, so you have high availability for your applications.” 

“What we’re seeing in the edge deployments is customers want to go smaller because they’re constrained in terms of the amount of infrastructure and the ability to run at these edge locations,” Fernandes told DevClass. “So our first step was with essentially three node clusters, which means the control plane, those first three machines, run both the control plane and the worker nodes.”
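In practice, such a compact cluster is requested at install time by asking for zero dedicated workers, which makes the three control-plane machines schedulable for workloads. A sketch of the relevant install-config.yaml fragment, with surrounding fields omitted and names assumed from the defaults:

```yaml
# install-config.yaml excerpt for a compact (three-node) cluster;
# with zero compute replicas, the control-plane nodes also run workloads.
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 0
```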

Moving forward, the company is looking to go even smaller to support the use case further. “For this upcoming year we’re working towards just one independent machine or what we call a single-node cluster,” Fernandes added. “This is sort of Kubernetes at the edge, where it just all runs on one machine. And so you have to have control and support for the applications on that machine.”

Of course this leads to other predicaments that need figuring out, such as availability, an aspect Red Hat is currently working on together with its user base.

Said user base also has a keen interest in machine learning and data analytics, which will help decide the next steps. “What’s driving edge is the proliferation of data and the need to analyze it and make decisions closer to where that data is sourced,” Fernandes said. Red Hat is therefore looking to make Kubernetes (and therefore OpenShift) more capable of running data-intensive operations. 

“It kind of just goes to this point that we’re trying to make that Kubernetes is no longer just about running cloud native apps, or even just applications at all, even if it’s traditional apps. It’s moving beyond that into the realm of data services, analytics, AI and machine learning, databases, messaging and everything else,” Fernandes mused.

“[This], combined with smaller deployments at the cluster level, are kind of the two key enablers for what customers need out at the edge as they look to containerise more workloads.”

Speaking of containerising workloads, OpenShift Virtualization, a way to run virtual machines alongside containers and manage them as Kubernetes objects, has also made it to general availability after quite a while in technology preview. According to Fernandes, the feature is hoped to provide enterprises with a common platform for their various workloads, since “the majority of apps still don’t run in containers, they run VMs – and some of them may never move to containers, containers aren’t necessarily a fit for all workloads.” 

Other virtualization improvements include the completion of installer-provisioned infrastructure for VMware vSphere. This is meant to facilitate the use of OpenShift on the server virtualization platform without admins having to provision VMs and install Red Hat Enterprise Linux CoreOS, or manage nodes manually.
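With installer-provisioned infrastructure, the vSphere connection details move into install-config.yaml and the installer handles machine provisioning itself; the fragment below is a rough, abbreviated sketch with placeholder values, and the exact field set may differ from the shipping installer.

```yaml
# Hypothetical install-config.yaml excerpt for vSphere IPI;
# all values are placeholders, and the field list is abbreviated.
platform:
  vsphere:
    vCenter: vcenter.example.com
    username: administrator@vsphere.local
    password: example-password
    datacenter: dc1
    defaultDatastore: datastore1
```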

To make installation easier for the rest of the user base, the new command openshift-install explain lists “all the fields for supported install-config.yaml file versions including a short description explaining each resource”. Last but not least, version 4.5 is the first version of the platform to let users configure the Ingress Controller to enable access logs and, if needed, specify a wildcard route policy through said controller.
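Both Ingress Controller options live on the IngressController resource managed by the ingress operator; a sketch of what enabling them might look like, noting that the exact field names below are my reading of the operator API rather than something quoted from the release notes:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  logging:
    access:
      destination:
        type: Container          # send access logs to a sidecar's stdout
  routeAdmission:
    wildcardPolicy: WildcardsAllowed   # admit routes with wildcard hosts
```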
A complete list of changes can be found in the OpenShift documentation.