Ten release candidates in, Rancher 2.6 is now good to go. The first big release since Rancher was acquired by Linux distributor SUSE last winter focuses on the Kubernetes management platform’s UI, while making progress on hosted cluster provisioning and overall security.
The latter is especially important, given that Rancher is mainly aiming at an enterprise clientele. In version 2.6, the Rancher team has, for instance, added Rancher and identity provider usernames to the Kubernetes and Rancher API audit logs. This was requested to give admins a way of finding out who did what, which strengthens the case for a self-service model. The update also contains a feature flag to activate one-way hashing of Rancher tokens and generates a random bootstrap password when Rancher is started for the first time.
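For those wondering where that generated bootstrap password ends up: on a Kubernetes-hosted install it is stored as a secret, which can be read back with kubectl roughly as follows. This is a sketch assuming the conventional cattle-system namespace and the secret name used in Rancher's documentation; verify both against your own install.

```shell
# Retrieve the randomly generated bootstrap password (Rancher 2.6,
# Helm/Kubernetes install; namespace and secret name may differ).
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword | base64decode}}'
```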
In terms of cluster provisioning, version 2.6 is the first release to let users register AKS clusters provisioned with other tools in Rancher and manage upgrades and configuration from there. Capabilities to provision private AKS endpoints and multiple node pools, and to use Rancher cloud credentials for authentication, are also available. Teams preferring GKE have gained the tools to add network tags to their freshly provisioned node pools and to make use of project network isolation to restrict network access between projects on imported GKE clusters.
The reworked UI sees the Cluster Manager and Cluster Explorer components merged into a new Cluster Explorer, which houses things like the cluster dashboard, features for project and namespace management, and an apps and marketplace view. Logging and monitoring functionality has been collected into a Cluster Tools section, and the navigation has been rearranged and simplified to help users find what they need.
Other than that, Rancher 2.6 offers organisations the option to change the logo image, display name, and primary colour of the UI to fit their brand, and lets admins direct users to a custom support page.
To gather feedback on new features early, Rancher 2.6 includes three tech previews. One of those is a framework for provisioning RKE2 (RKE Government) clusters directly from inside the Rancher UI or API. The addition sports node driver support, so users can create VMs on major cloud providers and SSH into nodes via the UI as needed. Rancher also ships cluster templates that admins can prepare to define node pools, cloud credentials, and tools used in new clusters. Another preview brings a way of provisioning custom RKE2 clusters with Windows nodes.
Behavioural changes to expect, especially for those updating from earlier versions, include that local Kubernetes clusters can no longer be hidden. Rancher recommends using the restricted-admin role introduced in version 2.5 to counteract this, ensuring selected team members still have permission to access downstream clusters while not everyone can change the local Kubernetes cluster.
Another adjustment means that, to roll back Rancher, admins will now have to scale it down to zero replicas before moving back to an earlier version. You also might have to get familiar with Rancher displaying machines (the cluster's definition of what should be running) and kube nodes (Kubernetes node objects, only accessible when the K8s API server is running) instead of nodes only.
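In practice, the scale-down step looks something like the following. This is a sketch assuming a Helm-based install with the usual deployment name and namespace; the actual restore step depends on how the earlier version was backed up.

```shell
# Stop all Rancher replicas before rolling back (Rancher 2.6 requirement).
# Assumes the default 'rancher' deployment in the 'cattle-system' namespace.
kubectl -n cattle-system scale deployment rancher --replicas=0

# ...then restore or reinstall the earlier Rancher version using your
# normal mechanism (e.g. the rancher-backup operator or a Helm rollback)
# before scaling the deployment back up.
```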
Longhorn project bumps version up to 1.2
The team behind Rancher-initiated Kubernetes storage project Longhorn has been busy as well and just made version 1.2 available. Besides the more obvious addition of support for the latest Kubernetes version, the update helped the project bulk up on safety and availability with new options to encrypt and duplicate volumes. It also provides users with backup target, backup volume, and backup custom resources and controllers, which allow asynchronous backup operations and are therefore expected to improve performance.
In version 1.1.1, Longhorn re-introduced a base image feature to let users set QCOW2 or RAW images as the backing image of a Longhorn volume, so that the storage project can be integrated with VMs. The initial implementation only included functionality to download backing images from a remote source, however. Version 1.2 adds ways to upload backing images locally or create them from existing volumes, which should help with common use cases like generating VMs from a freshly updated volume.
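As a rough illustration of the download-style workflow, a backing image can be declared as a custom resource and applied to the cluster. The field names below are a sketch based on Longhorn's BackingImage CRD; the resource name and image URL are placeholders, so check the 1.2 release docs before relying on the exact schema.

```shell
# Declare a backing image sourced from a remote QCOW2 file
# (hypothetical name and URL; assumes Longhorn's default namespace).
kubectl apply -f - <<'EOF'
apiVersion: longhorn.io/v1beta1
kind: BackingImage
metadata:
  name: example-base-image
  namespace: longhorn-system
spec:
  sourceType: download   # 1.2 also supports uploading locally or
                         # exporting from an existing volume
  sourceParameters:
    url: https://example.com/images/base.qcow2
EOF
```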