Service mesh Istio 1.11 reworks gateway management, experiments with multi-cluster services

The team behind the Istio service mesh has released version 1.11 of the project, which features gateway injection along with an experimental implementation of multi-cluster Kubernetes services.

The introduction of gateway injection promises admins an easier time managing and upgrading gateways, the components through which traffic enters and leaves an Istio mesh. Once updated, gateways can be managed with the same method as sidecar proxies, so that global proxy configuration also applies to them, which helps reduce drift between the components.
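With gateway injection, a gateway is declared as a plain Kubernetes Deployment and the injector fills in the proxy container. A minimal sketch might look as follows (the name and namespace are illustrative; the `image: auto` placeholder is replaced by the injector):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway   # illustrative name
  namespace: istio-ingress     # illustrative namespace
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        # select the gateway injection template instead of the sidecar one
        inject.istio.io/templates: gateway
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: istio-proxy
        image: auto   # placeholder the injector swaps for the proxy image
```

Because the pod is injected like any sidecar, upgrading the mesh's proxy version upgrades the gateway as well.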

Multi-cluster service support follows the Kubernetes project's Multi-Cluster Services API of the same name. When the new feature is enabled via the ENABLE_MCS_SERVICE_DISCOVERY flag, service endpoints can by default only be discovered from within the same cluster. To make them accessible throughout a mesh, the endpoints need to be exported first (a flag to automate this is available, though).
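Assuming an IstioOperator-based install, the flag could be set as a pilot environment variable, and a service then exported with the MCS API's ServiceExport resource (names below are illustrative):

```yaml
# Enable MCS-style service discovery on istiod (sketch)
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        ENABLE_MCS_SERVICE_DISCOVERY: "true"
---
# Export a service so other clusters in the mesh can discover it
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-service        # must match the Service's name
  namespace: my-namespace # must match the Service's namespace
```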

Other than that, the Istio team has done a lot of testing and documentation work on the CNI plugin since the last release. The plugin, which is meant to replace the istio-init container currently used to set up pod network traffic redirection, has now reached beta status. The external control plane introduced last year to improve the separation of concerns between cluster admins and users has also been promoted to beta.

Istio's command line tool istioctl now includes features like auto-completion for istioctl, namespaces, and Kubernetes pods and services, as well as a --dry-run flag for the uninstall command to get a better idea of what will be deleted before it is too late. A new --workloadIP flag can help to set the workload IP a sidecar proxy uses when automatically registering a workload entry.
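A dry run can be combined with the existing --purge flag to preview a full removal, for example:

```shell
# Preview which resources an uninstall would remove, without deleting anything
istioctl uninstall --purge --dry-run
```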

Experienced Istio users will have to slightly change their workflow for installing the mesh on remote clusters with an external control plane. Since the istiodRemote component now ships with all the charts needed for any cluster, users enable the resources needed for a config cluster through a new values.global.configCluster variable.
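Assuming the external profile is used for such a setup, the new variable would slot into an IstioOperator resource roughly like this sketch:

```yaml
# Config cluster installation with an external control plane (sketch)
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: external
  values:
    global:
      configCluster: true   # enable the config-cluster resources
```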

Teams relying on host header fallback for the destination_service label in Prometheus metrics for inbound traffic must enable this feature manually, since it is no longer the default behaviour. A new optimisation also means that multiple domains may now share a virtual host, which might change the output of filters matching specific virtual hosts. If this poses a problem, setting PILOT_ENABLE_ROUTE_COLLAPSE_OPTIMIZATION=false on the Istiod deployment is meant to help temporarily.
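As a sketch, the escape-hatch variable would be set as an environment variable on istiod's container (the container name `discovery` reflects the usual istiod deployment, but verify against your install):

```yaml
# Fragment of the istiod Deployment disabling route collapsing
spec:
  template:
    spec:
      containers:
      - name: discovery   # istiod's proxy-discovery container
        env:
        - name: PILOT_ENABLE_ROUTE_COLLAPSE_OPTIMIZATION
          value: "false"
```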

Release notes with additional information can be found on the Istio website. The project was created as a collaboration between teams at Google, IBM and Lyft and saw its first public release in 2017. Last year Google drew some attention to the project by giving Istio’s trademarks to Open Usage Commons. This pretty much buried hopes it might soon become a CNCF project, which was something IBM thought was already agreed upon. The OUC was founded by Google only shortly before with the stated goal of providing “management and guidance of open source project trademarks”.