Linkerd 2.9 arrives on Kubernetes wave, becomes more multicluster-savvy

Kubernetes service mesh Linkerd 2.9 has made it across the finish line, featuring zero-config mutual TLS for all TCP connections and support for ARM, EndpointSlice and topology-aware service routing.

The latter two are implemented in the project's control plane and build on features Kubernetes has added in recent releases. For Linkerd, topology-aware service routing support means that the destination controller filters endpoints according to a service's topology preferences when presenting service discovery updates to a proxy. To make use of the new EndpointSlice resource, Linkerd has to be installed with the enable-endpoint-slices flag set.
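
As a minimal sketch, enabling the two features might look like the following, assuming the enable-endpoint-slices install flag mentioned in the release and the alpha topologyKeys field Kubernetes offered on the Service spec at the time (names should be checked against current docs):

    # Install Linkerd with the EndpointSlice-backed destination controller
    linkerd install --enable-endpoint-slices | kubectl apply -f -

    # A Service expressing topology preferences; Linkerd's destination
    # controller filters endpoints to honour this ordering
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
      - port: 80
      topologyKeys:
      - "kubernetes.io/hostname"
      - "topology.kubernetes.io/zone"
      - "*"
    EOF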

Linkerd also works closely with the monitoring tool Prometheus, which now ships as a separate add-on, making it easier to disable if unwanted, and can persist data to a volume instead of keeping it in memory. Teams that prefer to pair the service mesh with an external Prometheus instance can point their Helm configuration at it via the new global.prometheusUrl variable.
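
Wiring the mesh to an external Prometheus via Helm could then look roughly like this; global.prometheusUrl comes from the release, while the prometheus.enabled switch for the bundled add-on is an assumption about the 2.9 chart (certificate values are omitted for brevity):

    # Disable the bundled Prometheus add-on (assumed chart value) and point
    # Linkerd at an existing instance via the new global.prometheusUrl variable
    helm install linkerd2 linkerd/linkerd2 \
      --set prometheus.enabled=false \
      --set global.prometheusUrl=http://prometheus.monitoring.svc.cluster.local:9090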

Multicluster handling was another area of improvement in the run-up to the Linkerd 2.9 release, which is why teams who have installed the corresponding component should consult the upgrade guide before updating their systems. Among other things, the Linkerd team changed the approach used for mirroring services: source clusters can now use a label selector to specify which services should be exported from the target, rather than relying on annotations.
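
In practice, exporting a service under the new scheme boils down to labelling it in the target cluster; the mirror.linkerd.io/exported label shown here is the default selector in the 2.9 documentation and can be customised:

    # Mark a service in the target cluster for mirroring into source clusters
    kubectl --context=target label service web mirror.linkerd.io/exported=true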

Instead of using a single service-mirror controller, Linkerd now installs a separate controller for each target cluster and comes with an unlink command to remove multicluster links. Admins using Helm to set up a mesh can create multiple service accounts for finer-grained access control. In addition, a new section for multicluster gateway metrics on the tool's dashboard surfaces the status of a system's gateways.
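
A rough sketch of the link/unlink workflow, assuming kubectl contexts named source and target (multicluster link predates 2.9; unlink is its new counterpart):

    # Generate a link from the target cluster and apply it on the source cluster,
    # which installs a dedicated service-mirror controller for that target
    linkerd --context=target multicluster link --cluster-name target \
      | kubectl --context=source apply -f -

    # Remove the link and its mirrored resources again
    linkerd --context=source multicluster unlink --cluster-name target \
      | kubectl --context=source delete -f -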

Under the hood, the Linkerd team was also busy reworking the project's proxy, aiming at lower latencies under high concurrency and a smaller performance impact when logging. The proxy also learned to better handle a class of DNS errors commonly encountered during node outages, making the project more suitable for high-availability scenarios.

Better usability could also help with Linkerd adoption, so the team added fish shell completions to the linkerd command and renamed the --addon-config flag of its command line interface to --config. This is meant to clarify that the flag isn't just for add-on configuration but can be used to set any Helm value.
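
By way of example, the two usability tweaks could be exercised like this (the fish completions path is a common convention rather than anything Linkerd-specific, and the values file name is hypothetical):

    # Generate fish shell completions for the linkerd command
    linkerd completion fish > ~/.config/fish/completions/linkerd.fish

    # Pass arbitrary Helm values during an upgrade via the renamed flag
    linkerd upgrade --config my-values.yaml | kubectl apply -f -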

In terms of security, the Linkerd team highlighted that the mesh's zero-config mTLS has been extended to cover all TCP connections, authenticating every connection in the cluster, and that Linkerd can now work with authenticated Docker registries.
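
Admins who want to see the effect can inspect which connections are mutually authenticated; the edges command shown here has been part of the CLI since earlier releases and is offered as a sketch rather than a 2.9-specific feature:

    # List connections between deployments, including whether they are secured by mTLS
    linkerd edges deployment -n myapp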

Linkerd was initially developed at Buoyant and was the fifth project to be hosted by the Cloud Native Computing Foundation. Almost four years in, it remains a CNCF incubating project and competes head-on with the Google-bred service mesh Istio.