etcd strikes a balance, bolsters backend with 3.4 release

etcd hit v3.4 late last week, promising an improved storage backend, a revamped Raft voting process, and a whole new client balancer, among a host of other changes.

Etcd describes itself as a distributed, reliable key-value store for the most critical data of a distributed system. So, unsurprisingly, it is intimately bound up with Kubernetes, and was donated to the CNCF by Red Hat and the CoreOS team late last year.

The latest release promises “a number of performance improvements for large scale Kubernetes workloads”, and top of the list is a better storage backend.

These include changes to how it handles large numbers of concurrent read transactions, “even when there is no write”. Previously, “a storage backend commit operation on pending writes blocks incoming read transactions, even when there was no pending write. Now, the commit does not block reads, which improves long-running read transaction performance.”

Backend read transactions have also been made fully concurrent, which should increase throughput by 70 per cent and cut P99 write latency by 90 per cent in the presence of long-running reads. Changes have been made to lease storage to make operations more efficient, and the new release also includes an experimental lease checkpoint feature “to persist remaining time-to-live values through consensus”.
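To get a feel for the scenario that benefits, here is a minimal Go sketch using the clientv3 package: a long-running prefix read issued alongside a stream of writes. The endpoint and key names are placeholders, and the concurrency improvements themselves live inside the server's storage backend rather than in any client code.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"sync"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Placeholder endpoint; point this at your own cluster.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	var wg sync.WaitGroup

	// A long-running range read: in 3.4 the backend no longer holds
	// such reads behind pending write commits.
	wg.Add(1)
	go func() {
		defer wg.Done()
		resp, err := cli.Get(context.Background(), "app/", clientv3.WithPrefix())
		if err != nil {
			log.Println("read error:", err)
			return
		}
		fmt.Println("read", len(resp.Kvs), "keys")
	}()

	// Writes proceeding while the read is in flight.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 10; i++ {
			key := fmt.Sprintf("app/key-%d", i)
			if _, err := cli.Put(context.Background(), key, "value"); err != nil {
				log.Println("write error:", err)
			}
		}
	}()

	wg.Wait()
}
```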

The team have also made changes to the Raft voting process – etcd uses the Raft consensus algorithm for data replication. These include a pre-vote feature, which should address possible disruption when there is a network partition.
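Pre-vote comes from the underlying Raft library, where it is exposed as an option on raft.Config: a candidate first checks whether it could win an election before bumping its term, so a node returning from a partition does not unseat a healthy leader. The sketch below, using etcd's raft package, simply shows the option being switched on; the node ID and tick values are illustrative, and a real node would still need its Ready loop driven.

```go
package main

import (
	"go.etcd.io/etcd/raft"
)

func main() {
	storage := raft.NewMemoryStorage()

	cfg := &raft.Config{
		ID:              1,
		ElectionTick:    10,
		HeartbeatTick:   1,
		Storage:         storage,
		MaxSizePerMsg:   1024 * 1024,
		MaxInflightMsgs: 256,
		// Candidates probe peers before incrementing their term,
		// avoiding disruptive elections after a network partition.
		PreVote: true,
	}

	n := raft.StartNode(cfg, []raft.Peer{{ID: 1}})
	defer n.Stop()
}
```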

Last up is a new client balancer. Historically, etcd used an “old gRPC interface”, but “every gRPC dependency upgrade broke client behavior”, eating up development and debugging time to fix the fallout. The new client balancer will “simply roundrobin to the next endpoint whenever client gets disconnected from the current endpoint. It does not assume endpoint status. Thus, no more complicated status tracking is needed.” The new balancer also creates its own credential bundle to fix balancer failover against secure endpoints, which resolves a bug where kube-apiserver loses its connectivity to the etcd cluster when the first etcd server becomes unavailable.
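From the application's side, little changes: pass every cluster member to clientv3 and the 3.4 balancer handles failover between them. A minimal sketch follows; the endpoint names are placeholders, and a secure cluster would additionally supply a TLS configuration via the config's TLS field.

```go
package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Placeholder endpoints; list every member so the balancer can
	// round-robin to the next one when the current connection drops.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{
			"etcd-1:2379",
			"etcd-2:2379",
			"etcd-3:2379",
		},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// If etcd-1 is unreachable, the client fails over to the remaining
	// endpoints rather than losing connectivity to the cluster.
	if _, err := cli.Get(ctx, "health-check"); err != nil {
		log.Println("get failed:", err)
	}
}
```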

You can get further details and see the full list of changes here.