Fresh ECK sample out with toys for cross-cluster busters: Elastic Cloud on K8s 1.1

Elastic has pushed out a fresh release of its cloud-native product, Elastic Cloud on Kubernetes (ECK). Version 1.1 introduces remote cluster support as well as a way to specify users and roles as code.
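A minimal sketch of what the declarative approach could look like on an Elasticsearch resource, assuming an auth block referencing Kubernetes secrets (the secret names here are placeholders and the exact layout should be checked against the 1.1 API reference):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.6.2
  nodeSets:
  - name: default
    count: 1
  auth:
    fileRealm:
    - secretName: my-users   # placeholder secret holding users/users_roles files
    roles:
    - secretName: my-roles   # placeholder secret holding a roles.yml file
```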

The new remote cluster feature has been added to facilitate the set-up of search or replication mechanisms across a number of Elasticsearch clusters within the same Kubernetes environment, which can be useful when searching across distributed data centres or multiple versions of the search engine. ECK is meant to take care of configuration and authentication/authorisation when wiring the clusters together, with users only providing the name and namespace of the remote cluster.
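In practice that should boil down to something like the following sketch, where cluster-one declares cluster-two, a second ECK-managed cluster in another namespace, as a remote (all names are placeholders):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: cluster-one
  namespace: ns-one
spec:
  version: 7.6.2
  nodeSets:
  - name: default
    count: 3
  remoteClusters:
  - name: cluster-two        # alias under which the remote shows up in cross-cluster search
    elasticsearchRef:
      name: cluster-two      # placeholder second ECK-managed cluster
      namespace: ns-two
```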

Once updated, the operator also provides basic instrumentation for the application performance monitoring (APM) agent, which will supposedly make it easier to stay on top of requests making their way through the Elasticsearch client.
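Elastic's Go APM agent is usually switched on through its standard ELASTIC_APM_* environment variables, so wiring the instrumentation up to an APM Server could look roughly like this pod template fragment (the endpoint and secret name are placeholders):

```yaml
env:
- name: ELASTIC_APM_SERVER_URL
  value: "https://apm.example.com:8200"   # placeholder APM Server endpoint
- name: ELASTIC_APM_SECRET_TOKEN
  valueFrom:
    secretKeyRef:
      name: apm-token                     # placeholder secret holding the agent token
      key: secret-token
```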

Configurations a user provides take precedence over the ones included by the operator in ECK 1.1, and there are now ways to customise the transport service.
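A sketch of both mechanisms in one spec might read as follows, on the assumption that the transport service is customised through a transport.service block mirroring the existing http.service one:

```yaml
spec:
  nodeSets:
  - name: default
    count: 3
    config:
      node.ml: false      # user-provided value, which per 1.1 wins over the operator default
  transport:
    service:              # assumption: customisation mirrors the http.service block
      metadata:
        labels:
          team: search
```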

Admins looking for ways to restrict which Kibana or APM resources can be associated with a backend Elasticsearch resource can try specifying a service account for the system to use for those connections to improve access control.
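A sketch of the idea on a Kibana resource, with all names placeholders and on the assumption that the account is set via a serviceAccountName field and that the operator needs the restriction feature switched on:

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
  namespace: kibana-ns
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: elasticsearch-sample        # backend cluster in another namespace
    namespace: elasticsearch-ns
  serviceAccountName: kibana-sample   # account the operator can check RBAC against
```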

Other than that, the ECK team improved secure string generation, added validating webhook configurations for all resource types to flag up modification errors early, and included an operator flag allowing users to define a default container registry.
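The registry flag should come in handy in air-gapped setups; a sketch of the operator's container arguments, assuming the flag is called --container-registry and points at a private mirror standing in for docker.elastic.co:

```yaml
containers:
- name: manager
  args:
  - manager
  - --container-registry=registry.example.com   # assumed flag name; placeholder mirror
```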

It also took a look at worst-case scenarios and added a tool called reattach-pv to the hack folder of the project. If a cluster was accidentally deleted, taking its PersistentVolumeClaims with it, users can try reattach-pv to recreate the claims and get their data back.

However, its creators caution admins to back up data and perform a dry run, noting the tool is to be used at users’ own risk. Those who want to try it nevertheless will have to make sure that the PersistentVolumes of the lost cluster still exist in the Released state and that the current default kubectl context targets the desired Kubernetes cluster.
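Both preconditions are easy enough to verify with stock kubectl before letting the tool loose:

```sh
# the deleted cluster's volumes must still be around, in the Released state
kubectl get pv | grep Released

# and kubectl must be pointing at the Kubernetes cluster in question
kubectl config current-context
```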

Breaking changes included in the release are the removal of the operator roles and a new convention of naming container ports after the protocol they serve, which could break services that refer to ports by name rather than by number. Details can be found in the Elastic documentation.
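Anyone exposing Elasticsearch through their own Service will want to double-check that part. A sketch of a Service that would need its targetPort updated, assuming the HTTP port is now named after the protocol it serves (https when TLS is on):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: quickstart-custom
spec:
  selector:
    elasticsearch.k8s.elastic.co/cluster-name: quickstart   # ECK's pod label
  ports:
  - name: https
    port: 9200
    targetPort: https   # assumed new port name; referring to 9200 sidesteps the rename
```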