Red Hat’s Ansible team has pushed out version 2.0 of its Content Collection for Kubernetes, which sees the project getting a new home, becoming less dependent on the OpenShift client, and improving on performance and patching.
The Collection is meant to provide Ansible users with plugins and modules to manage applications running on OpenShift or Kubernetes clusters, as well as facilitate cluster provisioning and maintenance.
Since it started off as a community initiative but is now supported by Red Hat, the company decided to use the 2.0 release to move the Collection somewhere more official-sounding to reflect this change. It can now be found under the kubernetes.core namespace, as opposed to the community.kubernetes namespace it previously occupied.
Teams referencing the old namespace in their playbooks are encouraged to update them accordingly — though community.kubernetes is planned to redirect to the new home soon anyway.
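In practice, the migration mostly comes down to swapping the fully qualified collection name in existing tasks. A minimal before/after sketch (the task itself is illustrative):

```yaml
# Before: task referencing the old community namespace
- name: Create a namespace
  community.kubernetes.k8s:
    name: testing
    api_version: v1
    kind: Namespace
    state: present

# After: the same task using the new kubernetes.core namespace
- name: Create a namespace
  kubernetes.core.k8s:
    name: testing
    api_version: v1
    kind: Namespace
    state: present
```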
To make sure the community still feels included, no matter where the code is stored, Red Hat made the alignment of the Collection “with the latest technologies and communal efforts” into one of the objectives of the current release. Amongst other things, this led to replacing the OpenShift client with the official Kubernetes Python one, since concerns of the Collection becoming too dependent on another Red Hat project kept being raised.
Regarding tech efforts, the Ansible Content Collection for Kubernetes has mainly learned to support more patching use cases. Namely, version 2.0 includes a patched state, which can be used to patch an existing object instead of creating a new one, and a k8s_json_patch module for modifying objects via JSON patch operations.
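A sketch of what the two additions look like in a playbook — resource names and namespaces are placeholders, and the exact parameters should be checked against the module documentation:

```yaml
# Patch an existing Deployment in place rather than recreating it
- name: Bump the replica count of an existing Deployment
  kubernetes.core.k8s:
    state: patched
    kind: Deployment
    name: web
    namespace: testing
    definition:
      spec:
        replicas: 3

# Apply JSON patch operations (RFC 6902 style) to an object
- name: Add a label via JSON patch
  kubernetes.core.k8s_json_patch:
    kind: Pod
    name: web-0
    namespace: testing
    patch:
      - op: add
        path: /metadata/labels/app
        value: web
```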
Another major area of improvement for the release was performance, to “enhance the experience with Ansible when automating changes across a large number of resources”. To speed up the process for Ansible Playbooks manipulating many (as in hundreds of) Kubernetes objects, the Collection was fitted with what is called Ansible turbo mode.
Once the cloud.common Collection is installed and enabled, the system won’t create a new Kubernetes API connection for each request. Instead, it will reuse the ones already there — which removes overhead.
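Assuming the documented switch is the ENABLE_TURBO_MODE environment variable (worth verifying against the cloud.common docs for your version), enabling it could look roughly like this:

```yaml
# Prerequisite:
#   ansible-galaxy collection install cloud.common
- hosts: localhost
  environment:
    ENABLE_TURBO_MODE: true   # assumed variable name; reuses one cached API connection
  tasks:
    - name: Create many ConfigMaps over a single shared connection
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: "cm-{{ item }}"
            namespace: testing
      loop: "{{ range(0, 100) | list }}"
```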
Version 2.0 of the Collection also puts an end to workarounds thought up to process multiple template resource definitions without running into performance issues along the way. Developers can now pass a list of templates to the template parameter of the k8s module. Should one template fail, the task fails by default, though this behaviour can be changed by setting continue_on_error to true. In that case, execution continues and an error message is recorded for the failing item.
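Put together, a multi-template task might be sketched like this — the template file names and variables are placeholders:

```yaml
# Render several Jinja2 templates in a single k8s task
- name: Apply multiple templated resource definitions
  kubernetes.core.k8s:
    state: present
    namespace: testing
    continue_on_error: true   # log failures per item instead of aborting the task
    template:
      - path: deployment.yaml.j2
      - path: service.yaml.j2
        variables:
          service_port: 8080
```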