HAProxy Ingress Controller 1.5 introduces mTLS support, gives load balancing experts more power

Forklift and container, image via Shutterstock

Version 1.5 of the HAProxy ingress controller for Kubernetes has landed, providing users with extra security measures and a way to run ingress controllers outside of a Kubernetes cluster without losing track of pod changes.

The new release comes with two new annotations, server-ca and server-crt, which let the ingress controller perform mutual TLS authentication with the backend servers it routes traffic to. server-ca names a Kubernetes secret containing a CA certificate for verifying the backend server's certificate, while server-crt names a secret holding a client certificate the controller presents to the server in return. If verification isn't wanted, the server-ssl annotation is still available to establish a TLS connection without that step.
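Applied to a service, the new annotations could look like the following sketch; the haproxy.org/ annotation prefix and the secret names are illustrative assumptions rather than values taken from the release notes:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-backend
  annotations:
    # Secret with the CA certificate used to verify the backend's certificate
    haproxy.org/server-ca: "default/backend-ca"
    # Secret with the client certificate presented to the backend in return
    haproxy.org/server-crt: "default/backend-client-cert"
spec:
  selector:
    app: api-backend
  ports:
    - port: 443
```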

Starting with v1.5, the HAProxy ingress controller can also handle basic HTTP authentication. Users can try the new functionality by setting auth-type to basic-auth on a ConfigMap or Ingress definition, adding an auth-secret annotation pointing at the Kubernetes secret that holds the credentials, and enabling TLS encryption to keep passwords safe in transit.
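A minimal Ingress using the feature might be sketched as below, assuming the haproxy.org/ annotation prefix and that secrets named app-credentials (user entries) and app-tls (certificate) already exist:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: protected-app
  annotations:
    haproxy.org/auth-type: basic-auth
    # Secret holding the username/password credentials
    haproxy.org/auth-secret: default/app-credentials
spec:
  tls:
    # TLS keeps the credentials encrypted on the wire
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 8080
```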

With regard to HTTP, admins now also have a --configmap-errorfile controller argument at their disposal, which they can use to have HAProxy return a custom response for certain HTTP error codes. Additionally, the new annotations src-ip-header and request-redirect allow setting the source IP from an HTTP request header, and redirecting requests via an HTTP Location header update.
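Both features could be sketched as follows; the ConfigMap key format, the haproxy.org/ prefix, and the example values are assumptions for illustration:

```yaml
# Referenced on the controller as --configmap-errorfile=default/haproxy-errorfiles
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-errorfiles
  namespace: default
data:
  "503": |
    HTTP/1.1 503 Service Unavailable
    Content-Type: text/html

    <html><body>We are down for maintenance, back shortly.</body></html>
---
apiVersion: v1
kind: Service
metadata:
  name: legacy-app
  annotations:
    # Take the client source IP from this request header
    haproxy.org/src-ip-header: X-Client-IP
    # Redirect matching requests via an HTTP Location header
    haproxy.org/request-redirect: new.example.com
spec:
  selector:
    app: legacy-app
  ports:
    - port: 80
```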

To make the controller a bit more useful to teams migrating to a Kubernetes-based architecture, HAProxy has devised a way to run the ingress controller outside a cluster. The approach is somewhat akin to that of a reverse proxy, as it sees the controller launched on a separate server and accessing the pod network from there.

To make this work, users need binaries for HAProxy and the Kubernetes Ingress Controller, a kubeconfig file granting access to the Kubernetes cluster, and a suitable network configuration, although adding a route from the server to the pod network should do.

The external approach is mainly considered of interest to teams slowly transitioning towards a Kubernetes-based infrastructure, since it isn't able to scale as well as the regular setup in which the controller runs inside the cluster itself. It might, however, also be worth considering for debugging or prototyping purposes.

Organisations further along the cloud-native journey might, on the other hand, be interested in the new option to add HAProxy directives which aren't available as annotations directly into a configuration. Directives go into a global-config-snippet in the ingress controller's ConfigMap to be applied globally, or into a backend-config-snippet annotation on a service definition if they are only meant to be valid in one backend.
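The two snippet mechanisms might be used as sketched below; the directives shown, the resource names, and the haproxy.org/ prefix are examples, not taken from the release:

```yaml
# Globally applied directives go into the controller's ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress
  namespace: haproxy-controller
data:
  global-config-snippet: |
    ssl-default-bind-options no-sslv3
    tune.bufsize 32768
---
# Backend-scoped directives are attached to the service they affect
apiVersion: v1
kind: Service
metadata:
  name: app
  annotations:
    haproxy.org/backend-config-snippet: |
      http-send-name-header x-dst-server
spec:
  selector:
    app: app
  ports:
    - port: 8080
```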

Under the hood, the ingress controller team prepared the modularisation of the code base for better maintainability and added various tests to the project. It also reworked the code to make wider use of HAProxy Map files, which is meant to simplify some configurations. Details can be found in the project's release notes.