Tired of DIYing multi-cloud Kafka deployments? Confluent Cloud might be able to help

Confluent Cloud Q3 '21

Version Q3 ‘21 of the managed event streaming platform Confluent Cloud is now available, kicking off the company’s newly adopted quarterly release cycle with a major improvement for teams working with global architectures or simply a variety of cloud providers.

Cloud Cluster Linking has been available in preview for a while but has only now hit general availability, which is reason enough for the Confluent Cloud team to highlight it as the major addition of the release. Cluster Linking is a fully managed service meant to give teams an easy way to move data between Confluent clusters and build globally connected or multi-cloud Kafka deployments for better reliability and reduced access times.

To implement such a system, users set up links between clusters in different geographical regions, clouds, or organisations and select the Kafka topics they want replicated. The topics, Kafka’s (and therefore Confluent’s) flavour of ordered event logs, will then be copied by the service, which also makes sure the data stays synchronised across clusters. Users need to be aware, however, that a destination cluster can currently aggregate data from at most five different sources.
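In practice, setting up a link boils down to a couple of Confluent CLI calls on the destination side. The following is a minimal sketch only: the cluster IDs, bootstrap server, and link and topic names are placeholders, and exact flag names may differ between CLI versions, so the Cluster Linking documentation should be treated as authoritative.

```sh
# Create a cluster link on the destination cluster, pointing at the source
# (IDs, hostnames and the credentials file below are illustrative)
confluent kafka link create my-link \
  --cluster lkc-dest01 \
  --source-cluster lkc-src01 \
  --source-bootstrap-server pkc-src.us-east-1.aws.confluent.cloud:9092 \
  --config-file source-credentials.config

# Mirror a topic across the link; the destination copy is read-only
# and is kept in sync by the managed service
confluent kafka mirror create orders --link my-link --cluster lkc-dest01
```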

According to the documentation, Cluster Linking for Confluent Cloud is available on all dedicated, internet-networked clusters across all cloud providers. The Cluster Linking REST APIs for creating and updating cluster links, however, are for now only available on clusters created after 16 August 2021 and will be rolled out gradually to previously created ones. Customers who can’t wait that long should get in touch with the company. The same goes for those who wish to use Cluster Linking with privately networked clusters, as the Transit Gateway, VPC Peering, PrivateLink, and VNet Peering networking types aren’t currently supported.

Other features that matured into GA with the release include multi-tenant cluster provisioning APIs, Admin REST APIs for Apache Kafka, and ksqlDB pull queries. ksqlDB is a database for stream processing applications built by the Confluent team; until now, however, Confluent Cloud only supported push queries when working with it.

Developers creating applications that needed to look up static information from time to time therefore always had to maintain a second data store. Since the new pull query functionality realises “point-in-time lookups directly on derived tables and streams”, this shouldn’t be necessary anymore, which should also make app infrastructure a bit easier to manage.
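The difference is easiest to see in ksqlDB’s SQL dialect. In the sketch below (stream, table, and column names are made up for illustration), the push query subscribes to a continuous feed of changes, while the pull query returns the current value once and terminates:

```sql
-- Derived table materialised from a stream of events
CREATE TABLE user_locations AS
  SELECT user_id, LATEST_BY_OFFSET(location) AS location
  FROM user_events
  GROUP BY user_id
  EMIT CHANGES;

-- Push query: streams every update to the client until cancelled
SELECT * FROM user_locations EMIT CHANGES;

-- Pull query: a one-shot, point-in-time lookup of the current state
SELECT location FROM user_locations WHERE user_id = 'user-42';
```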

Users who handle especially large amounts of data and tend to overprovision to prevent data loss have been able to opt for Infinite Storage for AWS for a while already. The feature is meant to help keep costs low, as it promises to invoice only the storage actually used rather than the storage provisioned. Confluent’s latest version adds a bit of choice by now also supporting Infinite Storage for Google Cloud on standard and dedicated clusters.

More choice is also available when it comes to Confluent’s fully managed source and sink connectors. Thanks to two additions, app developers using Confluent Cloud can now subscribe to Salesforce Platform events and write them to Apache Kafka topics, or stream Kafka events to Azure Cosmos DB containers.
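Fully managed connectors in Confluent Cloud are configured declaratively and created via the CLI or REST API. A rough sketch of what a Cosmos DB sink configuration could look like follows; the property keys and values here are illustrative placeholders rather than the exact configuration schema, which the connector’s documentation defines.

```json
{
  "name": "cosmos-db-sink",
  "connector.class": "CosmosDbSink",
  "topics": "orders",
  "connect.cosmos.connection.endpoint": "https://my-account.documents.azure.com:443/",
  "connect.cosmos.databasename": "ordersdb",
  "connect.cosmos.containers.topicmap": "orders#orders-container",
  "kafka.api.key": "<API_KEY>",
  "kafka.api.secret": "<API_SECRET>",
  "tasks.max": "1"
}
```

A file like this would then be passed to the CLI or the Connect REST endpoint to provision the connector; again, the documented property names should be consulted before use.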

Additional details and documentation links can be found on the Confluent blog. Confluent is a commercial distribution of the Apache Kafka platform for data streaming. Confluent Cloud is this distribution’s fully-managed version.