The MongoDB Connector for Apache Kafka is now GA

Seth Payne and Tifani Ramic

#Kafka

Today we are very happy to announce the general availability of the MongoDB Connector for Apache Kafka. The Connector allows you to easily build robust and reactive data pipelines that take advantage of stream processing between datastores, applications, and services in real time.

Kafka has become extremely popular in the past several years, and for good reason: it provides a flexible and powerful streaming platform, enabling standardized communication between a wide range of data platforms and systems.

At MongoDB World last June, we announced the availability of the MongoDB Connector for Apache Kafka for beta testing and use. Since then, we have worked closely with users to identify the Connector enhancements needed to better support Kafka-enabled use cases, such as ETL tasks (both simple and complex) and building applications powered by MongoDB change data.

Most notably, the Connector now provides source-side support for publishing existing collection data to Kafka topics. This allows you to load an entire collection into a Kafka topic at the outset, and then publish changes (powered by MongoDB change streams) thereafter. This, combined with improved handling of the deletion, creation, or emptying of collections, makes MongoDB a robust source from which to easily filter and move data to many other Kafka sinks.
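For illustration, a minimal source connector configuration might look like the sketch below; the connection URI, database, collection, and connector names are placeholders. The `copy.existing` option loads the existing collection contents into the topic first, after which the connector publishes change stream events, and an optional `pipeline` can filter which events are published.

```properties
# Minimal sketch of a MongoDB source connector config (placeholder names).
name=mongo-source-example
connector.class=com.mongodb.kafka.connect.MongoSourceConnector
connection.uri=mongodb://localhost:27017
database=inventory
collection=orders

# Copy the existing collection data into the topic first,
# then continue publishing change stream events.
copy.existing=true

# Events are published to a topic named <topic.prefix>.<database>.<collection>.
topic.prefix=mongo

# Optional aggregation pipeline to filter change events,
# e.g. pass through inserts and updates only.
pipeline=[{"$match": {"operationType": {"$in": ["insert", "update"]}}}]
```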

All of this is made possible by building on the excellent work of Hans-Peter Grahsl, the author of a popular community-driven Kafka sink connector for MongoDB. The connector developed by Hans-Peter has provided solid support for MongoDB as a sink and served as the foundation for the development of the official MongoDB Connector for Apache Kafka that we are proud to announce as generally available today.
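To round out the picture, the sink side is configured in much the same way. The sketch below, again with placeholder names, writes records from a Kafka topic into a MongoDB collection.

```properties
# Minimal sketch of a MongoDB sink connector config (placeholder names).
name=mongo-sink-example
connector.class=com.mongodb.kafka.connect.MongoSinkConnector

# Kafka topic to consume from, and the MongoDB destination to write to.
topics=orders-enriched
connection.uri=mongodb://localhost:27017
database=inventory
collection=orders_enriched
```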

Check out the interview below, where Hans-Peter talks about our next-generation connector.

To learn more, and to download the connector, visit the Kafka Connector product page.