Learn How to Leverage MongoDB Data Within Kafka with New Tutorials!
The MongoDB Connector for Apache Kafka documentation now includes new tutorials! These tutorials introduce you to the key concepts behind the connector, and by the end you'll understand how to move data between MongoDB and Apache Kafka. The tutorials are as follows:
Change streams are a MongoDB server feature that provides change data capture (CDC) capabilities for MongoDB collections. The source connector relies on change streams to move data from MongoDB to a Kafka topic. In this tutorial, you will create a change stream and read change stream events, all from a Python application.
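To give you a taste of what that looks like, here is a minimal sketch of watching a collection with PyMongo. The connection string, database, and collection names are placeholders, not the tutorial's exact values.

```python
# Minimal change stream example using PyMongo.
# Connection string, database, and collection names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
collection = client["Tutorial1"]["orders"]

# watch() opens a change stream; each iteration blocks until the next
# change event (insert, update, delete, ...) arrives on the collection.
with collection.watch(full_document="updateLookup") as stream:
    for change in stream:
        print(change["operationType"], change.get("fullDocument"))
```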
In this tutorial, you will configure a source connector to read data from a MongoDB collection into an Apache Kafka topic and examine the content of the event messages.
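For context, registering a source connector amounts to posting a JSON configuration to the Kafka Connect REST API. The sketch below shows the general shape of that request, assuming Kafka Connect listens on localhost:8083; the database, collection, and topic prefix are illustrative rather than the tutorial's exact values.

```python
# Register a MongoDB source connector with the Kafka Connect REST API.
# Host, database, collection, and topic prefix are illustrative.
import json
import requests

source_config = {
    "name": "mongo-source-example",
    "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://mongo1:27017/?replicaSet=rs0",
        "database": "Tutorial1",
        "collection": "orders",
        # Change events are published to <topic.prefix>.<database>.<collection>.
        "topic.prefix": "demo",
    },
}

resp = requests.post(
    "http://localhost:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(source_config),
)
resp.raise_for_status()
```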
In this tutorial, you will configure a sink connector to copy data from a Kafka topic into a MongoDB cluster and then write a Python application that produces data to the topic.
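The producer side of that tutorial can be sketched roughly as follows, assuming the kafka-python package and placeholder broker and topic names:

```python
# Produce JSON messages for a MongoDB sink connector to consume.
# Broker address, topic name, and document contents are placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda doc: json.dumps(doc).encode("utf-8"),
)

# Each message value becomes a document the sink connector writes to MongoDB.
producer.send("Tutorial2.pets", {"name": "Garfield", "type": "cat"})
producer.flush()
```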
In this tutorial, you will configure both a MongoDB source connector and a sink connector to replicate data between two collections using the MongoDB CDC handler.
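The piece that ties the two connectors together is the CDC handler setting on the sink. A rough sketch of such a sink configuration is shown below; the names, topic, and connection URI are illustrative, and it is registered against the same /connectors endpoint shown earlier.

```python
# Sink connector configuration that applies records with the MongoDB CDC
# handler, so the destination collection mirrors the source collection.
# Names, topic, and connection URI are illustrative.
sink_config = {
    "name": "mongo-cdc-sink-example",
    "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
        "connection.uri": "mongodb://mongo1:27017/?replicaSet=rs0",
        "topics": "demo.Tutorial1.source",
        "database": "Tutorial1",
        "collection": "destination",
        # Interpret incoming records as MongoDB change stream events.
        "change.data.capture.handler":
            "com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler",
    },
}
```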
Time series collections efficiently store sequences of measurements over a period of time, dramatically improving query performance for time-based data. In this tutorial, you will configure both a source and a sink connector to replicate the data from a collection into a time series collection.
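For reference, creating the target time series collection with PyMongo looks roughly like the sketch below; the database, collection, and field names are assumptions, not the tutorial's exact schema.

```python
# Create a time series collection to receive the replicated measurements.
# Database, collection, and field names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client["Tutorial3"]

# timeField is required; metaField and granularity tune how documents
# are bucketed internally.
db.create_collection(
    "stock_data_ts",
    timeseries={
        "timeField": "tx_time",
        "metaField": "symbol",
        "granularity": "minutes",
    },
)
```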
These tutorials run locally within a Docker Compose environment that includes Apache Kafka, Kafka Connect, and MongoDB. Before starting them, follow and complete the Tutorial Setup. You will work through the steps using a tutorial shell and containers available on Docker Hub. The tutorial shell includes tools such as the new MongoDB Shell (mongosh), kafkacat, and helper scripts that make it easy to configure Kafka Connect from the command line.
If you have any questions or feedback on the tutorials, please post them on the MongoDB Community Forums.