

Streaming Data from MongoDB to BigQuery Using Confluent Connectors

Venkatesh Shanbhag, Ozan Güzeldereli • 4 min read • Published Jan 24, 2023 • Updated Jul 11, 2023
Google Cloud • AI • Atlas
Many enterprise customers of MongoDB and Google Cloud run their core operational workloads on MongoDB and their analytics on BigQuery. To make it seamless to move data between MongoDB and BigQuery, MongoDB introduced Google Dataflow templates. Though these templates cater to most of the common use cases, some effort is still required to set up the change stream (CDC) Dataflow template: users must write their own custom code to monitor the changes happening on their MongoDB Atlas collection, and developing, supporting, and operating that custom code is time-consuming.
MongoDB to BigQuery pipeline using Dataflow templates
The additional effort required to set up CDC for the MongoDB to BigQuery Dataflow templates can be avoided by using Confluent Cloud. Confluent is a full-scale data platform capable of continuous, real-time processing, integration, and data streaming across any infrastructure, and it provides pluggable, declarative data integration through its connectors. With Confluent’s MongoDB source connectors, the process of creating and deploying a custom module for CDC can be eliminated. Confluent Cloud provides a MongoDB Atlas source connector that can be easily configured from Confluent Cloud to read changes from the MongoDB source and publish them to a topic. Reading from MongoDB as the source is only half of the solution; a Confluent BigQuery sink connector completes the pipeline by reading the changes published to the topic and writing them to a BigQuery table.
Architecture for MongoDB to BigQuery pipeline using Confluent connectors
This article explains how to set up the MongoDB cluster, the Confluent cluster, the Confluent MongoDB Atlas source connector that reads changes from your MongoDB cluster, the BigQuery dataset, and the Confluent BigQuery sink connector.
As prerequisites, we need a MongoDB Atlas cluster, a Confluent Cloud cluster, and a Google Cloud account. If you don’t have these accounts yet, the next sections will help you set them up.

Set up your MongoDB Atlas cluster

To set up your first MongoDB Atlas cluster, you can register for MongoDB either from Google Marketplace or from the registration page. Once registered, you can set up your first free-tier shared M0 cluster. Follow the steps in the MongoDB documentation to configure the database user and network settings for your cluster.
MongoDB Atlas Sandbox cluster set up
Once the cluster and access setup is complete, we can load some sample data into the cluster. Navigate to “Browse Collections” from the Atlas homepage and click on “Create Database.” Name your database “Sample_Company” and your collection “Sample_Employee.”
Insert your first document into the database:
{
  "Name": "Jane Doe",
  "Address": {
    "Phone": { "$numberLong": "999999" },
    "City": "Wonderland"
  }
}
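If you prefer to insert this document programmatically instead of through the Atlas UI, here is a minimal sketch using the PyMongo driver. The connection string is a placeholder, and the database and collection names assume the “Sample_Company”/“Sample_Employee” naming used above:
# Minimal sketch (assumes PyMongo is installed and the placeholder URI is
# replaced with your own Atlas connection string).
from pymongo import MongoClient
from bson.int64 import Int64

client = MongoClient("mongodb+srv://<user>:<password>@mongodbcluster.mongodb.net")
collection = client["Sample_Company"]["Sample_Employee"]

document = {
    "Name": "Jane Doe",
    "Address": {
        "Phone": Int64(999999),  # stored as a 64-bit integer ($numberLong)
        "City": "Wonderland",
    },
}
result = collection.insert_one(document)
print("Inserted document with _id:", result.inserted_id)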

Set up a BigQuery dataset on Google Cloud

As a prerequisite for setting up the pipeline, we need to create a dataset in the same region as that of the Confluent cluster. Please go through the Google documentation to understand how to create a dataset for your project. Name your dataset “Sample_Dataset.”
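If you would rather create the dataset programmatically than through the console, here is a minimal sketch using the google-cloud-bigquery Python client. The project ID and region below are placeholders; use your own project and the same region as your Confluent cluster:
# Minimal sketch (assumes the google-cloud-bigquery client library is installed
# and application default credentials are configured).
from google.cloud import bigquery

client = bigquery.Client(project="googleproject-id")  # placeholder project ID
dataset = bigquery.Dataset("googleproject-id.Sample_Dataset")
dataset.location = "us-central1"  # must match your Confluent cluster's region
client.create_dataset(dataset, exists_ok=True)
print("Dataset ready:", dataset.dataset_id)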

Set up the Confluent Cloud cluster and connectors

With MongoDB Atlas and the BigQuery dataset in place, Confluent is the platform we will use to build the data pipeline between them.
To sign up for Confluent Cloud, you can either go to the Confluent website or register from Google Marketplace. New signups receive $400 to spend during their first 30 days, and no credit card is required. To create the cluster, you can follow the first step in the documentation. One important thing to consider: the cluster's region should be the same as the region of the BigQuery dataset.

Set up your MongoDB Atlas source connector on Confluent

Depending on the settings, it may take a few minutes to provision your cluster. Once the cluster is provisioned, we can move the sample data from the MongoDB cluster to the Confluent cluster.
Confluent’s MongoDB Atlas source connector reads the change stream data from the MongoDB database and writes it to a topic. This connector is fully managed by Confluent, so you don’t need to operate it yourself. To set up the connector, navigate to Confluent Cloud and search for the MongoDB Atlas source connector under “Connectors.” The connector documentation provides the steps to provision the connector.
Below is a sample configuration for the MongoDB source connector setup.
  1. For Topic selection, leave the prefix empty.
  2. Generate Kafka credentials and click on “Continue.”
  3. Under Authentication, provide the details:
    1. Connection host: Only provide the MongoDB hostname, in the format “mongodbcluster.mongodb.net.”
    2. Connection user: The MongoDB connection user name.
    3. Connection password: The password of the user being authenticated.
    4. Database name: Sample_Company, and collection name: Sample_Employee.
  4. Under configuration, select the output Kafka record format as JSON_SR and click on “Continue.”
  5. Leave sizing to default and click on “Continue.”
  6. Review and click on “Continue.”
Confluent connector configuration for MongoDB source connector
{
  "name": "MongoDbAtlasSourceConnector",
  "config": {
    "connector.class": "MongoDbAtlasSource",
    "name": "MongoDbAtlasSourceConnector",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "****************",
    "kafka.api.secret": "****************************************************************",
    "connection.host": "mongodbcluster.mongodb.net",
    "connection.user": "testuser",
    "connection.password": "*********",
    "database": "Sample_Company",
    "collection": "Sample_Employee",
    "output.data.format": "JSON_SR",
    "publish.full.document.only": "true",
    "tasks.max": "1"
  }
}
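Once the source connector is running, you can optionally confirm that change events are reaching the topic by consuming a few records with the confluent-kafka Python client. The bootstrap server, API key and secret, and topic name below are placeholders; with an empty topic prefix, the connector derives the topic name from the database and collection:
# Minimal sketch (assumes the confluent-kafka package is installed and the
# placeholder bootstrap server, credentials, and topic name are replaced).
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<bootstrap-server>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<kafka-api-key>",
    "sasl.password": "<kafka-api-secret>",
    "group.id": "mongodb-cdc-check",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["Sample_Company.Sample_Employee"])  # assumed topic name

try:
    for _ in range(10):
        msg = consumer.poll(5.0)
        if msg is None:
            continue
        if msg.error():
            print("Consumer error:", msg.error())
            continue
        # JSON_SR values carry a 5-byte Schema Registry header before the JSON payload.
        print(msg.value()[5:].decode("utf-8"))
finally:
    consumer.close()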

Set up Confluent Cloud: BigQuery sink connector

After setting up BigQuery, we need to provision a sink connector to stream the data from the Confluent cluster to Google BigQuery. The Confluent Cloud BigQuery sink connector can stream table records from Kafka topics to Google BigQuery. The table records are streamed at high throughput rates to facilitate analytical queries in real time.
To set up the BigQuery sink connector, follow the steps in its documentation.
{
  "name": "BigQuerySinkConnector_0",
  "config": {
    "topics": "AppEngineTest.emp",
    "input.data.format": "JSON_SR",
    "connector.class": "BigQuerySink",
    "name": "BigQuerySinkConnector_0",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "****************",
    "kafka.api.secret": "****************************************************************",
    "keyfile": "****************************************************************************",
    "project": "googleproject-id",
    "datasets": "Sample_Dataset",
    "auto.create.tables": "true",
    "auto.update.schemas": "true",
    "tasks.max": "1"
  }
}
To see the data being loaded into BigQuery, make some changes to the MongoDB collection. Any inserts and updates will be captured from MongoDB and pushed to BigQuery.
Insert the document below into your MongoDB collection using the MongoDB Atlas UI. (Navigate to your collection and click on “INSERT DOCUMENT.”)
{
  "Name": "John Doe",
  "Address": {
    "Phone": { "$numberLong": "8888888" },
    "City": "Narnia"
  }
}
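Once the sink connector has flushed the records, you can confirm they reached BigQuery by querying the dataset. The sketch below uses the google-cloud-bigquery client; the fully qualified table name is an assumption, so check your dataset for the table the connector actually created and adjust the query accordingly:
# Minimal sketch (the table name below is an assumption; replace it with the
# table the sink connector created in Sample_Dataset).
from google.cloud import bigquery

client = bigquery.Client(project="googleproject-id")  # placeholder project ID
query = """
    SELECT *
    FROM `googleproject-id.Sample_Dataset.Sample_Company_Sample_Employee`
    LIMIT 10
"""
for row in client.query(query).result():
    print(dict(row))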

Summary

MongoDB and Confluent are positioned at the heart of many modern data architectures, helping developers easily build robust and reactive data pipelines that stream events between applications and services in real time. In this example, we provided a template to build a pipeline from MongoDB to BigQuery on Confluent Cloud. Confluent Cloud provides more than 200 connectors to build such pipelines between many solutions. Although the solutions change, the general approach is the same: use these connectors to build your pipelines.

What's next?

  1. To understand the features of the Confluent Cloud managed MongoDB sink and source connectors, you can watch this webinar.
  2. Learn more about the BigQuery sink connector.
  3. Set up a data pipeline for MongoDB Atlas and BigQuery using Dataflow.
  4. Set up your first MongoDB cluster using Google Marketplace.
  5. Run analytics in BigQuery using BigQuery ML.
