Local Development with the MongoDB Atlas CLI and Docker

Nic Raboy • 9 min read • Published Feb 03, 2023 • Updated Feb 06, 2023
Docker • Node.js • MongoDB • CLI • Bash • JavaScript
Need a consistent development and deployment experience as developers work across teams and use different machines for their daily tasks? That is where Docker has you covered with containers. A common experience might include running a local version of MongoDB Community in one container and an application in another. This strategy works for some organizations, but what if you want to leverage all the benefits that come with MongoDB Atlas in addition to a container strategy for your application development?
In this tutorial, we'll see how to create a MongoDB-compatible web application, bundle it into a container with Docker, and manage the creation and destruction of MongoDB Atlas resources with the Atlas CLI during container deployment.
Note that this tutorial is intended for a development or staging setting on your local computer. Not all of the techniques found here are advisable in a production setting, so use your best judgment when it comes to the included code.
If you’d like to try the results of this tutorial, check out the repository and instructions on GitHub.

The prerequisites

There are a lot of moving parts in this tutorial, so you'll need a few things in place beforehand to be successful:
The Atlas CLI can create an Atlas account for you along with any keys and IDs, but for the scope of this tutorial, you'll need an account already created along with quick access to the "Public API Key," "Private API Key," "Organization ID," and "Project ID" within your account. You can see how to do this in the documentation.
Docker is going to be the true star of this tutorial. You don't need anything beyond Docker because the Node.js application and the Atlas CLI will be managed by the Docker container, not your host computer.
On your host computer, create a project directory. The name isn't important, but for this tutorial we'll use mongodbexample as the project directory.

Create a simple Node.js application with Express Framework and MongoDB

We're going to start by creating a Node.js application that communicates with MongoDB using the Node.js driver for MongoDB. The application will be simple in terms of functionality. It will connect to MongoDB, create a database and collection, insert a document, and expose an API endpoint to show the document with an HTTP request.
Within the project directory, create a new app directory for the Node.js application to live in. Within the app directory, using a command line, initialize the project and install the Express Framework and MongoDB Node.js driver dependencies.
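A minimal sketch, assuming npm is available on your path:

```bash
npm init -y
npm install express mongodb
```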
If you don't have Node.js installed, just create a package.json file within the app directory by hand.
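Its contents might look something like this; the dependency versions shown are assumptions, so use whatever current versions npm resolves:

```json
{
  "name": "mongodbexample",
  "version": "1.0.0",
  "main": "main.js",
  "dependencies": {
    "express": "^4.18.2",
    "mongodb": "^4.13.0"
  }
}
```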
Next, we'll need to define our application logic. Within the app directory, create a main.js file to hold the JavaScript code.
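A minimal sketch of main.js follows. The environment variable names (MONGODB_ATLAS_URI, MONGODB_DATABASE, MONGODB_COLLECTION, CLEANUP_ONDESTROY) and the seeded document are assumptions chosen to line up with how the variables are used throughout the rest of this tutorial:

```javascript
// main.js -- a minimal sketch of the application described in this tutorial
const { MongoClient } = require("mongodb");
const express = require("express");

const app = express();

// Connection string arrives through an environment variable (assumed name)
const mongoClient = new MongoClient(process.env.MONGODB_ATLAS_URI);

let database, collection;

// The one and only endpoint: return the first five documents in the collection
app.get("/data", async (request, response) => {
    try {
        const results = await collection.find({}).limit(5).toArray();
        response.send(results);
    } catch (error) {
        response.status(500).send({ message: error.message });
    }
});

const server = app.listen(3000, async () => {
    try {
        // Connect, get references to a database and collection, and seed a document
        await mongoClient.connect();
        database = mongoClient.db(process.env.MONGODB_DATABASE);
        collection = database.collection(process.env.MONGODB_COLLECTION);
        await collection.insertOne({ firstname: "Nic", lastname: "Raboy" });
        console.log("Listening at :3000");
    } catch (error) {
        console.error(error);
    }
});

// Optional cleanup when a termination event is received
process.on("SIGTERM", async () => {
    if (process.env.CLEANUP_ONDESTROY === "true") {
        await database.dropDatabase();
    }
    await mongoClient.close();
    server.close();
});
```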
There's a lot happening in the few lines of code above. We're going to break it down!
Before we break down the pieces, take note of the environment variables used throughout the JavaScript code. We'll be passing these values through Docker in the end so we have a more dynamic experience with our local development.
The first important snippet of code to focus on is the start of our application service:
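From the main.js sketch above, that's the listen callback:

```javascript
const server = app.listen(3000, async () => {
    try {
        await mongoClient.connect();
        database = mongoClient.db(process.env.MONGODB_DATABASE);
        collection = database.collection(process.env.MONGODB_COLLECTION);
        await collection.insertOne({ firstname: "Nic", lastname: "Raboy" });
        console.log("Listening at :3000");
    } catch (error) {
        console.error(error);
    }
});
```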
Using the client that was configured near the top of the file, we can connect to MongoDB. Once connected, we can get a reference to a database and collection. Neither the database nor the collection needs to exist beforehand because both are created automatically when data is inserted. With the reference to a collection, we insert a document and begin listening for API requests through HTTP.
This brings us to our one and only endpoint:
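In the sketch, that's the /data route handler:

```javascript
app.get("/data", async (request, response) => {
    try {
        const results = await collection.find({}).limit(5).toArray();
        response.send(results);
    } catch (error) {
        response.status(500).send({ message: error.message });
    }
});
```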
When the /data endpoint is consumed, the first five documents in our collection are returned to the user. If there was some issue, an error message is returned instead.
This brings us to something optional, but potentially valuable when it comes to a Docker deployment for local development:
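From the sketch, that's the termination handler:

```javascript
process.on("SIGTERM", async () => {
    if (process.env.CLEANUP_ONDESTROY === "true") {
        await database.dropDatabase();
    }
    await mongoClient.close();
    server.close();
});
```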
The above code says that when a termination event is sent to the application, drop the database we had created and close the connection to MongoDB as well as the Express Framework service. This is useful if we want to undo everything we created when the container stops. If you want your data to exist between container deployments, you'd skip the drop logic. On the other hand, if you are using the container as part of a test pipeline and want to clean up when you're done, the termination commands could be valuable.
So we have an environment variable heavy Node.js application. What's next?

Deploying a MongoDB Atlas cluster with network rules, user roles, and sample data

While we have the application, our MongoDB Atlas cluster may not be available to us. For example, maybe this is our first time being exposed to Atlas and nothing has been created yet. We need to be able to quickly and easily create a cluster, configure our IP access rules, specify users and permissions, and then connect with our Node.js application.
This is where the MongoDB Atlas CLI does the heavy lifting!
There are many different ways to create a script. Some like Bash, some like ZSH, and some like something else entirely. We're going to be using ZX, which is a JavaScript wrapper for Bash.
Within your project directory, not your app directory, create a docker_run_script.mjs file with the following code:
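What follows is a sketch of how the script might look. The cluster tier, cloud provider, region, default credentials, and the specific Atlas CLI subcommands are assumptions representing one reasonable configuration; adjust them for your own project:

```javascript
#!/usr/bin/env zx
// docker_run_script.mjs -- a minimal sketch; defaults and CLI choices are assumptions

$.verbose = true;

const runtimeTimestamp = Date.now();

// Default any environment variables that weren't provided
process.env.MONGODB_CLUSTER_NAME = process.env.MONGODB_CLUSTER_NAME || "examples";
process.env.MONGODB_USERNAME = process.env.MONGODB_USERNAME || "demo";
process.env.MONGODB_PASSWORD = process.env.MONGODB_PASSWORD || "password1234";
process.env.MONGODB_DATABASE = process.env.MONGODB_DATABASE || "business_" + runtimeTimestamp;
process.env.MONGODB_COLLECTION = process.env.MONGODB_COLLECTION || "people_" + runtimeTimestamp;
process.env.CLEANUP_ONDESTROY = process.env.CLEANUP_ONDESTROY || "false";

let app;

// Forward termination events to the Node.js application for a graceful shutdown
process.on("SIGTERM", () => {
    app.kill("SIGTERM");
});

try {
    // Create a cluster, wait until it is ready, and load the sample data
    await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1`;
    await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`;
    await $`atlas clusters loadSampleData ${process.env.MONGODB_CLUSTER_NAME}`;
} catch (error) {
    console.log(error.stdout);
}

try {
    // Add the current IP address to the access list and create a database user
    await $`atlas accessLists create --currentIp`;
    await $`atlas dbusers create --role readWriteAnyDatabase --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD}`;
    await sleep(10000);
} catch (error) {
    console.log(error.stdout);
}

try {
    // Get the connection string (assumes the JSON output exposes a standardSrv
    // field), add credentials with a URL object, and launch the application
    const result = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME} --output json`;
    const parsed = new URL(JSON.parse(result.stdout).standardSrv);
    parsed.username = process.env.MONGODB_USERNAME;
    parsed.password = process.env.MONGODB_PASSWORD;
    process.env.MONGODB_ATLAS_URI = parsed.toString();
    app = $`node main.js`;
    await app;
} catch (error) {
    console.log(error.stdout);
}
```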
Once again, we're going to break down what's happening!
Like with the Node.js application, the ZX script will be using a lot of environment variables. In the end, these variables will be passed with Docker, but you can hard-code them at any time if you want to test things outside of Docker.
The first important thing to note is the defaulting of environment variables:
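From the script sketch above:

```javascript
const runtimeTimestamp = Date.now();

process.env.MONGODB_CLUSTER_NAME = process.env.MONGODB_CLUSTER_NAME || "examples";
process.env.MONGODB_USERNAME = process.env.MONGODB_USERNAME || "demo";
process.env.MONGODB_PASSWORD = process.env.MONGODB_PASSWORD || "password1234";
process.env.MONGODB_DATABASE = process.env.MONGODB_DATABASE || "business_" + runtimeTimestamp;
process.env.MONGODB_COLLECTION = process.env.MONGODB_COLLECTION || "people_" + runtimeTimestamp;
process.env.CLEANUP_ONDESTROY = process.env.CLEANUP_ONDESTROY || "false";
```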
The above snippet isn't a requirement, but if you want to avoid setting or passing around variables, defaulting them could be helpful. In the above example, the use of runtimeTimestamp will allow us to create a unique database and collection should we want to. This could be useful if numerous developers plan to use the same Docker images to deploy containers because then each developer would be in a sandboxed area. If the developer chooses to undo the deployment, only their unique database and collection would be dropped.
Next we have the following:
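Also from the sketch:

```javascript
let app;

process.on("SIGTERM", () => {
    app.kill("SIGTERM");
});
```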
We have something similar in the Node.js application as well. We include it in the script because the script controls the application, so when we (or Docker) stop the script, the same stop event is passed to the application. If we didn't do this, the application would not shut down gracefully and the drop logic wouldn't be applied.
Now we have three try / catch blocks, each focusing on something particular.
The first block is responsible for creating a cluster with sample data:
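In the sketch, that's the first try / catch:

```javascript
try {
    await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1`;
    await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`;
    await $`atlas clusters loadSampleData ${process.env.MONGODB_CLUSTER_NAME}`;
} catch (error) {
    console.log(error.stdout);
}
```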
If the cluster already exists, an error will be caught. We have three blocks because in our scenario, it is alright if certain parts already exist.
Next we worry about users and access:
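The second block from the sketch:

```javascript
try {
    await $`atlas accessLists create --currentIp`;
    await $`atlas dbusers create --role readWriteAnyDatabase --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD}`;
    await sleep(10000);
} catch (error) {
    console.log(error.stdout);
}
```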
We want our local IP address to be added to the access list, and we want a user to be created. In this example, we are creating a user with extensive access, but you may want to refine the level of permission they have in your own project. For example, maybe the container deployment is meant to be a sandboxed experience. In that scenario, it makes sense for the created user to only have access to the database and collection in the sandbox. We sleep after these commands because they are not instant, and we want to make sure everything is ready before we try to connect.
Finally we try to connect:
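And the final block from the sketch. As noted in the comments, it assumes the CLI's JSON output includes a standardSrv connection string:

```javascript
try {
    const result = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME} --output json`;
    const parsed = new URL(JSON.parse(result.stdout).standardSrv);
    parsed.username = process.env.MONGODB_USERNAME;
    parsed.password = process.env.MONGODB_PASSWORD;
    process.env.MONGODB_ATLAS_URI = parsed.toString();
    app = $`node main.js`;
    await app;
} catch (error) {
    console.log(error.stdout);
}
```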
After the first try / catch block finishes, we'll have a connection string. We can finalize our connection string with a Node.js URL object by including the username and password, then we can run our Node.js application. Remember, the environment variables and any manipulations we made to them in our script will be passed into the Node.js application.

Transition the MongoDB Atlas workflow to containers with Docker and Docker Compose

At this point, we have an application and we have a script for preparing MongoDB Atlas and launching the application. It's time to get everything into a Docker image to be deployed as a container.
At the root of your project directory, add a Dockerfile file with the following:
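A sketch of the Dockerfile follows. The Node.js base image tag and the Atlas CLI version and download URL are assumptions; check the MongoDB download center for a current Linux archive:

```dockerfile
FROM node:18

WORKDIR /usr/src/app

COPY ./app/* ./
COPY ./docker_run_script.mjs ./

# Download and extract the MongoDB Atlas CLI (version and URL are assumptions)
RUN curl -LO https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_1.4.0_linux_x86_64.tar.gz
RUN tar -xzf mongodb-atlas-cli_1.4.0_linux_x86_64.tar.gz \
    && mv mongodb-atlas-cli_1.4.0_linux_x86_64/bin/atlas /usr/local/bin/

# Install ZX globally and install the application dependencies
RUN npm install -g zx
RUN npm install

# Run the ZX script when the container starts
CMD ["zx", "docker_run_script.mjs"]
```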
The custom Docker image will be based on a Node.js image which will allow us to run our Node.js application as well as our ZX script.
After our files are copied into the image, we run a few commands to download and extract the MongoDB Atlas CLI.
Finally, we install ZX and our application dependencies and run the ZX script. The CMD instruction that runs the script executes when the container is started; everything else happens when the image is built.
We could build our image from this Dockerfile file, but it is a lot easier to manage when there is a Compose configuration. Within the project directory, create a docker-compose.yml file with the following YAML:
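A sketch of the Compose file, with placeholder credentials and an assumed service name:

```yaml
version: "3.9"
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      MONGODB_ATLAS_PUBLIC_API_KEY: "YOUR_PUBLIC_API_KEY_HERE"
      MONGODB_ATLAS_PRIVATE_API_KEY: "YOUR_PRIVATE_API_KEY_HERE"
      MONGODB_ATLAS_ORG_ID: "YOUR_ORG_ID_HERE"
      MONGODB_ATLAS_PROJECT_ID: "YOUR_PROJECT_ID_HERE"
      MONGODB_CLUSTER_NAME: "examples"
      MONGODB_USERNAME: "demo"
      MONGODB_PASSWORD: "password1234"
      # MONGODB_DATABASE: "business"
      # MONGODB_COLLECTION: "people"
      CLEANUP_ONDESTROY: "true"
```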
You'll want to swap the environment variable values with your own. In the above example, the database and collection variables are commented out so the defaults would be used in the ZX script.
To see everything in action, execute the following from the command line on the host computer:
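With a recent version of Docker, that's:

```bash
docker compose up
```

On older installations, the equivalent is docker-compose up.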
The above command will use the docker-compose.yml file to build the Docker image if it doesn't already exist. The build process will bundle our files, install our dependencies, and obtain the MongoDB Atlas CLI. When Compose deploys a container from the image, the environment variables will be passed to the ZX script responsible for configuring MongoDB Atlas. When ready, the ZX script will run the Node.js application, further passing the environment variables. If the CLEANUP_ONDESTROY variable was set to true, when the container is stopped the database and collection will be removed.

Conclusion

The MongoDB Atlas CLI can be a powerful tool for bringing MongoDB Atlas to your local development experience on Docker. Essentially, you swap a local version of MongoDB for Atlas CLI logic that manages a more feature-rich cloud version of MongoDB.
MongoDB Atlas enhances the MongoDB experience by giving you access to more features such as Atlas Search, Charts, and App Services, which allow you to build great applications with minimal effort.
