Local Development with the MongoDB Atlas CLI and Docker
Need a consistent development and deployment experience as developers work across teams and use different machines for their daily tasks? That is where Docker has you covered with containers. A common experience might include running a local version of MongoDB Community in a container and an application in another container. This strategy works for some organizations, but what if you want to leverage all the benefits that come with MongoDB Atlas in addition to a container strategy for your application development?
It should be noted that this tutorial is intended for a development or staging setting on your local computer. It is not advised to use the techniques found in this tutorial in a production setting. Use your best judgment when it comes to the code included.
There are a lot of moving parts in this tutorial, so you'll need a few things in place to be successful:

- A MongoDB Atlas account
- Docker installed on your host computer
- Node.js installed, if you want to run the application outside of a container
On your host computer, create a project directory. The name isn't important, but for this tutorial we'll use mongodbexample as the project directory.
We're going to start by creating a Node.js application that communicates with MongoDB using the MongoDB Node.js driver. The application will be simple in terms of functionality. It will connect to MongoDB, create a database and collection, insert a document, and expose an API endpoint to show the document with an HTTP request.
Within the project directory, create a new app directory for the Node.js application to live. Within the app directory, using a command line, execute the following:
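Assuming Node.js and npm are available on your machine, initializing the project and installing the Express Framework and the MongoDB Node.js driver might look like this:

```shell
npm init -y
npm install express mongodb
```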
If you don't have Node.js installed, just create a package.json file within the app directory with the following contents:
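Something along these lines would work as a starting point; the name, the `main.js` entry point, and the dependency versions are assumptions for illustration:

```json
{
  "name": "mongodbexample",
  "version": "1.0.0",
  "main": "main.js",
  "scripts": {
    "start": "node main.js"
  },
  "dependencies": {
    "express": "^4.18.2",
    "mongodb": "^5.7.0"
  }
}
```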
There's a lot happening in this application, so we're going to break it down!
The first important snippet of code to focus on is the start of our application service:
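A minimal sketch of that startup logic, assuming a hypothetical `app/main.js` entry point, environment variables named `MONGODB_ATLAS_URI`, `MONGODB_DATABASE`, and `MONGODB_COLLECTION`, and port 3000:

```javascript
// main.js -- a sketch; the variable names, environment variables, and
// port number are assumptions for illustration.
const express = require("express");
const { MongoClient } = require("mongodb");

const app = express();
const client = new MongoClient(process.env.MONGODB_ATLAS_URI);

let collection, server;

(async () => {
  // Connect to MongoDB Atlas, then grab database and collection references.
  // Neither needs to exist yet; they are created on the first insert.
  await client.connect();
  const database = client.db(process.env.MONGODB_DATABASE);
  collection = database.collection(process.env.MONGODB_COLLECTION);

  // Insert a document and begin listening for API requests over HTTP.
  await collection.insertOne({ message: "Hello from a container!" });
  server = app.listen(3000, () => console.log("Listening at :3000"));
})();
```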
Using the client that was configured near the top of the file, we can connect to MongoDB. Once connected, we can get a reference to a database and collection. The database and collection don't need to exist beforehand because they will be created automatically when data is inserted. With the reference to a collection, we insert a document and begin listening for API requests through HTTP.
This brings us to our one and only endpoint:
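A sketch of the endpoint, assuming the Express `app` and the MongoDB `collection` reference from the startup logic are in scope:

```javascript
// Return the first five documents in the collection, or an error message
// if something went wrong with the query.
app.get("/data", async (req, res) => {
  try {
    const results = await collection.find({}).limit(5).toArray();
    res.json(results);
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
});
```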
When the `/data` endpoint is consumed, the first five documents in our collection are returned to the user. Otherwise, if there was an issue, an error message is returned.
This brings us to something optional, but potentially valuable when it comes to a Docker deployment for local development:
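A sketch of that shutdown logic, assuming `client` is the MongoClient instance, `server` is the value returned by `app.listen`, and a `CLEANUP_ONDESTROY` environment variable toggles the cleanup:

```javascript
// On termination, optionally drop the database created for this run, then
// close the MongoDB connection and the Express Framework service.
process.on("SIGTERM", async () => {
  if (process.env.CLEANUP_ONDESTROY === "true") {
    await client.db(process.env.MONGODB_DATABASE).dropDatabase();
  }
  await client.close();
  server.close(() => process.exit(0));
});
```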
The above code says that when a termination event is sent to the application, drop the database we had created and close the connection to MongoDB as well as the Express Framework service. This could be useful if we want to undo everything we had created when the container stops. If you want your data to exist between container deployments, this isn't necessary because persistence would be required. On the other hand, maybe you are using the container as part of a test pipeline and you want to clean up when you're done; in that scenario, the termination commands could be valuable.
So, we have an environment-variable-heavy Node.js application. What's next?
While we have the application, our MongoDB Atlas cluster may not be available to us. For example, maybe this is our first time being exposed to Atlas and nothing has been created yet. We need to be able to quickly and easily create a cluster, configure our IP access rules, specify users and permissions, and then connect with our Node.js application.
This is where the MongoDB Atlas CLI does the heavy lifting!
Within your project directory, not your app directory, create a docker_run_script.mjs file with the following code:
Once again, we're going to break down what's happening!
Like with the Node.js application, the ZX script will be using a lot of environment variables. In the end, these variables will be passed with Docker, but you can hard-code them at any time if you want to test things outside of Docker.
The first important thing to note is the defaulting of environment variables:
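A sketch of that defaulting logic; the `examples_` and `people_` name prefixes are assumptions for illustration:

```javascript
// Default the database and collection names when they are not provided.
// `runtimeTimestamp` makes each run's database and collection unique.
const runtimeTimestamp = Date.now();
process.env.MONGODB_DATABASE =
  process.env.MONGODB_DATABASE || `examples_${runtimeTimestamp}`;
process.env.MONGODB_COLLECTION =
  process.env.MONGODB_COLLECTION || `people_${runtimeTimestamp}`;
```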
The above snippet isn't a requirement, but if you want to avoid setting or passing around variables, defaulting them could be helpful. In the above example, the use of `runtimeTimestamp` will allow us to create a unique database and collection should we want to. This could be useful if numerous developers plan to use the same Docker images to deploy containers, because then each developer would be in a sandboxed area. If the developer chooses to undo the deployment, only their unique database and collection would be dropped.
Next we have the following:
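A sketch of the signal forwarding, assuming the Node.js application is launched later in the script through ZX's `$` and its process handle is held in an `app` variable:

```javascript
// Forward the stop signal to the running Node.js application so that its
// own shutdown handler (and optional cleanup) gets a chance to run.
process.on("SIGTERM", () => {
  if (app) {
    app.kill("SIGTERM");
  }
});
```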
We have something similar in the Node.js application as well. We have it in the script because the script ultimately controls the application. So when we (or Docker) stop the script, the same stop event is passed to the application. If we didn't do this, the application would not have a graceful shutdown, and the drop logic wouldn't be applied.
Now, we have three try/catch blocks, each focusing on something in particular.
The first block is responsible for creating a cluster with sample data:
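The commands could look something like the following sketch; the tier, provider, and region flags are assumptions, and `MONGODB_CLUSTER_NAME` comes from the environment:

```javascript
// Create a free-tier cluster, wait until it is ready, then load the
// Atlas sample datasets. If the cluster already exists, the create
// command exits non-zero and we land in the catch block.
try {
  await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1`;
  await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`;
  await $`atlas clusters sampleData load ${process.env.MONGODB_CLUSTER_NAME}`;
} catch (error) {
  console.log("Cluster might already exist, continuing...");
}
```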
If the cluster already exists, an error will be caught. We have three blocks because in our scenario, it is alright if certain parts already exist.
Next we worry about users and access:
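A sketch of that block, assuming `MONGODB_USERNAME` and `MONGODB_PASSWORD` environment variables:

```javascript
// Add the current IP address to the project access list and create a
// database user. `atlasAdmin` is broad; scope it down for a real sandbox.
try {
  await $`atlas accessLists create --currentIp`;
  await $`atlas dbusers create atlasAdmin --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD}`;
  // These operations are not instant, so give Atlas a moment to apply them.
  await sleep(10000);
} catch (error) {
  console.log("Access entry or user might already exist, continuing...");
}
```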
We want our local IP address to be added to the access list and we want a user to be created. In this example, we are creating a user with extensive access, but you may want to refine the level of permission they have in your own project. For example, maybe the container deployment is meant to be a sandboxed experience. In that scenario, it makes sense for the created user to access only the database and collection in the sandbox. We `sleep` after these commands because they are not instant, and we want to make sure everything is ready before we try to connect.
Finally we try to connect:
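A sketch of that final block, assuming the application entry point is `app/main.js` and that the second line of the CLI output contains the connection string (an assumption about the output format):

```javascript
// Ask the Atlas CLI for the cluster's connection string, inject the
// credentials with a URL object, then launch the Node.js application.
try {
  const result = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME}`;
  const parsed = new URL(result.stdout.split("\n")[1]);
  parsed.username = process.env.MONGODB_USERNAME;
  parsed.password = process.env.MONGODB_PASSWORD;
  process.env.MONGODB_ATLAS_URI = parsed.toString();
  const app = $`node app/main.js`;
  await app;
} catch (error) {
  console.error(error);
}
```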
After the first command in this try/catch block finishes, we'll have a connection string. We can finalize it with a Node.js URL object by including the username and password, and then we can run our Node.js application. Remember, the environment variables, and any manipulations we made to them in our script, will be passed into the Node.js application.
At this point, we have an application and we have a script for preparing MongoDB Atlas and launching the application. It's time to get everything into a Docker image to be deployed as a container.
At the root of your project directory, add a Dockerfile file with the following:
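A sketch of what the Dockerfile might look like; the base image tag, the Atlas CLI version, and the download URL are assumptions, so check the MongoDB Download Center for current values:

```dockerfile
FROM node:18

WORKDIR /usr/src/app

# Copy the ZX script and the Node.js application into the image.
COPY . .

# Download and extract the MongoDB Atlas CLI, then put it on the PATH.
# The version and URL are assumptions -- verify before using.
RUN curl -LO "https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_1.5.0_linux_x86_64.tar.gz" \
    && tar -xzf mongodb-atlas-cli_1.5.0_linux_x86_64.tar.gz \
    && cp mongodb-atlas-cli_1.5.0_linux_x86_64/bin/atlas /usr/local/bin/

# Install ZX globally along with the application dependencies.
RUN npm install -g zx \
    && npm install --prefix ./app

EXPOSE 3000

# Everything above happens at build time; the script runs at container start.
CMD ["zx", "docker_run_script.mjs"]
```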
The custom Docker image will be based on a Node.js image which will allow us to run our Node.js application as well as our ZX script.
After our files are copied into the image, we run a few commands to download and extract the MongoDB Atlas CLI.
Finally, we install ZX and our application dependencies and run the ZX script. The `CMD` command for running the script executes when the container is run. Everything else is done when the image is built.
We could build our image from this Dockerfile file, but it is a lot easier to manage when there is a Compose configuration. Within the project directory, create a docker-compose.yml file with the following YAML:
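A sketch of the Compose file; the service name and the placeholder values are assumptions, and the Atlas CLI picks up its authentication from the `MONGODB_ATLAS_*` environment variables:

```yaml
version: "3.9"
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      # Atlas CLI authentication and project targeting -- use your own values.
      MONGODB_ATLAS_PUBLIC_API_KEY: "YOUR-PUBLIC-API-KEY"
      MONGODB_ATLAS_PRIVATE_API_KEY: "YOUR-PRIVATE-API-KEY"
      MONGODB_ATLAS_ORG_ID: "YOUR-ORG-ID"
      MONGODB_ATLAS_PROJECT_ID: "YOUR-PROJECT-ID"
      # Values consumed by the ZX script and the Node.js application.
      MONGODB_CLUSTER_NAME: "examples"
      MONGODB_USERNAME: "demo"
      MONGODB_PASSWORD: "password1234"
      # MONGODB_DATABASE: "examples"
      # MONGODB_COLLECTION: "people"
      CLEANUP_ONDESTROY: "true"
```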
You'll want to swap the environment variable values with your own. In the above example, the database and collection variables are commented out so the defaults would be used in the ZX script.
To see everything in action, execute the following from the command line on the host computer:
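Assuming a recent Docker installation with the Compose plugin (older installs use the standalone `docker-compose` command instead):

```shell
docker compose up
```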
The above command will use the docker-compose.yml file to build the Docker image if it doesn't already exist. The build process will bundle our files, install our dependencies, and obtain the MongoDB Atlas CLI. When Compose deploys a container from the image, the environment variables will be passed to the ZX script responsible for configuring MongoDB Atlas. When ready, the ZX script will run the Node.js application, further passing the environment variables. If the `CLEANUP_ONDESTROY` variable was set to `true`, the database and collection will be removed when the container is stopped.
MongoDB Atlas enhances the MongoDB experience by giving you access to more features such as Atlas Search, Charts, and App Services, which allow you to build great applications with minimal effort.