Developing Your Applications More Efficiently with MongoDB Atlas Serverless Instances
Nic Raboy · Published Oct 17, 2022 • Updated Feb 03, 2023
If you're a developer, managing your database is probably not where you want to spend your time. You likely don't want to provision or resize clusters as the demands of your application change, and you certainly don't want to break the bank because you scaled something incorrectly.
With MongoDB Atlas, you have a few deployment options to choose from when it comes to your database. While you could choose a pre-provisioned shared or dedicated cluster, you're still stuck having to size and estimate the database resources you will need and subsequently managing your cluster capacity to best fit demand. While a pre-provisioned cluster isn’t necessarily a bad thing, it might not make sense if your development becomes idle or you’re expecting frequent periods of growth or decline. Instead, you can opt for a serverless instance to help remove the capacity management burden and free up time to dedicate to writing code. Serverless instances provide an on-demand database endpoint for your application that will automatically scale up and down to zero with application demand and only charge you based on your usage.
In this short and sweet tutorial, we'll see how easy it is to get started with a MongoDB Atlas serverless instance and how to begin to develop an application against it.
We're going to start by deploying a new MongoDB Atlas serverless instance. There are numerous ways to deploy MongoDB, but for this example, we'll stick to the web dashboard and a little point-and-click.
From the MongoDB Atlas dashboard, click the "Create" button.
Choose "Serverless" as well as a cloud vendor where this instance should live.
If possible, choose a cloud vendor that matches where your application will live. This will give you the best possible latency between your database and your application.
Once you click the "Create Instance" button, your instance is ready to go!
You're not in the clear yet though. You won't be able to use your Atlas serverless instance outside of the web dashboard until you create some database access and network access rules.
We'll start with a new database user.
Choose the type of authentication that makes the most sense for you. To keep things simple for this tutorial, I recommend choosing the "Password" option.
While you could use a "Built-in Role" when it comes to user privileges, your best bet for any application is to define "Specific Privileges" depending on what the user should be allowed to do. For this project, we'll be using an "example" database and a "people" collection, so it makes sense to grant readWrite access to only that database and collection.
Use your best judgment when creating users and defining access.
With a user created, we can move on to network access, the final step before we can start developing against our database.
In the "Network Access" tab, add the IP addresses that should be allowed access. If you're developing and testing locally like I am, just add your local IP address. Just remember to add the IP range for your servers or cloud vendor when the time comes. You can also take advantage of private networking if needed.
With the database and network access out of the way, let's grab the URI string that we'll be using in the next step of the tutorial.
From the Database tab, click the "Connect" button for your serverless instance.
Choose the programming language you wish to use and make note of the URI.
Need more help getting started with serverless instances? Check out this video that can walk you through it.
At this point, you should have an Atlas serverless instance deployed. We're going to take a moment to connect to it from application code and perform some basic CRUD operations.
On your local computer, create a project directory and navigate into it with your command line. You'll want to execute the following commands once it becomes your working directory:
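The exact commands aren't reproduced on this page, but based on the description that follows, the setup looks something like this:

```shell
# Initialize a new Node.js project with default settings
npm init -y

# Install the MongoDB Node.js driver
npm install mongodb

# Create the file that will hold our application code
touch main.js
```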
With the above commands, we've initialized a Node.js project, installed the MongoDB Node.js driver, and created a main.js file to contain our code.
So, what's happening in the above code?
First, we define our client with the URI string for our serverless instance. This is the same string that you took note of earlier in the tutorial and it should contain a username and password.
With the client, we can establish a connection and get a reference to the database and collection that we want to use. The database and collection do not need to exist prior to running your application.
Next, we are doing three different operations with the MongoDB Query API. First, we are inserting a new document into our collection. After the insert is complete, assuming our try/catch block didn't find an error, we find all documents where the lastname matches. For this example, there should only ever be one matching document, but your own data may vary. If a document was found, it will be printed to the console. Finally, we are deleting any document where the lastname matches.
By the end of this, no documents should exist in your collection, assuming you are following along with my example. However, a document did (at some point in time) exist in your collection — we just deleted it.
Alright, so we have a basic example of how to build an application around an on-demand database, but it didn’t really highlight the benefit of why you’d want to. So, what can we do about that?
We know that pre-provisioned clusters and serverless instances both work well, and from a development perspective, you'll end up with the same results using the same code.
Let’s come up with a scenario where a serverless instance in Atlas might lower your development costs and reduce the scaling burden to match demand. Let’s say that you have an online store, but not just any kind of online store. This online store sees mild traffic most of the time, and a 1000% spike in traffic every Friday between 9 AM and 12 PM because of a lightning-style deal that you run.
We’ll leave mild traffic up to your imagination, but a 1000% bump is nothing small and would likely require some kind of scaling intervention every Friday on a pre-provisioned cluster. That, or you’d need to pay for a larger cluster tier all week.
Let’s visualize this example with the following Node.js code:
In the above example, we have an Express-powered web application with two endpoint functions: one for getting the deal and one for creating a purchase. The rest can be left up to your imagination.
To load test this application with bursts and simulate the potential value of a serverless instance, we can use a tool like Apache JMeter.
With JMeter, you can define the number of threads and iterations it uses when making HTTP requests.
Remember, we’re simulating a burst in this example. If you do decide to play around with JMeter and you go overboard on the burst, you could end up with an interesting bill. If you’re curious how serverless instances are billed, check out the pricing page in the documentation.
Inside your JMeter Thread Group, you’ll want to define what is happening for each thread or iteration. In this case, we’re doing an HTTP request to our Node.js API.
Since the API expects JSON, we can define the header information for the request.
Once you have the thread information, the HTTP request information, and the header information, you can run JMeter and you’ll end up with a lot of activity against not only your web application, but also your database.
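If you'd rather not set up JMeter, the same threads-and-iterations idea can be sketched in a few lines of Node.js (18+, for the global `fetch`); the URL and counts below are placeholders:

```javascript
// Fire `threads` concurrent requests per iteration, similar to a JMeter thread group.
async function burst(url, threads, iterations) {
    const statuses = [];
    for (let i = 0; i < iterations; i++) {
        const batch = await Promise.all(
            Array.from({ length: threads }, () =>
                fetch(url).then((res) => res.status).catch(() => "error")
            )
        );
        statuses.push(...batch);
    }
    return statuses;
}

// Example: 50 concurrent requests, repeated 10 times, against the deal endpoint.
// burst("http://localhost:3000/deal", 50, 10).then(console.log);
```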
Again, a lot of this example has to be left to your imagination because to see the scaling benefits of a serverless instance, you’re going to need a lot of burst traffic that isn’t easily simulated during development. However, it should leave you with some ideas.
You just saw how quickly you can develop on MongoDB Atlas without the burden of sizing your own cluster. With a MongoDB Atlas serverless instance, your database will scale to meet the demand of your application and you'll be billed for that demand. This will protect you from paying for improperly sized clusters that are running non-stop, and it will save you the time you would have spent making size-related adjustments to your cluster.
The code in this example works the same regardless of whether you are using an Atlas serverless instance or a pre-provisioned shared or dedicated cluster.
Got a question regarding this example, or want to see more? Check out the MongoDB Community Forums to see what's happening.