How to Optimize Your Serverless Instance Bill with Indexing

Vishal Dhiman • 6 min read • Published Jul 29, 2022 • Updated Sep 13, 2023
Serverless • Atlas
Serverless solutions are quickly gaining traction among developers and organizations alike as a means to move fast, minimize overhead, and optimize costs. But shifting from a traditional pre-provisioned and predictable monthly bill to a consumption or usage-based model can sometimes result in confusion around how that bill is generated. In this article, we’ll take you through the basics of our serverless billing model and give you tips on how to best optimize your serverless database for cost efficiency.

What are serverless instances?

MongoDB Atlas serverless instances, recently announced as generally available, provide an on-demand serverless endpoint for your application with no sizing required. You simply choose a cloud provider and region to get started, and as your app grows, your serverless database will seamlessly scale based on demand and only charge for the resources you use.
Unlike our traditional clusters, serverless instances offer a fundamentally different pricing model that is primarily metered on reads, writes, and storage with automatic tiered discounts on reads as your usage scales. So, you can start small without any upfront commitments and never worry about paying for unused resources if your workload is idle.

Serverless Database Pricing

Pay only for the operations you run.
| Item | Description | Pricing |
| --- | --- | --- |
| Read Processing Unit (RPU) | Number of read operations and documents scanned per operation (documents read in 4 KB chunks; indexes read in 256-byte chunks) | $0.10/million for the first 50 million per day; next 500 million: $0.05/million; reads thereafter: $0.01/million |
| Write Processing Unit (WPU) | Number of write operations to the database (documents and indexes written in 1 KB chunks) | $1.00/million |
| Storage | Data and indexes stored on the database | $0.25/GB-month |
| Standard Backup | Download and restore of backup snapshots (two free daily snapshots included per serverless instance) | $2.50/hour to download or restore the data |
| Serverless Continuous Backup | 35-day backup retention for daily snapshots | $0.20/GB-month |
| Data Transfer | Inbound/outbound data to/from the database | $0.015 - $0.10/GB, depending on traffic source and destination |
At first glance, read processing units (RPU) and write processing units (WPU) might be new units to you, so let’s quickly dig into what they mean. We use RPUs and WPUs to quantify the amount of work the database has to do to service a query, or to perform a write. To put it simply, a read processing unit (RPU) refers to the read operations to the database and is calculated based on the number of operations run and documents scanned per operation. Similarly, a write processing unit (WPU) is a write operation to the database and is calculated based on the number of bytes written to each document or index. For further explanation of cost units, please refer to our documentation.
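As a rough, simplified illustration of that accounting (the authoritative metering rules are in the documentation linked above): a query that scans 100 documents of about 4 KB each would consume on the order of 100 RPUs, which at $0.10 per million comes to roughly $0.00001. On the write side, inserting a 3 KB document that also updates one index would consume roughly 4 WPUs (three 1 KB chunks for the document plus one for the index entry), or about $0.000004 at $1.00 per million.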
Now that you have a basic understanding of the pricing model, let’s go through an example to provide more context and tips on how to ensure your operations are best optimized to minimize costs.
For this example, we’ll be using the sample dataset in Atlas. To use sample data, simply go to your serverless instance deployment and select “Load Sample Dataset” from the dropdown as seen below.
Load sample dataset
This will load a few collections, such as weather data and Airbnb listing data. Note that loading the sample dataset will consume approximately one million WPUs (less than $1 in most supported regions), and you will be billed accordingly.
Now, let’s take a look at what happens when we interact with our data and do some search queries.

Scenario 1: Query on unindexed fields

For this exercise, I chose the sample_weatherdata collection. While looking at the data in the Atlas Collections view, it’s clear that the weather data collection has information from various places and that most locations have a call letter code as a convenient way to identify where this weather reading data was taken.
For this example, let’s simulate what would happen if a user comes to your weather app and does a lookup by a geographic location. In this weather data collection, geographic locations can be identified by callLetters, which are specific codes for various weather stations across the world. I arbitrarily picked station code “ESVJ,” which is a weather buoy in the Atlantic Ocean. 
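In mongosh, that lookup is a simple equality filter on callLetters. A minimal sketch, assuming you are connected to your serverless instance and have the sample dataset loaded:

```javascript
// Switch to the sample weather database loaded earlier
use sample_weatherdata

// Look up all readings reported by station "ESVJ"
db.data.find({ callLetters: "ESVJ" })
```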
Here is what we see when we run this query in Atlas Data Explorer: 
Document returned in sample_weatherdata.data collection
We can see this query returns three records. Now, let’s take a look at how many RPUs this query would cost me. We should remember that RPUs are calculated based on the number of read operations and the number of documents scanned per operation.
To execute the previous query, a full collection scan is required, which results in approximately 1,000 RPUs.
I took this query and ran it 3,000 times through a shell script, simulating around 3,000 users coming to an app to check the weather in a day.
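A minimal sketch of such a script (illustrative only; the file name and connection string are placeholders to substitute when running it with mongosh):

```javascript
// simulate-users.js -- run with: mongosh "<your-connection-string>" simulate-users.js
// Illustrative sketch: repeats the same lookup 3,000 times to simulate a day of traffic.
db = db.getSiblingDB("sample_weatherdata");

for (let i = 0; i < 3000; i++) {
  // Each iteration performs the same callLetters lookup shown above
  db.data.find({ callLetters: "ESVJ" }).toArray();
}
```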
As expected, 3,000 iterations add up to 1,000 * 3,000 = 3,000,000 RPUs (3M RPUs), which comes to $0.30.
Based on this, the cost per user for this application works out to about $0.0001 (calculated as: 3,000,000 RPUs / 3,000 users = 1,000 RPUs per user, or roughly $0.0001 at $0.10 per million).
A hundredth of a cent per user may not sound like much, but it is very high for a single database lookup: if this weather app were to scale to a level of activity similar to AccuWeather, which sees about 9.5B weather requests in a day, you'd be paying close to $1 million in database costs per day. By leaving your query this way, it's likely that you'd be faced with an unexpectedly high bill as your usage scales, falling into a common trap that many new serverless users face.
To avoid this problem, we recommend that you follow MongoDB best practices and index your data to optimize your queries for both performance and cost. Indexes are special data structures that store a small portion of the collection's data set in an easy-to-traverse form.
Without indexes, MongoDB must perform a collection scan—i.e., scan every document in a collection—to select those documents that match the query statement (something you just saw in the example above). By adding an index to appropriate queries, you can limit the number of documents it must inspect, significantly reducing the operations you are charged for.
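You can verify whether a query performs a collection scan, and how many documents it examines, with explain. A quick check in mongosh, using the same lookup as above:

```javascript
// Show the query plan and execution statistics for the callLetters lookup
db.data.find({ callLetters: "ESVJ" }).explain("executionStats")

// Without an index, the winning plan contains a COLLSCAN stage and
// executionStats.totalDocsExamined equals the size of the whole collection.
// Once a suitable index exists, the plan switches to IXSCAN and the number
// of documents examined drops to just the matching ones.
```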
Let’s look at how indexing can help you reduce your RPUs significantly.

Scenario 2: Querying with indexed fields

First, let’s create a simple index on the field ‘callLetters’:
Create index on sample_weatherdata.data collection
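If you prefer the shell over the UI, the same index can be created with a one-liner (the 1 requests an ascending index on the field):

```javascript
// Create an ascending single-field index on callLetters
db.data.createIndex({ callLetters: 1 })
```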
This operation typically finishes within 2-3 seconds. For reference, we can see the size of the created index on the Indexes tab:
Index size of callLetters_1 index
Due to the data structure of the index, the exact number of index reads is hard to compute. However, we can run the same script again for 3,000 iterations and compare the number of RPUs.
The 3,000 queries on the indexed field now result in approximately 6,500 RPUs in contrast to the 3 million RPUs from the un-indexed query, which is a 99.8% reduction in RPUs.
We can see that by simply adding the above index, we were able to reduce the cost per user to roughly $0.00000022 (calculated as: 6,500 RPUs / 3,000 users ≈ 2.2 RPUs per user, or about $0.00000022 at $0.10 per million), which is a huge cost saving compared to the previous cost of about $0.0001 per user.
Therefore, indexing not only helps with improving the performance and scale of your queries, but it can also reduce your consumed RPUs significantly, which reduces your costs. Note that there can be rare scenarios where this is not true (for example, when the index is so large that reading it costs more than scanning the documents themselves). However, in most cases, you should see a significant reduction in cost and an improvement in performance.

Take action to optimize your costs today

As you can see, adopting a usage-based pricing model can sometimes require you to be extra diligent in ensuring your data structure and queries are optimized. But when done correctly, the time spent to do those optimizations often pays off in more ways than one. 
If you’re unsure of where to start, we have built-in monitoring tools available in the Atlas UI that can help you. The performance advisor automatically monitors your database for slow-running queries and will suggest new indexes to help improve query performance. Or, if you’re looking to investigate slow-running queries further, you can use the query profiler to view a breakdown of all slow-running queries that occurred in the last 24 hours. If you prefer a terminal experience, you can also analyze your query performance in the MongoDB Shell or in MongoDB Compass.
If you need further assistance, you can always contact our support team via chat or the MongoDB support portal.
