Build a Resilient Application with MongoDB Atlas
On this page
- Cluster Resilience
- Improved Memory Management
- Operation Rejection Filters
- Cluster-level Timeouts for Read Operations
- Isolate the Impact of Busy, Unsharded Collections
- Application and Client-Side Best Practices
- Install the Latest Drivers
- Connection Strings
- Retryable Writes and Reads
- Write and Read Concern
- Error Handling
- Test Failover
- Resilient Example Application
When building a mission-critical application, it's important to prepare for unexpected events that may happen in production. This includes unexpected slow queries, missing indexes, or a sharp rise in workload volume.
MongoDB Atlas helps you build a resilient application by equipping you with out-of-the-box features that allow you to prepare proactively and respond reactively to situations. To build a resilient application, we recommend that you configure your MongoDB deployment with the following cluster resilience and application and client-side best practices.
Cluster Resilience
To improve the resiliency of your cluster, upgrade your cluster to MongoDB 8.0. MongoDB 8.0 introduces the following performance improvements and new features related to resilience:
- Operation rejection filters to reactively mitigate expensive queries
- Cluster-level timeouts for proactive protection against expensive read operations
- Better workload isolation with the moveCollection command
Improved Memory Management
To run your application safely in production, it's important to ensure that your memory utilization allows for headroom. If a node runs out of available memory, it can become susceptible to the Linux Out of Memory Killer, which terminates the mongod process.
MongoDB 8.0 uses an upgraded TCMalloc for all deployments automatically, which reduces average memory fragmentation growth over time. This lower fragmentation improves operational stability during peak loads and results in overall improved memory utilization.
Operation Rejection Filters
An unintentional resource-intensive operation can cause problems in production if you don't handle it promptly.
MongoDB 8.0 allows you to minimize the impact of these operations by using operation rejection filters. Operation rejection filters allow you to configure MongoDB to reject all incoming queries with a given query shape until you re-enable that query shape.
In other words, once you identify a slow query, you don't need to wait for application teams to fix their queries to contain the slow query's impact. Instead, once you notice a poorly performing query in your Query Profiler, Real Time Performance Panel, or query logs, you can set a rejection filter on that query shape. MongoDB then prevents new instances of that query shape from executing. Once you fix the query, you can re-enable the query shape.
You should use an operation rejection filter if you want to:
- Contain the impact of slow queries quickly while the fix is in progress.
- Prioritize more important workloads during times of overload by rejecting less important queries.
- Give the cluster time to recover if it's close to max resource utilization.
Identify and Reject Slow Queries in the Atlas UI
To use an operation rejection filter in the Atlas UI:
1. In Atlas, go to the Clusters page for your project.
   - If it's not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
   - If it's not already displayed, select your desired project from the Projects menu in the navigation bar.
   - If the Clusters page is not already displayed, click Database in the sidebar. The Clusters page displays.
2. Go to the Query Profiler for the specified cluster within the current project.
   - Click View Monitoring for that instance in the project panel.
   - Click the Query Insights tab.
   - Click the Query Profiler tab.
3. Reject operations of a specific query shape by using setQuerySettings in your db.adminCommand() function. For an example, see Block Slow Queries with Operation Rejection Filters.
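As a rough sketch, setting and later removing a rejection filter from mongosh looks like the following. The database, collection, and filter here are hypothetical placeholders for the slow query shape you identified:

```javascript
// Reject all new queries that match this query shape. Replace the
// database, collection, and filter with the shape you identified
// in the Query Profiler.
db.adminCommand( {
   setQuerySettings: {
      find: "orders",               // hypothetical collection
      filter: { status: "urgent" }, // representative query for the shape
      $db: "test"                   // hypothetical database
   },
   settings: { reject: true }
} )

// After you fix the query, lift the rejection for that query shape.
db.adminCommand( {
   removeQuerySettings: {
      find: "orders",
      filter: { status: "urgent" },
      $db: "test"
   }
} )
```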
Monitor Your Queries After Rejection or Timeout
You can monitor how your queries run afterwards in the Metrics tab:
Under MongoDB Metrics, click Operation Throttling.
With this metric selected, the chart shows the following:
- Killed, which shows the number of read operations that MongoDB kills over time because they exceed the default cluster timeout.
- Rejected, which shows the number of operations that MongoDB rejects over time because the query matches a user-defined rejection filter.
Cluster-level Timeouts for Read Operations
It's important to ensure that your development process carefully considers the efficiency of queries before they reach production. Exceptions may always occur, but having a proactive mitigation against inefficient queries can help prevent cluster performance issues.
With MongoDB 8.0, you can protect your cluster from expensive read operations with the server-side defaultMaxTimeMS parameter, which applies a default timeout to read operations coming into the cluster. If an operation exceeds this timeout, MongoDB cancels the operation to prevent queries from running too long and holding on to resources. This allows you to:
- Shift the responsibility of setting timeouts from individual application teams to database-focused teams.
- Minimize the impact of a collection scan if a query is missing an index.
- Have a last-resort mitigation against expensive operations that make it to production.
If you have queries that require a different timeout, such as analytics queries, you can override them by setting the operation-level timeout with the maxTimeMS method.
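For example, with the Node.js driver you can raise the limit for a single operation that legitimately needs more time. This is a minimal sketch; the collection name and pipeline are hypothetical:

```javascript
// Hypothetical analytics query that legitimately needs more time than
// the cluster-wide defaultMaxTimeMS allows.
const results = await db.collection("sales")
  .aggregate([{ $group: { _id: "$region", total: { $sum: "$amount" } } }])
  .maxTimeMS(120000) // override: allow up to 120 seconds for this operation
  .toArray();
```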
Set the Default Timeout for Read Operations in the Atlas Administration API
To set the defaultMaxTimeMS parameter through the Atlas Administration API, see Update Advanced Configuration Options for One Cluster.
Set the Default Timeout for Read Operations in the Atlas UI
To set the defaultMaxTimeMS parameter in the Atlas UI:
1. Navigate to your configuration options.
   - If you have an existing cluster, navigate to the Edit Cluster page.
   - If you are creating a new cluster, from the Select a version dropdown, select MongoDB 8.0.
2. Click Additional Settings.
3. Scroll down and click More Configuration Options.
To view the behavior of killed operations, see Monitor Your Queries After Rejection or Timeout. To learn more, see defaultMaxTimeMS and Set Default Timeout for Read Operations.
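For reference, on deployments where you can run setClusterParameter directly (for example, in mongosh against a self-managed MongoDB 8.0 cluster), setting the parameter looks like the following sketch; the 5000 millisecond value is an arbitrary example:

```javascript
// Apply a 5-second default timeout to read operations cluster-wide.
// Individual operations can still override this with maxTimeMS.
db.adminCommand( {
   setClusterParameter: {
      defaultMaxTimeMS: { readOperations: 5000 }
   }
} )
```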
Isolate the Impact of Busy, Unsharded Collections
Sharding allows you to scale your cluster horizontally. With MongoDB, you can shard some collections, while allowing other collections in the same cluster to remain unsharded. When you create a new database, the shard in the cluster with the least amount of data is picked as that database's primary shard by default. All of the unsharded collections of that database live in that primary shard by default. This can cause increased traffic to the primary shard as your workload grows, especially if the workload growth focuses on the unsharded collections on the primary shard.
To distribute this workload better, MongoDB 8.0 allows you to move an unsharded collection from the primary shard to other shards with the moveCollection command. This allows you to place active, busy collections onto shards with less expected resource usage. With this, you can:
- Optimize performance on larger, complex workloads.
- Achieve better resource utilization.
- Distribute data more evenly across shards.
We recommend isolating a collection in the following circumstances:
- Your primary shard experiences significant workload due to the presence of multiple high-throughput unsharded collections.
- You anticipate that an unsharded collection will experience future growth, which could become a bottleneck for other collections.
- You are running a one-collection-per-cluster deployment design and you want to isolate those customers based on priority or workloads.
- Your shards have more than a proportional amount of data due to the number of unsharded collections located on them.
To learn how to move an unsharded collection with mongosh, see Move a Collection.
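As a rough sketch, moving a hypothetical test.users collection from mongosh looks like the following; the namespace and destination shard name are placeholders:

```javascript
// Move the unsharded collection test.users off the primary shard.
// "shard1" is a placeholder; list your shards with sh.status() first.
sh.moveCollection("test.users", "shard1")
```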
Application and Client-Side Best Practices
You can configure features of your MongoDB deployments and the driver libraries to create a resilient application that can withstand network outages and failover events. To write application code that takes full advantage of the always-on capabilities of MongoDB Atlas, you should perform the following tasks:
- Install the latest drivers.
- Use Atlas-provided connection strings.
- Use retryable writes and reads.
- Use a majority write concern and a read concern that makes sense for your application.
- Handle errors in your application.
- Test failover.
Install the Latest Drivers
Install the latest drivers for your language from MongoDB Drivers. Drivers connect and relay queries from your application to your database. Using the latest drivers enables the latest MongoDB features.
Then, in your application, import the dependency:
If you are using Maven, add the following to your pom.xml dependencies list:

```xml
<dependencies>
    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>mongodb-driver-sync</artifactId>
        <version>4.0.1</version>
    </dependency>
</dependencies>
```
If you are using Gradle, add the following to your build.gradle dependencies list:

```groovy
dependencies {
    compile 'org.mongodb:mongodb-driver-sync:4.0.1'
}
```
```javascript
// Latest 'mongodb' version installed with npm
const MongoClient = require('mongodb').MongoClient;
```
```python
# Install the latest 'pymongo' version with pip and
# import MongoClient from the package to establish a connection.
from pymongo import MongoClient
```
Connection Strings
Note
Atlas provides a pre-configured connection string. For steps to copy the pre-configured string, see Atlas-Provided Connection Strings.
Use a connection string that specifies all the nodes in your Atlas cluster to connect your application to your database. If your cluster performs a replica set election and a new primary is elected, a connection string that specifies all the nodes in your cluster lets the driver discover the new primary without additional application logic.
You can specify all the nodes in your cluster using either:
- The DNS Seedlist Connection Format (recommended with Atlas), or
- The Standard Connection String Format, which lists every node explicitly.
The connection string can also specify options, notably retryWrites and writeConcern.
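For illustration, the two formats look like the following sketches; the hostnames, replica set name, and credentials are placeholders:

```
# DNS Seedlist Connection Format (recommended with Atlas)
mongodb+srv://<db_username>:<db_password>@cluster0.example.mongodb.net/?retryWrites=true&w=majority

# Standard Connection String Format, listing every node explicitly
mongodb://<db_username>:<db_password>@cluster0-shard-00-00.example.mongodb.net:27017,cluster0-shard-00-01.example.mongodb.net:27017,cluster0-shard-00-02.example.mongodb.net:27017/?replicaSet=atlas-example-shard-0&tls=true&retryWrites=true&w=majority
```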
Atlas can generate an optimized SRV connection string for sharded clusters using the load balancers from your private endpoint service. When you use an optimized connection string, Atlas limits the number of connections per mongos between your application and your sharded cluster. The limited connections per mongos improve performance during spikes in connection counts.
Note
Atlas doesn't support optimized connection strings for clusters that run on Google Cloud or Azure.
To learn more about optimized connection strings for sharded clusters behind a private endpoint, see Improve Connection Performance for Sharded Clusters Behind a Private Endpoint.
Atlas-Provided Connection Strings
If you copy your connection string from your Atlas cluster interface, the connection string is pre-configured for your cluster, uses the DNS Seedlist format, and includes the recommended retryWrites and w (write concern) options for resilience.
To copy your connection string URI from Atlas:
1. In Atlas, go to the Clusters page for your project.
   - If it's not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
   - If it's not already displayed, select your desired project from the Projects menu in the navigation bar.
   - If the Clusters page is not already displayed, click Database in the sidebar. The Clusters page displays.
2. Copy the connection string URI or the full driver example into your application code. You must provide database user credentials.
Note
This guide uses SCRAM authentication through a connection string. To learn about using X.509 certificates to authenticate, see X.509.
Use your connection string to instantiate a MongoDB client in your application:
```java
// Copy the connection string provided by Atlas
String uri = <your Atlas connection string>;

// Instantiate the MongoDB client with the URI
MongoClient client = MongoClients.create(uri);
```
```javascript
// Copy the connection string provided by Atlas
const uri = <your Atlas connection string>;

// Instantiate the MongoDB client with the URI
const client = new MongoClient(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true
});
```
```python
# Copy the connection string provided by Atlas
uri = <your Atlas connection string>

# Pass your connection string URI to the MongoClient constructor
client = MongoClient(uri)
```
Retryable Writes and Reads
Note
Starting in MongoDB version 4.0 and with 4.2-compatible drivers, MongoDB retries both writes and reads once by default.
Retryable Writes
Use retryable writes to retry certain write operations a single time if they fail. If you copied your connection string from Atlas, it includes "retryWrites=true". If you are providing your own connection string, include "retryWrites=true" as a query parameter.
Retrying writes exactly once is the best strategy for handling transient network errors and replica set elections in which the application temporarily cannot find a healthy primary node. If the retry succeeds, the operation as a whole succeeds and no error is returned. If the operation fails, it is likely due to:
- A lasting network error
- An invalid command
When an operation fails, your application needs to handle the error itself.
Retryable Reads
Read operations are automatically retried a single time if they fail starting in MongoDB version 4.0 and with 4.2-compatible drivers. No additional configuration is required to retry reads.
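Although no configuration is required, you can make the retry behavior explicit when you construct the client. A minimal Node.js sketch:

```javascript
// Both options default to true with 4.2+ compatible drivers; setting
// them explicitly documents the intent in your application code.
const client = new MongoClient(uri, {
  retryWrites: true,
  retryReads: true
});
```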
Write and Read Concern
You can tune the consistency and availability of your application using write concerns and read concerns. Stricter concerns imply that database operations wait for stronger data consistency guarantees, whereas loosening consistency requirements provides higher availability.
Example
If your application handles monetary balances, consistency is extremely important. You might use majority write and read concerns to ensure you never read stale data or data that may be rolled back.
Alternatively, if your application records temperature data from hundreds of sensors every second, you may not be concerned if you read data that does not include the most recent readouts. You can loosen consistency requirements and provide faster access to that data.
Write Concern
You can set the write concern level of your Atlas replica set through the connection string URI. Use a majority write concern to ensure your data is successfully written to your database and persisted. This is the recommended default and is sufficient for most use cases. If you copied your connection string from Atlas, it includes "w=majority".
When you use a write concern that requires acknowledgement, such as majority, you may also specify a maximum time limit for writes to achieve that level of acknowledgement:
- The wtimeoutMS connection string parameter for all writes, or
- The wtimeout option for a single write operation.
Whether or not you use a time limit and the value you use depend on your application context.
Important
If you do not specify a time limit for writes and the level of write concern is unachievable, the write operation will hang indefinitely.
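For example, the following connection string sketch (credentials and hostname are placeholders) requests majority acknowledgement but returns an error if it isn't achieved within 5 seconds:

```
mongodb+srv://<db_username>:<db_password>@cluster0.example.mongodb.net/?retryWrites=true&w=majority&wtimeoutMS=5000
```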
Read Concern
You can set the read concern level of your Atlas replica set through the connection string URI. The ideal read concern depends on your application requirements, but the default is sufficient for most use cases. No connection string parameter is required to use default read concerns.
Specifying a read concern can improve guarantees around the data your application receives from Atlas.
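If you do want an explicit level, you can set it in the connection string. A sketch that requests majority read concern (placeholders aside):

```
mongodb+srv://<db_username>:<db_password>@cluster0.example.mongodb.net/?retryWrites=true&w=majority&readConcernLevel=majority
```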
Note
The specific combination of write and read concern your application uses has an effect on order-of-operation guarantees. This is called causal consistency. For more information on causal consistency guarantees, see Causal Consistency and Read and Write Concerns.
Error Handling
Invalid commands, network outages, and network errors that are not handled by retryable writes return errors. Refer to your driver's API documentation for error details.
For example, if an application tries to insert a document that contains an _id value that is already used in the database's collection, your driver returns an error that includes:
```
Unable to insert due to an error: com.mongodb.MongoWriteException: E11000 duplicate key error collection: <db>.<collection> ...
```
{ "name": : "MongoError", "message": "E11000 duplicate key error collection on: <db>.<collection> ... ", ... }
```
pymongo.errors.DuplicateKeyError: E11000 duplicate key error collection: <db>.<collection> ...
```
Without proper error handling, an error may block your application from processing requests until it is restarted.
Your application should handle errors without crashing or causing side effects. In the previous example of an application inserting a duplicate _id into a collection, that application could handle errors as follows:
```java
// Declare a logger instance from java.util.logging.Logger
private static final Logger LOGGER = ...
...
try {
    InsertOneResult result = collection.insertOne(new Document()
        .append("_id", 1)
        .append("body", "I'm a goofball trying to insert a duplicate _id"));

    // Everything is OK
    LOGGER.info("Inserted document id: " + result.getInsertedId());

// Refer to the API documentation for specific exceptions to catch
} catch (MongoException me) {
    // Report the error
    LOGGER.severe("Failed due to an error: " + me);
}
```
```javascript
...
collection.insertOne({ _id: 1, body: "I'm a goofball trying to insert a duplicate _id" })
  .then(result => {
    response.sendStatus(200); // send "OK" message to the client
  }, err => {
    response.sendStatus(400); // send "Bad Request" message to the client
  });
```
```python
...
try:
    collection.insert_one({ "_id": 1, "body": "I'm a goofball trying to insert a duplicate _id" })
    return {"message": "User successfully added!"}
except pymongo.errors.DuplicateKeyError as e:
    print("The insert operation failed:", e)
```
The insert operation in this example throws a "duplicate key" error the second time it's invoked because the _id field must be unique. The error is caught, the client is notified, and the app continues to run. The insert operation fails, however, and it is up to you to decide whether to show the user a message, retry the operation, or do something else.
You should always log errors. Common strategies for further processing errors include:
- Return the error to the client with an error message. This is a good strategy when you cannot resolve the error and need to inform a user that an action cannot be completed.
- Write to a backup database. This is a good strategy when you cannot resolve the error but don't want to risk losing the request data.
- Retry the operation beyond the single default retry. This is a good strategy when you can solve the cause of an error programmatically, then retry it.
You must select the best strategies for your application context.
Example
In the example of a duplicate key error, you should log the error but not retry the operation because it will never succeed. Instead, you could write to a fallback database and review the contents of that database at a later time to ensure that no information is lost. The user doesn't need to do anything else and the data is recorded, so you can choose not to send an error message to the client.
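A minimal sketch of that fallback strategy with the Node.js driver; the fallback database and collection names are hypothetical:

```javascript
// Attempt the insert; on a duplicate key error, preserve the request
// data in a hypothetical fallback collection for later review.
try {
  await client.db("test").collection("users").insertOne(newUser);
} catch (err) {
  if (err.code === 11000) { // duplicate key error
    await client.db("fallback").collection("failed_inserts").insertOne({
      payload: newUser,
      error: err.message,
      receivedAt: new Date()
    });
  } else {
    throw err; // let other errors surface to your error handler
  }
}
```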
Planning for Network Errors
Returning an error can be desirable behavior when an operation would otherwise hang indefinitely and block your application from executing new operations. You can use the maxTimeMS method to place a time limit on individual operations, returning an error for your application to handle if that time limit is exceeded.
The time limit you place on each operation depends on the context of that operation.
Example
If your application reads and displays simple product information from an inventory collection, you can be reasonably confident that those read operations only take a moment. An unusually long-running query is a good indicator that there is a lasting network problem. Setting maxTimeMS on that operation to 5000, or 5 seconds, means that your application receives feedback as soon as you are confident there is a network problem.
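With the Node.js driver, that looks like the following sketch; the filter is a hypothetical example:

```javascript
// Fail fast: surface a timeout error instead of letting this read hang
// if the cluster is unreachable or overloaded.
const products = await db.collection("inventory")
  .find({ category: "electronics" }) // hypothetical filter
  .maxTimeMS(5000)
  .toArray();
```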
Test Failover
In the spirit of chaos testing, Atlas performs replica set elections automatically for periodic maintenance and certain configuration changes.
To check if your application is resilient to replica set elections, test the failover process by simulating a failover event.
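For example, you can trigger a failover test with the Atlas Administration API. The following curl sketch assumes the v1.0 Test Failover (restartPrimaries) endpoint; the API keys, project ID, and cluster name are placeholders:

```
# Trigger a primary failover test for one cluster (placeholders throughout).
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --request POST \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/restartPrimaries"
```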
Resilient Example Application
The example application brings together the following recommendations to ensure resilience against network outages and failover events:
- Use the Atlas-provided connection string with retryable writes, majority write concern, and default read concern.
- Specify an operation time limit with the maxTimeMS method. For instructions on how to set maxTimeMS, refer to your specific Driver Documentation.
- Handle errors for duplicate keys and timeouts.
The application is an HTTP API that allows clients to create or list user records. It exposes endpoints at http://localhost:3000 that accept GET and POST requests:
| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /users | Gets a list of user names from a users collection. |
| POST | /users | Requires a name in the request body. Adds a new user to a users collection. |
Note
The following server application uses NanoHTTPD, which you need to add to your project as a dependency before you can run it.
```java
// File: App.java

import java.util.Map;
import java.util.logging.Logger;

import org.bson.Document;
import org.json.JSONArray;

import com.mongodb.MongoException;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;

import fi.iki.elonen.NanoHTTPD;

public class App extends NanoHTTPD {
    private static final Logger LOGGER = Logger.getLogger(App.class.getName());

    static int port = 3000;
    static MongoClient client = null;

    public App() throws Exception {
        super(port);

        // Replace the uri string with your MongoDB deployment's connection string
        String uri = "<atlas-connection-string>";
        client = MongoClients.create(uri);

        start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);
        LOGGER.info("\nStarted the server: http://localhost:" + port + "/ \n");
    }

    public static void main(String[] args) {
        try {
            new App();
        } catch (Exception e) {
            LOGGER.severe("Couldn't start server:\n" + e);
        }
    }

    public Response serve(IHTTPSession session) {
        StringBuilder msg = new StringBuilder();
        Map<String, String> params = session.getParms();

        Method reqMethod = session.getMethod();
        String uri = session.getUri();

        if (Method.GET == reqMethod) {
            if (uri.equals("/")) {
                msg.append("Welcome to my API!");
            } else if (uri.equals("/users")) {
                msg.append(listUsers(client));
            } else {
                msg.append("Unrecognized URI: ").append(uri);
            }
        } else if (Method.POST == reqMethod) {
            try {
                String name = params.get("name");
                if (name == null) {
                    throw new Exception("Unable to process POST request: 'name' parameter required");
                } else {
                    insertUser(client, name);
                    msg.append("User successfully added!");
                }
            } catch (Exception e) {
                msg.append(e);
            }
        }

        return newFixedLengthResponse(msg.toString());
    }

    static String listUsers(MongoClient client) {
        MongoDatabase database = client.getDatabase("test");
        MongoCollection<Document> collection = database.getCollection("users");

        final JSONArray jsonResults = new JSONArray();
        collection.find().forEach((result) -> jsonResults.put(result.toJson()));

        return jsonResults.toString();
    }

    static String insertUser(MongoClient client, String name) throws MongoException {
        MongoDatabase database = client.getDatabase("test");
        MongoCollection<Document> collection = database.getCollection("users");

        collection.insertOne(new Document().append("name", name));
        return "Successfully inserted user: " + name;
    }
}
```
Note
The following server application uses Express, which you need to add to your project as a dependency before you can run it.
```javascript
const express = require('express');
const bodyParser = require('body-parser');

// Use the latest drivers by installing & importing them
const MongoClient = require('mongodb').MongoClient;

const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

const uri = "mongodb+srv://<db_username>:<db_password>@cluster0-111xx.mongodb.net/test?retryWrites=true&w=majority";

const client = new MongoClient(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

// ----- API routes ----- //
app.get('/', (req, res) => res.send('Welcome to my API!'));

app.get('/users', (req, res) => {
  const collection = client.db("test").collection("users");

  collection
    .find({})
    .maxTimeMS(5000)
    .toArray((err, data) => {
      if (err) {
        res.send("The request has timed out. Please check your connection and try again.");
      }
      return res.json(data);
    });
});

app.post('/users', (req, res) => {
  const collection = client.db("test").collection("users");
  collection.insertOne({ name: req.body.name })
    .then(result => {
      res.send("User successfully added!");
    }, err => {
      res.send("An application error has occurred. Please try again.");
    })
});
// ----- End of API routes ----- //

app.listen(3000, () => {
  console.log(`Listening on port 3000.`);
  client.connect(err => {
    if (err) {
      console.log("Not connected: ", err);
      process.exit(0);
    }
    console.log('Connected.');
  });
});
```
Note
The following web application uses FastAPI. To create a new application, use the FastAPI sample file structure.
```python
# File: main.py

from fastapi import FastAPI, Body, Request, Response, HTTPException, status
from fastapi.encoders import jsonable_encoder

from typing import List
from models import User

import pymongo
from pymongo import MongoClient
from pymongo import errors

# Replace the uri string with your Atlas connection string
uri = "<atlas-connection-string>"
db = "test"

app = FastAPI()


@app.on_event("startup")
def startup_db_client():
    app.mongodb_client = MongoClient(uri)
    app.database = app.mongodb_client[db]


@app.on_event("shutdown")
def shutdown_db_client():
    app.mongodb_client.close()

##### API ROUTES #####

@app.get("/users", response_model=List[User])
def list_users(request: Request):
    try:
        users = list(request.app.database["users"].find().max_time_ms(5000))
        return users
    except pymongo.errors.ExecutionTimeout:
        raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="The request has timed out. Please check your connection and try again.")


@app.post("/users")
def new_user(request: Request, user: User = Body(...)):
    user = jsonable_encoder(user)
    try:
        new_user = request.app.database["users"].insert_one(user)
        return {"message": "User successfully added!"}
    except pymongo.errors.DuplicateKeyError:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Could not create user due to existing '_id' value in the collection. Try again with a different '_id' value.")
```