
Build a Resilient Application with MongoDB Atlas

On this page

  • Cluster Resilience
  • Improved Memory Management
  • Operation Rejection Filters
  • Cluster-level Timeouts for Read Operations
  • Isolate the Impact of Busy, Unsharded Collections
  • Application and Client-Side Best Practices
  • Install the Latest Drivers
  • Connection Strings
  • Retryable Writes and Reads
  • Write and Read Concern
  • Error Handling
  • Test Failover
  • Resilient Example Application

When building a mission-critical application, it's important to prepare for unexpected events that may happen in production, such as slow queries, missing indexes, or a sharp rise in workload volume.

MongoDB Atlas helps you build a resilient application by equipping you with out-of-the-box features that allow you to prepare proactively and respond reactively to situations. To build a resilient application, we recommend that you configure your MongoDB deployment with the following cluster resilience and application and client-side best practices.

Cluster Resilience

To improve the resiliency of your cluster, upgrade it to MongoDB 8.0. MongoDB 8.0 introduces the performance improvements and new resilience features described in the following sections.

Improved Memory Management

To run your application safely in production, it's important to ensure that your memory utilization allows for headroom. If a node runs out of available memory, it can become susceptible to the Linux Out of Memory Killer that terminates the mongod process.

MongoDB 8.0 uses an upgraded TCMalloc for all deployments automatically, which reduces average memory fragmentation growth over time. This lower fragmentation improves operational stability during peak loads and results in overall improved memory utilization.

Operation Rejection Filters

An unintentional, resource-intensive operation can cause problems in production if you don't handle it promptly.

MongoDB 8.0 allows you to minimize the impact of these operations by using operation rejection filters. Operation rejection filters allow you to configure MongoDB to reject queries with a given query shape from running until you re-enable that query shape.

In other words, once you identify a slow query, you don't need to wait for application teams to fix their queries to contain the slow query's impact. Instead, once you notice a poorly performing query in your Query Profiler, Real-Time Performance Panel, or query logs, you can set a rejection filter on that query shape. MongoDB then prevents new instances of that incoming query shape from executing. Once you fix the query, you can re-enable the query shape.

You should use an operation rejection filter if you want to:

  • Contain the impact of slow queries quickly while the fix is in progress.

  • Prioritize more important workloads during times of overload by rejecting less important queries.

  • Give the cluster time to recover if it's close to max resource utilization.

To use an operation rejection filter in the Atlas UI:

1. If it's not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.

2. If it's not already displayed, select your desired project from the Projects menu in the navigation bar.

3. If the Clusters page is not already displayed, click Database in the sidebar. The Clusters page displays.

4. Click View Monitoring for that instance in the project panel.

5. Click the Query Insights tab.

6. Click the Query Profiler tab.

7. Select the slow query whose query shape you want to block.

8. In the details on the right-hand side, copy the queryHash value.

9. Use setQuerySettings in your db.adminCommand() function.

For an example, see Block Slow Queries with Operation Rejection Filters.
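For instance, here is a minimal mongosh sketch that sets and later removes a rejection filter; the hash value is hypothetical and stands in for the queryHash you copied from the Query Profiler:

// Reject new instances of this query shape (hash value is hypothetical)
db.adminCommand( {
    setQuerySettings: "4F7C8A9B2D1E3F60",
    settings: { reject: true }
} )

// After you fix the query, re-enable the query shape
db.adminCommand( { removeQuerySettings: "4F7C8A9B2D1E3F60" } )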

You can monitor how your queries run afterwards in the Metrics tab:

1. Click the name of the cluster to open the cluster overview.

2. Click the Metrics tab.

3. Under Shard Name, click the shard you'd like to monitor.

4. Under MongoDB Metrics, click Operation Throttling.

The Operation Throttling chart shows the following:

  • Killed, which shows the number of read operations that MongoDB kills over time due to exceeding the default cluster timeout.

  • Rejected, which shows the number of operations that MongoDB rejects over time because the query matches the user-defined rejection filter.

Cluster-level Timeouts for Read Operations

It's important to ensure that your development process carefully considers the efficiency of queries before they reach production. Exceptions may always occur, but having a proactive mitigation against inefficient queries can help prevent cluster performance issues.

With MongoDB 8.0, you can protect your cluster from unindexed operations by setting a server-side default timeout, defaultMaxTimeMS, for read operations coming into the cluster. If an operation exceeds this timeout, MongoDB cancels the operation to prevent queries from running too long and holding on to resources. This allows you to:

  • Shift the responsibility of setting timeouts from individual application teams to database-focused teams.

  • Minimize the impact of a collection scan if the query is missing an index.

  • Have a last-resort mitigation against expensive operations that make it to production.

If you have queries that require a different timeout, such as analytics queries, you can override the default by setting the operation-level timeout with the maxTimeMS method.
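For example, an analytics job can raise its own limit without touching the cluster default. A minimal mongosh sketch, assuming a hypothetical sales collection:

// This aggregation may run for up to 60 seconds,
// regardless of the cluster-wide defaultMaxTimeMS
db.sales.aggregate(
    [ { $group: { _id: "$region", total: { $sum: "$amount" } } } ],
    { maxTimeMS: 60000 }
)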

To set the defaultMaxTimeMS parameter through the Atlas Administration API, see Update Advanced Configuration Options for One Cluster.

To set the defaultMaxTimeMS parameter in the Atlas UI:

1. If you have an existing cluster, navigate to the Edit Cluster page. If you are creating a new cluster, from the Select a version dropdown, select MongoDB 8.0.

2. Click Additional Settings.

3. Scroll down and click More Configuration Options.

4. Set defaultMaxTimeMS to your desired value.

5. Click Review Changes.

6. Review your changes, then click Apply Changes to update your cluster.
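Because defaultMaxTimeMS is a cluster parameter, you can confirm the configured value from mongosh, provided your database user has privileges to run getClusterParameter. A sketch, assuming the parameter was set to 5000 milliseconds:

db.adminCommand( { getClusterParameter: "defaultMaxTimeMS" } )
// Example response fragment (values are illustrative):
// { clusterParameters: [ { _id: "defaultMaxTimeMS", readOperations: 5000 } ], ok: 1 }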

To view the behavior of killed operations, see Monitor Your Queries After Rejection or Timeout. To learn more, see defaultMaxTimeMS and Set Default Timeout for Read Operations.

Isolate the Impact of Busy, Unsharded Collections

Sharding allows you to scale your cluster horizontally. With MongoDB, you can shard some collections, while allowing other collections in the same cluster to remain unsharded. When you create a new database, the shard in the cluster with the least amount of data is picked as that database's primary shard by default. All of the unsharded collections of that database live in that primary shard by default. This can cause increased traffic to the primary shard as your workload grows, especially if the workload growth focuses on the unsharded collections on the primary shard.

To distribute this workload better, MongoDB 8.0 allows you to move an unsharded collection to other shards from the primary shard with the moveCollection command. This allows you to place active, busy collections onto shards with less expected resource usage. With this, you can:

  • Optimize performance on larger, complex workloads.

  • Achieve better resource utilization.

  • Distribute data more evenly across shards.

We recommend isolating your collection in the following circumstances:

  • Your primary shard experiences significant workload due to the presence of multiple high-throughput unsharded collections.

  • You anticipate an unsharded collection to experience future growth, which could become a bottleneck for other collections.

  • You are running a one-collection-per-customer deployment design and you want to isolate those customers based on priority or workloads.

  • Your shards have more than a proportional amount of data due to the number of unsharded collections located on them.

To learn how to move an unsharded collection with mongosh, see Move a Collection.
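As a sketch, assuming a hypothetical app.orders collection and a destination shard named shard1 as reported by sh.status():

// Move the busy, unsharded "orders" collection off the primary shard
sh.moveCollection( "app.orders", "shard1" )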

Application and Client-Side Best Practices

You can configure features of your MongoDB deployments and the driver libraries to create a resilient application that can withstand network outages and failover events. To write application code that takes full advantage of the always-on capabilities of MongoDB Atlas, you should perform the following tasks:

Install the Latest Drivers

Install the latest drivers for your language from MongoDB Drivers. Drivers connect and relay queries from your application to your database. Using the latest drivers enables the latest MongoDB features.

Then, in your application, import the dependency:

If you are using Maven, add the following to your pom.xml dependencies list:

<dependencies>
    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>mongodb-driver-sync</artifactId>
        <version>4.0.1</version>
    </dependency>
</dependencies>

If you are using Gradle, add the following to your build.gradle dependencies list:

dependencies {
    implementation 'org.mongodb:mongodb-driver-sync:4.0.1'
}

If you are using Node.js, install the driver with npm and import it:

// Latest 'mongodb' version installed with npm
const MongoClient = require('mongodb').MongoClient;

If you are using Python, install the driver with pip and import it:

# Install the latest 'pymongo' version with pip and
# import MongoClient from the package to establish a connection.
from pymongo import MongoClient

Connection Strings

Note

Atlas provides a pre-configured connection string. For steps to copy the pre-configured string, see Atlas-Provided Connection Strings.

Use a connection string that specifies all the nodes in your Atlas cluster to connect your application to your database. If your cluster performs a replica set election and a new primary is elected, a connection string that specifies all the nodes in your cluster discovers the new primary without application logic.

You can specify all the nodes in your cluster using either:

  • The DNS seedlist connection format, in which a single mongodb+srv:// hostname resolves to every node in the cluster.

  • The standard connection string format, in which you list the hostname and port of every node explicitly.

The connection string can also specify options, notably retryWrites and writeConcern.
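For illustration, the two formats look like the following. The hostnames are hypothetical; copy the real values from the Atlas UI:

# DNS seedlist (SRV) format: one hostname that resolves to all nodes
mongodb+srv://<db_username>:<db_password>@cluster0.example.mongodb.net/?retryWrites=true&w=majority

# Standard format: every node listed explicitly
mongodb://<db_username>:<db_password>@host0.example.net:27017,host1.example.net:27017,host2.example.net:27017/?replicaSet=atlas-example-shard-0&tls=true&retryWrites=true&w=majority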

Atlas can generate an optimized SRV connection string for sharded clusters using the load balancers from your private endpoint service. When you use an optimized connection string, Atlas limits the number of connections per mongos between your application and your sharded cluster. The limited connections per mongos improve performance during spikes in connection counts.

Note

Atlas doesn't support optimized connection strings for clusters that run on Google Cloud or Azure.

To learn more about optimized connection strings for sharded clusters behind a private endpoint, see Improve Connection Performance for Sharded Clusters Behind a Private Endpoint.

If you copy your connection string from your Atlas cluster interface, the connection string is pre-configured for your cluster, uses the DNS Seedlist format, and includes the recommended retryWrites and w (write concern) options for resilience.

To copy your connection string URI from Atlas:

1. If it's not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.

2. If it's not already displayed, select your desired project from the Projects menu in the navigation bar.

3. If the Clusters page is not already displayed, click Database in the sidebar. The Clusters page displays.

4. Click Connect on the cluster you wish to connect your application to.

5. Select Drivers as your connection method.

6. Select your Driver and Version.

7. Copy the connection string or full driver example into your application code. You must provide database user credentials.

Note

This guide uses SCRAM authentication through a connection string. To learn about using X.509 certificates to authenticate, see X.509.

Use your connection string to instantiate a MongoDB client in your application:

// Copy the connection string provided by Atlas
String uri = "<your Atlas connection string>";
// Instantiate the MongoDB client with the URI
MongoClient client = MongoClients.create(uri);

// Copy the connection string provided by Atlas
const uri = "<your Atlas connection string>";
// Instantiate the MongoDB client with the URI
const client = new MongoClient(uri, {
    useNewUrlParser: true,
    useUnifiedTopology: true
});

# Copy the connection string provided by Atlas
uri = "<your Atlas connection string>"
# Pass your connection string URI to the MongoClient constructor
client = MongoClient(uri)

Retryable Writes and Reads

Note

Starting in MongoDB version 4.0 and with 4.2-compatible drivers, MongoDB retries both writes and reads once by default.

Use retryable writes to retry certain write operations a single time if they fail. If you copied your connection string from Atlas, it includes "retryWrites=true". If you are providing your own connection string, include "retryWrites=true" as a query parameter.

Retrying writes exactly once is the best strategy for handling transient network errors and replica set elections in which the application temporarily cannot find a healthy primary node. If the retry succeeds, the operation as a whole succeeds and no error is returned. If the operation fails, it is likely due to:

  • A lasting network error

  • An invalid command

When an operation fails, your application needs to handle the error itself.

Read operations are automatically retried a single time if they fail starting in MongoDB version 4.0 and with 4.2-compatible drivers. No additional configuration is required to retry reads.

Write and Read Concern

You can tune the consistency and availability of your application using write concerns and read concerns. Stricter concerns imply that database operations wait for stronger data consistency guarantees, whereas loosening consistency requirements provides higher availability.

Example

If your application handles monetary balances, consistency is extremely important. You might use majority write and read concerns to ensure you never read stale data or data that may be rolled back.

Alternatively, if your application records temperature data from hundreds of sensors every second, you may not be concerned if you read data that does not include the most recent readouts. You can loosen consistency requirements and provide faster access to that data.

You can set the write concern level of your Atlas replica set through the connection string URI. Use a majority write concern to ensure your data is successfully written to your database and persisted. This is the recommended default and sufficient for most use cases. If you copied your connection string from Atlas, it includes "w=majority".

When you use a write concern that requires acknowledgement, such as majority, you may also specify a maximum time limit for writes to achieve that level of acknowledgement:

  • The wtimeoutMS connection string parameter for all writes, or

  • The wtimeout option for a single write operation.

Whether or not you use a time limit and the value you use depend on your application context.

Important

If you do not specify a time limit for writes and the level of write concern is unachievable, the write operation will hang indefinitely.
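As a sketch of both options, using a hypothetical 5-second limit:

// Connection-string level: applies to all writes from this client
// mongodb+srv://...mongodb.net/?w=majority&wtimeoutMS=5000

// Operation level: applies to this write only (mongosh)
db.users.insertOne(
    { name: "Alice" },
    { writeConcern: { w: "majority", wtimeout: 5000 } }
)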

You can set the read concern level of your Atlas replica set through the connection string URI. The ideal read concern depends on your application requirements, but the default is sufficient for most use cases. No connection string parameter is required to use default read concerns.

Specifying a read concern can improve guarantees around the data your application receives from Atlas.
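If your application does need stronger read guarantees, you can request them globally or per operation. A sketch of both, using majority as an example and a hypothetical orders collection:

// Connection-string level
// mongodb+srv://...mongodb.net/?readConcernLevel=majority

// Operation level (mongosh)
db.orders.find( { status: "pending" } ).readConcern( "majority" )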

Note

The specific combination of write and read concern your application uses has an effect on order-of-operation guarantees. This is called causal consistency. For more information on causal consistency guarantees, see Causal Consistency and Read and Write Concerns.

Error Handling

Invalid commands, network outages, and network errors that are not handled by retryable writes return errors. Refer to your driver's API documentation for error details.

For example, if an application tries to insert a document that contains an _id value that is already used in the database's collection, your driver returns an error that includes:

Unable to insert due to an error: com.mongodb.MongoWriteException:
E11000 duplicate key error collection: <db>.<collection> ...

{
    "name": "MongoError",
    "message": "E11000 duplicate key error collection: <db>.<collection> ... ",
    ...
}

pymongo.errors.DuplicateKeyError: E11000 duplicate key error collection: <db>.<collection> ...

Without proper error handling, an error may block your application from processing requests until it is restarted.

Your application should handle errors without crashing or side effects. In the previous example of an application inserting a duplicate _id into a collection, that application could handle errors as follows:

// Declare a logger instance from java.util.logging.Logger
private static final Logger LOGGER = ...
...
try {
    InsertOneResult result = collection.insertOne(new Document()
        .append("_id", 1)
        .append("body", "I'm a goofball trying to insert a duplicate _id"));
    // Everything is OK
    LOGGER.info("Inserted document id: " + result.getInsertedId());
// Refer to the API documentation for specific exceptions to catch
} catch (MongoException me) {
    // Report the error
    LOGGER.severe("Failed due to an error: " + me);
}

...
collection.insertOne({
    _id: 1,
    body: "I'm a goofball trying to insert a duplicate _id"
})
.then(result => {
    response.sendStatus(200); // send "OK" message to the client
},
err => {
    response.sendStatus(400); // send "Bad Request" message to the client
});

...
try:
    collection.insert_one({
        "_id": 1,
        "body": "I'm a goofball trying to insert a duplicate _id"
    })
    return {"message": "User successfully added!"}
except pymongo.errors.DuplicateKeyError as e:
    print("The insert operation failed:", e)

The insert operation in this example throws a "duplicate key" error the second time it's invoked because the _id field must be unique. The error is caught, the client is notified, and the app continues to run. The insert operation fails, however, and it is up to you to decide whether to show the user a message, retry the operation, or do something else.

You should always log errors. Common strategies for further processing errors include:

  • Return the error to the client with an error message. This is a good strategy when you cannot resolve the error and need to inform a user that an action cannot be completed.

  • Write to a backup database. This is a good strategy when you cannot resolve the error but don't want to risk losing the request data.

  • Retry the operation beyond the single default retry. This is a good strategy when you can solve the cause of an error programmatically, then retry it.

You must select the best strategies for your application context.

Example

In the example of a duplicate key error, you should log the error but not retry the operation because it will never succeed. Instead, you could write to a fallback database and review the contents of that database at a later time to ensure that no information is lost. The user doesn't need to do anything else and the data is recorded, so you can choose not to send an error message to the client.

Returning an error can be desirable behavior when an operation would otherwise hang indefinitely and block your application from executing new operations. You can use the maxTimeMS method to place a time limit on individual operations, returning an error for your application to handle if that time limit is exceeded.

The time limit you place on each operation depends on the context of that operation.

Example

If your application reads and displays simple product information from an inventory collection, you can be reasonably confident that those read operations only take a moment. An unusually long-running query is a good indicator that there is a lasting network problem. Setting maxTimeMS on that operation to 5000, or 5 seconds, means that your application receives feedback as soon as you are confident there is a network problem.
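A sketch of that read with the Node.js driver, assuming hypothetical inventory and products names and a client created as shown earlier:

// Fail fast: error out if this lookup takes longer than 5 seconds
const products = await client
    .db("inventory")
    .collection("products")
    .find({})
    .maxTimeMS(5000)
    .toArray();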

Test Failover

In the spirit of chaos testing, Atlas automatically performs replica set elections for periodic maintenance and certain configuration changes.

To check if your application is resilient to replica set elections, test the failover process by simulating a failover event.

Resilient Example Application

The example application brings together the following recommendations to ensure resilience against network outages and failover events:

  • Use the Atlas-provided connection string with retryable writes, majority write concern, and default read concern.

  • Specify an operation time limit with the maxTimeMS method. For instructions on how to set maxTimeMS, refer to your specific Driver Documentation.

  • Handle errors for duplicate keys and timeouts.

The application is an HTTP API that allows clients to create or list user records. It exposes an endpoint at http://localhost:3000 that accepts GET and POST requests:

Method   Endpoint   Description
GET      /users     Gets a list of user names from a users collection.
POST     /users     Requires a name in the request body. Adds a new user to a users collection.

Note

The following server application uses NanoHTTPD and json, which you need to add to your project as dependencies before you can run it.

// File: App.java

import java.util.Map;
import java.util.logging.Logger;

import org.bson.Document;
import org.json.JSONArray;

import com.mongodb.MongoException;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;

import fi.iki.elonen.NanoHTTPD;

public class App extends NanoHTTPD {
    private static final Logger LOGGER = Logger.getLogger(App.class.getName());

    static int port = 3000;
    static MongoClient client = null;

    public App() throws Exception {
        super(port);

        // Replace the uri string with your MongoDB deployment's connection string
        String uri = "<atlas-connection-string>";
        client = MongoClients.create(uri);

        start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);
        LOGGER.info("\nStarted the server: http://localhost:" + port + "/ \n");
    }

    public static void main(String[] args) {
        try {
            new App();
        } catch (Exception e) {
            LOGGER.severe("Couldn't start server:\n" + e);
        }
    }

    @Override
    public Response serve(IHTTPSession session) {
        StringBuilder msg = new StringBuilder();
        Map<String, String> params = session.getParms();

        Method reqMethod = session.getMethod();
        String uri = session.getUri();

        if (Method.GET == reqMethod) {
            if (uri.equals("/")) {
                msg.append("Welcome to my API!");
            } else if (uri.equals("/users")) {
                msg.append(listUsers(client));
            } else {
                msg.append("Unrecognized URI: ").append(uri);
            }
        } else if (Method.POST == reqMethod) {
            try {
                String name = params.get("name");
                if (name == null) {
                    throw new Exception("Unable to process POST request: 'name' parameter required");
                } else {
                    insertUser(client, name);
                    msg.append("User successfully added!");
                }
            } catch (Exception e) {
                msg.append(e);
            }
        }

        return newFixedLengthResponse(msg.toString());
    }

    static String listUsers(MongoClient client) {
        MongoDatabase database = client.getDatabase("test");
        MongoCollection<Document> collection = database.getCollection("users");

        final JSONArray jsonResults = new JSONArray();
        collection.find().forEach((result) -> jsonResults.put(result.toJson()));

        return jsonResults.toString();
    }

    static String insertUser(MongoClient client, String name) throws MongoException {
        MongoDatabase database = client.getDatabase("test");
        MongoCollection<Document> collection = database.getCollection("users");

        collection.insertOne(new Document().append("name", name));
        return "Successfully inserted user: " + name;
    }
}

Note

The following server application uses Express, which you need to add to your project as a dependency before you can run it.

const express = require('express');
const bodyParser = require('body-parser');

// Use the latest drivers by installing & importing them
const MongoClient = require('mongodb').MongoClient;

const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

const uri = "mongodb+srv://<db_username>:<db_password>@cluster0-111xx.mongodb.net/test?retryWrites=true&w=majority";

const client = new MongoClient(uri, {
    useNewUrlParser: true,
    useUnifiedTopology: true
});

// ----- API routes ----- //
app.get('/', (req, res) => res.send('Welcome to my API!'));

app.get('/users', (req, res) => {
    const collection = client.db("test").collection("users");

    collection
        .find({})
        .maxTimeMS(5000)
        .toArray((err, data) => {
            if (err) {
                res.send("The request has timed out. Please check your connection and try again.");
            }
            return res.json(data);
        });
});

app.post('/users', (req, res) => {
    const collection = client.db("test").collection("users");
    collection.insertOne({ name: req.body.name })
        .then(result => {
            res.send("User successfully added!");
        }, err => {
            res.send("An application error has occurred. Please try again.");
        })
});
// ----- End of API routes ----- //

app.listen(3000, () => {
    console.log(`Listening on port 3000.`);
    client.connect(err => {
        if (err) {
            console.log("Not connected: ", err);
            process.exit(0);
        }
        console.log('Connected.');
    });
});

Note

The following web application uses FastAPI. To create a new application, use the FastAPI sample file structure.

# File: main.py

from fastapi import FastAPI, Body, Request, Response, HTTPException, status
from fastapi.encoders import jsonable_encoder

from typing import List
from models import User

import pymongo
from pymongo import MongoClient
from pymongo import errors

# Replace the uri string with your Atlas connection string
uri = "<atlas-connection-string>"
db = "test"

app = FastAPI()

@app.on_event("startup")
def startup_db_client():
    app.mongodb_client = MongoClient(uri)
    app.database = app.mongodb_client[db]

@app.on_event("shutdown")
def shutdown_db_client():
    app.mongodb_client.close()

##### API ROUTES #####
@app.get("/users", response_description="List all users", response_model=List[User])
def list_users(request: Request):
    try:
        users = list(request.app.database["users"].find().max_time_ms(5000))
        return users
    except pymongo.errors.ExecutionTimeout:
        raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="The request has timed out. Please check your connection and try again.")

@app.post("/users", response_description="Create a new user", status_code=status.HTTP_201_CREATED)
def new_user(request: Request, user: User = Body(...)):
    user = jsonable_encoder(user)
    try:
        new_user = request.app.database["users"].insert_one(user)
        return {"message": "User successfully added!"}
    except pymongo.errors.DuplicateKeyError:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Could not create user due to existing '_id' value in the collection. Try again with a different '_id' value.")
