Unlike a single-server MongoDB deployment, a MongoDB cluster lets a database either scale horizontally across many servers with sharding, or replicate data for high availability with replica sets, improving the overall performance and reliability of the deployment.
In the context of MongoDB, “cluster” is the word usually used for either a replica set or a sharded cluster. A replica set is the replication of a group of MongoDB servers that hold copies of the same data; this is a fundamental property for production deployments as it ensures high availability and redundancy, which are crucial features to have in place in case of failovers and planned maintenance periods.
A sharded cluster implements what is commonly known as horizontal scaling: data is partitioned and distributed across many servers.
The main purpose of sharding MongoDB is to scale reads and writes across multiple shards.
MongoDB Atlas Cluster is a NoSQL Database-as-a-Service offering in the public cloud (available in Microsoft Azure, Google Cloud Platform, Amazon Web Services). This is a managed MongoDB service, and with just a few clicks, you can set up a working MongoDB cluster, accessible from your favorite web browser.
You don’t need to install any software on your workstation as you can connect to MongoDB directly from the web user interface as well as inspect, query, and visualize data.
Alternatively, if you prefer working with the command line, you can connect using the mongo shell. To do this, you’ll need to configure the firewall from the web portal to accept your IP. From the homepage, navigate to Security and then Network Access. Finally, click on “Add IP Address” and add your IP:
Then, substitute the following configuration settings (MongoDB Atlas cluster name, database name, and username) in the mongo shell command line window. For example:
mongo "mongodb+srv://<clustername>.nupbd.mongodb.net/<dbname>" --username <username>
Note: When using the mongo shell as above, you will be prompted for the password you set when the MongoDB deployment was created.
Whether your initial goal is a proof-of-concept application or a production environment, a very good starting point is to create a new MongoDB cluster on MongoDB Atlas. On the free tier, the default setup deploys a MongoDB cluster as a replica set. If you also want sharding, you have to enable that feature separately and specify the number of shards.
A MongoDB replica set provides data redundancy and high availability by replicating data across more than one MongoDB server.
Without a replica set, all data lives on exactly one server; if that server fails, the data becomes unavailable and may be lost. With a replica set, copies survive on the remaining members. This alone shows the importance of having a replica set in a MongoDB deployment.
In addition to fault tolerance, replica sets can also provide extra read operations capacity for the MongoDB cluster, directing clients to the additional servers, subsequently reducing the overall load of the cluster.
Replica sets can also benefit distributed applications through data locality: faster, parallel read operations can be served by nearby replica set members instead of all going through one main server.
A MongoDB cluster needs to have a primary node and a set of secondary nodes in order to be considered a replica set.
At most one primary node exists at any time, and its role is to receive all write operations. Every change to the primary node's data sets is recorded in a special capped collection called the operation log (oplog).
The role of the secondary nodes is to then replicate the primary node’s operation log and make sure that the data sets reflect exactly the primary’s data sets. This functionality is illustrated in the following diagram:
The replication from the primary node’s oplog to the secondaries happens asynchronously, which allows the replica set to continue to function despite potential failure of one or more of its members. If the current primary node suddenly becomes unavailable, an election will take place to determine the new primary node. In the scenario of having two secondary nodes, one of the secondary nodes will become the primary node:
If for any reason only one secondary node exists, a special instance called an arbiter can be added to the replica set. An arbiter participates in replica set elections only; it does not replicate the oplog from the primary, so it cannot provide data redundancy. An arbiter also always remains an arbiter, i.e., it can never become a primary or a secondary node, whereas a primary can become a secondary and vice versa.
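As a sketch, a replica set with two data-bearing members and an arbiter could be initiated from the mongo shell as follows (the set name rs0 and the hostnames are illustrative placeholders):

```javascript
// Run on one of the data-bearing members; hostnames are placeholders.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1.example.net:27017" },                     // primary candidate
    { _id: 1, host: "mongo2.example.net:27017" },                     // secondary
    { _id: 2, host: "mongo3.example.net:27017", arbiterOnly: true }   // votes in elections only, holds no data
  ]
})
```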
The failure of a primary node is not the only reason a replica set election can occur. Elections happen in response to a variety of events: when a new node is added to the replica set, on initiation of the replica set, during maintenance of the replica set, or when any secondary node loses connectivity to the primary for longer than the configured timeout (10 seconds by default).
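That timeout can be inspected and changed through the replica set configuration; a sketch from the mongo shell (the 12000 ms value is illustrative):

```javascript
// settings.electionTimeoutMillis defaults to 10000 (10 seconds).
cfg = rs.conf()
cfg.settings.electionTimeoutMillis = 12000  // illustrative: raise to 12 seconds
rs.reconfig(cfg)
```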
A replica set in a MongoDB cluster is transparent to client applications: they cannot tell whether the cluster has a replica set enabled or whether it is running on a single-server deployment.
However, MongoDB offers additional read and write options on top of the standard commands. Client applications can optionally specify which replica set nodes a read operation should target. By default, all read operations are directed to the primary node, but routing to secondaries can also be configured; this is called read preference.
Several read preference modes can be configured. For example, if a client application should read directly from secondaries, the mode parameter in the read preference should be set to secondary. If the priority is the lowest network latency, regardless of whether the node is the primary or a secondary, the nearest read preference mode should be configured. However, this option introduces a risk of reading potentially stale data (if the nearest node is a secondary) because replication from primary to secondaries is asynchronous.
Alternatively, the read preference mode can be set to primary preferred or secondary preferred. These two modes also make use of another property, maxStalenessSeconds, to determine which node of the replica set the read operation should be directed to. In all cases where a read may be served by a non-primary node, you must ensure that your application can tolerate stale data.
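As a sketch, read preference options can be supplied on the connection string used earlier (the mode and the 120-second staleness bound are illustrative; maxStalenessSeconds must be at least 90):

```
mongo "mongodb+srv://<clustername>.nupbd.mongodb.net/<dbname>?readPreference=secondaryPreferred&maxStalenessSeconds=120" --username <username>
```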
When writing data to a MongoDB replica set, you can include additional options to ensure that the write has propagated successfully throughout the cluster. This involves adding a write concern property alongside an insert operation. A write concern specifies what level of acknowledgement we want from the cluster for each write operation, and it consists of the following options:
The w value can be set to 0, meaning that no write acknowledgement is needed. The default is 1, which requires only the primary node to acknowledge the write. Any number greater than 1 is the total number of members, including the primary, that must acknowledge; for example, w: 4 means the primary plus three secondaries must acknowledge the write.
The j value controls whether the write must also be persisted to disk in a special area called the journal. The journal is used by MongoDB for recovery purposes in case of a hard shutdown, and journaling is enabled by default.
Finally, the wtimeout value is how long the command should wait for the requested acknowledgements before returning an error. If it is not specified and the write cannot be acknowledged for any reason (for example, a network issue), the command blocks indefinitely, so it is good practice to set this value. It is measured in milliseconds and only applies to w values greater than 1.
In the following example, against a 5-node replica set, we request that a majority of the nodes (the w option, i.e., 3 of the 5) acknowledge the write:
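The example the surrounding text describes appears to be missing; a sketch consistent with the description (the document's fields are illustrative; the write concern values come from the text):

```javascript
db.products.insert(
  { name: "laptop", qty: 1 },                            // illustrative document
  { writeConcern: { w: "majority", wtimeout: 5000 } }    // majority acknowledgement, 5-second limit
)
```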
In the above example, we use the mongo shell insert command. The first parameter is the document to insert into the products collection; the second is the write concern. The write concern asks the MongoDB cluster to acknowledge that the write has succeeded on a majority of the data-bearing nodes, and that the whole operation take no more than 5 seconds.
A sharded cluster in MongoDB is a collection of datasets distributed across many shards (servers) in order to achieve horizontal scalability and better performance in read and write operations.
Sharding is very useful for collections that have a lot of data and high query rates.
Based on the above diagram, Collection1 is sharded, while Collection2 is not. If Collection1 were kept on one server, that server's CPU could spike because it would lack the capacity to handle the incoming requests. Adding another shard for Collection1 distributes the load and increases the overall request capacity.
In a sharded cluster, each shard can itself be configured as a replica set, combining horizontal scalability with high availability. In addition, a component called mongos is deployed when configuring a sharded cluster. Mongos is a query router: it acts as an intermediary between the client and the shard where the data resides. Apart from routing, it also load-balances query and write operations across the shards.
Finally, a metadata service called config servers (configsvr) is also deployed. It contains any information and configurations relating to the sharded cluster. This component also stores the location and the ranges of the data chunks in the different shards, as well as authentication information including usernames, roles, and permissions to different databases and collections.
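Putting these components together, a collection is sharded by issuing commands through mongos; a sketch (the database, collection, and shard key names are illustrative):

```javascript
// Issued against a mongos router.
sh.enableSharding("mydb")                                          // allow sharding for the database
sh.shardCollection("mydb.collection1", { customerId: "hashed" })   // a hashed shard key spreads writes evenly
```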
Planning for disaster recovery and zero downtime means running the MongoDB cluster's replica set across a single region or several regions, depending on your risk assessment.
A replica set consists of the primary mongod process and the secondary processes. It is recommended that the total number of these processes be odd, so that a majority can be reached when the primary fails and a new primary must be elected.
Sharding a MongoDB cluster is also a cornerstone of deploying a production cluster with huge data loads.
Obviously, designing your data models, storing them in appropriate collections, and defining correct indexes is essential. But if you truly want to leverage the power of MongoDB, you need a plan for sharding your cluster.
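For example, an index supporting a common query pattern could be defined as follows (the collection and field names are illustrative):

```javascript
// A compound index; a well-chosen field like this can also serve as a shard key later.
db.products.createIndex({ customerId: 1, createdAt: -1 })
```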