The following checklist, along with the Development Checklist, provides recommendations to help you avoid issues in your production MongoDB deployment.
Align your disk partitions with your RAID configuration.
VMware users should use VMware virtual drives over NFS.
Linux/Unix: format your drives into XFS or EXT4. If possible, use XFS as it generally performs better with MongoDB.
With the WiredTiger storage engine, use of XFS is strongly recommended to avoid performance issues found when using EXT4 with WiredTiger.
If using RAID, you may need to configure XFS with your RAID geometry.
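As a minimal sketch, assuming a hypothetical array exposed as /dev/sdb with a 256 KB stripe unit across 4 data disks (substitute your own device and geometry):

```
# Format as XFS, stating the RAID geometry explicitly:
# su = stripe unit (per-disk chunk size), sw = stripe width (number of data disks).
mkfs.xfs -f -d su=256k,sw=4 /dev/sdb
```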
Windows: use the NTFS file system. Do not use any FAT file system (i.e. FAT 16/32/exFAT).
Verify that all non-hidden replica set members are identically provisioned in terms of their RAM, CPU, disk, network setup, etc.
Configure the oplog size to suit your use case (see the example following this list):
The replication oplog window should cover normal maintenance and downtime windows to avoid the need for a full resync.
The replication oplog window should cover the time needed to restore a replica set member from the last backup.
Changed in version 3.4: The replication oplog window no longer needs to cover the time needed to restore a replica set member via initial sync as the oplog records are pulled during the data copy. However, the member being restored must have enough disk space in the local database to temporarily store these oplog records for the duration of this data copy stage.
With earlier versions of MongoDB, the replication oplog window should cover the time needed to restore a replica set member by initial sync.
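As an illustration, you can inspect the current oplog window from mongosh with rs.printReplicationInfo(), and on MongoDB 3.6 or later resize the oplog online with the replSetResizeOplog command; the 16000 MB figure below is purely illustrative:

```
# Show the oplog size and the time range it currently covers.
mongosh --eval 'rs.printReplicationInfo()'

# Resize the oplog to 16000 MB (MongoDB 3.6+); choose a size that covers
# your maintenance, downtime, and restore windows.
mongosh --eval 'db.adminCommand({ replSetResizeOplog: 1, size: 16000 })'
```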
Ensure that your replica set includes at least three data-bearing nodes that run with journaling and that you issue writes with w: "majority" write concern for availability and durability.
Use hostnames when configuring replica set members, rather than IP addresses (see the configuration example below).
Ensure full bidirectional network connectivity between all mongod instances.
Ensure that each host can resolve itself.
Ensure that your replica set contains an odd number of voting members.
For high availability, deploy your replica set into a minimum of three data centers.
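Tying several of the preceding points together, a sketch of initiating a three-member (odd-voting) replica set whose members are addressed by hostname; the hostnames and data center placement are hypothetical:

```
mongosh --eval 'rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb-a.example.net:27017" },  // data center A
    { _id: 1, host: "mongodb-b.example.net:27017" },  // data center B
    { _id: 2, host: "mongodb-c.example.net:27017" }   // data center C
  ]
})'
```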
Place your config servers on dedicated hardware for optimal performance in large clusters. Ensure that the hardware has enough RAM to hold the data files entirely in memory and that it has dedicated storage.
Use NTP to synchronize the clocks on all components of your sharded cluster.
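For example, on systemd-based Linux hosts you might verify and enable clock synchronization as follows (the chronyd service name is an assumption; it varies by distribution, and ntpd is an alternative):

```
# Check whether the system clock is currently synchronized.
timedatectl status

# Enable and start an NTP client.
sudo systemctl enable --now chronyd
```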
Use CNAMEs to identify your config servers to the cluster so that you can rename and renumber your config servers without downtime.
Ensure that all instances use journaling.
Place the journal on its own low-latency disk for write-intensive workloads. Note that this will affect snapshot-style backups as the files constituting the state of the database will reside on separate volumes.
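One common approach, sketched here under the assumption that dbPath is /data/db and a dedicated low-latency volume is available as /dev/nvme1n1 (both hypothetical), is to mount that volume at the journal subdirectory of dbPath:

```
# Stop mongod before relocating the journal.
sudo systemctl stop mongod
# Mount the dedicated volume at <dbPath>/journal; the mongodb user and
# group names vary by distribution.
sudo mount /dev/nvme1n1 /data/db/journal
sudo chown -R mongodb:mongodb /data/db/journal
sudo systemctl start mongod
```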
Use RAID10 and SSD drives for optimal performance.
SAN and Virtualization:
Ensure that each mongod has provisioned IOPS for its dbPath, or has its own physical drive or LUN.
Avoid dynamic memory features, such as memory ballooning, when running in virtual environments.
Avoid placing all replica set members on the same SAN, as the SAN can be a single point of failure.
Windows Azure: Adjust the TCP keepalive (tcp_keepalive_time) to 100-120. The TCP idle timeout on the Azure load balancer is too slow for MongoDB's connection pooling behavior. See: Azure Production Notes for more information.
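On Linux, for example, the keepalive can be lowered with sysctl:

```
# Lower the TCP keepalive to 120 seconds for Azure deployments.
sudo sysctl -w net.ipv4.tcp_keepalive_time=120

# Persist the setting across reboots.
echo "net.ipv4.tcp_keepalive_time = 120" | sudo tee -a /etc/sysctl.conf
```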
Use MongoDB version 2.6.4 or later on systems with high-latency storage, such as Windows Azure, as these versions include performance improvements for those systems.
Turn off transparent hugepages. See Transparent Huge Pages Settings for more information.
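A quick way to check and disable THP at runtime (the sysfs path below is standard on modern kernels; see the linked page for a persistent, boot-time fix):

```
# Show the current THP setting; the bracketed value is active.
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable THP until the next reboot.
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```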
Adjust the readahead settings on the devices storing your database files.
For the WiredTiger storage engine, set readahead between 8 and 32 regardless of storage media type (spinning disk, SSD, etc.), unless testing shows a measurable, repeatable, and reliable benefit in a higher readahead value.
MongoDB commercial support can provide advice and guidance on alternate readahead configurations.
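Readahead can be inspected and set with blockdev; values are in 512-byte sectors, the setting does not persist across reboots (use a udev rule or init script for that), and /dev/sdb is a hypothetical device:

```
# Show the current readahead in 512-byte sectors.
sudo blockdev --getra /dev/sdb

# Set readahead to 32 sectors (16 KB) for a WiredTiger data volume.
sudo blockdev --setra 32 /dev/sdb
```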
If using tuned on RHEL / CentOS, you must customize your tuned profile. Many of the tuned profiles that ship with RHEL / CentOS can negatively impact performance with their default settings. Customize your chosen tuned profile to disable transparent hugepages and to apply the readahead settings recommended above.
Use the noop or deadline disk schedulers for SSD drives.
Use the noop disk scheduler for virtualized drives in guest VMs.
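A sketch of checking and changing the scheduler for a hypothetical device sdb; on newer blk-mq kernels the equivalent of noop is called none:

```
# List the available schedulers; the bracketed one is active.
cat /sys/block/sdb/queue/scheduler

# Switch to noop until the next reboot ("none" on blk-mq kernels).
echo noop | sudo tee /sys/block/sdb/queue/scheduler
```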
Adjust the ulimit values on your hardware to suit your use case. If multiple mongod or mongos instances are running under the same user, scale the ulimit values accordingly. See: UNIX ulimit Settings for more information.
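For example, raising the open-file and process limits for a hypothetical mongodb service user in /etc/security/limits.conf (the 64000 values are a common starting point, not a requirement):

```
# /etc/security/limits.conf: nofile = open files, nproc = processes/threads.
mongodb  soft  nofile  64000
mongodb  hard  nofile  64000
mongodb  soft  nproc   64000
mongodb  hard  nproc   64000
```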
Configure sufficient file handles (fs.file-max), kernel pid limit (kernel.pid_max), maximum threads per process (kernel.threads-max), and maximum number of memory map areas per process (vm.max_map_count) for your deployment. For large systems, the following values provide a good starting point:
fs.file-max value of 98000,
kernel.pid_max value of 64000,
kernel.threads-max value of 64000, and
vm.max_map_count value of 128000
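These can be applied at runtime with sysctl, for example:

```
# Apply the suggested starting values; add the same lines to /etc/sysctl.conf
# (or a file under /etc/sysctl.d/) to persist them across reboots.
sudo sysctl -w fs.file-max=98000
sudo sysctl -w kernel.pid_max=64000
sudo sysctl -w kernel.threads-max=64000
sudo sysctl -w vm.max_map_count=128000
```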
Ensure that your system has swap space configured. Refer to your operating system's documentation for details on appropriate sizing.
Ensure that the system default TCP keepalive is set correctly. A value of 300 often provides better performance for replica sets and sharded clusters. See: Does TCP keepalive time affect MongoDB Deployments? in the Frequently Asked Questions for more information.
Consider disabling NTFS "last access time" updates. This is analogous to disabling atime on Unix-like systems.
Format NTFS disks using the default Allocation unit size of 4096 bytes.
Schedule periodic tests of your backup and restore process to have time estimates on hand, and to verify its functionality.
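As one hedged example of such a test using the standard tools (hosts and paths are hypothetical, and Cloud Manager / Ops Manager backups have their own restore workflows):

```
# Dump from the replica set...
mongodump --host rs0/mongodb-a.example.net:27017 --out /backups/$(date +%F)

# ...then time a restore into a scratch instance to verify the backup
# and record how long recovery takes.
time mongorestore --host localhost:27018 --drop /backups/$(date +%F)
```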
Use MongoDB Cloud Manager or Ops Manager, an on-premise solution available in MongoDB Enterprise Advanced, or another monitoring system to monitor key database metrics and set up alerts for them. Include alerts for the following metrics:
replication oplog window
Monitor hardware statistics for your servers. In particular, pay attention to the disk use, CPU, and available disk space.
In the absence of disk space monitoring, or as a precaution:
Create a dummy 4 GB file on the storage.dbPath drive to ensure available space if the disk becomes full.
A combination of cron+df can alert when disk space hits a high-water mark, if no other monitoring tool is available.
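A minimal sketch of both ideas, assuming dbPath is /data/db (hypothetical) and that outbound mail is configured:

```
# Pre-allocate a 4 GB placeholder that can be deleted to free space in an
# emergency.
fallocate -l 4G /data/db/placeholder.bin

# Crontab entry: every 15 minutes, alert if the dbPath volume exceeds 90
# percent usage.
*/15 * * * * [ "$(df -P /data/db | awk 'NR==2 {print $5+0}')" -gt 90 ] && echo "dbPath disk high-water mark reached" | mail -s "disk alert" ops@example.com
```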
Configure load balancers to enable "sticky sessions" or "client affinity", with a sufficient timeout for existing connections.
Avoid placing load balancers between MongoDB cluster or replica set components.