Welcome to the sixth in a series of blog posts covering performance best practices for MongoDB.
In this series, we are covering key considerations for achieving performance at scale across a number of important dimensions, including:
- Data modeling and sizing memory (the working set)
- Query patterns and profiling
- Transactions and read/write concerns
- Hardware and OS configuration, which we’ll cover today
If you are running MongoDB on Atlas, our fully-managed and global cloud database service, then many of the considerations in this section are taken care of for you. You should refer to the Atlas Sizing and Tier Selection documentation for guidance on sizing.
If you are running MongoDB yourself, then this post will be useful for you.
Run on Supported Platforms
Beyond Atlas, you can run MongoDB on a variety of operating systems and processor architectures – from 64 bit x86 and ARM CPUs, to IBM POWER and mainframe systems. Refer to the supported platforms section of the documentation for the latest hardware and OS support matrices.
Ensure Your Working Set Fits in RAM
As stated in the very first blog post of this series, MongoDB performs best when the application’s working set (indexes and most frequently accessed data) fits in memory. RAM size is the most important factor for instance sizing; other optimizations may not significantly improve the performance of the database if there is insufficient RAM.
Refer to Part 1 of the series for more information.
Use Multiple CPU Cores
MongoDB’s WiredTiger storage engine architecture is capable of efficiently using multiple CPU cores. Typically, a single client connection is represented by its own thread. In addition, background worker threads perform tasks like checkpointing and cache eviction. You should provision an adequate number of CPU cores in proportion to the number of concurrent client connections. Note that investing in more RAM and disk IOPS typically yields the greatest benefit to database performance.
In MongoDB Atlas, the number of CPU cores and concurrent client connections is a function of your chosen cluster tier. Review the documentation to see the current limits.
Dedicate Each Server to a Single Role in the System
For best performance, you should run one mongod process per host.
With appropriate sizing and resource allocation using virtualization or container technologies, multiple MongoDB processes can safely run on a single physical server without contending for resources.
For some use cases (such as multi-tenancy or microsharding), users deploy multiple MongoDB processes on the same host. In that case, you will need to make several configuration changes, such as reducing each process’s WiredTiger cache size, to ensure each process has sufficient resources.
For availability, multiple members of the same replica set should not be co-located on the same physical hardware or share any single point of failure such as a power supply or network switch.
Configuring the WiredTiger Cache
The size of the WiredTiger storage engine’s internal cache is tunable through the storage.wiredTiger.engineConfig.cacheSizeGB setting and should be large enough to hold your entire working set. If the cache does not have enough space to load additional data, WiredTiger evicts pages from the cache to free up space.
By default, storage.wiredTiger.engineConfig.cacheSizeGB is set to 50% of available RAM, minus 1 GB. Exercise caution if raising this value: it takes memory away from the OS, and WiredTiger performance can actually degrade as the filesystem cache becomes less effective. Note that MongoDB itself will also allocate memory beyond the WiredTiger cache.
Also, as MongoDB supports variable sized records and WiredTiger creates variable sized pages, some memory fragmentation is expected and will consume memory above the configured cache size.
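As an illustration, the cache size can be set in the mongod configuration file; the 8 GB value below is purely an example, and you should size it to your own working set (or leave it unset to accept the default):

```yaml
# mongod.conf (YAML) -- illustrative fragment
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8   # example value; omit to use the default (50% of RAM - 1 GB)
```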
Use Multiple Query Routers
mongos processes (query routers) should be spread across multiple servers. You should use at least as many mongos processes as there are shards. MongoDB Atlas automatically provisions a query router for each shard in your cluster.
Use Interleave Policy on NUMA Architecture
Running MongoDB on a system with Non-Uniform Memory Access (NUMA) can cause a number of operational problems, including slow performance for periods of time, inability to use all available RAM, and high system process usage.
When running MongoDB servers and clients on NUMA hardware, you should configure a memory interleave policy using the numactl --interleave command.
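For example, starting mongod under an interleaved memory policy might look like the following; the config file path is illustrative:

```shell
# Start mongod with memory allocations interleaved across all NUMA nodes
numactl --interleave=all mongod --config /etc/mongod.conf

# The production notes also recommend disabling zone reclaim on NUMA hosts
sysctl -w vm.zone_reclaim_mode=0
```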
Use Network Compression
As a distributed database, MongoDB relies on efficient network transport during query routing and inter-node replication. Based on the snappy compression algorithm, network traffic across a MongoDB cluster can be compressed by up to 80%, providing major performance benefits in bandwidth-constrained environments and reducing networking costs.
Add the compressors parameter to the connection string to enable compression:
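For example, a connection string requesting snappy first, with zstd and zlib as alternatives (the host names are placeholders):

```
mongodb://db0.example.com,db1.example.com/?compressors=snappy,zstd,zlib
```

The driver and server negotiate the first compressor in the list that both sides support.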
Storage and Disk I/O Considerations
While MongoDB performs all read and write operations through in-memory data structures, data is persisted to disk, and queries on data not already in RAM trigger a read from disk. As a result, the performance of the storage sub-system is a critical aspect of any system.
Use high-performance storage. The following considerations will help you choose the best storage configuration, including OS and file system settings.
Use SSDs for IO Intensive Applications
Most disk access patterns in MongoDB do not have sequential properties, and as a result, you may experience substantial performance gains by using SSDs.
Good results and a strong price-to-performance ratio have been observed with SATA, PCIe, and NVMe SSDs. Rather than spending on expensive spinning drives, that money may be more effectively spent on more RAM or SSDs. SSDs should also be used for read-heavy applications if the working set no longer fits in memory.
We recommend storing MongoDB's journal files on a separate disk partition.
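One common way to do this is to mount a dedicated volume and point the journal directory inside dbPath at it via a symlink; this is only a sketch, and the device name, mount point, and dbPath below are illustrative. Stop mongod before relocating the journal:

```shell
# Assumes dbPath is /var/lib/mongodb and /dev/sdb1 is a dedicated journal device
mount /dev/sdb1 /mnt/mongo-journal
mv /var/lib/mongodb/journal/* /mnt/mongo-journal/
rmdir /var/lib/mongodb/journal
ln -s /mnt/mongo-journal /var/lib/mongodb/journal
```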
Most MongoDB deployments should use RAID-10 storage configurations. RAID-5 and RAID-6 have limitations and may not provide sufficient performance. MongoDB's replica sets allow deployments to provide stronger availability for data, and should be considered with RAID and other factors to meet the desired availability SLA. You don't need to buy SAN disk arrays for high availability.
Use MongoDB’s Default Compression for Storage and I/O-Intensive Workloads
The default snappy compression reduces storage footprint typically by 50% or more, and enables higher IOPS as fewer bits are read from disk. As with any compression algorithm, you trade storage efficiency for CPU overhead, so it is important to test the impact of compression in your own environment.
MongoDB offers a range of compression options for both documents and indexes. The default snappy algorithm balances high document compression ratios (typically 50%+, depending on data types) with low CPU overhead, while the optional zstd and zlib libraries achieve higher compression but incur additional CPU cycles as data is written to and read from disk. zstd was introduced in the MongoDB 4.2 release and is recommended over zlib due to its lower CPU overhead.
Indexes use prefix compression by default, which serves to reduce the in-memory footprint of index storage, freeing up more of the RAM for frequently accessed documents. Testing has shown a typical 50% compression ratio using the prefix algorithm, though users are advised to test with their own data sets.
You can modify the default compression settings for all collections and indexes. Compression is also configurable on a per-collection and per-index basis during collection and index creation.
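As an illustration, the storage-engine-wide defaults can be changed in the mongod configuration file; the values below are examples, not recommendations:

```yaml
# mongod.conf -- illustrative compression settings
storage:
  wiredTiger:
    collectionConfig:
      blockCompressor: zstd      # snappy (default) | zlib | zstd | none
    indexConfig:
      prefixCompression: true    # the default
```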
Configure Readahead
Set the readahead setting to between 8 and 32 (measured in 512-byte sectors, i.e., 4 KB to 16 KB). Use the blockdev --setra <value> command to set the readahead block size.
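For example, checking and setting readahead on a block device (the device name is illustrative, and the setting does not persist across reboots unless applied via a udev rule or boot script):

```shell
# Show the current readahead value, in 512-byte sectors
blockdev --getra /dev/sdb

# Set readahead to 32 sectors (16 KB)
blockdev --setra 32 /dev/sdb
```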
Use XFS File Systems; Avoid EXT4
Use of XFS is strongly recommended to avoid performance issues that have been observed when using EXT4 with WiredTiger.
Disable Access Time Settings
Some file systems will maintain metadata for the last time a file was accessed. While this may be useful for some applications, in a database it means that the file system will issue a write every time the database accesses a page, which will negatively impact the performance and throughput of the system.
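On Linux, this is typically done by mounting the data volume with the noatime option. An illustrative /etc/fstab entry (device and mount point are placeholders):

```
# /etc/fstab -- mount the MongoDB data volume without access-time updates
/dev/sdb1  /var/lib/mongodb  xfs  defaults,noatime  0  0
```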
Disable Transparent Hugepages
Transparent hugepages can add additional memory pressure and CPU utilization and have negative performance implications for swapping.
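On most Linux distributions, THP can be disabled at runtime as shown below; note this lasts only until the next reboot, so a boot-time service or tuned profile is needed to make it permanent:

```shell
# Disable transparent hugepages until the next reboot (run as root)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```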
That wraps up this installment of the performance best practices series. In our final post we will cover benchmarking.