Working set MUST fit in memory?

The snippet from this document:

“If you have and use multiple collections, you must consider the size of all indexes on all collections. The indexes and the working set must be able to fit in memory at the same time”

contradicts the FAQ doc for diagnostics:
https://docs.mongodb.com/manual/faq/diagnostics/#memory-diagnostics-for-the-wiredtiger-storage-engine. In this doc it says:

“Must my working set size fit RAM?
No”

Can someone from MongoDB help make this clearer?

Welcome to the community forums @astro!

Your first snippet from the docs is missing the opening context, which is:

For the fastest processing, ensure that your indexes fit entirely in RAM so that the system can avoid reading the index from disk.

However, the “must” in this snippet is more correctly “should”. I’ll raise a DOCS pull request to fix the wording.

Your concise quote from the FAQ is also correct, but the full FAQ answer includes useful elaboration on cache size and eviction.

Both documentation pages are trying to suggest the same outcome: for best performance you will want your commonly used indexes and working set to fit in memory. This is not a strict requirement for most MongoDB storage engines (with one notable exception which I’ll mention in a moment). However, if your working set is significantly larger than available memory, performance will suffer as moving data to and from disk becomes a significant bottleneck.
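
If it helps to make the fits-in-memory guidance concrete, here is a minimal diagnostic sketch (assuming pymongo and an illustrative local connection string; neither is prescribed by the docs) that reads the WiredTiger cache metrics exposed by serverStatus. Cache usage sitting persistently near the configured maximum, together with heavy eviction, is a hint that the working set does not fit comfortably in memory.

```python
# Minimal sketch: gauge WiredTiger cache pressure from serverStatus metrics.
# The connection string is an assumption for illustration.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

cache = status["wiredTiger"]["cache"]
max_bytes = cache["maximum bytes configured"]
used_bytes = cache["bytes currently in the cache"]
dirty_bytes = cache["tracked dirty bytes in the cache"]

print(f"cache used:  {used_bytes / max_bytes:.0%} of {max_bytes / 2**30:.1f} GiB")
print(f"cache dirty: {dirty_bytes / max_bytes:.0%}")
```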

The one exception to this guidance is the In-Memory Storage Engine which is part of MongoDB Enterprise edition. The In-Memory Storage Engine provides predictable latency by intentionally not maintaining any on-disk data, and requires all data to fit within the specified inMemorySizeGB cache.
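
As a quick way to check whether a deployment is using the in-memory engine (and what its size limit is), here is a minimal sketch, assuming pymongo, an illustrative connection string, and that the option was set via the configuration file:

```python
# Minimal sketch: report the storage engine in use and, for the Enterprise
# in-memory engine, the configured inMemorySizeGB limit (assumes the option
# was supplied via the mongod configuration file).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

engine = client.admin.command("serverStatus")["storageEngine"]["name"]
print(f"storage engine: {engine}")

if engine == "inMemory":
    parsed = client.admin.command("getCmdLineOpts")["parsed"]
    size_gb = (
        parsed.get("storage", {})
        .get("inMemory", {})
        .get("engineConfig", {})
        .get("inMemorySizeGB")
    )
    print(f"inMemorySizeGB: {size_gb}")  # all data must fit within this cache
```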

Regards,
Stennie

Thanks for clarifying, Stennie.

A system with a 100GB working set may need ~3 shards (considering 32GB of memory on every node). That sums up to 3*3 (PSS) = 9 nodes in the cluster, and 12 nodes including config servers. That’s a lot of hardware for 100GB of data.

Any suggestions on how to optimize this case, trading off between performance and less hardware?

PS: The 100GB working set has been carefully considered and already reduced to a considerable extent. 32GB of memory per node is the available configuration.

Hi Astro,

It really depends on what flexibility you have in terms of your deployment configuration, how the working set impacts your workload, and anticipated future growth. Your provisioning for 100GB of data is factoring in data redundancy, failover, and performance. You can compromise on some or all of those dimensions for cost savings.

On a strict cost and effort basis, you could also have a more straightforward setup using a 3-member replica set with 128GB of RAM per server. In terms of current hardware, 32GB of RAM is a decently spec’d laptop, and server-class machines are available with well above 128GB of RAM.

Other ideas you could consider:

  • Review your data model for possible efficiencies to reduce your working set (for example, more granular document sizes). The Building with Patterns series may be a helpful read for your development team.
  • Reduce the size of your WiredTiger cache from the default. The cache is used for working with uncompressed data in memory, with the remainder of RAM available for use by the filesystem cache (which holds data compressed in the on-disk format). Fetching data from the filesystem cache isn’t as fast as working with data in the WiredTiger internal cache, but is still significantly faster than fetching from disk. If your data compresses well, you are effectively fitting more data “in memory” when factoring in the filesystem cache. (See the cache sizing sketch after this list.)
  • Ensure updates are setting individual fields rather than doing full document replacements. This may reduce the overhead of oplog entries and network traffic. (See the update sketch after this list.)
  • Use zone-aware sharding to implement tiered storage for different data SLAs (hot vs archived data) or use cases (operational vs analytics/reporting). (See the zone sharding sketch after this list.)
  • Use the latest production release series of MongoDB. You haven’t mentioned what version you are using, but there have been ongoing improvements to performance in successive major releases, as well as reductions in write amplification.
  • Negotiate for approval of more RAM and/or faster disks, or move to a cloud-based system which allows more dynamic provisioning.
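
For the cache sizing suggestion above, here is a minimal sketch of adjusting the WiredTiger internal cache at runtime via pymongo. The 8G value is purely illustrative, and the persistent setting would normally go in storage.wiredTiger.engineConfig.cacheSizeGB in the mongod configuration:

```python
# Minimal sketch: shrink the WiredTiger internal cache so more RAM is left
# for the filesystem cache (which holds compressed data). The 8G figure is
# illustrative only; size it for your own workload and make it permanent via
# storage.wiredTiger.engineConfig.cacheSizeGB in the mongod config file.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
client.admin.command({
    "setParameter": 1,
    "wiredTigerEngineRuntimeConfig": "cache_size=8G",
})
```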
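
For the update suggestion above, here is a minimal sketch contrasting a full document replacement with a targeted $set (database, collection, and field names are assumptions for illustration):

```python
# Minimal sketch: prefer targeted $set updates over full document replacement.
# Database, collection, and field names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Full replacement: the entire document is sent over the network and rewritten,
# producing a larger oplog entry.
doc = orders.find_one({"_id": 123})
doc["status"] = "shipped"
orders.replace_one({"_id": 123}, doc)

# Targeted update: only the changed field is sent and logged.
orders.update_one({"_id": 123}, {"$set": {"status": "shipped"}})
```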
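
For the zone-aware sharding suggestion above, here is a minimal sketch using the addShardToZone and updateZoneKeyRange commands against a mongos. The shard names, zone names, namespace, shard key (orderDate), and cutoff date are all assumptions for illustration:

```python
# Minimal sketch: tiered storage via zone-aware sharding. Shard names, zones,
# namespace, shard key, and the cutoff date are illustrative assumptions.
from datetime import datetime

from bson.max_key import MaxKey
from bson.min_key import MinKey
from pymongo import MongoClient

admin = MongoClient("mongodb://localhost:27017").admin  # connection to a mongos

# Assign shards to zones: faster hardware for hot data, cheaper for archive.
admin.command({"addShardToZone": "shardFast", "zone": "hot"})
admin.command({"addShardToZone": "shardArchive", "zone": "archive"})

# Route documents by shard key range: older orders land on the archive zone.
cutoff = datetime(2020, 1, 1)
admin.command({
    "updateZoneKeyRange": "shop.orders",
    "min": {"orderDate": MinKey()},
    "max": {"orderDate": cutoff},
    "zone": "archive",
})
admin.command({
    "updateZoneKeyRange": "shop.orders",
    "min": {"orderDate": cutoff},
    "max": {"orderDate": MaxKey()},
    "zone": "hot",
})
```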

If you don’t have the in-house expertise to do this capacity planning with confidence, I recommend engaging an experienced consultant, either from MongoDB’s Consulting team or one of our partners.

Public forum discussion can provide some general advice, but a good consultant will spend the time to understand all of your business requirements and constraints for more holistic recommendations.

Regards,
Stennie
