Should the working set be sized to fit into RAM or into the WiredTiger internal cache?

As per this doc, we should ensure the working set fits into RAM.

Also, as per this doc, WiredTiger uses roughly 50% of the RAM by default, and the rest is used by the filesystem cache and other MongoDB processes.

My concern is, should we ensure that the RAM is sized such that the working set fits into the WiredTiger cache (50% of the RAM) or into memory (100% of RAM)?

I see that the WiredTiger cache holds data in uncompressed form, whereas the filesystem cache holds data in the same compressed form as on disk, so its only advantage is avoiding more expensive disk reads.
So, should we ensure the working set fits into the WiredTiger cache for best performance?
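For reference, the MongoDB docs give the default WiredTiger internal cache size as the larger of 50% of (RAM − 1 GB) and 256 MB, which is why the cache is a bit less than half of physical memory on small hosts. A quick sketch of that formula (function and variable names are my own):

```python
def default_wt_cache_gb(ram_gb: float) -> float:
    """Default WiredTiger internal cache size per the MongoDB docs:
    the larger of 50% of (RAM - 1 GB) and 256 MB."""
    return max(0.5 * (ram_gb - 1), 0.25)

# On a 16 GB host, WiredTiger claims 7.5 GB by default;
# the remainder is left for the filesystem cache and everything else.
print(default_wt_cache_gb(16))  # 7.5
```

So "50% of RAM" is a good approximation on large machines, but the cache never drops below 256 MB on very small ones.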

For example, let's say we have 100 GB of indexes + 10 GB of working set; should we size the RAM to be ~120 GB or ~240 GB?

Hi @Mohan_97489 and welcome to the MongoDB Community :muscle: !

So sorry nobody came back to you earlier! :frowning_face:

Your question is a bit technical so I’ll try to make it simple. MongoDB needs enough RAM for indexes + working set + queries + the operating system.

Usually we recommend having somewhere between 15 and 20% of the total data size as RAM. You can take inspiration from the MongoDB Atlas cluster tiers, which more or less respect this rule of thumb.

Of course this isn’t valid for ALL use cases, but it’s a general direction.

So to answer your question, 100 GB of indexes + 10 GB of working set sounds a bit unbalanced to me (but it’s possible!). You could be underestimating the working set, or you could have useless indexes (or ones that are too large, not optimised, etc.).
With this amount of indexes, I would estimate your data set at 1 TB at least, so I would aim for at least 200 GB of RAM. That’s about an M140 or M200 in Atlas.
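The arithmetic above can be sketched like this (the 15–20% figure is the rule of thumb from this thread, not a hard limit, and the helper name is my own):

```python
def ram_estimate_gb(total_data_gb: float, low: float = 0.15, high: float = 0.20):
    """RAM range suggested by the 15-20%-of-total-data rule of thumb."""
    return total_data_gb * low, total_data_gb * high

# ~1 TB of total data -> roughly 150-200 GB of RAM,
# which lands around an M140/M200 Atlas tier.
print(ram_estimate_gb(1000))  # (150.0, 200.0)
```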