Hi,
I have a sharded mongo setup meant for scale.
The deployment includes multiple config servers and shard servers (mongod instances), and it grows over time as shards are added (across multiple nodes, of course).
While running some stress tests with my application I started hitting memory issues, and I saw that the mongo instances were taking most of the memory on the system (more than 50%).
Some more info:
- My workload consists of many inserts to many different collections, all under the same DB.
- The collections that receive most of the workload have 3 indexes each (regular single-field indexes, not compound).
- All the collections are sharded.
- I’m using a mongos to balance the workload between my nodes.
A few questions:
- Is there a sweet spot for the amount of memory the mongod instances need?
- Is there a way to limit ALL the instances together so they don't exceed 30% of memory consumption? I see that I can set a wiredTigerCacheSizeGB flag per mongod instance, but it's a fixed size, and it probably requires restarting the mongod instances and updating the value everywhere whenever a mongod instance is added or removed. I'm looking for a global parameter that all of them will balance against (via the router, maybe?).
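For reference, this is roughly the per-instance setting I mean (the 2 GB value is just illustrative, not what I actually run):

```yaml
# mongod.conf — per-instance WiredTiger cache cap; must be set
# on each mongod separately, which is what I'd like to avoid
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
```

From what I can tell this only caps the WiredTiger cache of that one process, not the total memory footprint of all mongod instances on the node.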
Cheers,
Oded