Limiting resources with multiple instances

Hi,
I have a mongo setup which is sharded and meant for scale.
The deployment may include multiple config and shard servers (mongod instances) and grows over time as shards are added (spread across multiple nodes, of course).
While stress-testing my application I ran into memory issues and saw that the mongo instances were consuming most of the system's memory (more than 50%).

Some more info:

  • My workload consists of many inserts to many different collections, all under the same DB.
  • The collections that get most of the workload have 3 indexes (regular indexes, not compound).
  • All the collections are sharded.
  • I’m using a mongos to balance the workload between my nodes.

A few questions:

  1. Is there a sweet spot for the amount of memory a mongod instance needs?
  2. Is there a way to limit ALL the instances together so they don't exceed 30% of memory consumption? I see that I can set a wiredTigerCacheSizeGB flag per mongod instance, but it's a fixed size, and it probably requires restarting the mongod instances and changing the value everywhere whenever a mongod instance is added or removed. I'm looking for a global parameter that all of them will balance toward (via the router, maybe?).
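For reference, the per-instance setting I mean can also be put in each mongod's config file instead of on the command line. This is only a sketch; the 2 GB value is an illustrative placeholder, not a recommendation:

```yaml
# mongod.conf -- per-instance WiredTiger cache cap (illustrative value)
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2   # fixed per-process cap; must be edited per instance
```

As far as I can tell there is no single cluster-wide knob that divides a global memory budget among all mongod processes.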

Cheers,
Oded

Your complete setup is unclear, but the above makes me think you are running multiple mongod instances on the same physical machine.

The goal of a replica set is high availability. Running multiple members of the same replica set on the same machine defeats that goal.

The goal of sharding is to increase capacity and performance. Running multiple shards on the same machine defeats that goal: the instances fight over the same resources.

Running multiple mongod instances for replica sets or shards within the same machine is fine for experimentation and for gaining experience, but certainly not for production or for meaningful stress tests.

The sweet spot is to run a single mongod instance per physical host and leave all of the host's memory available to it.
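To see why co-located instances fight over memory: by default, each mongod sizes its WiredTiger cache at the larger of 50% of (RAM minus 1 GB) or 256 MB, and each process applies that formula as if it had the whole machine to itself. A minimal sketch of the arithmetic (the 16 GiB host and three instances are assumed numbers for illustration):

```python
def default_cache_gib(ram_gib: float) -> float:
    """Default WiredTiger cache size: max(50% of (RAM - 1 GiB), 256 MiB)."""
    return max(0.5 * (ram_gib - 1.0), 0.25)

# Hypothetical host with 16 GiB of RAM running three mongod instances:
per_instance = default_cache_gib(16)   # each instance claims 7.5 GiB
total = 3 * per_instance               # 22.5 GiB -- more than the host has
print(per_instance, total)
```

Each instance independently claims 7.5 GiB of cache, so three of them together want 22.5 GiB on a 16 GiB machine, which is exactly the overcommitment you observed.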


@steevej
Thank you for the answer.
