MongoDB memory not released after use (observed)

I can see that the number of active connections is only 2, which includes the current terminal. With just 2 active connections, the system is consuming less CPU but all available memory! Is this expected behavior? Why would it consume all the memory with minimal operations?

As you mentioned that CRUD operations are happening continuously, can you please let me know which IP the traffic is mostly coming from, so that I can check locally? Is there a way we can check where the active traffic from clients is shown?

Also, this behavior is consistent. The metrics shared are just for 2 days, but MongoDB in this lab never releases memory (always at 98%). I would like to understand more.

PRIMARY> db.serverStatus().connections

{ "current" : 151, "available" : 838709, "totalCreated" : 98058, "active" : 2 }

The image below, from my local setup, clearly shows that virtual memory increased from 6 GB to 8 GB during an insert operation and afterwards remained high at 8 GB, despite there being no active connections or any CRUD operations.

This memory only came down when a restart of MongoDB was triggered. Why is the behavior like this?

Is it not expected that MongoDB should release the memory if there are no operations?

Hi @S_P and welcome to the MongoDB Community :muscle: !

MongoDB needs memory for several things:

  • connections
  • indexes
  • working set
  • read/write operations & aggregations, in-memory sorts, etc.

Which is, of course, on top of what your OS is consuming. For example, when you run a query, it first creates a connection, which consumes RAM and is released again (if you close it…). The query itself also consumes RAM to run and to retrieve documents from disk; those documents then stay in the working set (the most frequently used documents) until they are eventually replaced by more recently needed documents.
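To make the working-set idea concrete, here is a toy least-recently-used cache in Python. This is only a sketch of the general principle (keep hot documents in memory, evict the coldest one when full); WiredTiger's real eviction logic is considerably more sophisticated, and all the names here are mine.

```python
from collections import OrderedDict

class WorkingSet:
    """Toy LRU cache standing in for the in-memory working set."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # doc_id -> document, oldest first

    def get(self, doc_id, fetch_from_disk):
        if doc_id in self.cache:
            self.cache.move_to_end(doc_id)   # cache hit: mark as recently used
            return self.cache[doc_id]
        doc = fetch_from_disk(doc_id)        # cache miss: simulated disk read
        self.cache[doc_id] = doc
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used doc
        return doc

# Simulated usage: a working set that can hold 2 documents.
ws = WorkingSet(capacity=2)
fetch = lambda i: {"_id": i}
ws.get("a", fetch)
ws.get("b", fetch)
ws.get("a", fetch)                 # "a" becomes most recently used
ws.get("c", fetch)                 # cache full: evicts "b", not "a"
print(sorted(ws.cache))            # ['a', 'c']
```

The point is that memory stays occupied even when nothing is running: the cache holds on to the hottest documents so the next query can be served without a disk read.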

Indexes can also grow or shrink over time of course, but they need to fit in RAM to ensure good performance.

Usually, in most use cases, RAM equal to about 10-20% of the data size is about right. So 10 to 20 GB of RAM for 100 GB of data.
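That rule of thumb is easy to turn into a quick back-of-the-envelope calculation. The function below is just an illustration of the 10-20% heuristic mentioned above, not an official sizing formula:

```python
def recommended_ram_gb(data_size_gb):
    """Rule-of-thumb RAM range: 10-20% of the data size (illustrative only)."""
    return (0.10 * data_size_gb, 0.20 * data_size_gb)

low, high = recommended_ram_gb(100)
print(f"{low:.0f}-{high:.0f} GB of RAM for 100 GB of data")  # 10-20 GB
```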

With only 8 GB of RAM, a chunk of which is allocated to the OS, I would guess you shouldn't have more than 60 GB or so of data, and not too many indexes or large in-memory sorts and aggregations. You would need more RAM to support those correctly.

MongoDB tends to use all the available RAM to keep documents in memory and avoid disk accesses. Too many IOPS can often be solved by adding more RAM, as fewer documents are then evicted from RAM too early and need to be fetched from disk again.
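For reference, the default size of WiredTiger's internal cache follows a documented formula: 50% of (total RAM minus 1 GB), or 256 MB, whichever is larger. A quick sketch of that calculation (the helper name is mine):

```python
def default_wt_cache_gb(total_ram_gb):
    """Default WiredTiger internal cache size in GB:
    max(50% of (RAM - 1 GB), 256 MB), per the MongoDB documentation."""
    return max(0.5 * (total_ram_gb - 1), 0.25)

print(default_wt_cache_gb(8))   # 3.5  -> ~3.5 GB cache on an 8 GB machine
print(default_wt_cache_gb(1))   # 0.25 -> floor of 256 MB on tiny machines
```

Note that this is only the WiredTiger cache; connections, in-memory sorts, aggregations, and the filesystem cache consume memory on top of it, which is why an 8 GB machine can still show nearly all memory in use.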