Get the status of CPU, RAM & Disk Util from MongoDB Atlas

I am using an M40 cluster on Atlas. I have a few collections with >65 million records, and the count keeps increasing.

When I update the records (by setting batchSize), it takes around 2-4 minutes, which I am OK with.
While the records are being updated, CPU & Disk Util are <80%.

My Node application has a queue of queries/commands to execute, and there are multiple instances of the Node app running. When two or more queries/commands are executed simultaneously, CPU & Disk Util become exhausted.

So, I just want to make sure that CPU & Disk utilisation are under a threshold (e.g. <40%) before sending the next query/command to MongoDB.

Is there any way to get the CPU, Disk Util & RAM status by running a query/command through the MongoDB driver in a Node app? Or is there any other solution to this problem?
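
For reference, a simplified sketch of the kind of batched update I'm describing (the collection, filter and field names here are just placeholders, not my actual schema):

```js
// Illustrative only: updating a large collection in batches with the official
// MongoDB Node driver. Collection, filter and field names are placeholders.
const { MongoClient } = require('mongodb');

async function updateInBatches(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const coll = client.db('mydb').collection('records');

  // Stream matching documents with a cursor batchSize, then apply updates in
  // bulkWrite chunks of the same size.
  const cursor = coll.find({ processed: false }, { batchSize: 1000 });
  let batch = [];

  for await (const doc of cursor) {
    batch.push({
      updateOne: { filter: { _id: doc._id }, update: { $set: { processed: true } } },
    });
    if (batch.length === 1000) {
      await coll.bulkWrite(batch, { ordered: false });
      batch = [];
    }
  }
  if (batch.length) await coll.bulkWrite(batch, { ordered: false });

  await client.close();
}
```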

Hi @Ashish_Zanwar - Welcome to the community!

When two or more queries/commands are executed simultaneously, CPU & Disk Util become exhausted.

I believe we should investigate the CPU & Disk Util exhaustion itself first rather than attempting to query for particular resource metric values to determine whether an operation can or cannot be executed based on a threshold. (Note: it could be that you've already optimised the workload/queries as much as possible, but please provide those details if possible.)

Although there may be ways you could query for the metrics, there are quite a few scenarios where this could cause further issues. For example, say you query the server at a particular point in time and find that it is under your required threshold. What if, by the time the operation is executed, the server is beyond the threshold? Or what if multiple processes get this response and then bombard the server all at once?

To further assist with this, could you provide more context about the workload or queries being executed that cause the resource exhaustion, as well as the effect of that exhaustion (a total stall, etc.)?

That said, the following Atlas documentation may be of use when investigating the queries in question:

When I update the records (by setting batchSize), it takes around 2-4 minutes, which I am OK with.
While the records are being updated, CPU & Disk Util are <80%.

Regarding the above, is the update being performed across all documents in the collection? Additionally, what would be the average document size?

Is there any way to get the CPU, Disk Util & RAM status by running a query/command through the MongoDB driver in a Node app? Or is there any other solution to this problem?

As mentioned above, doing this may lead to a “race condition” in which multiple processes each receive a value that is under the threshold and bombard the server all at once, leading to the resource exhaustion again.

That said, you could possibly integrate some of the following tools to assist:
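
For example, one option is the Atlas Administration API, which exposes per-process hardware and process metrics over HTTPS, authenticated with an API key pair via HTTP digest auth. A rough sketch of querying it from Node follows; the urllib package, the placeholder IDs/keys and the particular metric names chosen here are my assumptions, not something taken from your setup:

```js
// Rough sketch: fetch process measurements from the Atlas Administration API.
// Assumes an API key pair with read access to the project and the `urllib`
// package (which supports HTTP digest authentication). All placeholder values
// below must be replaced with real ones.
const { request } = require('urllib');

const GROUP_ID = '<project-id>';
const PROCESS_ID = '<host>:<port>'; // e.g. a cluster node's hostname and port
const PUBLIC_KEY = '<public-key>';
const PRIVATE_KEY = '<private-key>';

async function getProcessMeasurements() {
  const url =
    `https://cloud.mongodb.com/api/atlas/v1.0/groups/${GROUP_ID}` +
    `/processes/${PROCESS_ID}/measurements` +
    `?granularity=PT1M&period=PT10M&m=PROCESS_CPU_USER&m=SYSTEM_MEMORY_USED`;

  const { data } = await request(url, {
    digestAuth: `${PUBLIC_KEY}:${PRIVATE_KEY}`, // Atlas API uses HTTP digest auth
    dataType: 'json',
  });

  // The response contains a `measurements` array, one entry per requested
  // metric, each with a series of dataPoints.
  return data.measurements;
}

getProcessMeasurements().then((m) => console.log(JSON.stringify(m, null, 2)));
```

Disk metrics are served from a separate per-partition endpoint (`/processes/{processId}/disks/{partitionName}/measurements`). Even with this data available, though, the race-condition caveat above still applies.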

I am using an M40 cluster on Atlas. I have a few collections with >65 million records, and the count keeps increasing.

Have you considered upgrading to a higher-tier cluster to see if the resource exhaustion is eliminated or at least reduced? If a cluster tier upgrade resolves the issue, you would avoid needing other changes such as a locking mechanism or querying for hardware metrics; if it does not, then attempting to optimise the operations may help.
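
Alternatively, if some coordination in the application layer is still desired, a simple cap on how many heavy commands run concurrently avoids the race condition described above without polling hardware metrics. A minimal sketch follows; the limit of 2 and the helper names are placeholders, and this only throttles within a single Node instance, so multiple app instances would each need their own cap or a shared coordination mechanism:

```js
// Minimal sketch: cap how many heavy commands run at once in this process.
class Semaphore {
  constructor(max) {
    this.max = max;
    this.active = 0;
    this.waiting = [];
  }

  async acquire() {
    if (this.active < this.max) {
      this.active++;
      return;
    }
    // Wait until release() hands this waiter a slot; the slot stays counted
    // by the releasing call, so active is not incremented again here.
    await new Promise((resolve) => this.waiting.push(resolve));
  }

  release() {
    const next = this.waiting.shift();
    if (next) {
      next(); // hand the slot straight to the next waiter
    } else {
      this.active--;
    }
  }
}

const heavyOps = new Semaphore(2); // at most 2 heavy commands at a time

async function runThrottled(fn) {
  await heavyOps.acquire();
  try {
    return await fn();
  } finally {
    heavyOps.release();
  }
}

// Usage: each queue worker wraps its heavy commands, e.g.
// runThrottled(() => collection.updateMany(filter, update));
```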

Regards,
Jason

