MongoDB constantly reading/writing to disk at idle

Hey, I just expanded my RAID array for an upcoming project, and afterwards I noticed the drive heads never stop moving; you can hear them seeking back and forth 24/7. That didn't seem normal, so I decided to investigate. It took me a while, as I couldn't find anything utilizing the drives with tools like iotop, but I knew something was going on: monitoring with Prometheus showed around 2 I/O operations per second, reading/writing about 10 kB/s on each drive in the RAID array. Finally I stopped the MongoDB container and everything settled down, but the activity picked right back up after starting the container again.

This happens at idle: the container is simply running, and I am not currently writing to any database/collection or querying for any info. It's just sitting there waiting for me to finish the part of my project that uses MongoDB.

So my question is: is this normal and expected? If so, what exactly is going on that requires MongoDB to access the drives constantly? Is there a way to mitigate this behavior?

I couldn't edit the previous/original post, so here is what I'm looking at. All panels cover the same time frame, and the little blue boxes in the first couple of panels show where the container was stopped.

Looking at the logs, this is what I get. From what I understand, it's some type of state snapshot being written? Can anyone confirm this? Also, from my reading this should happen every 60 seconds, or when a predetermined size is reached, yet the disk activity seems to be happening every second and I'm not sure why.

If anyone could shed light on this, and perhaps on a way to extend the time period between snapshots, I would really appreciate it. Thank you.

{"t":{"$date":"2022-03-12T20:20:46.117+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1647116446:117751][1:0xffff97fb8cc0], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1492, snapshot max: 1492 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 12731"}}

{"t":{"$date":"2022-03-12T20:21:46.380+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1647116506:380135][1:0xffff97fb8cc0], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1494, snapshot max: 1494 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 12731"}}

{"t":{"$date":"2022-03-12T20:22:46.672+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1647116566:672487][1:0xffff97fb8cc0], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1496, snapshot max: 1496 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 12731"}}
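For what it's worth, a quick way to check how far apart these checkpoint messages actually are is to parse the timestamps. A minimal Python sketch, using the three log lines quoted above (with the quotes normalized to plain JSON and the long attr message trimmed):

```python
import json
from datetime import datetime

# The three WTCheckpointThread log lines from above, reduced to their timestamps.
lines = [
    '{"t":{"$date":"2022-03-12T20:20:46.117+00:00"},"c":"STORAGE","ctx":"WTCheckpointThread"}',
    '{"t":{"$date":"2022-03-12T20:21:46.380+00:00"},"c":"STORAGE","ctx":"WTCheckpointThread"}',
    '{"t":{"$date":"2022-03-12T20:22:46.672+00:00"},"c":"STORAGE","ctx":"WTCheckpointThread"}',
]

timestamps = [datetime.fromisoformat(json.loads(line)["t"]["$date"]) for line in lines]
deltas = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
print(deltas)  # each checkpoint lands roughly 60 s after the previous one
```

So the checkpoint log entries themselves are about a minute apart, consistent with the documented 60-second default; whatever is hitting the disk every second must be something other than these checkpoints.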

Hi @Art

There's not enough information to determine whether it's definitely caused by the mongod process, but off the top of my head, mongod does have a Full Time Diagnostic Data Capture (FTDC) process that records telemetry approximately every second. This data is typically used by MongoDB engineers to investigate issues with the server.
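If you want to confirm whether FTDC is responsible, it can be switched off with a server parameter. A sketch of a mongod.conf fragment (note that MongoDB support relies on this diagnostic data, so it's usually best left enabled outside of a quick test):

```yaml
# mongod.conf -- temporarily disable Full Time Diagnostic Data Capture (FTDC)
# to see whether the ~1/sec disk writes stop.
setParameter:
  diagnosticDataCollectionEnabled: false
```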

Another possibility that comes to mind is the Database Profiler: if it is active, it may record data periodically as well.
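You can check the profiler's current state with db.getProfilingStatus() in the shell. In the config file it corresponds to the operationProfiling block; a sketch showing the default (profiling off):

```yaml
# mongod.conf -- profiler settings; "off" is the default mode.
# (Quote "off" in YAML so it isn't parsed as the boolean false.)
operationProfiling:
  mode: "off"   # valid modes: off | slowOp | all
```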

Also, a replica set Primary writes a periodic no-op entry into the oplog every ~10 seconds to keep track of how up to date the Secondaries are, for operational purposes.

Those are the processes I can think of off the top of my head that touch the disk periodically. Having said that, it's also possible that the activity is a combination of all of the above, plus some monitoring software external to mongod that you may have running. As long as you don't see any performance issues, it should be safe to ignore these events for the time being; otherwise you may inadvertently turn off some process that's important to the operation of the database :)

Best regards