MongoDB logs: what does bytesRead mean in the slow query log?

I got the log entry below in the MongoDB slow query log. I am running MongoDB 5.0 with 3 shards in PSS mode.

The query filters on the primary key, and the total document size itself is only 4.4 MB, but the slow query log shows about 64 MB read. How can an update of one document cause that much data transfer?

 {"t":{"$date":"2023-03-15T16:02:14.635+05:30"},"s":"I",  "c":"WRITE",    "id":51803,   "ctx":"conn5559","msg":"Slow query","attr":{"type":"update","ns":"db_name.XXXXXXX","command":{"q":{"uid":308793847},"u":{"$push":{"ps":{"$each":[{"mid":5109,"aid":1412,"trid":"89461-5109-308793847-1412-230315072428","guid":"3919037b-18ce-4973-ad00-1891bc7365e3","st":"sent","dt":230315072428,"adw":4,"ad":230315,"at":72428}],"$slice":-5000}}},"multi":false,"upsert":false},"planSummary":"IXSCAN { uid: 1 }","keysExamined":1,"docsExamined":1,"nMatched":1,"nModified":1,"nUpserted":0,"keysInserted":1,"keysDeleted":0,"numYields":1,"queryHash":"B34121E2","planCacheKey":"CFF4BBD8","locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":289}},"ReplicationStateTransition":{"acquireCount":{"w":290}},"Global":{"acquireCount":{"w":289}},"Database":{"acquireCount":{"w":289}},"Collection":{"acquireCount":{"w":289}},"Mutex":{"acquireCount":{"r":670}}},"flowControl":{"acquireCount":155,"timeAcquiringMicros":131},"storage":{"data":{**"bytesRead":64292681**,"timeReadingMicros":337574}},"remote":"172.31.22.28:60864","durationMillis":103}}

Can someone help me with this?

Hey @Kathiresh_Nadar,

Welcome to the MongoDB Community forums ✨

Apologies for the late reply.

The bytesRead value is the number of bytes the operation read from disk into the cache. If the data is already in the cache, the number of bytes read from disk can be 0.

  • The bytesRead value may include more than just the queried documents, because WiredTiger reads from disk in units of pages, and a single page can contain multiple documents. Every document on a page that is read is loaded into the cache and counted in bytesRead.

  • Furthermore, if the index is not in the cache or has gone stale, WiredTiger reads several internal and leaf pages from disk to reconstruct the index in the cache.

Please refer to the Database Profiler Output - storage.data.bytesRead documentation to read more about this.
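If you want to see this for your own workload, here is a minimal mongosh sketch (the 100 ms threshold is just an example) that enables the profiler on the shard and pulls the storage stats for recent slow updates:

```javascript
// Profile operations slower than 100 ms (level 1 = slow operations only)
db.setProfilingLevel(1, { slowms: 100 })

// After reproducing the slow update, inspect the profiler entries.
// storage.data.bytesRead is only present when the operation read from disk.
db.system.profile
  .find({ op: "update", "storage.data.bytesRead": { $exists: true } })
  .sort({ ts: -1 })
  .limit(5)
  .forEach(p => printjson({
    ns: p.ns,
    millis: p.millis,
    bytesRead: p.storage.data.bytesRead,
    timeReadingMicros: p.storage.data.timeReadingMicros
  }))
```

Note that on a sharded cluster the profiler is per-mongod, so run this against the shard's primary rather than through mongos.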

I hope this addresses your question. Let us know if you have any further questions.

Best,
Kushagra


Hi @Kushagra_Kesav ,

Thanks for responding. I had stopped checking back because the reply took a while, and I thought I might not get a response.

Coming to my point: we are seeing that the writes take a lot of time, and since we filter on the primary key alone, no additional indexing can solve the problem. But I can see that memory may be the problem, since, as you mentioned, each read pulls lots of other records into the cache as well.

We have 256 GB of RAM on the server, and if each document update brings in around 200 MB of data, we would need memory in the terabytes.
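As a sanity check on the cache pressure, the relevant counters can be read from serverStatus in mongosh (a minimal sketch; the field names are standard WiredTiger cache statistics):

```javascript
// Inspect WiredTiger cache usage on the shard's primary.
const cache = db.serverStatus().wiredTiger.cache
printjson({
  maxBytesConfigured:    cache["maximum bytes configured"],
  bytesCurrentlyInCache: cache["bytes currently in the cache"],
  bytesReadIntoCache:    cache["bytes read into cache"],
  pagesReadIntoCache:    cache["pages read into cache"]
})
```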

So how do I solve the problem? Is there any MongoDB fine-tuning that can be done?

Thanks
Kathiresh

What is the problem here? What issues do you see in the server metrics?

Hi @Kobe_W ,

The writes are taking too much time; for example, even an upsert into a document takes 10 to 20 seconds. And we are doing the upsert on the primary key, so I think it should be fast.

So how can we bring the update time down to less than 1 second?
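One way to see where the time goes on a single update is to explain it in mongosh (a minimal sketch; the collection name and filter are taken from the log above, and the $set body is only a placeholder):

```javascript
// Explain the update without actually modifying data, to confirm the
// plan is an IXSCAN on { uid: 1 } and to see per-stage execution times.
db.getCollection("XXXXXXX")
  .explain("executionStats")
  .update(
    { uid: 308793847 },               // filter taken from the slow query log
    { $set: { placeholderField: 1 } } // placeholder update for illustration only
  )
```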

Regards

It’s hard to say without more info.

Anything in the end-to-end flow can slow down the whole path. Resources like CPU, disk, and memory, network conditions, connection pooling, etc. can all potentially be the cause of slow writes.

I would suggest you take a look at all available dashboards and try to narrow down the scope of the problem, e.g. is it on the client side, on the server side, or a network issue? That’s why you need logging and all those metrics.
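For example, on the server side you can check whether long-running operations show up inside mongod itself (a minimal sketch; the 1-second threshold is just an example):

```javascript
// List operations that have been running for more than 1 second, to see
// whether the latency is inside mongod or elsewhere in the path.
db.currentOp({ active: true, secs_running: { $gt: 1 } }).inprog
  .forEach(op => printjson({
    opid: op.opid,
    op: op.op,
    ns: op.ns,
    secs_running: op.secs_running
  }))
```

If nothing shows up here while the client still sees 10–20 second upserts, that points at the client side or the network rather than the server.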