Unnecessary flooding of "local.oplog.rs" getMore command logs on secondary instance

On a three-node MongoDB cluster, one node is being flooded with getMore logs as well as admin.$cmd replSetUpdatePosition logs, which is increasing the production disk size exponentially.

These logs are generated only on the mongodb secondary02 instance.

Average load, connections, mongotop, and mongostat output are the same as on the primary and secondary01 instances.

Any idea of a possible RCA?

Logs for reference:

COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 6), t: 32 }, durableWallTime: new Date(1645090502027), appliedOpTime: { ts: Timestamp(1645090502, 6), t: 32 }, appliedWallTime: new Date(1645090502027), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 3), t: 32 }, lastCommittedWall: new Date(1645090502022), lastOpVisible: { ts: Timestamp(1645090502, 3), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 8), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.033+0000 I  COMMAND  [conn215] command local.oplog.rs command: getMore { getMore: 919477960749797777, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 32, lastKnownCommittedOpTime: { ts: Timestamp(1645090502, 3), t: 32 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1645090502, 9), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1644907037, 55) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 32, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1644907183, 171), signature: { hash: BinData(0, ABC413A3E021052A54C17BC8D43B5D660335BF0C), keyId: 7013241102522122241 } }, $db: "local" } planSummary: COLLSCAN cursorid:919477960749797777 keysExamined:0 docsExamined:1 numYields:0 nreturned:1 reslen:1394 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2022-02-17T09:35:02.035+0000 I  COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 6), t: 32 }, durableWallTime: new Date(1645090502027), appliedOpTime: { ts: Timestamp(1645090502, 7), t: 32 }, appliedWallTime: new Date(1645090502028), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 7), t: 32 }, lastCommittedWall: new Date(1645090502028), lastOpVisible: { ts: Timestamp(1645090502, 7), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 9), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.035+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find { find: "pc_connected_devices", filter: { cdMac: "" }, $db: "abc", $clusterTime: { clusterTime: Timestamp(1645090502, 10), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, lsid: { id: UUID("d1234") }, $readPreference: { mode: "secondaryPreferred" } } planSummary: IXSCAN { cdMac: 1 } keysExamined:3 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 queryHash:4EC5CACF planCacheKey:93BC0E72 reslen:1925 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2022-02-17T09:35:02.036+0000 I  COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 7), t: 32 }, durableWallTime: new Date(1645090502028), appliedOpTime: { ts: Timestamp(1645090502, 7), t: 32 }, appliedWallTime: new Date(1645090502028), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 7), t: 32 }, lastCommittedWall: new Date(1645090502028), lastOpVisible: { ts: Timestamp(1645090502, 7), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 10), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.036+0000 I  COMMAND  [conn215] command local.oplog.rs command: getMore { getMore: 919477960749797777, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 32, lastKnownCommittedOpTime: { ts: Timestamp(1645090502, 7), t: 32 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1645090502, 10), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1644907037, 55) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 32, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1644907183, 171), signature: { hash: BinData(0, ABC413A3E021052A54C17BC8D43B5D660335BF0C), keyId: 7013241102522122241 } }, $db: "local" } planSummary: COLLSCAN cursorid:919477960749797777 keysExamined:0 docsExamined:2 numYields:0 nreturned:2 reslen:2143 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2022-02-17T09:35:02.036+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find { find: "pc_connected_devices", filter: { cdMac: "" }, $db: "abc", $clusterTime: { clusterTime: Timestamp(1645090502, 11), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, lsid: { id: UUID("d1234") }, $readPreference: { mode: "secondaryPreferred" } } planSummary: IXSCAN { cdMac: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 queryHash:4EC5CACF planCacheKey:93BC0E72 reslen:1395 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 10239, timeReadingMicros: 14 } } protocol:op_msg 0ms
2022-02-17T09:35:02.038+0000 I  COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 7), t: 32 }, durableWallTime: new Date(1645090502028), appliedOpTime: { ts: Timestamp(1645090502, 9), t: 32 }, appliedWallTime: new Date(1645090502032), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 7), t: 32 }, lastCommittedWall: new Date(1645090502028), lastOpVisible: { ts: Timestamp(1645090502, 7), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 11), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.038+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find { find: "pc_connected_devices", filter: { cdMac: "" }, $db: "abc", $clusterTime: { clusterTime: Timestamp(1645090502, 12), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, lsid: { id: UUID("d1234") }, $readPreference: { mode: "secondaryPreferred" } } planSummary: IXSCAN { cdMac: 1 } keysExamined:3 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 queryHash:4EC5CACF planCacheKey:93BC0E72 reslen:1938 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 15295, timeReadingMicros: 15 } } protocol:op_msg 0ms
2022-02-17T09:35:02.039+0000 I  COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 9), t: 32 }, durableWallTime: new Date(1645090502032), appliedOpTime: { ts: Timestamp(1645090502, 9), t: 32 }, appliedWallTime: new Date(1645090502032), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 7), t: 32 }, lastCommittedWall: new Date(1645090502028), lastOpVisible: { ts: Timestamp(1645090502, 7), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 12), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.039+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find { find: "pc_connected_devices", filter: { cdMac: "" }, $db: "abc", $clusterTime: { clusterTime: Timestamp(1645090502, 13), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, lsid: { id: UUID("d1234") }, $readPreference: { mode: "secondaryPreferred" } } planSummary: IXSCAN { cdMac: 1 } keysExamined:3 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 queryHash:4EC5CACF planCacheKey:93BC0E72 reslen:1995 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 11563, timeReadingMicros: 12 } } protocol:op_msg 0ms
2022-02-17T09:35:02.039+0000 I  COMMAND  [conn173] command local.oplog.rs command: getMore { getMore: 919477960749797777, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 32, lastKnownCommittedOpTime: { ts: Timestamp(1645090502, 7), t: 32 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1645090502, 12), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1644907037, 55) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 32, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1644907183, 171), signature: { hash: BinData(0, ABC413A3E021052A54C17BC8D43B5D660335BF0C), keyId: 7013241102522122241 } }, $db: "local" } planSummary: COLLSCAN cursorid:919477960749797777 keysExamined:0 docsExamined:1 numYields:0 nreturned:1 reslen:1386 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2022-02-17T09:35:02.041+0000 I  COMMAND  [conn173] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 9), t: 32 }, durableWallTime: new Date(1645090502032), appliedOpTime: { ts: Timestamp(1645090502, 10), t: 32 }, appliedWallTime: new Date(1645090502034), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 10), t: 32 }, lastCommittedWall: new Date(1645090502034), lastOpVisible: { ts: Timestamp(1645090502, 10), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 13), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.042+0000 I  COMMAND  [conn173] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 10), t: 32 }, durableWallTime: new Date(1645090502034), appliedOpTime: { ts: Timestamp(1645090502, 10), t: 32 }, appliedWallTime: new Date(1645090502034), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 10), t: 32 }, lastCommittedWall: new Date(1645090502034), lastOpVisible: { ts: Timestamp(1645090502, 10), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 14), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.042+0000 I  COMMAND  [conn215] command local.oplog.rs command: getMore { getMore: 919477960749797777, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 32, lastKnownCommittedOpTime: { ts: Timestamp(1645090502, 10), t: 32 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1645090502, 14), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1644907037, 55) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 32, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1644907183, 171), signature: { hash: BinData(0, ABC413A3E021052A54C17BC8D43B5D660335BF0C), keyId: 7013241102522122241 } }, $db: "local" } planSummary: COLLSCAN cursorid:919477960749797777 keysExamined:0 docsExamined:3 numYields:0 nreturned:3 reslen:2847 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2022-02-17T09:35:02.042+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find { find: "pc_connected_devices", filter: { cdMac: "" }, $db: "abc", $clusterTime: { clusterTime: Timestamp(1645090502, 15), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, lsid: { id: UUID("d1234") }, $readPreference: { mode: "secondaryPreferred" } } planSummary: IXSCAN { cdMac: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:4EC5CACF planCacheKey:93BC0E72 reslen:828 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2022-02-17T09:35:02.043+0000 I  COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 10), t: 32 }, durableWallTime: new Date(1645090502034), appliedOpTime: { ts: Timestamp(1645090502, 13), t: 32 }, appliedWallTime: new Date(1645090502038), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 10), t: 32 }, lastCommittedWall: new Date(1645090502034), lastOpVisible: { ts: Timestamp(1645090502, 10), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 14), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.044+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find { find: "pc_connected_devices", filter: { cdMac: "" }, $db: "abc", $clusterTime: { clusterTime: Timestamp(1645090502, 16), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, lsid: { id: UUID("d1234") }, $readPreference: { mode: "secondaryPreferred" } } planSummary: IXSCAN { cdMac: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:4EC5CACF planCacheKey:93BC0E72 reslen:831 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2022-02-17T09:35:02.044+0000 I  COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 13), t: 32 }, durableWallTime: new Date(1645090502038), appliedOpTime: { ts: Timestamp(1645090502, 13), t: 32 }, appliedWallTime: new Date(1645090502038), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 10), t: 32 }, lastCommittedWall: new Date(1645090502034), lastOpVisible: { ts: Timestamp(1645090502, 10), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 16), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.045+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find { find: "pc_connected_devices", filter: { cdMac: "" }, $db: "abc", $clusterTime: { clusterTime: Timestamp(1645090502, 17), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, lsid: { id: UUID("d1234") }, $readPreference: { mode: "secondaryPreferred" } } planSummary: IXSCAN { cdMac: 1 } keysExamined:3 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 queryHash:4EC5CACF planCacheKey:93BC0E72 reslen:1970 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 9405, timeReadingMicros: 12 } } protocol:op_msg 0ms
2022-02-17T09:35:02.045+0000 I  COMMAND  [conn215] command local.oplog.rs command: getMore { getMore: 919477960749797777, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 32, lastKnownCommittedOpTime: { ts: Timestamp(1645090502, 10), t: 32 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1645090502, 16), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1644907037, 55) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 32, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1644907183, 171), signature: { hash: BinData(0, ABC413A3E021052A54C17BC8D43B5D660335BF0C), keyId: 7013241102522122241 } }, $db: "local" } planSummary: COLLSCAN cursorid:919477960749797777 keysExamined:0 docsExamined:1 numYields:0 nreturned:1 reslen:1408 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2022-02-17T09:35:02.046+0000 I  COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 13), t: 32 }, durableWallTime: new Date(1645090502038), appliedOpTime: { ts: Timestamp(1645090502, 14), t: 32 }, appliedWallTime: new Date(1645090502040), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 14), t: 32 }, lastCommittedWall: new Date(1645090502040), lastOpVisible: { ts: Timestamp(1645090502, 14), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 17), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.046+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find { find: "pc_connected_devices", filter: { cdMac: "" }, $db: "abc", $clusterTime: { clusterTime: Timestamp(1645090502, 18), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, lsid: { id: UUID("d1234") }, $readPreference: { mode: "secondaryPreferred" } } planSummary: IXSCAN { cdMac: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:4EC5CACF planCacheKey:93BC0E72 reslen:818 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 15758, timeReadingMicros: 16 } } protocol:op_msg 0ms
2022-02-17T09:35:02.047+0000 I  COMMAND  [conn215] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 0, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090502, 14), t: 32 }, durableWallTime: new Date(1645090502040), appliedOpTime: { ts: Timestamp(1645090502, 14), t: 32 }, appliedWallTime: new Date(1645090502040), memberId: 1, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 2, cfgver: 4 }, { durableOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, durableWallTime: new Date(1645090501137), appliedOpTime: { ts: Timestamp(1645090501, 1), t: 32 }, appliedWallTime: new Date(1645090501137), memberId: 3, cfgver: 4 } ], $replData: { term: 32, lastOpCommitted: { ts: Timestamp(1645090502, 14), t: 32 }, lastCommittedWall: new Date(1645090502040), lastOpVisible: { ts: Timestamp(1645090502, 14), t: 32 }, configVersion: 4, replicaSetId: ObjectId('5e145aba610908a7a851ae69'), primaryIndex: 0, syncSourceIndex: 2 }, $clusterTime: { clusterTime: Timestamp(1645090502, 18), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "admin" } numYields:0 reslen:396 locks:{} protocol:op_msg 0ms
2022-02-17T09:35:02.048+0000 I  COMMAND  [conn173] command local.oplog.rs command: getMore { getMore: 919477960749797777, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 32, lastKnownCommittedOpTime: { ts: Timestamp(1645090502, 14), t: 32 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1645090502, 18), signature: { hash: BinData(0, 8AAC08BD9DBDAA1AB0FD3D10CF7C94B9E2BB6214), keyId: 7013241102522122241 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1644907037, 55) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 32, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode:

Hi @rakhi_maheshwari1

Looking at the logs you posted, I found a couple of interesting things:

2022-02-17T09:35:02.036+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find
2022-02-17T09:35:02.038+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find
2022-02-17T09:35:02.039+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find
2022-02-17T09:35:02.042+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find
2022-02-17T09:35:02.044+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find
2022-02-17T09:35:02.045+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find
2022-02-17T09:35:02.046+0000 I  COMMAND  [conn81] command abc.pc_connected_devices command: find

From this snippet of selected lines it appears that there is an app trying to execute a find command 7 times within 10 ms. I don’t know if this is normal, but checking where conn81 originated from might provide further clues.

2022-02-17T09:35:02.036+0000 I  COMMAND  [conn215] command local.oplog.rs command: getMore
2022-02-17T09:35:02.039+0000 I  COMMAND  [conn173] command local.oplog.rs command: getMore
2022-02-17T09:35:02.042+0000 I  COMMAND  [conn215] command local.oplog.rs command: getMore
2022-02-17T09:35:02.045+0000 I  COMMAND  [conn215] command local.oplog.rs command: getMore
2022-02-17T09:35:02.048+0000 I  COMMAND  [conn173] command local.oplog.rs command: getMore

From this snippet, it appears that conn215 and conn173 are tailing the oplog, sending getMore commands 5 times within about 12 ms (which appears to mirror what conn81 was doing). This is the common command between all of them:

originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1644907037, 55) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 32, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }

So in terms of the oplog, two connections are tailing it, and reading it using the secondaryPreferred read preference.

I would suggest checking which app originates conn81, conn173, and conn215 (the logs should have this information when the client first connects), and seeing whether it is a misbehaving app.
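For example, here is a minimal sketch from the mongo shell (run on the affected secondary) that shows the client address and appName reported for those connections by db.currentOp(); the connection names below are just the ones from your log excerpt:

// A minimal sketch: list the client host and application name for specific connections.
// desc, client, and appName are standard fields of db.currentOp() output.
var conns = ["conn81", "conn173", "conn215"];
db.currentOp(true).inprog.forEach(function (op) {
  if (conns.indexOf(op.desc) !== -1) {
    printjson({ conn: op.desc, client: op.client, appName: op.appName });
  }
});

A connection opened by another replica set member for replication would show that member's address as the client, which helps distinguish a peer node tailing the oplog from an external application.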

Best regards
Kevin


Hi @kevinadi

We have checked all the applications and they are working as expected; even mongotop and mongostat show the same number of insert, read, and update requests.
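For reference, a minimal sketch of how the raw operation counters can be compared directly from the mongo shell on each member (the fields used are standard db.serverStatus() output):

// A rough check, run on each member, to compare operation volume.
var s = db.serverStatus();
printjson({ host: s.host, opcounters: s.opcounters, opcountersRepl: s.opcountersRepl, connections: s.connections.current });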

CPU, memory, and load average (via our monitoring API) show the same trend as before the issue was reported.

Also, these logs are reported only on the mongodb03 server, not on mongodb02.

Is there any other issue that could be causing this oplog behaviour?

Hi @rakhi_maheshwari1

What is your MongoDB version, and what is the topology of your deployment? For example: are all mongod processes located on separate servers, and what is the output of rs.status(), rs.conf(), and any other deployment-specific information that would help?
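A compact way to gather that from the mongo shell on each member is sketched below (syncSourceHost exists only on newer versions; older releases report syncingTo instead):

// A minimal sketch for collecting the deployment details requested above.
print("MongoDB version: " + db.version());
rs.status().members.forEach(function (m) {
  print(m.name + "  " + m.stateStr + "  sync source: " + (m.syncSourceHost || m.syncingTo || "-"));
});
printjson(rs.conf().members.map(function (m) {
  return { host: m.host, priority: m.priority, hidden: m.hidden, votes: m.votes };
}));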

Also, when you said originally that it “increases production disk size exponentially”, could you check what file is taking the most space inside the dbpath?
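If it's easier, a complementary view from the mongo shell is sketched below; it shows per-database size on disk rather than per-file usage, so it doesn't replace looking at the dbpath directly:

// A sketch: per-database size on disk, largest first (sizeOnDisk is in bytes).
db.adminCommand({ listDatabases: 1 }).databases
  .sort(function (a, b) { return b.sizeOnDisk - a.sizeOnDisk; })
  .forEach(function (d) {
    print(d.name + "  " + (d.sizeOnDisk / 1024 / 1024 / 1024).toFixed(1) + " GB");
  });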

Best regards
Kevin

Hi @kevinadi,

The MongoDB version we are using is 4.0. All mongod nodes are launched on separate instances, but in the same region.

Please find the dbpath output below.

total 201G
-rw------- 1 mongod mongod 21 Jan 6 2020 WiredTiger.lock
-rw------- 1 mongod mongod 47 Jan 6 2020 WiredTiger
-rw------- 1 mongod mongod 114 Jan 6 2020 storage.bson
-r-------- 1 mongod mongod 1.0K Jan 6 2020 rsetkey
-rw------- 1 mongod mongod 20K Jan 7 2020 index-5--6107785091052691230.wt
-rw------- 1 mongod mongod 20K Jan 7 2020 index-3--6107785091052691230.wt
-rw------- 1 mongod mongod 20K Jan 7 2020 index-9--6107785091052691230.wt
-rw------- 1 mongod mongod 4.0K Jan 7 2020 index-27--6107785091052691230.wt
-rw------- 1 mongod mongod 36K May 25 2020 index-17--6107785091052691230.wt
drwx------ 4 mongod mongod 80 Sep 29 06:40 rollback
-rw------- 1 mongod mongod 36K Dec 7 06:46 index-29--6107785091052691230.wt
-rw------- 1 mongod mongod 24K Dec 7 06:46 collection-4-7139555439410204424.wt
-rw------- 1 mongod mongod 20K Feb 2 06:46 index-6--8070384122795160933.wt
-rw------- 1 mongod mongod 20K Feb 2 06:46 index-14--8070384122795160933.wt
-rw------- 1 mongod mongod 20K Feb 2 06:46 index-16--8070384122795160933.wt
-rw------- 1 root root 4.0K Feb 15 06:35 WiredTigerLAS.wt
-rw------- 1 mongod mongod 44K Feb 15 06:35 _mdb_catalog.wt
-rw------- 1 mongod mongod 6 Feb 15 06:35 mongod.lock
-rw------- 1 mongod mongod 36K Feb 15 06:35 collection-8--6107785091052691230.wt
-rw------- 1 mongod mongod 20K Feb 15 06:35 index-19--6107785091052691230.wt
-rw------- 1 mongod mongod 20K Feb 15 06:35 collection-18--6107785091052691230.wt
-rw------- 1 mongod mongod 20K Feb 15 06:35 index-7--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 15 06:35 collection-6--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 15 06:35 collection-4--6107785091052691230.wt
-rw------- 1 mongod mongod 20K Feb 15 06:35 index-1--6107785091052691230.wt
-rw------- 1 mongod mongod 4.0K Feb 15 06:35 collection-26--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 15 06:35 collection-28--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 15 06:35 index-16--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 15 06:35 collection-15--6107785091052691230.wt
-rw------- 1 mongod mongod 60K Feb 15 06:36 collection-46--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 15 06:36 index-3--5090926036936016609.wt
-rw------- 1 mongod mongod 44K Feb 15 06:36 collection-2--5090926036936016609.wt
-rw------- 1 mongod mongod 36K Feb 15 06:45 index-45--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 15 06:45 collection-44--6107785091052691230.wt
-rw------- 1 mongod mongod 20K Feb 15 07:00 collection-0-4286738152977983213.wt
-rw------- 1 mongod mongod 20K Feb 15 08:20 collection-0--49572441777440768.wt
-rw------- 1 mongod mongod 20K Feb 15 10:30 index-1--49572441777440768.wt
-rw------- 1 mongod mongod 36K Feb 16 00:02 index-47--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 16 00:02 index-1--1645510594599633921.wt
-rw------- 1 mongod mongod 36K Feb 16 00:02 collection-0--1645510594599633921.wt
-rw------- 1 mongod mongod 20K Feb 16 00:02 index-1-4286738152977983213.wt
-rw------- 1 mongod mongod 76K Feb 25 21:43 index-5--49572441777440768.wt
-rw------- 1 mongod mongod 52K Feb 25 21:43 index-28--8070384122795160933.wt
-rw------- 1 mongod mongod 244K Feb 25 21:45 collection-4--49572441777440768.wt
-rw------- 1 mongod mongod 12K Feb 28 00:03 index-5-7139555439410204424.wt
-rw------- 1 mongod mongod 76K Feb 28 05:10 index-4--8070384122795160933.wt
-rw------- 1 mongod mongod 124K Feb 28 05:10 index-59--6107785091052691230.wt
-rw------- 1 mongod mongod 68K Feb 28 05:23 index-3--49572441777440768.wt
-rw------- 1 mongod mongod 52K Feb 28 05:23 index-24--8070384122795160933.wt
-rw------- 1 mongod mongod 104K Feb 28 05:23 collection-2--49572441777440768.wt
-rw------- 1 mongod mongod 52K Feb 28 05:23 index-26--8070384122795160933.wt
-rw------- 1 mongod mongod 3.8M Feb 28 05:47 index-51--6107785091052691230.wt
-rw------- 1 mongod mongod 1.9M Feb 28 05:47 index-2--8070384122795160933.wt
-rw------- 1 mongod mongod 2.0M Feb 28 05:47 index-0--8070384122795160933.wt
-rw------- 1 mongod mongod 31M Feb 28 05:47 collection-50--6107785091052691230.wt
-rw------- 1 mongod mongod 384K Feb 28 05:47 index-4-1244580582339824502.wt
-rw------- 1 mongod mongod 2.5M Feb 28 05:47 index-2-1244580582339824502.wt
-rw------- 1 mongod mongod 1.1M Feb 28 05:47 index-0-1244580582339824502.wt
-rw------- 1 mongod mongod 7.7M Feb 28 05:47 index-55--6107785091052691230.wt
-rw------- 1 mongod mongod 155M Feb 28 05:51 index-3--1645510594599633921.wt
-rw------- 1 mongod mongod 730M Feb 28 05:51 collection-2--1645510594599633921.wt
-rw------- 1 mongod mongod 400K Feb 28 05:51 collection-58--6107785091052691230.wt
-rw------- 1 mongod mongod 52K Feb 28 05:51 index-23--6107785091052691230.wt
-rw------- 1 mongod mongod 52K Feb 28 05:53 index-12--8070384122795160933.wt
-rw------- 1 mongod mongod 44K Feb 28 05:53 index-8--8070384122795160933.wt
-rw------- 1 mongod mongod 472K Feb 28 05:53 index-57--6107785091052691230.wt
-rw------- 1 mongod mongod 68K Feb 28 05:53 index-10--8070384122795160933.wt
-rw------- 1 mongod mongod 1.9M Feb 28 05:53 collection-56--6107785091052691230.wt
-rw------- 1 mongod mongod 29M Feb 28 05:54 index-0--9168161409917501209.wt
-rw------- 1 mongod mongod 197M Feb 28 05:54 collection-52--6107785091052691230.wt
-rw------- 1 mongod mongod 64M Feb 28 05:54 index-53--6107785091052691230.wt
-rw------- 1 mongod mongod 542M Feb 28 05:54 index-30--8070384122795160933.wt
-rw------- 1 mongod mongod 2.5G Feb 28 05:54 index-42--6107785091052691230.wt
-rw------- 1 mongod mongod 3.0G Feb 28 05:54 index-38--6107785091052691230.wt
-rw------- 1 mongod mongod 5.8M Feb 28 05:54 index-22--8070384122795160933.wt
-rw------- 1 mongod mongod 650M Feb 28 05:54 index-36--6107785091052691230.wt
-rw------- 1 mongod mongod 15M Feb 28 05:54 index-20--8070384122795160933.wt
-rw------- 1 mongod mongod 2.9G Feb 28 05:54 index-32--6107785091052691230.wt
-rw------- 1 mongod mongod 617M Feb 28 05:54 index-34--6107785091052691230.wt
-rw------- 1 mongod mongod 15M Feb 28 05:54 index-18--8070384122795160933.wt
-rw------- 1 mongod mongod 328M Feb 28 05:54 collection-54--6107785091052691230.wt
-rw------- 1 mongod mongod 67M Feb 28 05:54 collection-48--6107785091052691230.wt
-rw------- 1 mongod mongod 23M Feb 28 05:54 index-49--6107785091052691230.wt
-rw------- 1 mongod mongod 13G Feb 28 05:54 collection-30--6107785091052691230.wt
-rw------- 1 mongod mongod 1.4G Feb 28 05:54 index-31--6107785091052691230.wt
-rw------- 1 mongod mongod 9.7G Feb 28 05:54 index-2-7139555439410204424.wt
-rw------- 1 mongod mongod 36K Feb 28 05:54 index-21--6107785091052691230.wt
-rw------- 1 mongod mongod 17G Feb 28 05:54 index-1-7139555439410204424.wt
-rw------- 1 mongod mongod 36K Feb 28 05:54 collection-2--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 28 05:54 collection-20--6107785091052691230.wt
-rw------- 1 mongod mongod 144G Feb 28 05:54 collection-0-7139555439410204424.wt
-rw------- 1 mongod mongod 36K Feb 28 05:54 collection-0--6107785091052691230.wt
-rw------- 1 mongod mongod 36K Feb 28 05:54 sizeStorer.wt
-rw------- 1 mongod mongod 5.5G Feb 28 05:54 collection-14--6107785091052691230.wt
-rw------- 1 mongod mongod 316K Feb 28 05:54 WiredTiger.wt
-rw------- 1 root root 1.2K Feb 28 05:54 WiredTiger.turtle
drwx------ 2 mongod mongod 110 Feb 28 05:54 journal
drwx------ 2 mongod mongod 4.0K Feb 28 05:55 diagnostic.data

Also, network traffic has started to overload mongodb03, and mongodb02 has automatically started taking less traffic.

Hi @rakhi_maheshwari1

I don’t see anything of concern in the dbpath you posted. The largest file there is collection-0-7139555439410204424.wt but it’s a normal collection file so I don’t believe it’s an issue.

You can check which collection is using that file by running db.collection.stats().wiredTiger.uri on all the collections in all the databases on that server.
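A rough sketch of that loop from the mongo shell (it assumes the user has read access to all databases and skips any namespace it cannot stat, such as views):

// Map each collection to its WiredTiger data file.
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
  var dbh = db.getSiblingDB(d.name);
  dbh.getCollectionNames().forEach(function (c) {
    try {
      var stats = dbh.getCollection(c).stats();
      if (stats.wiredTiger && stats.wiredTiger.uri) {
        // uri looks like "statistics:table:collection-0-7139555439410204424"
        print(stats.wiredTiger.uri + "  ->  " + d.name + "." + c);
      }
    } catch (e) {
      // skip views and namespaces we cannot stat
    }
  });
});

Matching the table name in the uri against collection-0-7139555439410204424.wt should tell you which namespace that 144G file belongs to.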

Best regards
Kevin