Issue when upgrading MongoDB 4.4.8 to 6.0.4

Hi,

We have a replica set with three nodes, each running in a Docker container on the same host, on a 32 GB filesystem that holds the data of the three MongoDB nodes.
We ran into a problem after upgrading the replica set from version 4.4.8 to 5.0 and then to 6.0.4: after about two hours on 6.0.4, we see the filesystem filling up with temporary files caused by a MongoDB query.
On node 1, after MongoDB has completely filled the filesystem, we see this message in the log:

{"log":"{\"t\":{\"$date\":\"2023-02-27T16:35:02.652+00:00\"},\"s\":\"W\",  \"c\":\"QUERY\",    \"id\":23798,   \"ctx\":\"conn13376\",\"msg\":\"Plan executor error during find command\",\"attr\":{\"error\":{\"code\":5642403,\"codeName\":\"Location5642403\",\"errmsg\":\"Error writing to file /data/db/_tmp/extsort-sort-executor.480: errno:28 No space left on device\"},\"stats\":{\"stage\":\"SORT\",\"nReturned\":0,\"works\":162854,\"advanced\":0,\"needTime\":162853,\"needYield\":0,\"saveState\":174,\"restoreState\":174,\"failed\":true,\"isEOF\":0,\"sortPattern\":{\"-$natural\":1},\"memLimit\":104857600,\"type\":\"simple\",\"totalDataSizeSorted\":0,\"usedDisk\":false,\"spills\":0,\"inputStage\":{\"stage\":\"COLLSCAN\",\"nReturned\":162853,\"works\":162854,\"advanced\":162853,\"needTime\":1,\"needYield\":0,\"saveState\":174,\"restoreState\":174,\"isEOF\":0,\"direction\":\"forward\",\"docsExamined\":162853}},\"cmd\":{\"find\":\"oplog.rs\",\"filter\":{},\"sort\":{\"-$natural\":1},\"lsid\":{\"id\":{\"$uuid\":\"d77e409f-c760-4a20-9d06-2354573a53aa\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1677515699,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"K2Zxssj9aVxFx7AUshry/eug8Os=\",\"subType\":\"0\"}},\"keyId\":7144359882769039361}},\"$db\":\"local\",\"$readPreference\":{\"mode\":\"primaryPreferred\"}}}}\r\n","stream":"stdout","time":"2023-02-27T16:35:02.653055582Z"}
{"log":"{\"t\":{\"$date\":\"2023-02-27T16:35:02.678+00:00\"},\"s\":\"I\",  \"c\":\"COMMAND\",  \"id\":51803,   \"ctx\":\"conn13376\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"local.oplog.rs\",\"command\":{\"find\":\"oplog.rs\",\"filter\":{},\"sort\":{\"-$natural\":1},\"lsid\":{\"id\":{\"$uuid\":\"d77e409f-c760-4a20-9d06-2354573a53aa\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":
{\"t\":1677515699,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"K2Zxssj9aVxFx7AUshry/eug8Os=\",\"subType\":\"0\"}},\"keyId\":7144359882769039361}},\"$db\":\"local\",\"$readPreference\":{\"mode\":\"primaryPreferred\"}},\"planSummary\":\"COLLSCAN\",\"numYields\":174,\"queryHash\":\"35E1175D\",\"planCacheKey\":\"35E1175D\",\"queryFramework\":\"classic\",\"ok\":0,\"errMsg\":\"Executor error during find command :: caused by :: Error writing to file /data/db/_tmp/extsort-sort-executor.480: errno:28 No space left on device\",\"errName\":\"Location5642403\",\"errCode\":5642403,\"reslen\":362,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":175}},\"Global\":{\"acquireCount\":{\"r\":175}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"readConcern\":{\"level\":\"local\",\"provenance\":\"implicitDefault\"},\"storage\":{\"data\":{\"bytesRead\":42052646,\"timeReadingMicros\":584398}},\"remote\":\"172.18.0.1:49438\",\"protocol\":\"op_msg\",\"durationMillis\":1931}}\r\n","stream":"stdout","time":"2023-02-27T16:35:02.678270963Z"}

This message is caused by a MongoDB query executing the following command:

"cmd":{"find":"oplog.rs","filter":{},"sort":{"-$natural":1}

Also, the filesystem is not filled immediately, because MongoDB does delete these temporary files at some point, but what rule defines when that happens, and is it configurable?

I don’t understand why MongoDB executes this command in a loop, nor what the reason for executing it is.
Is it possible to restrict the size of the temporary files MongoDB writes to the drive?

We have restarted the migration process from 4.4.8 to 6.0.4 and we do not hit the same issue. However, when we manually execute this command on the replica set, we have the same problem and the filesystem fills up.


Hi @R_HU and welcome to the MongoDB Community forum!

From the error messages shared in the logs, it seems that the disk ran out of space during the upgrade.

As mentioned in the Docker documentation, this kind of setup may result in resource contention.

Is this specific to the version mentioned, or is it seen in other versions too?

After the successful upgrade of the deployment, do you see similar issues? Could you also describe the upgrade process you followed and the error message received?
Also, please help us understand why you are executing this command.

Finally, please share the output of rs.conf() and rs.status().

Let us know if you have any queries.

Best Regards
Aasawari

hello,

I will explain again from scratch, because we have encountered the problem on another cluster:

The cluster is composed of 3 machines with 128 GB of disk, 7 GB of memory and 2 vCPUs each.
Our mongod services are containerised on each machine with a limit of 5 GB of memory and 1.5 CPUs, and the WiredTiger cacheSizeGB is set to 2.5 for each node.

The upgrade process we followed is the one from the documentation provided here:

After investigating the logs of the different versions, I see that this error is present in versions 4.4.8 and 5.0:

{"log":"{\"t\":{\"$date\":\"2023-03-04T16:22:54.348+00:00\"},\"s\":\"I\",  \"c\":\"COMMAND\",  \"id\":51803,   \"ctx\":\"conn125400\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"local.oplog.rs\",\"command\":{\"find\":\"oplog.rs\",\"filter\":{},\"sort\":{\"-$natural\":1},\"lsid\":{\"id\":{\"$uuid\":\"9a02cb13-57ee-4793-a27d-ae51ef6774af\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1677946973,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"EBMoYzvf/BJE1tf1ghz6ZkQ+E2Y=\",\"subType\":\"0\"}},\"keyId\":7206450133318238212}},\"$db\":\"local\",\"$readPreference\":{\"mode\":\"primaryPreferred\"}},\"planSummary\":\"COLLSCAN\",\"numYields\":71,\"queryHash\":\"A11B6D23\",\"planCacheKey\":\"A11B6D23\",\"ok\":0,\"errMsg\":\"Executor error during find command :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.\",\"errName\":\"QueryExceededMemoryLimitNoDiskUseAllowed\",\"errCode\":292,\"reslen\":378,\"locks\":{\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":72}},\"Global\":{\"acquireCount\":{\"r\":72}},\"Database\":{\"acquireCount\":{\"r\":72}},\"Mutex\":{\"acquireCount\":{\"r\":1}},\"oplog\":{\"acquireCount\":{\"r\":72}}},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":105}}\r\n","stream":"stdout","time":"2023-03-04T16:22:54.349177116Z"}

This error refers to the command: "command":{"find":"oplog.rs","filter":{},"sort":{"-$natural":1}
It fails because the result is too large to be sorted in memory, and spilling the sort to disk is disabled by default in versions 4.4.8 and 5.0, which is different from version 6.0.
In my case it is exactly this command that fills my disk on version 6.0.
I do not understand which event triggers the execution of this command.
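If I read the 6.0 behaviour correctly (this is my assumption, not something confirmed in this thread), the difference comes from the allowDiskUseByDefault server parameter, which is new in 6.0 and defaults to true, so sorts that exceed the in-memory limit now spill to files under /data/db/_tmp instead of failing. A quick way to check the values involved on each node:

// Check the two parameters involved (run against each mongod)
db.adminCommand({
  getParameter: 1,
  allowDiskUseByDefault: 1,                         // new in 6.0, defaults to true
  internalQueryMaxBlockingSortMemoryUsageBytes: 1   // the in-memory sort limit; matches the 104857600-byte memLimit in the log
})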

I suppose this event is due to the replication getting out of sync, or to the election process, which causes the oplog to be read for data recovery on every node, but is this a normal MongoDB process?

Must I set up a specific configuration for this process to run normally without having to spill to disk?
Is it possible to restrict the size of the temporary files MongoDB writes to the drive?
Is my MongoDB cluster wrongly sized?
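One knob that might be relevant to these questions (a sketch on my side, not a confirmed fix, since the root cause of the query itself is still unknown at this point): turning allowDiskUseByDefault back off would restore the 4.4/5.0 behaviour, so the offending sort would fail with QueryExceededMemoryLimitNoDiskUseAllowed instead of filling the disk with temporary files.

// Possible mitigation (assumption): restore the pre-6.0 behaviour for unindexed sorts
db.adminCommand({ setParameter: 1, allowDiskUseByDefault: false })
// The same can be set at startup: mongod --setParameter allowDiskUseByDefault=false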

Here is the output of the rs.status() command for each version

Version 4.4.8

rs.status()
{
	"set" : "ds-rs",
	"date" : ISODate("2023-03-04T20:28:31.952Z"),
	"myState" : 1,
	"term" : NumberLong(3),
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"votingMembersCount" : 3,
	"writableVotingMembersCount" : 3,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1677961707, 41),
			"t" : NumberLong(3)
		},
		"lastCommittedWallTime" : ISODate("2023-03-04T20:28:27.492Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1677961707, 41),
			"t" : NumberLong(3)
		},
		"readConcernMajorityWallTime" : ISODate("2023-03-04T20:28:27.492Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1677961707, 41),
			"t" : NumberLong(3)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1677961707, 41),
			"t" : NumberLong(3)
		},
		"lastAppliedWallTime" : ISODate("2023-03-04T20:28:27.492Z"),
		"lastDurableWallTime" : ISODate("2023-03-04T20:28:27.492Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1677961703, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2023-03-04T20:23:43.468Z"),
		"electionTerm" : NumberLong(3),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1677959280, 1),
			"t" : NumberLong(2)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2023-03-04T20:23:43.564Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2023-03-04T20:23:44.805Z")
	},
	"members" : [
		{
			"_id" : 1,
			"name" : "11.0.50.9:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 290,
			"optime" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDate" : ISODate("2023-03-04T20:28:27Z"),
			"optimeDurableDate" : ISODate("2023-03-04T20:28:27Z"),
			"lastHeartbeat" : ISODate("2023-03-04T20:28:31.779Z"),
			"lastHeartbeatRecv" : ISODate("2023-03-04T20:28:31.022Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "11.0.50.11:27017",
			"syncSourceId" : 3,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 3
		},
		{
			"_id" : 2,
			"name" : "11.0.50.10:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 298,
			"optime" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDate" : ISODate("2023-03-04T20:28:27Z"),
			"optimeDurableDate" : ISODate("2023-03-04T20:28:27Z"),
			"lastHeartbeat" : ISODate("2023-03-04T20:28:31.781Z"),
			"lastHeartbeatRecv" : ISODate("2023-03-04T20:28:30.026Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "11.0.50.9:27017",
			"syncSourceId" : 1,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 3
		},
		{
			"_id" : 3,
			"name" : "11.0.50.11:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 311,
			"optime" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDate" : ISODate("2023-03-04T20:28:27Z"),
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"electionTime" : Timestamp(1677961423, 1),
			"electionDate" : ISODate("2023-03-04T20:23:43Z"),
			"configVersion" : 1,
			"configTerm" : 3,
			"self" : true,
			"lastHeartbeatMessage" : ""
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1677961707, 41),
		"signature" : {
			"hash" : BinData(0,"p7TYQ+teX1adsMmdcQUirXqfXIE="),
			"keyId" : NumberLong("7206450133318238212")
		}
	},
	"operationTime" : Timestamp(1677961707, 41)
}

Version 5.0

rs.status()
{
	"set" : "ds-rs",
	"date" : ISODate("2023-03-04T20:35:46.296Z"),
	"myState" : 1,
	"term" : NumberLong(4),
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"votingMembersCount" : 3,
	"writableVotingMembersCount" : 3,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1677962136, 1),
			"t" : NumberLong(4)
		},
		"lastCommittedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1677962136, 1),
			"t" : NumberLong(4)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1677962136, 1),
			"t" : NumberLong(4)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1677962136, 1),
			"t" : NumberLong(4)
		},
		"lastAppliedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
		"lastDurableWallTime" : ISODate("2023-03-04T20:35:36.996Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1677962126, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2023-03-04T20:34:56.969Z"),
		"electionTerm" : NumberLong(4),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1677961933, 1),
			"t" : NumberLong(3)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2023-03-04T20:34:56.991Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2023-03-04T20:34:57.032Z")
	},
	"members" : [
		{
			"_id" : 1,
			"name" : "11.0.50.9:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 47,
			"optime" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDate" : ISODate("2023-03-04T20:35:36Z"),
			"optimeDurableDate" : ISODate("2023-03-04T20:35:36Z"),
			"lastAppliedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastDurableWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastHeartbeat" : ISODate("2023-03-04T20:35:45.018Z"),
			"lastHeartbeatRecv" : ISODate("2023-03-04T20:35:44.896Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "11.0.50.11:27017",
			"syncSourceId" : 3,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 4
		},
		{
			"_id" : 2,
			"name" : "11.0.50.10:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 58,
			"optime" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDate" : ISODate("2023-03-04T20:35:36Z"),
			"optimeDurableDate" : ISODate("2023-03-04T20:35:36Z"),
			"lastAppliedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastDurableWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastHeartbeat" : ISODate("2023-03-04T20:35:45.029Z"),
			"lastHeartbeatRecv" : ISODate("2023-03-04T20:35:45.559Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "11.0.50.11:27017",
			"syncSourceId" : 3,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 4
		},
		{
			"_id" : 3,
			"name" : "11.0.50.11:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 76,
			"optime" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDate" : ISODate("2023-03-04T20:35:36Z"),
			"lastAppliedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastDurableWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"electionTime" : Timestamp(1677962096, 1),
			"electionDate" : ISODate("2023-03-04T20:34:56Z"),
			"configVersion" : 1,
			"configTerm" : 4,
			"self" : true,
			"lastHeartbeatMessage" : ""
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1677962136, 1),
		"signature" : {
			"hash" : BinData(0,"4zdWQKCg1WIw8twSfdxRx49MlsU="),
			"keyId" : NumberLong("7206450133318238212")
		}
	},
	"operationTime" : Timestamp(1677962136, 1)
}

Version 6.0.4, before setting FeatureCompatibilityVersion "6.0"

rs.status()
{
  set: 'ds-rs',
  date: ISODate("2023-03-04T20:41:13.192Z"),
  myState: 1,
  term: Long("5"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
    lastCommittedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
    appliedOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
    durableOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
    lastAppliedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
    lastDurableWallTime: ISODate("2023-03-04T20:41:11.565Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1677962337, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2023-03-04T20:40:50.481Z"),
    electionTerm: Long("5"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1677962347, i: 1 }), t: Long("4") },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    numCatchUpOps: Long("0"),
    newTermStartDate: ISODate("2023-03-04T20:40:51.551Z"),
    wMajorityWriteAvailabilityDate: ISODate("2023-03-04T20:40:56.028Z")
  },
  members: [
    {
      _id: 1,
      name: '11.0.50.9:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 11,
      optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:41:11.000Z"),
      optimeDurableDate: ISODate("2023-03-04T20:41:11.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastHeartbeat: ISODate("2023-03-04T20:41:11.593Z"),
      lastHeartbeatRecv: ISODate("2023-03-04T20:41:11.896Z"),
      pingMs: Long("3"),
      lastHeartbeatMessage: '',
      syncSourceHost: '11.0.50.11:27017',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 2,
      configTerm: 5
    },
    {
      _id: 2,
      name: '11.0.50.10:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 22,
      optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:41:11.000Z"),
      optimeDurableDate: ISODate("2023-03-04T20:41:11.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastHeartbeat: ISODate("2023-03-04T20:41:11.574Z"),
      lastHeartbeatRecv: ISODate("2023-03-04T20:41:11.586Z"),
      pingMs: Long("1"),
      lastHeartbeatMessage: '',
      syncSourceHost: '11.0.50.11:27017',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 2,
      configTerm: 5
    },
    {
      _id: 3,
      name: '11.0.50.11:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 38,
      optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:41:11.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1677962450, i: 1 }),
      electionDate: ISODate("2023-03-04T20:40:50.000Z"),
      configVersion: 2,
      configTerm: 5,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1677962471, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("a93b92c3b1b03f506d12a906e8b1b2aa853d7ccc", "hex"), 0),
      keyId: Long("7206450133318238212")
    }
  },
  operationTime: Timestamp({ t: 1677962471, i: 1 })
}

Version 6.0.4, after setting FeatureCompatibilityVersion "6.0"

rs.status()
{
  set: 'ds-rs',
  date: ISODate("2023-03-04T20:42:30.793Z"),
  myState: 1,
  term: Long("5"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
    lastCommittedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
    appliedOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
    durableOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
    lastAppliedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
    lastDurableWallTime: ISODate("2023-03-04T20:42:18.702Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1677962491, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2023-03-04T20:40:50.481Z"),
    electionTerm: Long("5"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1677962347, i: 1 }), t: Long("4") },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    numCatchUpOps: Long("0"),
    newTermStartDate: ISODate("2023-03-04T20:40:51.551Z"),
    wMajorityWriteAvailabilityDate: ISODate("2023-03-04T20:40:56.028Z")
  },
  members: [
    {
      _id: 1,
      name: '11.0.50.9:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 89,
      optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:42:18.000Z"),
      optimeDurableDate: ISODate("2023-03-04T20:42:18.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastHeartbeat: ISODate("2023-03-04T20:42:29.673Z"),
      lastHeartbeatRecv: ISODate("2023-03-04T20:42:29.985Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: '11.0.50.11:27017',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 2,
      configTerm: 5
    },
    {
      _id: 2,
      name: '11.0.50.10:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 100,
      optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:42:18.000Z"),
      optimeDurableDate: ISODate("2023-03-04T20:42:18.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastHeartbeat: ISODate("2023-03-04T20:42:29.674Z"),
      lastHeartbeatRecv: ISODate("2023-03-04T20:42:29.675Z"),
      pingMs: Long("1"),
      lastHeartbeatMessage: '',
      syncSourceHost: '11.0.50.11:27017',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 2,
      configTerm: 5
    },
    {
      _id: 3,
      name: '11.0.50.11:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 115,
      optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:42:18.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1677962450, i: 1 }),
      electionDate: ISODate("2023-03-04T20:40:50.000Z"),
      configVersion: 2,
      configTerm: 5,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1677962538, i: 4 }),
    signature: {
      hash: Binary(Buffer.from("13338366b839f370e3acf785eb93e20276953006", "hex"), 0),
      keyId: Long("7206450133318238212")
    }
  },
  operationTime: Timestamp({ t: 1677962538, i: 4 })
}

Here is the output of the rs.printReplicationInfo() command for each version

Version 4.4.8

rs.printReplicationInfo()
configured oplog size:   6481.01123046875MB
log length start to end: 77846secs (21.62hrs)
oplog first event time:  Fri Mar 03 2023 22:48:47 GMT+0000 (UTC)
oplog last event time:   Sat Mar 04 2023 20:26:13 GMT+0000 (UTC)
now:                     Sat Mar 04 2023 20:26:17 GMT+0000 (UTC)

Version 5.0

rs.printReplicationInfo()
configured oplog size:   6481.01123046875MB
log length start to end: 78370secs (21.77hrs)
oplog first event time:  Fri Mar 03 2023 22:48:47 GMT+0000 (UTC)
oplog last event time:   Sat Mar 04 2023 20:34:57 GMT+0000 (UTC)
now:                     Sat Mar 04 2023 20:35:14 GMT+0000 (UTC)

Version 6.0.4, after setting FeatureCompatibilityVersion "6.0"

rs.printReplicationInfo()
actual oplog size
'6481.01123046875 MB'
---
configured oplog size
'6481.01123046875 MB'
---
log length start to end
'78843.99999809265 secs (21.9 hrs)'
---
oplog first event time
'Fri Mar 03 2023 22:48:47 GMT+0000 (Coordinated Universal Time)'
---
oplog last event time
'Sat Mar 04 2023 20:42:51 GMT+0000 (Coordinated Universal Time)'
---
now
'Sat Mar 04 2023 20:43:00 GMT+0000 (Coordinated Universal Time)'

Here is the output of the rs.conf() command for version 4.4.8

rs.conf()
{
        "_id" : "ds-rs",
        "version" : 1,
        "term" : 1,
        "protocolVersion" : NumberLong(1),
        "writeConcernMajorityJournalDefault" : true,
        "members" : [
                {
                        "_id" : 1,
                        "host" : "11.0.50.9:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "11.0.50.10:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "11.0.50.11:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("6403ba5704b0564c053d341b")
        }
}

Problem solved: these requests were caused by my monitoring solution, Metricbeat, and its mongodb module, more precisely the replstatus metricset.
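For anyone hitting the same thing, here is roughly how such a query can be traced back to its client while it is running (a sketch, assuming the operation is still in flight; the fields used are from the standard $currentOp output):

// Find in-flight find commands on the oplog and show where they come from
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: false } },
  { $match: { "command.find": "oplog.rs", ns: "local.oplog.rs" } },
  { $project: { client: 1, appName: 1, secs_running: 1, "command.sort": 1 } }
])

The client field shows the address of the connection issuing the query, which in a case like this should point back to the monitoring agent.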
