Connection messages in a MongoDB replica set used by a containerized Java app.

I have a containerized Java application running in Rancher. It has 7 replicas and uses a MongoDB replica set (5.0.6). In recent days the application has been under heavy load, to the point where the pods hit their memory limits and restart.

The MongoDB replica set consists of 3 nodes running on CentOS 7 servers on Red Hat Virtualization.
In the replica set configuration the primary has priority 2 and the two secondaries (Slave01/Slave02) have priority 0, so the primary is fixed and the secondaries can never be elected (see the sketch after the node list below).
XX.XX.XX.23 ==> primary node
XX.XX.XX.24 ==> Slave01
XX.XX.XX.25 ==> Slave02
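
For reference, this is roughly how that priority layout is applied from the mongo shell (the member indexes here are assumptions; they should be adjusted to match rs.conf() on the actual cluster):

cfg = rs.conf()
cfg.members[0].priority = 2   // XX.XX.XX.23 - preferred, effectively fixed primary
cfg.members[1].priority = 0   // XX.XX.XX.24 - Slave01, never eligible for election
cfg.members[2].priority = 0   // XX.XX.XX.25 - Slave02, never eligible for election
rs.reconfig(cfg)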

Rancher Workers:
XX.XX.XX.19
XX.XX.XX.20
XX.XX.XX.21
XX.XX.XX.22

The following messages appear in the primary server log:

{"t":{"$date":"2025-06-10T08:45:50.329-06:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn8657","msg":"Connection ended","attr":{"remote":"XX.XX.XX.XX:56503","uuid":"6a2b603c-e0cf-4269-abc8-b0d148c1e2a7","connectionId":8657,"connectionCount":289}}

{"t":{"$date":"2025-06-10T08:44:56.106-06:00"},"s":"I", "c":"CONNPOOL", "id":22566, "ctx":"ReplNetwork","msg":"Ending connection due to bad connection status","attr":{"hostAndPort":"XX.XX.XX.23:27017","error":"CallbackCanceled: Callback was canceled","numOpenConns":0}}

{"t":{"$date":"2025-06-09T08:33:21.882-06:00"},"s":"I", "c":"CONNPOOL", "id":22572, "ctx":"ReplNetwork","msg":"Dropping all pooled connections","attr":{"hostAndPort":"XX.XX.XX.23:27017","error":"ConnectionPoolExpired: Pool for XX.XX.XX.23:27017 has expired."}}

{"t":{"$date":"2025-06-10T09:29:44.513-06:00"},"s":"I", "c":"NETWORK", "id":22989, "ctx":"conn8810","msg":"Error sending response to client. Ending connection from remote","attr":{"error":{"code":9001,"codeName":"SocketException","errmsg":"Broken pipe"},"remote":"XX.XX.XX.22:37287","connectionId":8810}}

{"t":{"$date":"2025-06-08T02:02:06.242-06:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn7108","msg":"Interrupted operation as its client disconnected","attr":{"opId":449961948}}
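
A quick way to correlate those messages with the connection totals on the primary, from the mongo shell (only a diagnostic sketch; the per-client grouping assumes the standard currentOp output, where each connection reports a "client" address):

// Point-in-time totals; "current" should line up with the connectionCount values in the log
db.serverStatus().connections

// Per-client breakdown of open connections, including idle ones
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: true } },
  { $group: { _id: "$client", conns: { $sum: 1 } } },
  { $sort: { conns: -1 } }
])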

The rs.status() output seems healthy:

rs.status()
{
  "set" : "replicaXX",
  "date" : ISODate("2025-06-10T20:44:42.847Z"),
  "myState" : 1,
  "term" : NumberLong(738),
  "syncSourceHost" : "",
  "syncSourceId" : -1,
  "heartbeatIntervalMillis" : NumberLong(3000),
  "majorityVoteCount" : 2,
  "writeMajorityCount" : 2,
  "votingMembersCount" : 3,
  "writableVotingMembersCount" : 3,
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1749588282, 104),
      "t" : NumberLong(738)
    },
    "lastCommittedWallTime" : ISODate("2025-06-10T20:44:42.839Z"),
    "readConcernMajorityOpTime" : {
      "ts" : Timestamp(1749588282, 104),
      "t" : NumberLong(738)
    },
    "appliedOpTime" : {
      "ts" : Timestamp(1749588282, 104),
      "t" : NumberLong(738)
    },
    "durableOpTime" : {
      "ts" : Timestamp(1749588282, 104),
      "t" : NumberLong(738)
    },
    "lastAppliedWallTime" : ISODate("2025-06-10T20:44:42.839Z"),
    "lastDurableWallTime" : ISODate("2025-06-10T20:44:42.839Z")
  },
  "lastStableRecoveryTimestamp" : Timestamp(1749588230, 33),
  "electionCandidateMetrics" : {
    "lastElectionReason" : "electionTimeout",
    "lastElectionDate" : ISODate("2025-06-03T05:33:26.427Z"),
    "electionTerm" : NumberLong(738),
    "lastCommittedOpTimeAtElection" : {
      "ts" : Timestamp(1748928541, 1),
      "t" : NumberLong(737)
    },
    "lastSeenOpTimeAtElection" : {
      "ts" : Timestamp(1748928541, 1),
      "t" : NumberLong(737)
    },
    "numVotesNeeded" : 2,
    "priorityAtElection" : 2,
    "electionTimeoutMillis" : NumberLong(30000),
    "numCatchUpOps" : NumberLong(0),
    "newTermStartDate" : ISODate("2025-06-03T05:33:26.442Z"),
    "wMajorityWriteAvailabilityDate" : ISODate("2025-06-03T05:33:26.528Z")
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "XX.XX.XX.XX:27017",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 659510,
      "optime" : {
        "ts" : Timestamp(1749588282, 104),
        "t" : NumberLong(738)
      },
      "optimeDate" : ISODate("2025-06-10T20:44:42Z"),
      "lastAppliedWallTime" : ISODate("2025-06-10T20:44:42.839Z"),
      "lastDurableWallTime" : ISODate("2025-06-10T20:44:42.839Z"),
      "syncSourceHost" : "",
      "syncSourceId" : -1,
      "infoMessage" : "",
      "electionTime" : Timestamp(1748928806, 1),
      "electionDate" : ISODate("2025-06-03T05:33:26Z"),
      "configVersion" : 19,
      "configTerm" : 738,
      "self" : true,
      "lastHeartbeatMessage" : ""
    },
    {
      "_id" : 1,
      "name" : "XX.XX.XX.24:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 659507,
      "optime" : {
        "ts" : Timestamp(1749588280, 52),
        "t" : NumberLong(738)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1749588280, 52),
        "t" : NumberLong(738)
      },
      "optimeDate" : ISODate("2025-06-10T20:44:40Z"),
      "optimeDurableDate" : ISODate("2025-06-10T20:44:40Z"),
      "lastAppliedWallTime" : ISODate("2025-06-10T20:44:42.839Z"),
      "lastDurableWallTime" : ISODate("2025-06-10T20:44:42.839Z"),
      "lastHeartbeat" : ISODate("2025-06-10T20:44:40.605Z"),
      "lastHeartbeatRecv" : ISODate("2025-06-10T20:44:41.505Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncSourceHost" : "XX.XX.XX.23:27017",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 19,
      "configTerm" : 738
    },
    {
      "_id" : 2,
      "name" : "XX.XX.XX.25:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 659507,
      "optime" : {
        "ts" : Timestamp(1749588280, 52),
        "t" : NumberLong(738)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1749588280, 52),
        "t" : NumberLong(738)
      },
      "optimeDate" : ISODate("2025-06-10T20:44:40Z"),
      "optimeDurableDate" : ISODate("2025-06-10T20:44:40Z"),
      "lastAppliedWallTime" : ISODate("2025-06-10T20:44:42.839Z"),
      "lastDurableWallTime" : ISODate("2025-06-10T20:44:42.839Z"),
      "lastHeartbeat" : ISODate("2025-06-10T20:44:40.652Z"),
      "lastHeartbeatRecv" : ISODate("2025-06-10T20:44:39.990Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncSourceHost" : "XX.XX.XX.23:27017",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 19,
      "configTerm" : 738
    }
  ],
  "ok" : 1,
  "$clusterTime" : {
    "clusterTime" : Timestamp(1749588282, 104),
    "signature" : {
      "hash" : BinData(0,"rCFpwChqeba6ui0a+Xq7C1lECXk="),
      "keyId" : NumberLong("7483002390312910849")
    }
  },
  "operationTime" : Timestamp(1749588282, 104)
}

Replication info:
source: XX.XX.XX.24:27017
    syncedTo: Tue Jun 10 2025 14:48:07 GMT-0600 (CST)
    2 secs (0 hrs) behind the primary
source: XX.XX.XX.25:27017
    syncedTo: Tue Jun 10 2025 14:48:07 GMT-0600 (CST)
    2 secs (0 hrs) behind the primary
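
For completeness, the same per-member lag can also be derived directly from the rs.status() optime dates shown above (a rough sketch run against the primary):

// Rough per-member lag in seconds, computed from rs.status() optimeDate values
var s = rs.status();
var primaryOptime = s.members.filter(function (m) { return m.stateStr === "PRIMARY"; })[0].optimeDate;
s.members.forEach(function (m) {
  print(m.name, m.stateStr, "lag:", (primaryOptime - m.optimeDate) / 1000, "s");
});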

Has anyone experienced a similar issue where connection problems on the database server were linked to memory bloat in the application?
This behavior hasn’t occurred previously.