Issue with replication after upgrade from 4.4.8 to 6.0

We have a three-node replica set, with each mongod running in a Docker container on its own host, and each data node backed by a 128 GB filesystem.

Each MongoDB container has a 5 GB memory limit and a 1.5 CPU limit, and the WiredTiger cacheSizeGB is set to 2.5.

We ran into a problem after upgrading the replica set from 4.4.8 to 5.0 and then to 6.0.4. A few minutes after starting on 6.0.4, the filesystem fills up with temporary files produced by a MongoDB query. On node 1, once MongoDB has completely filled the filesystem, we see these messages in the log:

{"log":"{\"t\":{\"$date\":\"2023-02-27T16:35:02.652+00:00\"},\"s\":\"W\",  \"c\":\"QUERY\",    \"id\":23798,   \"ctx\":\"conn13376\",\"msg\":\"Plan executor error during find command\",\"attr\":{\"error\":{\"code\":5642403,\"codeName\":\"Location5642403\",\"errmsg\":\"Error writing to file /data/db/_tmp/extsort-sort-executor.480: errno:28 No space left on device\"},\"stats\":{\"stage\":\"SORT\",\"nReturned\":0,\"works\":162854,\"advanced\":0,\"needTime\":162853,\"needYield\":0,\"saveState\":174,\"restoreState\":174,\"failed\":true,\"isEOF\":0,\"sortPattern\":{\"-$natural\":1},\"memLimit\":104857600,\"type\":\"simple\",\"totalDataSizeSorted\":0,\"usedDisk\":false,\"spills\":0,\"inputStage\":{\"stage\":\"COLLSCAN\",\"nReturned\":162853,\"works\":162854,\"advanced\":162853,\"needTime\":1,\"needYield\":0,\"saveState\":174,\"restoreState\":174,\"isEOF\":0,\"direction\":\"forward\",\"docsExamined\":162853}},\"cmd\":{\"find\":\"oplog.rs\",\"filter\":{},\"sort\":{\"-$natural\":1},\"lsid\":{\"id\":{\"$uuid\":\"d77e409f-c760-4a20-9d06-2354573a53aa\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1677515699,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"K2Zxssj9aVxFx7AUshry/eug8Os=\",\"subType\":\"0\"}},\"keyId\":7144359882769039361}},\"$db\":\"local\",\"$readPreference\":{\"mode\":\"primaryPreferred\"}}}}\r\n","stream":"stdout","time":"2023-02-27T16:35:02.653055582Z"}
{"log":"{\"t\":{\"$date\":\"2023-02-27T16:35:02.678+00:00\"},\"s\":\"I\",  \"c\":\"COMMAND\",  \"id\":51803,   \"ctx\":\"conn13376\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"local.oplog.rs\",\"command\":{\"find\":\"oplog.rs\",\"filter\":{},\"sort\":{\"-$natural\":1},\"lsid\":{\"id\":{\"$uuid\":\"d77e409f-c760-4a20-9d06-2354573a53aa\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":
{\"t\":1677515699,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"K2Zxssj9aVxFx7AUshry/eug8Os=\",\"subType\":\"0\"}},\"keyId\":7144359882769039361}},\"$db\":\"local\",\"$readPreference\":{\"mode\":\"primaryPreferred\"}},\"planSummary\":\"COLLSCAN\",\"numYields\":174,\"queryHash\":\"35E1175D\",\"planCacheKey\":\"35E1175D\",\"queryFramework\":\"classic\",\"ok\":0,\"errMsg\":\"Executor error during find command :: caused by :: Error writing to file /data/db/_tmp/extsort-sort-executor.480: errno:28 No space left on device\",\"errName\":\"Location5642403\",\"errCode\":5642403,\"reslen\":362,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":175}},\"Global\":{\"acquireCount\":{\"r\":175}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"readConcern\":{\"level\":\"local\",\"provenance\":\"implicitDefault\"},\"storage\":{\"data\":{\"bytesRead\":42052646,\"timeReadingMicros\":584398}},\"remote\":\"172.18.0.1:49438\",\"protocol\":\"op_msg\",\"durationMillis\":1931}}\r\n","stream":"stdout","time":"2023-02-27T16:35:02.678270963Z"}

These messages are caused by a query executing the following command:

"cmd":{"find":"oplog.rs","filter":{},"sort":{"-$natural":1}}
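To make the failing sort easier to read, here is a small sketch (plain JavaScript; the mongosh calls in the comments are our assumption about how a client would issue this command, not taken from our application code). Note the sort key in the log: it is the literal field name "-$natural", which the plan above handles with a COLLSCAN plus a blocking SORT stage, rather than the `$natural: -1` form that MongoDB documents as a reverse natural-order read.

```javascript
// Sketch contrasting the sort actually sent (copied from the log above)
// with the documented natural-order form. Runnable as-is in node; in
// mongosh the same documents would be passed to .sort() on local.oplog.rs.

// From the log: the sort key is a literal field named "-$natural". The
// plan shows COLLSCAN feeding a blocking SORT stage, which is what spills
// to /data/db/_tmp and fills the filesystem.
const sortFromLog = { "-$natural": 1 };

// Documented reverse natural order: the oplog is read backwards, with no
// SORT stage, so no temporary sort files are written.
const reverseNatural = { "$natural": -1 };

// e.g. in mongosh (not executed here):
//   db.getSiblingDB("local").oplog.rs.find().sort(sortFromLog)
//   db.getSiblingDB("local").oplog.rs.find().sort(reverseNatural)

console.log(JSON.stringify(sortFromLog));    // {"-$natural":1}
console.log(JSON.stringify(reverseNatural)); // {"$natural":-1}
```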

The filesystem does not fill up immediately, because MongoDB deletes these temporary files from time to time. What rule governs this cleanup, and is it configurable?

I suspect this is caused by replication falling out of sync, or by the election process, triggering oplog reads for data recovery on every node. Is this a normal MongoDB process?

Do I need a specific configuration so this process can run normally without spilling to disk?

Is it possible to limit the size of the temporary files MongoDB writes to disk?

Is my MongoDB cluster incorrectly sized?

We re-ran the migration from 4.4.8 to 6.0.4 and hit the same issue every time. Moreover, when we manually execute this command against the replica set, we reproduce the problem and the filesystem fills up.