Data collector immediately brings down MongoDB

Hello All

I have a multi-threaded data collector. Each thread fetches data from the source and calculates statistics, then writes both the raw data and the statistics to MongoDB. To reduce database access cost I use MongoDB transactions: I add documents to a session and commit the changes once processing is finished. Each document is about 5 MB. The collector runs in three different pods in Kubernetes. That is a summary of my application.
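Roughly, the numbers for this write pattern look like the sketch below. The batch size of 10 documents per transaction is an illustrative assumption, not my actual configuration; the 16 MB figure is MongoDB's documented BSON document size limit, which also caps individual oplog entries.

```python
# Rough size bookkeeping for the write pattern described above.
# Assumption (illustrative, not from my config): 10 documents per transaction.

MiB = 1024 * 1024

doc_size = 5 * MiB      # ~5 MB per document, as in my collector
bson_limit = 16 * MiB   # MongoDB's maximum BSON document size

# How many 5 MB documents fit under one 16 MB oplog-entry cap:
docs_per_entry = bson_limit // doc_size
print(docs_per_entry)  # 3

# If each of the 42 threads commits a transaction of 10 documents,
# one commit wave pushes roughly this much data through the replica set:
threads = 14 * 3
docs_per_txn = 10  # illustrative batch size
wave_bytes = threads * docs_per_txn * doc_size
print(f"{wave_bytes / MiB:.0f} MiB per commit wave")  # 2100 MiB
```

So even modest batching of 5 MB documents across all threads can push gigabytes through the oplog per commit wave.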

My MongoDB deployment: a MongoDB replica set on a Kubernetes cluster with 10 nodes.
MongoDB version: 4.2.4

I deployed MongoDB with the Helm chart, using values-production.yaml from the stable repo.

When I start my collector, the MongoDB primary goes down after about 5 minutes and cannot recover. I have to delete everything to get the database running again. I use Rook-Ceph storage. I deleted the old MongoDB data and prepared a clean database; the new deployment stayed up for about 15 minutes. When I check the logs, there is no failure message.

But I did see one thing in the log:

> Oplog contains 3103 records totaling to 27131919888 bytes.
I reduced the oplog size, but the error did not change.
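For context, the numbers from that log line work out as follows (plain arithmetic on the values the log reported):

```python
# Numbers taken straight from the oplog log line above.
records = 3103
total_bytes = 27131919888

avg = total_bytes / records
print(f"average oplog entry: {avg / (1024 * 1024):.1f} MiB")  # ~8.3 MiB
print(f"oplog data: {total_bytes / 1e9:.1f} GB")              # ~27.1 GB
```

That is roughly 8.3 MiB per oplog entry and about 27 GB of oplog data in total, which seems consistent with each entry carrying a chunk of a large batched transaction rather than a single small write.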

Do you know what the problem is in my case? In total, 14 × 3 = 42 threads are working. Why can't MongoDB handle this load? MongoDB uses 28 GB of RAM.

I am waiting for your help. Thank you so much.