MongoDB slows down when write operations exceed 10,000/sec

I am running a cron job written in Go, with MongoDB as the database. The machine hosting the database has 128 GB of RAM, and the code runs on a separate machine. The cron job processes 17,000 merchants in parallel, each merchant having its own database, which means there are 17,000 databases in the system.

Here is the scenario: when the cron job runs, there are approximately 10,000 write/insert operations per second, which slows MongoDB down and hurts both the database's performance and the overall cron job. The writes include bulk inserts as well as single inserts, and these queries are executed concurrently for the different merchants.
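To give a concrete picture, the insert pattern is roughly the following (a minimal sketch using the official mongo-go-driver; the URI, database names, collection name, and documents are placeholders, not my real code):

```go
package main

import (
	"context"
	"log"
	"sync"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://db-host:27017")) // placeholder URI
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	merchants := []string{"merchant_0001", "merchant_0002"} // one database per merchant

	var wg sync.WaitGroup
	for _, dbName := range merchants {
		wg.Add(1)
		go func(dbName string) {
			defer wg.Done()
			coll := client.Database(dbName).Collection("events") // placeholder collection name
			docs := []interface{}{bson.M{"amount": 100}, bson.M{"amount": 250}}
			// Unordered bulk insert: one round trip per batch instead of per document.
			if _, err := coll.InsertMany(ctx, docs, options.InsertMany().SetOrdered(false)); err != nil {
				log.Println(dbName, "insert failed:", err)
			}
		}(dbName)
	}
	wg.Wait()
}
```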

To overcome this problem, I'm considering using transactions for the write operations. Would that have a positive impact on the slowdown? Is there anything else I can implement to improve MongoDB's performance, so that it stops slowing down and runs faster than it does now?

I think you have a bad case of a massive number of collections.

With 17,000 databases, even with only one collection per database and only the default index on _id, you have at least 34,000 files, because WiredTiger keeps a separate file on disk for every collection and every index. With two collections each, you are at 68,000 files. Add one extra index per collection and you reach 136,000 files. Ouch!

The fact that you may have an unlimited number of databases/collections is like having the possibility to jump off a 136,000-foot cliff. Both are possible, but neither is a good idea most of the time.

Transactions should make things slower, not faster, since more resources are locked for a longer time.
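For reference, this is roughly what a transaction looks like with the Go driver (a sketch only; the URI and names are placeholders). Note the extra server-side session and the commit round trip on top of the insert itself, and keep in mind that transactions require a replica set or sharded cluster, not a standalone server:

```go
package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://db-host:27017")) // placeholder URI
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// A transaction needs a server-side session, and the documents it
	// touches stay locked until commit -- extra work on top of the insert.
	session, err := client.StartSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.EndSession(ctx)

	_, err = session.WithTransaction(ctx, func(sc mongo.SessionContext) (interface{}, error) {
		coll := client.Database("merchant_0001").Collection("events") // placeholder names
		return coll.InsertOne(sc, bson.M{"amount": 100})
	})
	if err != nil {
		log.Println("transaction failed:", err)
	}
}
```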

Whether 128 GB of RAM is enough depends entirely on whether your working set fits in it. A few things to check:

- What is your data set size?
- Is your cron job running on the same machine as mongod?
- Do you have a standalone server or a replica set?
- How much data are you trying to write concurrently?
- What is your physical storage (spinning disk, SSD, NVMe)?
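If you are not sure about your data set size, a sketch like this (placeholder URI again; toFloat is just a hypothetical helper for the numeric types dbStats may return) sums the data and index sizes across all of your databases:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://db-host:27017")) // placeholder URI
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	names, err := client.ListDatabaseNames(ctx, bson.D{})
	if err != nil {
		log.Fatal(err)
	}

	var dataSize, indexSize float64
	for _, name := range names {
		var stats bson.M
		// dbStats reports the uncompressed data and index sizes per database.
		err := client.Database(name).RunCommand(ctx, bson.D{{Key: "dbStats", Value: 1}}).Decode(&stats)
		if err != nil {
			log.Fatal(err)
		}
		dataSize += toFloat(stats["dataSize"])
		indexSize += toFloat(stats["indexSize"])
	}
	fmt.Printf("data: %.1f GB, indexes: %.1f GB\n", dataSize/1e9, indexSize/1e9)
}

// toFloat handles the int32/int64/double variants the server may return.
func toFloat(v interface{}) float64 {
	switch n := v.(type) {
	case int32:
		return float64(n)
	case int64:
		return float64(n)
	case float64:
		return n
	}
	return 0
}
```

Compare the total (data you actually touch, plus indexes) against your 128 GB; if the working set does not fit, the disk becomes the bottleneck no matter how you batch the writes.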