Uploading a large number of records in batches using the MongoCollection.bulkWrite() method in Spring Boot

Hi, we are working on a project where we have to store a large amount of hierarchical data in MongoDB. To do so we are creating sub-batches of 1,000 documents and writing them across threads. However, the bulk write operations running in those threads are taking a very long time to store the data, and we frequently hit "GC overhead limit exceeded" errors while the operation is in progress.

Following are the specifications of the threads and the records they handle.
Total no. of threads - 10
Records per thread - 1,000
Average number of records in total - 100,000
Average time taken by the bulk-write operation - 20 mins

We are using UpdateManyModel with a filter and new UpdateOptions().upsert(true), along with new BulkWriteOptions().ordered(false), while storing the data into MongoDB.
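For reference, here is a minimal sketch of what our write path looks like. The connection string, collection name, and the externalId / payload field names are placeholders, not our real schema:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.BulkWriteOptions;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateManyModel;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.Updates;
import com.mongodb.client.model.WriteModel;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BulkUpsertJob {

    private static final int BATCH_SIZE = 1000;
    private static final int THREAD_COUNT = 10;

    public static void run(List<Document> records) throws InterruptedException {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoCollection<Document> collection =
                client.getDatabase("mydb").getCollection("records");

        ExecutorService pool = Executors.newFixedThreadPool(THREAD_COUNT);
        for (int i = 0; i < records.size(); i += BATCH_SIZE) {
            // Copy the sub-batch so each task owns its own list
            List<Document> batch = new ArrayList<>(
                    records.subList(i, Math.min(i + BATCH_SIZE, records.size())));
            pool.submit(() -> {
                List<WriteModel<Document>> models = new ArrayList<>(batch.size());
                for (Document doc : batch) {
                    models.add(new UpdateManyModel<>(
                            Filters.eq("externalId", doc.get("externalId")), // placeholder key
                            Updates.set("payload", doc.get("payload")),      // placeholder field
                            new UpdateOptions().upsert(true)));
                }
                // Unordered: the server is free to process the writes in any order
                collection.bulkWrite(models, new BulkWriteOptions().ordered(false));
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```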

Does anyone have any idea why it may be taking so long?

Heavy writes take time. DB servers need to do a lot of things on each write, e.g. update any indexes, move data blocks around, allocate resources, replicate the update…

If any of those steps slows down, the write takes longer. So check your server metrics.
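For example, memory pressure shows up in the serverStatus output. You can pull it from the same Java driver with something like this (field names as documented for serverStatus):

```java
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class ServerStatusCheck {
    public static void main(String[] args) {
        MongoDatabase admin = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("admin");
        // serverStatus reports process memory and WiredTiger cache usage
        Document status = admin.runCommand(new Document("serverStatus", 1));
        Document mem = status.get("mem", Document.class);
        System.out.println("resident MB: " + mem.get("resident"));
        Document cache = status.get("wiredTiger", Document.class)
                .get("cache", Document.class);
        System.out.println("cache bytes in use: " + cache.get("bytes currently in the cache"));
        System.out.println("cache max bytes:    " + cache.get("maximum bytes configured"));
    }
}
```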

Thanks @Kobe_W for the response.

After monitoring the server metrics, it turned out MongoDB was not getting enough RAM, so the operations were either stalling or failing. Once we configured the memory correctly, we got the expected results.
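For anyone who hits the same issue: if you need to set the WiredTiger cache ceiling explicitly, it goes in mongod.conf. The value below is just an example, size it for your host:

```yaml
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8   # example value; the default is roughly 50% of (RAM - 1 GB)
```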