MongoDB update takes a lot of time for a large number of inputs

I have 1.1 million records in an array. Each record represents a document in MongoDB. I need to update each document in the DB through a script.

I created a script that iterates over the array of 1.1 million records and calls mongoose's findOneAndUpdate method for each one. This approach works, but it takes a lot of time.


for (const object of array) {
  await Model.findOneAndUpdate({ a: object.a }, { b: object.b }); // runs 1.1 million (11 lakh) times, which takes a lot of time
}

Is there a way I can update the records in an time efficient manner?

You need bulkWrite.

I do not know if you can use it directly, given that you are using mongoose. Hopefully, you still have access to the native driver API.
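A minimal sketch of the idea: instead of issuing one findOneAndUpdate per record, build one array of updateOne operations and hand it to bulkWrite in a single call. The field names a and b come from the question; the model name Model and the helper buildOps are my own illustrative choices, not a specific API.

```javascript
// Convert each record into an updateOne operation suitable for bulkWrite.
// Each operation matches on field "a" and sets field "b", mirroring the
// original findOneAndUpdate({ a: ... }, { b: ... }) call.
function buildOps(records) {
  return records.map((record) => ({
    updateOne: {
      filter: { a: record.a },
      update: { $set: { b: record.b } },
    },
  }));
}

// Usage (assumes a connected mongoose model named Model):
// await Model.bulkWrite(buildOps(records), { ordered: false });
```

Passing `ordered: false` lets the server continue past individual failures and can be faster, since operations need not be applied strictly in sequence.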


So I need to do something like pass an array of 1.1 million operations to the bulkWrite function. Correct?

The API documentation I shared clearly indicates that the first parameter has to be an array of operations. I do not know what other confirmation you need; I cannot explain it better than what is there.

Thanks for replying. I was just confirming whether such a large number of operations is supported.

There’s a limit with bulkWrite; however, you can still pass all 1.1 million operations in one array (memory intensive) and the driver will split them into batches for you.

For your general knowledge:

The number of operations in each group cannot exceed the value of the maxWriteBatchSize of the database. As of MongoDB 3.6, this value is 100,000. This value is shown in the hello.maxWriteBatchSize field.

This limit prevents issues with oversized error messages. If a group exceeds this limit, the client driver divides the group into smaller groups with counts less than or equal to the value of the limit. For example, with the maxWriteBatchSize value of 100,000, if the queue consists of 200,000 operations, the driver creates 2 groups, each with 100,000 operations.

If you use runCommand directly, however, it will throw an error if the batch exceeds the limit.
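If you want to stay under maxWriteBatchSize yourself (for example, to bound memory use or when issuing the command manually), you can split the operations into batches before calling bulkWrite. A minimal sketch; the chunk helper and the batch size of 100,000 are illustrative, not a driver API:

```javascript
// Split a large operations array into batches of at most `size` elements,
// e.g. 100,000 to match maxWriteBatchSize as of MongoDB 3.6.
function chunk(ops, size = 100000) {
  const batches = [];
  for (let i = 0; i < ops.length; i += size) {
    batches.push(ops.slice(i, i + size));
  }
  return batches;
}

// Usage (assumes a connected mongoose model named Model and an
// already-built array of bulkWrite operations named allOps):
// for (const batch of chunk(allOps)) {
//   await Model.bulkWrite(batch, { ordered: false });
// }
```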
