Inconsistent behaviour of bulkWrite with ordered: false, stops on first duplicate document

Hi team,
I have created a new function in Realm in which I want to do multiple inserts into my database.
The function is fairly simple: it just builds an array of operations from the given values and submits them via bulkWrite(). Each document contains an id that should be unique inside the collection (I have set a unique index for that).

THE PROBLEM: although I have set ordered to false, the behaviour is not consistent.
Sometimes it skips the duplicates and stores all the new documents, returning the list of the existing ones, and other times it stops at the first duplicate.

The function is the following:

exports = async function insertManyPerformancesSkipDups(performances) {
  const cluster = context.services.get("mongodb-atlas");
  const perfCollection = cluster.db("main").collection("performances");

  // Build one insertOne operation per performance document.
  const operations = performances.map(p => ({ insertOne: { document: p } }));

  // ordered: false should continue past duplicate-key errors instead of
  // stopping at the first one.
  return perfCollection.bulkWrite(operations, {
    ordered: false,
    writeConcern: { w: "majority", wtimeout: 2000 }
  });
};

I searched the documentation and the forums, but I couldn't figure out what might be causing this inconsistency. Any ideas?

Thanks in advance

Do you catch and print errors and exceptions? If not, doing so might reveal what the issue is.
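
For example, something along these lines would surface the actual error. This is a minimal sketch reusing your function, and the exact shape of the thrown error object in the Realm/Atlas Functions runtime is an assumption on my part:

exports = async function insertManyPerformancesSkipDups(performances) {
  const cluster = context.services.get("mongodb-atlas");
  const perfCollection = cluster.db("main").collection("performances");

  const operations = performances.map(p => ({ insertOne: { document: p } }));

  try {
    return await perfCollection.bulkWrite(operations, { ordered: false });
  } catch (err) {
    // Log everything the error carries; depending on the runtime the
    // duplicate-key details (code 11000) or a write-concern timeout may
    // live on different properties, so dump them all.
    console.log("bulkWrite failed:", err.message);
    console.log(JSON.stringify(err, Object.getOwnPropertyNames(err)));
    throw err;
  }
};

If the log shows only duplicate-key errors (code 11000), the non-duplicate documents should still have been inserted despite the exception.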

I do not think the issue is related to ordered: false. I suspect that w: "majority" or wtimeout: 2000 throws an exception.

I added w: "majority" and wtimeout: 2000 in an attempt to solve this issue. The same thing was happening even without them.

Since I couldn't solve it, I changed my approach: instead of using insertOne() operations in the bulkWrite(), I now use updateOne() with upsert: true. This works fine for my scenarios, so I'll stick to that.
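
For reference, the replacement looks roughly like this. It is only a sketch of the approach described above; the function name and the performanceId field are placeholders for whatever field your unique index is actually defined on:

exports = async function upsertManyPerformances(performances) {
  const cluster = context.services.get("mongodb-atlas");
  const perfCollection = cluster.db("main").collection("performances");

  // One updateOne-with-upsert per document: $setOnInsert only applies when
  // a new document is inserted, so existing documents are left untouched
  // and no duplicate-key error aborts the batch.
  const operations = performances.map(p => ({
    updateOne: {
      filter: { performanceId: p.performanceId },
      update: { $setOnInsert: p },
      upsert: true
    }
  }));

  return perfCollection.bulkWrite(operations, { ordered: false });
};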
