WriteConflict error in MongoDB when running batches of insertMany operations in a transaction from Quarkus

Below is a snippet of my transaction implementation. Currently, MongoDB throws a WriteConflict error every time once it gets to processing the records from around 800k onwards.

    ClientSession clientSession = mongoClient.startSession();
    clientSession.startTransaction();
    int batchSize = 100000;
    try {
        ArrayList<SampleData> sampleDataList = new ArrayList<>();
        for (int i = 1; i <= 1000000; i++) {
            sampleDataList.add(new SampleData(i));
            // Flush every full batch of 100k documents inside the same transaction
            if (i % batchSize == 0) {
                sampleDataRepo.mongoCollection().insertMany(clientSession, sampleDataList);
                sampleDataList.clear();
            }
        }
        clientSession.commitTransaction();
    }
    catch (Exception e) {
        clientSession.abortTransaction();
        throw new Exception(e.getMessage(), e); // preserve the original cause
    }
    finally {
        clientSession.close();
    }

Hi @Zhen_Wei_Wong,

Welcome to the MongoDB Community forums :sparkles:

The issue you are facing could be related to MongoDB’s document-level concurrency control. MongoDB uses optimistic concurrency control to keep its transactions consistent.

For example, if two or more transactions attempt to modify the same document concurrently, MongoDB throws a WriteConflict error, indicating that one or more of the transactions failed due to the conflict.
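
As a side note, a WriteConflict inside a multi-document transaction is usually reported as a transient error, and the sync Java driver’s `ClientSession.withTransaction` helper retries the transaction body for you when that label is present. A minimal sketch (where `doSampleInserts` is a hypothetical method wrapping your batched `insertMany` calls) could look like this:

    // Sketch only: withTransaction retries the body when the server returns a
    // transient error such as WriteConflict (TransientTransactionError label).
    try (ClientSession session = mongoClient.startSession()) {
        session.withTransaction(() -> {
            doSampleInserts(session); // hypothetical method containing your batched insertMany calls
            return null;              // TransactionBody must return a value
        });
    }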

To understand your error better, could you please share the following:

  1. The full error message you’re seeing. Is there any more information beyond just “write conflict error”?
  2. Which MongoDB version you are using.
  3. The SampleData() class, and what exactly you mean by “from 800k onwards”.

Meanwhile, please go through this link to read about In-progress Transactions and Write Conflicts.

Best,
Kushagra

Honestly, I have no idea in what scenario one would use a transaction to insert 1 million docs.

Hi, thanks for the warm welcome.

I found out that the WriteConflict error is gone after setting --wiredTigerCacheSizeGB 180. However, when the transaction is committed after all 1 million insertions have gone through, it throws the error shown below.

{"t":{"$date":"2023-03-01T13:30:39.464+08:00"},"s":"E", "c":"WT", "id":22435, "ctx":"conn23","msg":"WiredTiger error message","attr":{"error":12,"message":{"ts_sec":1677648639,"ts_usec":460201,"thread":"2784:140705341332272","session_dhandle_name":"file:collection-14--2788248468824538652.wt","session_name":"WT_CURSOR.insert","category":"WT_VERB_DEFAULT","category_id":9,"verbose_level":"ERROR","verbose_level_id":-3,"msg":"int __cdecl __realloc_func(struct __wt_session_impl *,unsigned __int64 *,unsigned __int64,bool,void *):134:memory allocation of 8583939072 bytes failed","error_str":"Not enough space","error_code":12}}} {"t":{"$date":"2023-03-01T13:30:39.465+08:00"},"s":"F", "c":"REPL", "id":17322, "ctx":"conn23","msg":"Write to oplog failed","attr":{"error":"UnknownError: WiredTigerRecordStore::insertRecord 12: Not enough space"}} {"t":{"$date":"2023-03-01T13:30:39.474+08:00"},"s":"F", "c":"ASSERT", "id":23089, "ctx":"conn23","msg":"Fatal assertion","attr":{"msgid":17322,"file":"src\\mongo\\db\\repl\\oplog.cpp","line":369}} {"t":{"$date":"2023-03-01T13:30:39.474+08:00"},"s":"F", "c":"ASSERT", "id":23090, "ctx":"conn23","msg":"\n\n***aborting after fassert() failure\n\n"} {"t":{"$date":"2023-03-01T13:30:39.475+08:00"},"s":"F", "c":"CONTROL", "id":6384300, "ctx":"conn23","msg":"Writing fatal message","attr":{"message":"Got signal: 22 (SIGABRT).\n"}}

Hi :wave: @Zhen_Wei_Wong,

Thanks for sharing the full error message.

The error seems to be related to the WiredTiger (WT) storage engine. The first part of the message shows a memory allocation failure: WiredTiger could not allocate roughly 8.58 GB (8583939072 bytes) because there was not enough space available.

The next part of the message shows that the write to the oplog failed, again because there was not enough space to insert the record into the WiredTigerRecordStore.

Finally, there is a message showing that a signal (SIGABRT) was received, which triggered the process to abort.

Overall, the error indicates a problem likely related to memory or disk space constraints. To resolve this issue, the root cause of the memory allocation failure needs to be identified.
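
While you investigate, it may help to watch the WiredTiger cache usage around commit time via the serverStatus command. The sketch below is only one rough way to do that from the sync Java driver; the cache field names are taken from typical serverStatus output and may differ slightly between versions:

    import org.bson.Document;
    import com.mongodb.client.MongoClient;

    // Rough diagnostic sketch: print how much of the WiredTiger cache is in use.
    void printCacheUsage(MongoClient mongoClient) {
        Document status = mongoClient.getDatabase("admin")
                .runCommand(new Document("serverStatus", 1));
        Document cache = status.get("wiredTiger", Document.class)
                .get("cache", Document.class);
        System.out.println("bytes currently in cache : " + cache.get("bytes currently in the cache"));
        System.out.println("maximum bytes configured : " + cache.get("maximum bytes configured"));
    }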

However, can you clarify the following:

  • Does it work if you don’t use transactions? (A minimal non-transactional sketch is included after this list for reference.)
  • Which MongoDB version are you using?
  • What are your hardware specifications?
  • Also, please share a sample document with us.
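
For reference, the non-transactional variant could look roughly like the sketch below. It is the same batching loop without a ClientSession, so each batch commits on its own and you lose the all-or-nothing guarantee; it is only a sketch for comparison, not a drop-in fix:

    int batchSize = 100000;
    ArrayList<SampleData> batch = new ArrayList<>();
    for (int i = 1; i <= 1000000; i++) {
        batch.add(new SampleData(i));
        if (i % batchSize == 0) {
            // Plain bulk insert: no session or transaction, so each batch is durable on its own
            sampleDataRepo.mongoCollection().insertMany(batch);
            batch.clear();
        }
    }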

Best,
Kushagra