In my code I have a findOneAndUpdate with a filter on _id (plus two other fields) and upsert: true.
When it runs concurrently on MongoDB 7, I get this error (Java driver):
'Plan executor error during findAndModify :: caused by :: E11000 duplicate key error collection: XXXXXX dup key: { _id: "XXXXXX" } found value: {}' on server 127.0.0.1:27017. The full response is {"ok": 0.0, "errmsg": "Plan executor error during findAndModify :: caused by :: E11000 duplicate key error collection: XXXXXX dup key: { _id: \"XXXXXX\" } found value: {}", "code": 11000, "codeName": "DuplicateKey", "keyPattern": {"_id": 1}, "keyValue": {"_id": "XXXXXX"}, "foundValue": {}, "$clusterTime": {"clusterTime": {"$timestamp": {"t": 1724767678, "i": 2}}, "signature": {"hash": {"$binary": {"base64": "XXXXXX", "subType": "00"}}, "keyId": XXXXXX}}, "operationTime": {"$timestamp": {"t": 1724767678, "i": 2}}}
I think this is the very old behavior described in https://jira.mongodb.org/browse/SERVER-14322
What is the correct way to handle it?
The only thing that comes to mind is to catch the error on the application side and retry the update a certain number of times, but that forces me to write a lot of boilerplate.
Is it possible that such a common pattern has no elegant solution?
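For reference, here is a minimal sketch of that retry idea. The helper and the DuplicateKeyException class below are hypothetical stand-ins, not part of the MongoDB driver: in real code the Supplier would wrap the actual findOneAndUpdate call, and the catch would inspect the driver's exception for error code 11000 instead.

```java
import java.util.function.Supplier;

// Hypothetical stand-in for the driver's duplicate-key failure (E11000).
class DuplicateKeyException extends RuntimeException {
    DuplicateKeyException(String msg) { super(msg); }
}

final class UpsertRetry {
    // Runs the operation up to maxAttempts times, retrying only when it
    // fails with a duplicate-key error; any other exception propagates.
    static <T> T withRetry(Supplier<T> op, int maxAttempts) {
        DuplicateKeyException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (DuplicateKeyException e) {
                last = e; // another writer won the upsert race; try again
            }
        }
        throw last;
    }
}
```

The retry should normally succeed on the second attempt: once the racing writer has inserted the document, the filter matches it and the operation takes the plain update path instead of the insert path.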
Thanks.