Handling concurrent updates across the collection

I have a photos collection like this:

    { "id": "1", "imageOrder": 1, "imageUrl": "url1" },
    { "id": "2", "imageOrder": 2, "imageUrl": "url2" },
    { "id": "3", "imageOrder": 3, "imageUrl": "url3" }


  1. id is the MongoDB id
  2. imageOrder is the order in which the image should be displayed in the UI
  3. imageUrl is the url of the image
  4. userId is the id of the user to whom this image belongs

I have an endpoint that allows a user to delete one of their photos. The logic roughly works like below.
Let's say you want to delete the 2nd image in the above example:

  1. find all photos of the requesting user
  2. re-order the rest of the images (e.g. if you are deleting the 2nd image, the 3rd image's imageOrder becomes 2)
  3. use bulkWrite to update the imageOrder of the remaining images and delete the 2nd image
  4. the final snapshot would look like this: [{id:1,imageOrder:1},{id:3,imageOrder:2}]
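For reference, the in-memory reordering in steps 2 and 3 can be sketched as a pure function (a minimal sketch; the function name is made up):

```python
def reorder_after_delete(photos, delete_id):
    """Drop the deleted photo and renumber the rest from 1,
    preserving their existing display order."""
    remaining = sorted((p.copy() for p in photos if p["id"] != delete_id),
                       key=lambda p: p["imageOrder"])
    for order, p in enumerate(remaining, start=1):
        p["imageOrder"] = order
    return remaining

photos = [{"id": "1", "imageOrder": 1},
          {"id": "2", "imageOrder": 2},
          {"id": "3", "imageOrder": 3}]
# Deleting the 2nd image renumbers the 3rd image to order 2.
print(reorder_after_delete(photos, "2"))
# [{'id': '1', 'imageOrder': 1}, {'id': '3', 'imageOrder': 2}]
```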

I noticed unpredictable behaviour when 2 parallel calls are made to the delete API for the same user:

  1. delete calls for images 1 and 2 are triggered by the user

  2. both calls read all 3 images into memory using the find call

  3. the call deleting image 1 ends up with [{id:2,imageOrder:1},{id:3,imageOrder:2}] in memory

  4. the call deleting image 2 ends up with [{id:1,imageOrder:1},{id:3,imageOrder:2}] in memory

Now whichever call updates the DB last is what gets persisted.

The expected behaviour is that only [{id:3,imageOrder:1}] remains in the collection.
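To make the lost update concrete, here is a plain-Python simulation of that interleaving (illustrative only; the dict stands in for the collection and the helper names are made up):

```python
db = {1: 1, 2: 2, 3: 3}  # id -> imageOrder

def plan_delete(snapshot, delete_id):
    """Build the bulkWrite ops (renumber updates + one delete)
    from an in-memory snapshot, as the endpoint does."""
    remaining = sorted(i for i in snapshot if i != delete_id)
    ops = [("update", i, order) for order, i in enumerate(remaining, start=1)]
    ops.append(("delete", delete_id, None))
    return ops

def apply_ops(db, ops):
    """Apply update/delete ops; updates on missing ids are no-ops,
    matching MongoDB's behaviour."""
    for kind, i, order in ops:
        if kind == "update" and i in db:
            db[i] = order
        elif kind == "delete":
            db.pop(i, None)

# Both calls read the same snapshot before either one writes.
snapshot = dict(db)
ops_a = plan_delete(snapshot, 1)   # delete image 1
ops_b = plan_delete(snapshot, 2)   # delete image 2
apply_ops(db, ops_a)
apply_ops(db, ops_b)
print(db)  # {3: 2} -- image 3 survives but keeps the stale order 2, not 1
```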
I tried using transactions with sessions, but that didn't seem to help.

How do I achieve this?

Using MongoDB Atlas v5.0.14.

Hi @MAHENDRA_HEGDE and welcome to the MongoDB community forum!!

The condition described above sounds like a race condition, where two parallel calls both update the "imageOrder" values and delete the image with a specific "id".

If maintaining imageOrder is not critically important for the application, the recommendation would be to avoid the field entirely. The imageOrder field certainly causes a race condition in your situation, with the added overhead of updating many more documents in the collection.

Consider a scenario where you delete the image with imageOrder 1 for a user whose collection contains 1 million images. With the operation described above, that single delete would trigger a million updates in the collection.
Further, if this field is only used to sort images, could the same be achieved using the _id field?
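For example, a default ObjectId begins with a 4-byte creation timestamp, so sorting on _id already yields insertion order and no separate imageOrder field is needed (the sample ids below are made up):

```python
# Hypothetical documents: the 24-char hex ObjectId string starts with
# a creation timestamp, so lexicographic order matches insertion order
# for ids generated at insert time.
photos = [
    {"_id": "63d0f2a10000000000000003", "imageUrl": "url3"},
    {"_id": "63d0f29f0000000000000001", "imageUrl": "url1"},
    {"_id": "63d0f2a00000000000000002", "imageUrl": "url2"},
]
ordered = sorted(photos, key=lambda p: p["_id"])
# Server-side equivalent (assuming a `collection` handle):
#     collection.find({"userId": user_id}).sort("_id", 1)
print([p["imageUrl"] for p in ordered])  # ['url1', 'url2', 'url3']
```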

To understand the requirement better, could you help with the following information regarding the use of transactions:

  1. A sample code snippet that could help reproduce the issue in a local environment.
  2. Any error messages you observed while using transactions.
  3. The driver and driver version you are using.

In the meantime, the following PyMongo sample code using a transaction may be helpful (note that the documents use _id, so the update and delete filters must match on _id as well):

    from pymongo import MongoClient, InsertOne, DeleteOne, UpdateOne
    from pymongo.read_concern import ReadConcern
    from pymongo.read_preferences import ReadPreference
    from pymongo.write_concern import WriteConcern

    conn = MongoClient("mongodb+srv://cluster0.sqm88.mongodb.net/test")
    db = conn["test"]
    collection = db["forum205166"]

    def callback(session):
        requests = [InsertOne({'_id': 1, "imageOrder": 1, "userId": 1, "imageURL": "url1"}),
                    InsertOne({'_id': 2, "imageOrder": 2, "userId": 1, "imageURL": "url2"}),
                    InsertOne({'_id': 3, "imageOrder": 3, "userId": 1, "imageURL": "url3"}),
                    UpdateOne({'_id': 3}, {'$set': {'imageOrder': 2}}),
                    DeleteOne({'_id': 2})]
        result = collection.bulk_write(requests, session=session)

    with conn.start_session() as session:
        session.with_transaction(callback,
                                 read_concern=ReadConcern("local"),
                                 write_concern=WriteConcern("majority"),
                                 read_preference=ReadPreference.PRIMARY)
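If imageOrder must be kept, another option is to compute the renumbering on the server instead of from a stale in-memory snapshot: inside one transaction, delete the photo with find_one_and_delete, then run update_many with a {'imageOrder': {'$gt': deleted_order}} filter and {'$inc': {'imageOrder': -1}}. Concurrent transactions touching the same documents should then hit a write conflict, and with_transaction retries the loser. A plain-Python sketch of the resulting order math (the dict stands in for the collection; names are illustrative):

```python
def delete_photo(db, photo_id):
    """Mimics find_one_and_delete followed by
    update_many({'imageOrder': {'$gt': order}}, {'$inc': {'imageOrder': -1}})
    executed atomically within one transaction."""
    doc = db.pop(photo_id, None)
    if doc is None:
        return
    # Shift every higher-ordered photo down by one.
    for other in db.values():
        if other["imageOrder"] > doc["imageOrder"]:
            other["imageOrder"] -= 1

db = {i: {"imageOrder": i} for i in (1, 2, 3)}
delete_photo(db, 1)
delete_photo(db, 2)
print(db)  # {3: {'imageOrder': 1}} -- the expected final state
```

Because each delete reads the deleted document's order and shifts the others in the same atomic unit, the second delete always sees the first delete's renumbering rather than a stale snapshot.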

Let us know if you have any further queries.

Best Regards
