Atlas triggers update data

I have a few questions related to triggers in Atlas:

  1. I have a trigger that listens for updates on a collection; in particular, I am adding an ID to an array using $addToSet. The problem is that I am getting the entire array in the trigger (change stream) instead of just the item being added. So when I add ID2 I get the following:

{ accepted: [ 'ID1', 'ID2' ] }

Is there a guarantee that ID2 will always be the last item in the array? How can I determine what item was added to the array?

  2. As part of the update I can add the ID to one of two arrays:

{ accepted: [ 'ID1', 'ID2' ] }

OR

{ declined: [ 'ID3', 'ID4' ] }

How can I filter the trigger to only execute when the ID is added to the accepted array?

In setting up your change stream you may specify fields to watch. See https://docs.mongodb.com/manual/reference/method/db.collection.watch/.
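
As a minimal sketch of that idea (Node.js driver assumed; the collection name is hypothetical), a change stream can be narrowed with an aggregation pipeline so it only surfaces updates touching the accepted field:

```javascript
// Pipeline that filters change events down to updates where the
// `accepted` field appears among the updated fields.
const pipeline = [
  {
    $match: {
      operationType: "update",
      "updateDescription.updatedFields.accepted": { $exists: true },
    },
  },
];

// With the Node.js driver the pipeline would be passed to watch(), e.g.:
//   const stream = db.collection("requests").watch(pipeline);
```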

Hello @Dev_Ops

My name is Josman and I will try to assist you with this issue.

How can I filter the trigger to only execute when the ID is added to the accepted array?

You can use the $push operator instead of $addToSet. If you configure your trigger with the fullDocument option disabled, every time you update a document in this collection with $push, you will receive the following in your change stream event:

"updateDescription":{
  "updatedFields":{
     "accepted.2":"ID3"
  },
  "removedFields":[]
}

This way, you will know the element that was inserted in that update, as well as its index position in the array.

Please let me know if you have any additional questions or concerns regarding the details above.

Kind Regards,
Josman


Thanks for your suggestion @Josman_Perez_Exposit; however, this presents a new problem.

Because the index changes on every $push, the following code, which filters events based on items being added to the accepted array, no longer works:

{
  "updateDescription.updatedFields.accepted": {
    "$exists": true
  }
}

Changing from $addToSet to $push has eliminated receiving the entire array, which is great, but now I need to fix the $match expression.

Also, when I send this data to my AWS EventBridge Lambda function, reading updateDescription.updatedFields.accepted to get the added ID becomes an issue (because of the .2 index appended to the end of accepted). Can I rename the key before sending the event data to AWS?
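
One client-side approach, sketched here under the assumption that the renaming can happen inside the trigger function before the event is forwarded (the helper name and payload shape are mine, not from any API), is to scan updatedFields for the positional key and emit a stable key instead:

```javascript
// Pull the pushed ID out of updatedFields regardless of its index,
// matching keys like "accepted.0", "accepted.17", ... produced by $push.
function extractAcceptedId(updatedFields) {
  for (const [key, value] of Object.entries(updatedFields)) {
    if (/^accepted\.\d+$/.test(key)) {
      return value;
    }
  }
  return null; // the update did not touch `accepted`
}

// Shape taken from the change event shown earlier in this thread.
const event = {
  updateDescription: { updatedFields: { "accepted.2": "ID3" }, removedFields: [] },
};

// Forward a normalized payload (e.g. to EventBridge) under a fixed key.
const payload = { acceptedId: extractAcceptedId(event.updateDescription.updatedFields) };
```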

Thanks.


Just a word of caution about using $push vs. $addToSet: the semantics are not the same.

Pushing ID3 into [ ID1, ID3 ] will result in an update to [ ID1, ID3, ID3 ].

Adding ID3 to the set [ ID1, ID3 ] will result in no update and no message on the stream.

Maybe your use case can use $push rather than $addToSet.
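
The difference can be simulated in plain JS against in-memory arrays (MongoDB applies the same logic server-side to the stored document):

```javascript
// $push always appends, so duplicates are allowed.
function push(arr, id) {
  return [...arr, id];
}

// $addToSet is a no-op when the value is already present,
// which also means no update and no change-stream event.
function addToSet(arr, id) {
  return arr.includes(id) ? arr : [...arr, id];
}

console.log(push(["ID1", "ID3"], "ID3"));     // [ 'ID1', 'ID3', 'ID3' ]
console.log(addToSet(["ID1", "ID3"], "ID3")); // [ 'ID1', 'ID3' ]
```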


@steevej, thanks for pointing that out. What I did to mitigate duplicate IDs is add the following to the code used for the update:
Collection.update({ _id: 1234, accepted: { $nin: [ID] } }, { $push: { accepted: ID } })

The accepted: { $nin: [ID] } filter should take care of the duplicates.
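
A sketch of why this works, mirrored in plain JS (the document shape and _id value follow the snippet above; the function name is mine): the update's filter only matches while ID is absent, so a second identical call is a no-op and emits no change event.

```javascript
// Simulates the guarded update: { _id: 1234, accepted: { $nin: [ID] } }
// as the filter, { $push: { accepted: ID } } as the update.
function guardedPush(doc, id) {
  const matches = doc._id === 1234 && !doc.accepted.includes(id);
  if (matches) {
    doc.accepted.push(id);
    return { modifiedCount: 1 };
  }
  return { modifiedCount: 0 }; // filter did not match: no write, no event
}

const doc = { _id: 1234, accepted: ["ID1"] };
console.log(guardedPush(doc, "ID2")); // { modifiedCount: 1 }
console.log(guardedPush(doc, "ID2")); // { modifiedCount: 0 }
```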


That is a smart idea.

Can anyone let me know if what I am trying to do is even possible? If it’s not then I would want to start looking into alternate options.

Thanks.

Here is an idea.

When you push IDn into accepted, also push it into a second array, accepted_to_process. Set the change stream to listen on accepted_to_process. Once you have processed an ID from the second array, you need to remove it from there. That is the caveat of this hack: you end up with 2 updates.

As I wrote the above, I thought of something else.

Rather than pushing IDn, push the object { id: IDn, state: "to_process" }, but you also need to change state once it is processed.
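
A sketch of the two operations this variant implies (the field and state names follow the suggestion above; the _id and "ID5" values are hypothetical). The first update guards against duplicates, the second flips the wrapper's state using the positional $ operator:

```javascript
// Update 1: enqueue the wrapper object, skipping if this id is already present.
const enqueue = {
  filter: { _id: 1234, "accepted.id": { $ne: "ID5" } },
  update: { $push: { accepted: { id: "ID5", state: "to_process" } } },
};

// Update 2: once processed, flip the matched element's state in place.
const markProcessed = {
  filter: { _id: 1234, "accepted.id": "ID5" },
  update: { $set: { "accepted.$.state": "processed" } },
};
```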

Hello @Dev_Ops ,

Can anyone let me know if what I am trying to do is even possible?

Sorry for the late reply. I have tried to find an answer to this and have escalated this problem internally, to be able to use a $match expression each time the accepted field is updated regardless of the index added. Please allow me some time to investigate this.

The $match expression for this is not straightforward, as it would need to use a sort of $regex expression. However, in the meantime, I looked for a workaround using $addToSet. Every time you use $addToSet, the element is added to the end of the array. Be aware, though, that MongoDB Realm limits the execution of Trigger functions to a rate of 1000 executions per second per trigger. If there are additional trigger executions beyond this threshold, MongoDB Realm adds their associated function calls to a queue and executes each one when capacity becomes available.

If you add the following $match expression to your trigger:

{"updateDescription.updatedFields.accepted":{"$exists":true}}

This will make the trigger run every time you update your accepted array, and you will get all the existing elements plus the element added. To get the added ID and send it to AWS EventBridge, you can use the following $project expression:

{
  "operationType":{"$numberInt":"1"},
  "lastID":{"$arrayElemAt":["$updateDescription.updatedFields.accepted",{"$numberInt":"-1"}]}
}

This results in a field called lastID that contains the latest ID added to the accepted array in the update operation.
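
What the $arrayElemAt stage computes can be mirrored in plain JS against the full accepted array from the event (the helper name is mine; a negative index counts back from the end, so -1 selects the last element):

```javascript
// Plain-JS equivalent of { $arrayElemAt: [ array, idx ] }:
// negative idx indexes from the end of the array.
function arrayElemAt(arr, idx) {
  return idx < 0 ? arr[arr.length + idx] : arr[idx];
}

const accepted = ["ID1", "ID2", "ID3"];
const lastID = arrayElemAt(accepted, -1); // "ID3"
```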

Please let me know if this adequately addresses your questions.

Kind Regards,
Josman

Hello @steevej

One addition to the following:

Rather than pushing IDn, push the object { id: IDn, state: "to_process" }, but you also need to change state once it is processed.

Regardless of which option you choose to apply in your application, it is a good idea to have some logging or reporting mechanism that will help you narrow down any issues you might encounter in your system.


Thanks for the suggestions!

@Josman_Perez_Exposit, I will give your suggestion a try and report back, as it involves only one DB update.

One question: the arrays can grow in length to a few thousand items (max 3000). Would there be any performance ramifications with your suggested workaround?

@Josman_Perez_Exposit, your suggestion using $addToSet works! Thank you. My final concern/question is this.

Will the last item on the accepted array always be the ID I just inserted? There is no chance that it could be another ID regardless of the 1000 items per second rate limit? I just want to make sure that I will not accidentally process an ID twice.

Thanks.

Hello @Dev_Ops

One question: the arrays can grow in length to a few thousand items (max 3000). Would there be any performance ramifications with your suggested workaround?

Yes, you can assume the time to insert grows as the array does; a best design practice is to avoid arrays of more than ~200 elements in a schema anyway.

Will the last item on the accepted array always be the ID I just inserted? There is no chance that it could be another ID regardless of the 1000 items per second rate limit? I just want to make sure that I will not accidentally process an ID twice.

I am a bit worried about this, as while it may be true today, it is not documented behaviour as far as I am aware.

Kind Regards,
Josman