Trigger Feature

How can we identify a trigger that has been triggered by another trigger?

If I create a naive trigger that, for each update, updates the same document with a timestamp, it's an infinite loop :wink:

Hi @decaxd and welcome to the MongoDB Community :muscle: !

I think you have no way to know. Each trigger is completely independent from the others.

What you could do, though, is add an extra layer of filtering to your triggers so each one only matches the documents it is actually supposed to handle.

Example: don’t just filter on {operationType: "update"}; add an extra condition on the fields you are really watching.
Example: only retrieve the updates that modify the age field: {"updateDescription.updatedFields.age": {$exists: true}}

Example of a document retrieved with the above filter:

  {
    _id: {
      _data: '8262056496000000012B022C0100296E5A100487308F2D66014D99A58D3CE631BDE12946645F69640064620563822E6765AD1CB7FC360004'
    },
    operationType: 'update',
    clusterTime: Timestamp({ t: 1644520598, i: 1 }),
    ns: { db: 'test', coll: 'test' },
    documentKey: { _id: ObjectId("620563822e6765ad1cb7fc36") },
    updateDescription: {
      updatedFields: { age: 35 },
      removedFields: [],
      truncatedArrays: []
    }
  }

By making your triggers more specific, you limit the chances of an infinite loop.
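To make the idea concrete, here is a minimal sketch of that extra filter as a plain Node.js predicate (an assumption on my part: the event objects below just mimic the shape of real change events, outside any actual trigger):

```javascript
// Plain predicate mirroring the match stage
// {"updateDescription.updatedFields.age": {$exists: true}}:
// only process an event when the field we actually watch was touched.
function matchesAgeUpdate(event) {
  if (event.operationType !== "update" || !event.updateDescription) {
    return false;
  }
  const updated = event.updateDescription.updatedFields || {};
  return Object.keys(updated).some(
    (field) => field === "age" || field.startsWith("age.")
  );
}

// An update touching `age` passes the filter...
console.log(matchesAgeUpdate({
  operationType: "update",
  updateDescription: { updatedFields: { age: 35 }, removedFields: [] },
})); // true

// ...but the trigger's own follow-up write (touching only a timestamp
// field) does not, which is what breaks the loop.
console.log(matchesAgeUpdate({
  operationType: "update",
  updateDescription: { updatedFields: { lastModified: 1644520598 } },
})); // false
```

The second call is exactly the self-inflicted event from the original question: since the write-back only touches a field the filter ignores, the chain stops there.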

Also, if you can, avoid issuing a new update on the same collection when you are already triggering on updates. You can probably do the entire processing in the first update operation and skip the extra trigger round-trip.

Not sure if that’s really a solution but at least I tried :sweat_smile: !


It was one of our first tries. But we receive string values from Kafka and we want to encode the date correctly. That way we would need a first trigger that encodes and a second one that does nothing.

We resolved this by design, by adding a ftimestamp field that can only be set by our change stream. That way we can filter and process only the events that don’t have the timestamp in updatedFields.
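If I understood that design correctly, the guard can be sketched as a plain predicate (an assumption on my part: plain Node.js outside any real trigger, with `ftimestamp` being the sentinel field described above):

```javascript
// Predicate mirroring a filter like
// {"updateDescription.updatedFields.ftimestamp": {$exists: false}}:
// if our own write-back is the only thing that ever sets `ftimestamp`,
// any event carrying it in updatedFields came from us and can be skipped.
function isExternalUpdate(event) {
  if (event.operationType !== "update" || !event.updateDescription) {
    return false;
  }
  const updated = event.updateDescription.updatedFields || {};
  return !("ftimestamp" in updated);
}

// A Kafka-driven update passes...
console.log(isExternalUpdate({
  operationType: "update",
  updateDescription: { updatedFields: { value: "some string" } },
})); // true

// ...while our own timestamp write-back does not.
console.log(isExternalUpdate({
  operationType: "update",
  updateDescription: { updatedFields: { ftimestamp: 1644520598 } },
})); // false
```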

Hi @decaxd,

If you receive the docs from Kafka, then I guess you can trigger on the insert event rather than the update one, so you shouldn’t trigger a second time on the update event.

Also, you can filter to only trigger on documents that are inserted AND match a $type: "string" condition, so you only trigger when the date field is a string and not an ISODate.

Here is how to do it:

db.coll.watch([{$match: {operationType: "insert", "fullDocument.date": {$type: "string"}}}])

The above change stream triggers on

db.coll.insertOne({date: "12/03/88"})

but not on this

db.coll.insertOne({date: new Date()})


Kafka can also send me updates of previously inserted documents. :slight_smile:

Ha, right :grin:.
I forgot about that case.
Well, in that case, also allow updates with an $in filter (e.g. operationType: {$in: ["insert", "update"]}) and I think my solution still works.
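For what it’s worth, the combined condition could be sketched as a plain predicate (assumptions on my part: plain Node.js outside a real trigger; the `date` field name is taken from the earlier examples; on updates, fullDocument is only present if the change stream or trigger is configured to look the full document up):

```javascript
// Rough equivalent of a match stage such as
//   [{$match: {operationType: {$in: ["insert", "update"]},
//              "fullDocument.date": {$type: "string"}}}]
// An event needs processing only when the date is still a raw string,
// i.e. it has not been converted to a proper Date yet.
function needsDateEncoding(event) {
  if (!["insert", "update"].includes(event.operationType)) {
    return false;
  }
  const date = event.fullDocument && event.fullDocument.date;
  return typeof date === "string";
}

console.log(needsDateEncoding({
  operationType: "insert",
  fullDocument: { date: "12/03/88" },
})); // true — raw string, still needs encoding

console.log(needsDateEncoding({
  operationType: "update",
  fullDocument: { date: new Date() },
})); // false — already a proper Date, so no second trigger fires
```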