Need help understanding a discrepancy in how aggregate $merge with a whenMatched custom pipeline ($set) executes

We are using MongoDB 4.2 with the Node.js driver, and we run an aggregation on a source collection, similar to the pipeline below, that merges into a target collection and sets the values of just two fields when there is an _id match.

Under certain conditions (for example, more load on the database from operations of a different tenant in the same cluster), I have seen the aggregation ($merge) take much longer than usual for the same volume and nature of data. While debugging the MongoDB pod logs, I found that in the slow occurrences the $set is executed as a separate update operation for each record, whereas in the normal scenario there is only a single log entry for the whole aggregation pipeline. I would like to understand why this happens, and whether there is a workaround or an option (such as batchSize) I can pass to ensure the $merge is not executed as individual update operations. Many thanks in advance.

    [{
        $merge: {
            into: 'target',
            on: '_id',
            whenMatched: [{
                $set: {
                    fieldA: '$$new.fieldA',
                    fieldB: '$$new.fieldB'
                }
            }],
            whenNotMatched: 'discard'
        }
    }]
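
For reference, this is roughly how the pipeline is invoked from the Node.js driver (a minimal sketch; the db handle and collection names are placeholders, and the options simply mirror those visible in the aggregation log entry below):

    // Minimal sketch (names are illustrative): run the $merge pipeline with
    // the options that appear in the aggregation log entry below.
    const pipeline = [{
        $merge: {
            into: 'target',
            on: '_id',
            whenMatched: [{ $set: { fieldA: '$$new.fieldA', fieldB: '$$new.fieldB' } }],
            whenNotMatched: 'discard'
        }
    }];

    // $merge returns no documents; exhausting the cursor simply makes the
    // server execute the pipeline and write into the target collection.
    await db.collection('source')
        .aggregate(pipeline, {
            allowDiskUse: true,
            maxTimeMS: 7200000,
            readConcern: { level: 'majority' },
            writeConcern: { w: 'majority' }
        })
        .toArray();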

Log snippet for aggregation:
    COMMAND [conn460043] command 5f6d968f80c6d0c092dfc024_test.source command: aggregate { aggregate: "source", pipeline: [ { $merge: { into: "target", on: "_id", whenMatched: [ { $set: { fieldA: "$$new.fieldA", fieldB: "$$new.fieldB" } } ], whenNotMatched: "discard" } } ], writeConcern: { w: "majority" }, allowDiskUse: true, cursor: {}, readConcern: { level: "majority" }, maxTimeMS: 7200000, lsid: { id: UUID("beb21883-6e55-4e39-bbe5-3dbf23077997") }, $clusterTime: { clusterTime: Timestamp(1635620537, 172), signature: { hash: BinData(0, A7BD0DCBE227D2AB7FDE94CA6A6E1FCA7E8CDDEC), keyId: 6993171338823204866 } }, $db: "5f6d968f80c6d0c092dfc024_ecam" } planSummary: COLLSCAN keysExamined:0 docsExamined:12411 cursorExhausted:1 numYields:3752 nreturned:0 reslen:284 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 16566 } }, ReplicationStateTransition: { acquireCount: { w: 16675 } }, Global: { acquireCount: { r: 610, w: 16065 } }, Database: { acquireCount: { r: 109, w: 16065 } }, Collection: { acquireCount: { r: 110, w: 16065 } }, Mutex: { acquireCount: { r: 24827 } } } flowControl:{ acquireCount: 16065, timeAcquiringMicros: 258397 } storage:{ data: { bytesRead: 125453, timeReadingMicros: 205 } } protocol:op_msg 218122ms

Log snippet for a single update operation:
    WRITE [conn459923] update 5f6d968f80c6d0c092dfc024_test.target command: { q: { _id: ObjectId('617cdaf7e7b51ae11245e30f') }, u: [ { $set: { fieldA: "$$new.fieldA", fieldB: "$$new.fieldB" } } ], c: { new: { _id: ObjectId('617cdaf7e7b51ae11245e30f'), fieldA: 21.68221633341363, fieldB: [ { name: "Upcharge010910", type: "uc", ruleName: "uc1", amount: 21.68221633341363 } ] } }, multi: false, upsert: false } planSummary: IDHACK keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keysInserted:0 keysDeleted:0 numYields:1 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 3, timeAcquiringMicros: 17 } storage:{ data: { bytesRead: 14738, timeReadingMicros: 27 } } 127ms

In a nutshell, I would like to understand under which conditions the above $merge aggregation is executed as individual update operations, and whether there are any options I can pass to avoid it.