Starting in version 4.2, MongoDB provides the ability to perform multi-document transactions for sharded clusters.
The following page lists concerns specific to running transactions on a sharded cluster. These concerns are in addition to those listed in Production Considerations.
For transactions on MongoDB 4.2 deployments (replica sets and sharded clusters), clients must use MongoDB drivers updated for MongoDB 4.2.
On sharded clusters with multiple mongos instances, performing transactions with drivers updated for MongoDB 4.0 (instead of MongoDB 4.2) will fail and can result in errors, including:
Your driver may return a different error. Refer to your driver's documentation for details.
cannot continue txnId -1 for session ... with txnId 1
cannot commit with no participants
Transactions that target a single shard should have the same performance as replica-set transactions.
Transactions that affect multiple shards incur a greater performance cost.
On a sharded cluster, transactions that span multiple shards will error and abort if any involved shard contains an arbiter.
To specify a time limit for a transaction, specify a maxTimeMS limit on commitTransaction. If maxTimeMS is unspecified, MongoDB uses the transactionLifetimeLimitSeconds parameter. To modify transactionLifetimeLimitSeconds for a sharded cluster, the parameter must be modified for all shard replica set members.
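As an illustration, the interaction between maxTimeMS and the server-side lifetime limit can be sketched in Python. The helper name is hypothetical (drivers normally assemble this command for you); the 60-second figure is the documented default of transactionLifetimeLimitSeconds:

```python
def commit_transaction_command(max_time_ms=None):
    """Sketch: build a commitTransaction command document.

    Hypothetical helper for illustration only. If maxTimeMS is
    omitted, the server falls back to the limit set by the
    transactionLifetimeLimitSeconds parameter (default: 60 seconds).
    """
    cmd = {"commitTransaction": 1}
    if max_time_ms is not None:
        cmd["maxTimeMS"] = max_time_ms
    return cmd

# Explicit 5-second limit on the commit:
print(commit_transaction_command(5000))
# No maxTimeMS: the server-side lifetime limit applies instead.
print(commit_transaction_command())
```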
For transactions on a sharded cluster, only the
"snapshot" read concern provides a consistent snapshot
across multiple shards.
For more information on read concern and transactions, see Transactions and Read Concern.
Regardless of the write concern specified for the transaction, the commit operation for a sharded cluster transaction includes some parts that use {w: "majority", j: true} write concern.
Transactions whose write operations span multiple shards will error and abort if any transaction operation reads from or writes to a shard that contains an arbiter.
To use mongodump and mongorestore as a backup strategy for sharded clusters, you must stop the sharded cluster balancer and use the fsync command or the db.fsyncLock() method on mongos to block writes on the cluster during backups.
Sharded clusters can also use a coordinated backup and restore process, which maintains the atomicity guarantees of transactions across shards.
Chunk migration acquires exclusive collection locks during certain stages.
If an ongoing transaction has a lock on a collection and a chunk migration that involves that collection starts, these migration stages must wait for the transaction to release the locks on the collection, thereby impacting the performance of chunk migrations.
If a chunk migration interleaves with a transaction (for instance, if a transaction starts while a chunk migration is already in progress and the migration completes before the transaction takes a lock on the collection), the transaction errors during the commit and aborts.
Depending on how the two operations interleave, some sample errors include (the error messages have been abbreviated):
an error from cluster data placement change ... migration commit in progress for <namespace>
Cannot find shardId the chunk belonged to at cluster time ...
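The interleaving rule above can be captured in a toy model (an assumption-laden sketch, not server logic): the commit fails only when the migration completes before the transaction takes its collection lock; otherwise the migration waits on the transaction's lock and the commit can proceed.

```python
def commit_outcome(migration_completed_before_txn_lock):
    """Toy model of the transaction/chunk-migration interleaving.

    If a chunk migration completes before the transaction takes its
    lock on the collection, the transaction errors during the commit
    and aborts. Otherwise the migration stages wait for the
    transaction to release its locks, and the commit succeeds.
    """
    if migration_completed_before_txn_lock:
        return "abort"
    return "commit"

print(commit_outcome(True))   # migration won the race
print(commit_outcome(False))  # transaction holds the lock first
```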
During the commit for a transaction, outside read operations may try to read the same documents that will be modified by the transaction. If the transaction writes to multiple shards, then during the commit attempt across the shards:
Outside reads that are part of causally consistent sessions (those that include afterClusterTime) wait until all writes of a transaction are visible.
Outside reads using other read concerns do not wait until all writes of a transaction are visible, but instead read the before-transaction version of the documents.
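The two read behaviors above can be sketched as a small model (the function and parameter names are hypothetical; the real behavior is internal to the server):

```python
def outside_read(before_doc, after_doc, commit_finished, causally_consistent):
    """Model of an outside read racing a multi-shard commit.

    Causally consistent reads (those that include afterClusterTime)
    wait until all of the transaction's writes are visible; reads
    using other read concerns return the before-transaction version
    of the document instead of waiting.
    """
    if causally_consistent:
        if not commit_finished:
            return "WAITING"   # blocks until the commit completes
        return after_doc
    return before_doc

# While the commit attempt is still in progress:
print(outside_read({"x": 1}, {"x": 2},
                   commit_finished=False, causally_consistent=True))
print(outside_read({"x": 1}, {"x": 2},
                   commit_finished=False, causally_consistent=False))
```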
See also Production Considerations.