This version of the documentation is archived and no longer supported.

FAQ: Concurrency

Changed in version 2.2.

MongoDB allows multiple clients to read and write a single corpus of data using a locking system to ensure that all clients receive the same view of the data and to prevent multiple applications from modifying the exact same pieces of data at the same time. Locks help guarantee that all writes to a single document occur either in full or not at all.

What type of locking does MongoDB use?

MongoDB uses a readers-writer [1] lock that allows concurrent read access to a database but gives exclusive access to a single write operation.

When a read lock exists, many read operations may use this lock. However, when a write lock exists, a single write operation holds the lock exclusively, and no other read or write operations may share the lock.

Locks are “writer greedy,” which means write locks have preference over reads. When both a read and write are waiting for a lock, MongoDB grants the lock to the write.
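To illustrate the policy, here is a minimal Python sketch of a writer-preferring readers-writer lock. This is a hypothetical illustration of the behavior described above, not MongoDB's actual implementation: readers share the lock, but any queued writer blocks new readers.

```python
import threading

class WriterGreedyRWLock:
    """Illustrative writer-preferring readers-writer lock."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0          # readers currently holding the lock
        self._writer = False       # True while a writer holds the lock
        self._waiting_writers = 0  # queued writers get preference

    def acquire_read(self):
        with self._cond:
            # "Writer greedy": new readers wait while any writer is queued.
            while self._writer or self._waiting_writers > 0:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            self._waiting_writers += 1
            while self._writer or self._readers > 0:
                self._cond.wait()
            self._waiting_writers -= 1
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Many readers can hold the lock at once, but a write holds it exclusively, and a waiting write is granted the lock before any new read.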

[1]You may be familiar with a “readers-writer” lock as “multi-reader” or “shared exclusive” lock. See the Wikipedia page on Readers-Writer Locks for more information.

How granular are locks in MongoDB?

Changed in version 2.2.

Beginning with version 2.2, MongoDB implements locks on a per-database basis for most read and write operations. Some global operations, typically short-lived operations involving multiple databases, still require a global, instance-wide lock. Before 2.2, there was only one global lock per mongod instance.

For example, if you have six databases and one takes a database-level write lock, the other five are still available for read and write. A global lock makes all six databases unavailable during the operation.
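The difference can be sketched with one lock per database. This is an illustrative Python model with made-up database names, not how mongod manages its locks internally:

```python
import threading

# Hypothetical per-database locks (MongoDB 2.2+ behavior): a write
# to one database does not block operations on the others.
db_locks = {name: threading.RLock() for name in
            ["db1", "db2", "db3", "db4", "db5", "db6"]}

def write_to(db_name):
    # Only db_name is locked; the other five stay available.
    with db_locks[db_name]:
        pass  # ... perform the write ...

def global_operation():
    # A global operation must hold every database lock at once,
    # making all six databases unavailable for its duration.
    for lock in db_locks.values():
        lock.acquire()
    try:
        pass  # ... perform the instance-wide operation ...
    finally:
        for lock in db_locks.values():
            lock.release()
```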

How do I see the status of locks on my mongod instances?

For reporting on lock utilization, use either of the following: the locks document in the output of serverStatus, or the locks field in the current operation reporting. Both provide insight into the types of locks held and the amount of lock contention in your mongod instance.

To terminate an operation, use db.killOp().
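As a rough illustration of interpreting those reports, the following Python snippet summarizes a sample locks document. The field names follow the timeLockedMicros/timeAcquiringMicros shape of the serverStatus locks output, but the values here are invented for illustration:

```python
# Illustrative sample of a serverStatus locks document; the values
# are made up. In the mongo shell you would obtain the real document
# with db.serverStatus().locks.
sample_locks = {
    ".": {"timeLockedMicros": {"R": 2500, "W": 12000},
          "timeAcquiringMicros": {"R": 300, "W": 4500}},
    "mydb": {"timeLockedMicros": {"r": 80000, "w": 40000},
             "timeAcquiringMicros": {"r": 1000, "w": 9000}},
}

def contention_ratio(db_stats):
    """Time spent waiting for locks relative to time spent holding them."""
    waiting = sum(db_stats["timeAcquiringMicros"].values())
    held = sum(db_stats["timeLockedMicros"].values())
    return waiting / held if held else 0.0

for name, stats in sample_locks.items():
    print(name, round(contention_ratio(stats), 3))
```

A high ratio of acquiring time to locked time for a database suggests operations are queuing behind that database's lock.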

Does a read or write operation ever yield the lock?

In some situations, read and write operations can yield their locks.

Long-running read and write operations, such as queries, updates, and deletes, yield under many conditions. MongoDB uses an adaptive algorithm to allow operations to yield locks based on predicted disk access patterns (i.e., page faults).

MongoDB operations can also yield locks between individual document modifications in write operations that affect multiple documents, such as update() with the multi parameter.

MongoDB uses heuristics based on its access pattern to predict whether data is likely in physical memory before performing a read. If MongoDB predicts that the data is not in physical memory an operation will yield its lock while MongoDB loads the data to memory. Once data is available in memory, the operation will reacquire the lock to complete the operation.
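The yield-on-predicted-page-fault behavior can be sketched as follows. This is a simplified Python model: the in_memory set and load_from_disk function are stand-ins for the operating system's page cache, not real MongoDB internals.

```python
import threading

lock = threading.Lock()
in_memory = set()  # document ids believed to be resident in RAM

def load_from_disk(doc_id):
    in_memory.add(doc_id)  # simulate paging the document in

def read_document(doc_id):
    lock.acquire()
    if doc_id not in in_memory:        # predicted page fault
        lock.release()                 # yield the lock ...
        load_from_disk(doc_id)         # ... while the data is loaded
        lock.acquire()                 # reacquire to complete the read
    result = f"doc:{doc_id}"
    lock.release()
    return result
```

Other operations can run during the window in which the lock is released, which is what keeps a page fault from stalling unrelated work.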

Changed in version 2.6: MongoDB does not yield locks when scanning an index even if it predicts that the index is not in memory.

Which operations lock the database?

Changed in version 2.2.

The following list shows common database operations and the types of locks they use.

  • Issue a query: Read lock
  • Get more data from a cursor: Read lock
  • Insert data: Write lock
  • Remove data: Write lock
  • Update data: Write lock
  • Map-reduce: Read lock and write lock, unless operations are specified as non-atomic. Portions of map-reduce jobs can run concurrently.
  • Create an index: Building an index in the foreground, which is the default, locks the database for extended periods of time.
  • db.eval(): Write lock. The db.eval() method takes a global write lock while evaluating the JavaScript function. To avoid taking this global write lock, use the eval command with nolock: true.
  • eval: Write lock. By default, the eval command takes a global write lock while evaluating the JavaScript function. If used with nolock: true, the eval command does not take a global write lock. However, the logic within the JavaScript function may still take write locks for write operations.
  • aggregate(): Read lock

Which administrative commands lock the database?

Certain administrative commands can exclusively lock the database for extended periods of time. In some deployments, particularly for large databases, you may consider taking the mongod instance offline so that clients are not affected. For example, if a mongod is part of a replica set, take the mongod offline and let the other members of the set service the load while maintenance is in progress.

The following administrative operations require an exclusive (i.e. write) lock on the database for extended periods:

The following administrative commands lock the database but only hold the lock for a very short time:

Does a MongoDB operation ever lock more than one database?

The following MongoDB operations lock multiple databases:

  • db.copyDatabase() must lock the entire mongod instance at once.
  • db.repairDatabase() obtains a global write lock and will block other operations until it finishes.
  • Journaling, which is an internal operation, locks all databases for short intervals. All databases share a single journal.
  • User authentication requires a read lock on the admin database for deployments using 2.6 user credentials. For deployments using the 2.4 schema for user credentials, authentication locks the admin database as well as the database the user is accessing.
  • All writes to a replica set’s primary lock both the database receiving the writes and then the local database for a short time. The lock for the local database allows the mongod to write to the primary’s oplog and accounts for a small portion of the total time of the operation.

How does sharding affect concurrency?

Sharding improves concurrency by distributing collections over multiple mongod instances, allowing query routers (mongos processes) to perform any number of operations concurrently against the various downstream mongod instances.

Each mongod instance is independent of the others in the sharded cluster and uses the MongoDB readers-writer lock. Operations on one mongod instance do not block operations on any other.

How does concurrency affect a replica set primary?

In replication, when MongoDB writes to a collection on the primary, MongoDB also writes to the primary’s oplog, which is a special collection in the local database. Therefore, MongoDB must lock both the collection’s database and the local database. The mongod must lock both databases at the same time to keep the database consistent and ensure that write operations, even with replication, are “all-or-nothing” operations.
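A loose Python sketch of this double locking, with hypothetical names standing in for MongoDB's real write path: the write and its oplog entry are applied while both the collection's database lock and the local database lock are held, so the pair is all-or-nothing.

```python
import threading

db_lock = threading.Lock()     # lock for the database being written
local_lock = threading.Lock()  # lock for the local database (oplog)
oplog = []

def replicated_write(collection, doc):
    with db_lock:          # lock the collection's database ...
        with local_lock:   # ... and briefly lock local for the oplog
            collection.append(doc)
            oplog.append({"op": "i", "o": doc})
```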

How does concurrency affect secondaries?

In replication, MongoDB does not apply writes serially to secondaries. Secondaries collect oplog entries in batches and then apply those batches in parallel. Secondaries do not allow reads while applying the write operations, and apply write operations in the order that they appear in the oplog.

MongoDB can apply several writes in parallel on replica set secondaries, in two phases:

  1. During the first prefetch phase, under a read lock, the mongod ensures that all documents affected by the operations are in memory. During this phase, other clients may execute queries against this member.
  2. During the second, coordinated write phase, a thread pool using write locks applies all of the write operations in the batch.
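The two phases above can be sketched in Python. This is a simplified model: real secondaries also partition the batch so that ordering constraints among oplog entries are preserved.

```python
from concurrent.futures import ThreadPoolExecutor

def apply_batch(oplog_batch, data, memory):
    # Phase 1: prefetch -- touch every affected document so it is
    # in memory (done under a read lock, so queries can still run).
    for entry in oplog_batch:
        memory.add(entry["_id"])

    # Phase 2: coordinated write -- a thread pool applies the batch
    # under write locks, blocking reads until the batch completes.
    def apply(entry):
        data[entry["_id"]] = entry["value"]

    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(apply, oplog_batch))
```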

What kind of concurrency does MongoDB provide for JavaScript operations?

Changed in version 2.4: The V8 JavaScript engine added in 2.4 allows multiple JavaScript operations to run at the same time. Prior to 2.4, a single mongod could run only one JavaScript operation at a time.

Does MongoDB support transactions?

MongoDB does not support multi-document transactions.

However, MongoDB does provide atomic operations on a single document. Often these document-level atomic operations are sufficient to solve problems that would require ACID transactions in a relational database.

For example, in MongoDB, you can embed related data in nested arrays or nested documents within a single document and update the entire document in a single atomic operation. Relational databases might represent the same kind of data with multiple tables and rows, which would require transaction support to update the data atomically.
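For instance, an order with embedded line items can be modeled as one document. The field names below are invented for illustration; in MongoDB, the equivalent change would be a single atomic update on that one document (e.g. a $push combined with a $set), rather than a multi-table transaction.

```python
# Hypothetical order document with related data embedded, so a single
# document-level atomic update replaces what would need a transaction
# across several tables in a relational schema.
order = {
    "_id": 1001,
    "status": "pending",
    "items": [
        {"sku": "A12", "qty": 2},
        {"sku": "B34", "qty": 1},
    ],
}

def add_item_and_confirm(doc, item):
    # Both changes land in one document; in MongoDB a reader would
    # never observe the new item without the status change.
    doc["items"].append(item)
    doc["status"] = "confirmed"
```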

What isolation guarantees does MongoDB provide?

MongoDB provides the following guarantees in the presence of concurrent read and write operations.

  1. Read and write operations are atomic with respect to a single document and will always leave the document in a consistent state. This means that readers will never see a partially updated document, and indices will always be consistent with the contents of the collection. Furthermore, a set of read and write operations to a single document are serializable.
  2. Correctness with respect to query predicates, e.g. db.collection.find() will only return documents that match and db.collection.update() will only write to matching documents.
  3. Correctness with respect to sort. For read operations that request a sort order (e.g. db.collection.find() or db.collection.aggregate()), the sort order will not be violated due to concurrent writes.

Although MongoDB provides these strong guarantees for single-document operations, read and write operations may access an arbitrary number of documents during execution. Multi-document operations do not occur transactionally and are not isolated from concurrent writes. This means that the following behaviors are expected under the normal operation of the system:

  1. Non-point-in-time read operations. Suppose a read operation begins at time t1 and starts reading documents. A write operation then commits an update to a document at some later time t2. The reader may see the updated version of the document, and therefore does not see a point-in-time snapshot of the data.
  2. Non-serializable operations. Suppose a read operation reads a document d1 at time t1 and a write operation updates d1 at some later time t3. This introduces a read-write dependency such that, if the operations were to be serialized, the read operation must precede the write operation. But also suppose that the write operation updates document d2 at time t2 and the read operation subsequently reads d2 at some later time t4. This introduces a write-read dependency which would instead require the read operation to come after the write operation in a serializable schedule. There is a dependency cycle which makes serializability impossible.
  3. Dropped results. Reads may miss matching documents that are updated or deleted during the course of the read operation. However, data that has not been modified during the operation will always be visible.

Can reads see changes that have not been committed to disk?

Yes, readers can see the results of writes before they are made durable, regardless of write concern level or journaling configuration. As a result, applications may observe the following behaviors:

  1. MongoDB will allow a concurrent reader to see the result of the write operation before the write is acknowledged to the client application. For details on when writes are acknowledged for different write concern levels, see Write Concern.
  2. Reads can see data which may subsequently be rolled back in rare cases such as replica set failover or power loss. It does not mean that read operations can see documents in a partially written or otherwise inconsistent state.

Other systems refer to these semantics as read uncommitted.