To provide durability in the event of a failure, MongoDB uses write-ahead logging to on-disk journal files.
The log mentioned in this section refers to the WiredTiger write-ahead log (i.e. the journal) and not the MongoDB log file.
WiredTiger uses checkpoints to provide a consistent view of data on disk and allow MongoDB to recover from the last checkpoint. However, if MongoDB exits unexpectedly in between checkpoints, journaling is required to recover the writes that occurred after the last checkpoint.
With journaling, the recovery process:
- Looks in the data files to find the identifier of the last checkpoint.
- Searches in the journal files for the record that matches the identifier of the last checkpoint.
- Applies the operations in the journal files since the last checkpoint.
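As an illustration only (this is not WiredTiger's actual code; every name below is hypothetical), the recovery sequence amounts to something like:

```javascript
// Illustration only: a toy model of the recovery sequence above, not
// WiredTiger's actual implementation. All names here are hypothetical.
function recoverFromJournal(dataFiles, journalRecords) {
  // 1. The data files store the identifier of the last checkpoint.
  const checkpointId = dataFiles.lastCheckpointId;

  // 2. Find the journal record that matches that identifier.
  const start = journalRecords.findIndex((r) => r.id === checkpointId);

  // 3. Replay every operation logged after the checkpoint.
  for (const record of journalRecords.slice(start + 1)) {
    console.log("replaying", record.op);
  }
}

// Toy data: a checkpoint at record 7, followed by two unflushed writes.
recoverFromJournal(
  { lastCheckpointId: 7 },
  [{ id: 7, op: "checkpoint" }, { id: 8, op: "insert" }, { id: 9, op: "update" }]
);
```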
Changed in version 3.2.
With journaling, WiredTiger creates one journal record for each client-initiated write operation. The journal record includes any internal write operations caused by the initial write. For example, an update to a document in a collection may result in modifications to the indexes; WiredTiger creates a single journal record that includes both the update operation and its associated index modifications.
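For instance (the accounts collection and its index are hypothetical), the update below modifies an indexed field, so the document change and the resulting index change are captured in a single journal record:

```javascript
// The balance field is indexed, so this update also rewrites an index
// entry; WiredTiger logs both changes as one journal record.
db.accounts.createIndex({ balance: 1 })
db.accounts.insertOne({ _id: 1, balance: 100 })
db.accounts.updateOne({ _id: 1 }, { $set: { balance: 150 } })
```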
MongoDB configures WiredTiger to use in-memory buffering for storing the journal records. Threads coordinate to allocate and copy into their portion of the buffer. All journal records up to 128 kB are buffered.
WiredTiger syncs the buffered journal records to disk upon any of the following conditions:
- For replica set members (primary and secondary members):
  - If there are operations waiting for oplog entries. Operations that can wait for oplog entries include:
    - forward scanning queries against the oplog
    - read operations performed as part of causally consistent sessions
  - Additionally for secondary members, after every batch application of the oplog entries.
- If a write operation includes or implies a write concern of j: true (see the example after this list).
- At every 100 milliseconds (see the storage.journal.commitIntervalMs setting).
- When WiredTiger creates a new journal file. Because MongoDB uses a journal file size limit of 100 MB, WiredTiger creates a new journal file approximately every 100 MB of data.
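For example, a minimal mongosh illustration (the accounts collection is hypothetical): because the write below requests j: true, the server acknowledges it only after the corresponding journal record has been synced to disk.

```javascript
// Acknowledged only once the journal record for this write is on disk,
// because the write concern requests journal durability (j: true).
db.accounts.insertOne(
  { _id: 2, balance: 250 },
  { writeConcern: { w: 1, j: true } }
)
```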
In between write operations, while the journal records remain in the WiredTiger buffers, updates can be lost following a hard shutdown of mongod.
For the journal files, MongoDB creates a subdirectory named journal under the dbPath directory. WiredTiger journal files have names with the following format: WiredTigerLog.<sequence>, where <sequence> is a zero-padded number starting from 0000000001 (for example, WiredTigerLog.0000000001).
Journal files contain a record for each client-initiated write operation:
- The journal record includes any internal write operations caused by the initial write (for example, the index modifications associated with a document update, as described above).
- Each record has a unique identifier.
- The minimum journal record size for WiredTiger is 128 bytes.
By default, MongoDB configures WiredTiger to use snappy compression for
its journaling data. To specify a different compression algorithm or no
compression, use the storage.wiredTiger.engineConfig.journalCompressor setting.
For details, see Change WiredTiger Journal Compressor.
If a log record is 128 bytes or smaller (the minimum log record size for WiredTiger), WiredTiger does not compress that record.
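As a quick check from mongosh (a sketch, assuming the compressor was set in the mongod configuration file), you can read back the parsed startup options:

```javascript
// Show the storage section of the server's startup options, including
// storage.wiredTiger.engineConfig.journalCompressor if it was set.
const opts = db.adminCommand({ getCmdLineOpts: 1 })
printjson(opts.parsed.storage)
```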
WiredTiger journal files for MongoDB have a maximum size limit of approximately 100 MB.
- Once the file exceeds that limit, WiredTiger creates a new journal file.
- WiredTiger automatically removes old journal files to maintain only the files needed to recover from the last checkpoint.
WiredTiger pre-allocates journal files.
Starting in MongoDB Enterprise version 3.2.6, the In-Memory
Storage Engine is part of general availability (GA).
Because its data is kept in memory, there is no separate journal. Write operations with a write concern of j: true are immediately acknowledged.
With writeConcernMajorityJournalDefault set to false, MongoDB does not wait for w: "majority" writes to be written to the on-disk journal before acknowledging the writes. As such, "majority" write operations could possibly roll back in the event of a transient loss (e.g. crash and restart) of a majority of nodes in a given replica set.
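For example, a minimal sketch (the hostnames are hypothetical) of initiating a replica set with the journal acknowledgment behavior described above:

```javascript
// With writeConcernMajorityJournalDefault: false, "majority" writes are
// acknowledged without waiting for the on-disk journal.
rs.initiate({
  _id: "rs0",
  writeConcernMajorityJournalDefault: false,
  members: [
    { _id: 0, host: "mongodb0.example.net:27017" },
    { _id: 1, host: "mongodb1.example.net:27017" },
    { _id: 2, host: "mongodb2.example.net:27017" }
  ]
})
```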