Release Notes for MongoDB 4.4

On this page

  • Patch Releases
  • Full Time Diagnostic Data Capture Requirements
  • Aggregation
  • Replica Sets
  • Sharded Clusters
  • Projection
  • Transactions
  • Sorting
  • Security Improvements
  • Structured Logging
  • Platform Support
  • Mongo Shell
  • Tools
  • Drivers
  • Indexes
  • Removed Commands
  • Networking
  • General Improvements
  • Changes Affecting Compatibility
  • Upgrade Procedures
  • Downgrade Consideration
  • Download
  • Known Issues
  • Report an Issue

Warning

Past Release Limitations

Some past releases have critical issues. These releases are not recommended for production use. Use the latest available patch release version instead.

Issue
Affected Versions
WT-7426
4.4.5
4.4.8
4.4.2 - 4.4.8
4.4.0 - 4.4.18 (ARM64 or POWER system architectures)
4.4.8 - 4.4.21 (Incremental backups on Ops Manager or Cloud Manager clusters)
4.4.7

Issues fixed:

  • SERVER-50792 Return more useful errors when a shard key index can't be found for shardCollection or refineCollectionShardKey

  • SERVER-80021 Make $convert round-trip correctly between double and string

  • SERVER-81106 Recipient shard doesn't wait for the collection version to be locally persisted before starting the cloning phase

  • SERVER-81966 Avoid modification of previous ChunkMap instances during refresh

  • WT-10424 cursor::search_near slow performance if many deleted items are present

  • All JIRA issues closed in 4.4.26

  • 4.4.26 Changelog

Issues fixed:

  • SERVER-76299 Report writeConflicts in serverStatus on secondaries

  • SERVER-78828 LDAP host timing data can be inconsistent during sorting

  • WT-11031 Fix RTS to skip tables with no time window information in the checkpoint

  • SERVER-70973 Balancer should stop iterating collections when there are no more available shards

  • SERVER-71627 Refreshed cached collection route info will severely block all client request when a cluster with 1 million chunks

  • SERVER-78813 Commit point propagation fails indefinitely with exhaust cursors with null lastCommitted optime

  • WT-8570 Do not increase oldest ID during recovery

  • WT-10449 Do not save update chain when there are no updates to be written to the history store

  • All JIRA issues closed in 4.4.25

  • 4.4.25 Changelog

Issues fixed:

  • SERVER-68122 Investigate replicating the collection WiredTiger config string during initial sync

  • SERVER-71759 dataSize command doesn't yield

  • SERVER-72222 MapReduce with single reduce optimization fails when merging results in sharded cluster

  • SERVER-72535 Sharded clusters allow creating the 'admin', 'local', and 'config' databases with alternative casings

  • SERVER-70235 Don't create range deletion documents upon v4.2-v4.4 upgrade in case of collection uuid mismatch

  • WT-9599 Acquire the hot backup lock to call fallocate in the block manager

  • All JIRA issues closed in 4.4.19

  • 4.4.19 Changelog

Issues fixed:

  • SERVER-64983 Release Client lock before rolling back WT transaction in TransactionParticipant::_resetTransactionState

  • SERVER-62229 Fix invariant when applying index build entries while recoverFromOplogAsStandalone=true

  • SERVER-60412 Host memory limit check does not honor cgroups v2

  • SERVER-55429 Abort migration earlier when receiver is not cleaning overlapping ranges

  • WT-8924 Don't check against on disk time window if there is an insert list when checking for conflicts in row-store

  • All JIRA issues closed in 4.4.14

  • 4.4.14 Changelog

Issues fixed:

  • WT-8395 Inconsistent data after upgrade from 4.4.3 and 4.4.4 to 4.4.8+ and 5.0.2+

  • SERVER-60326 Windows Server fails to start when X509 certificate has empty subject name

  • SERVER-60310 OCSP response validation should not consider statuses of irrelevant certificates

  • SERVER-59226 Deadlock when stepping down with a profile session marked as uninterruptible

  • SERVER-56226 [v4.4] Introduce 'permitMigrations' field on config.collections entry to prevent chunk migrations from committing

  • SERVER-51329 Unexpected non-retryable error when shutting down a mongos server

  • SERVER-45953 Exempt oplog readers from acquiring read tickets

  • All JIRA issues closed in 4.4.11

  • 4.4.11 Changelog

Issues fixed:

  • SERVER-57630: Enable SSL_OP_NO_RENEGOTIATION on Ubuntu 18.04 when running against OpenSSL 1.1.1

  • SERVER-34938: Secondary slowdown or hang due to content pinned in cache by single oplog batch

  • WT-8005: Fix a prepare commit bug that could leave the history store entry unresolved

  • WT-7995: Fix the global visibility that it cannot go beyond checkpoint visibility

  • WT-7984: Fix a bug that could cause a checkpoint to omit a page of data

  • All JIRA issues closed in 4.4.9

  • 4.4.9 Changelog

Issues fixed:

  • SERVER-58936: Unique index constraints may not be enforced

  • SERVER-58258: Wait for initial sync to clear state before asserting 'replSetGetStatus' reply has no 'initialSync' field

  • SERVER-52906: moveChunk after failed migration that rolled back cloning indexes can hang indefinitely due to missing shard key index

  • WT-7837: Clear updates structure in wt_hs_insert_updates to avoid firing assert

  • WT-6729: Quiesce eviction prior running rollback to stable's active transaction check

  • All JIRA issues closed in 4.4.8

  • 4.4.8 Changelog

Issues fixed:

  • SERVER-48471: Hashed indexes may be incorrectly marked multikey and be ineligible as a shard key

  • SERVER-50769: server restarted after expr{"expr":"_currentApplyOps.getArrayLength() > 0","file":"src/mongo/db/pipeline/document_source_change_stream_transform.cpp","line":535}}

  • SERVER-52919: Wire compression not enabled for initial sync

  • WT-7109: Retain no longer supported configuration options for backward compatibility

  • WT-7117: RTS to skip modifies that are more recent than on-disk base update while restoring an update

  • All JIRA issues closed in 4.4.4

  • 4.4.4 Changelog

Issues fixed:

  • SERVER-48067: Reduce memory consumption for unique index builds with large numbers of non-unique keys

  • SERVER-48523: Unconditionally check the first entry in the oplog when attempting to resume a change stream

  • SERVER-50365: Stuck with long-running transactions that can't be timed out

  • SERVER-50394: mongod audit log attributes DDL operations to the __system user in a sharded environment

  • SERVER-51041: Throttle starting transactions for secondary reads

  • SERVER-51120: Find queries with MERGE_SORT incorrectly sort the results when the collation is specified

  • All JIRA issues closed in 4.4.2

  • 4.4.2 Changelog

Issues fixed:

  • SERVER-48531: 3 way deadlock can happen between chunk splitter, prepared transactions and stepdown thread.

  • SERVER-48641: Deadlock due to the MigrationDestinationManager waiting for write concern with the session checked-out

  • SERVER-49546: setFCV to 4.4 should insert range deletion tasks in batches rather than one at a time

  • SERVER-49694: On a sharded cluster, nearest or hedged reads may not be routed to a near shard replica.

  • SERVER-50137: MongoDB crashes with Invariant failure due to oplog entries generated in 3.4

  • SERVER-50140: Initial sync cannot survive unclean restart of the sync source

  • SERVER-50170: Fix server selection failure on mongos

  • WT-6623: Set the connection level file ID in recovery file scan

  • All JIRA issues closed in 4.4.1

  • 4.4.1 Changelog

Starting in version 4.4, if the Full Time Diagnostic Data Capture (FTDC) thread in mongod or mongos fails, it terminates the originating process. To avoid the most common failures, confirm that the user running mongod/mongos has permissions to create the FTDC diagnostic.data directory within storage.dbPath (for mongod) or parallel to systemLog.path (for mongos).

MongoDB 4.4 adds the $unionWith aggregation stage, providing the ability to combine pipeline results from multiple collections into a single result set.

For details, see $unionWith.
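
As a minimal sketch (the collection and field names below are hypothetical), $unionWith can append the documents of a second collection to the pipeline's result set:

db.sales2019q1.aggregate([
  { $set: { quarter: "2019Q1" } },                        // tag documents from this collection
  { $unionWith: {
      coll: "sales2019q2",                                // append documents from another collection
      pipeline: [ { $set: { quarter: "2019Q2" } } ]       // optionally transform them first
  } }
])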

Starting in version 4.4, MongoDB provides the following new operators that allow users to define custom aggregation expressions:

With the addition of these new operators, you can use aggregation to write custom JavaScript expressions instead of relying on mapReduce and $where.

Note

Even before version 4.4, various map-reduce expressions could also be rewritten using other aggregation pipeline stages, such as $group, $merge, etc., without requiring custom functions.

For more information, see Map-Reduce to Aggregation Pipeline.

Operator
Description
$accumulator
Returns the result of a user-defined accumulator operator.
$binarySize
Returns the size of a given string or binary data value's content in bytes.
$bsonSize
Returns the size in bytes of a given document (i.e. bsontype Object) when encoded as BSON.
$function
Defines a custom aggregation expression.
$isNumber
Returns boolean true if the specified expression resolves to an integer, decimal, double, or long. Returns boolean false if the expression resolves to any other BSON type, null, or a missing field.
$replaceOne
Replaces the first instance of a matched string in a given input.
$replaceAll
Replaces all instances of a matched string in a given input.
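
For example, a hedged sketch of the $function operator (collection and field names are hypothetical) that computes a value with custom JavaScript:

db.players.aggregate([
  { $addFields: {
      greeting: {
        $function: {
          body: function(name) { return "Hello, " + name; },   // custom JavaScript expression
          args: [ "$name" ],                                   // arguments passed to body
          lang: "js"
        }
      }
  } }
])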

Starting in MongoDB 4.4:

  • $out can output to a collection in a different database. In earlier versions, $out can only output to a collection in the same database where the aggregation is run.

  • $out can only run on replica set secondary nodes if all the nodes in the cluster have featureCompatibilityVersion set to 4.4 or higher and the Read Preference allows secondary reads. Check your driver documentation to see when your driver added support.

Starting in MongoDB 4.4 (also available starting in 4.2.4), $indexStats includes the following fields in its output:

Field
Description
shard
Name of the shard, if applicable.
spec
Index specification document.
building
A boolean flag that indicates if the index is currently being built.

Starting in MongoDB 4.4:

  • $merge can output to a collection in a different database. In earlier versions, $merge can only output to a collection in the same database where the aggregation is run.

  • $merge can only run on replica set secondary nodes if all the nodes in the cluster have featureCompatibilityVersion set to 4.4 or higher and the Read Preference allows secondary reads. Check your driver documentation to see when your driver added support.

Starting in MongoDB 4.4, $merge can output to the same collection that is being aggregated. You can also output to a collection which appears in other stages of the pipeline, such as $lookup.

Versions of MongoDB prior to 4.4 did not allow $merge to output to the same collection as the collection being aggregated.
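
As an illustrative sketch (database, collection, and field names are hypothetical), $merge can now write aggregation results into a collection in another database:

db.sales.aggregate([
  { $group: { _id: "$region", total: { $sum: "$amount" } } },
  { $merge: {
      into: { db: "reporting", coll: "regionTotals" },   // target collection in a different database
      whenMatched: "replace"                             // overwrite any existing document with the same _id
  } }
])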

Warning

When $merge outputs to the same collection that is being aggregated, documents may get updated multiple times or the operation may result in an infinite loop. This behavior occurs when the update performed by $merge changes the physical location of documents stored on disk. When the physical location of a document changes, $merge may view it as an entirely new document, resulting in additional updates. For more information on this behavior, see Halloween Problem.

Starting in version 4.4,

Starting in MongoDB 4.4, $collStats accepts the queryExecStats field as an argument document. Providing this field returns the collectionScans field in the output.

The collectionScans field contains an embedded document bearing the following fields:

Field Name
Description
total
A 64-bit integer giving the total number of queries that performed a collection scan. The total consists of queries that did and did not use a tailable cursor.
nonTailable
A 64-bit integer giving the number of queries that performed a collection scan that did not use a tailable cursor.

Starting in MongoDB 4.4, when you run the db.collection.explain().aggregate() method in executionStats and allPlansExecution modes, each pipeline stage listed in the explain output includes nReturned and executionTimeMillisEstimate.
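
For instance, a sketch of running explain in executionStats mode on an aggregation (the collection and fields are hypothetical):

db.orders.explain("executionStats").aggregate([
  { $match: { status: "A" } },                                  // each stage in the output reports
  { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }   // nReturned and executionTimeMillisEstimate
])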

Starting in MongoDB 4.4, a secondary performing initial sync can attempt to resume the sync process if interrupted by a transient (i.e. temporary) network error, collection drop, or collection rename. The sync source must also run MongoDB 4.4 to support resumable initial sync. If the sync source runs MongoDB 4.2 or earlier, the secondary must restart the initial sync process as if it encountered a non-transient network error.

By default, the secondary tries to resume initial sync for 24 hours. MongoDB 4.4 adds the initialSyncTransientErrorRetryPeriodSeconds server parameter for controlling the amount of time the secondary attempts to resume initial sync. If the secondary cannot successfully resume the initial sync process during the configured time period, it selects a new healthy source from the replica set and restarts the initial synchronization process from the beginning.

Prior to MongoDB 4.4, the secondary would restart the entire initial sync if it encountered an error during the process.

Starting in MongoDB 4.4, sync from sources send a continuous stream of oplog entries to their syncing secondaries.

Prior to MongoDB 4.4, secondaries fetched batches of oplog entries by issuing a request to their sync from source and waiting for a response. This required a network roundtrip for each batch of oplog entries. MongoDB 4.4 adds the oplogFetcherUsesExhaust startup parameter for disabling streaming replication and using the older replication behavior.

For details, see Streaming Replication.

Starting in MongoDB 4.4, the rollback directory for a collection is named after the collection's UUID rather than the collection namespace; e.g.

<dbpath>/rollback/20f74796-d5ea-42f5-8c95-f79b39bad190/removed.2020-02-19T04-57-11.0.bson

For details, see Rollback Data.

Starting in MongoDB 4.4, you can specify the minimum number of hours to preserve an oplog entry. The mongod only removes an oplog entry if:

  • The oplog has reached the maximum configured size, and

  • The oplog entry is older than the configured number of hours based on the host system clock.

By default MongoDB does not set a minimum oplog retention period and automatically truncates the oplog starting with the oldest entries to maintain the configured maximum oplog size.

To configure the minimum oplog retention period when starting the mongod, either:

  • Add the storage.oplogMinRetentionHours setting to the mongod configuration file, or

  • Add the --oplogMinRetentionHours command line option to the mongod startup command.

To configure the minimum oplog retention period on a running mongod, use replSetResizeOplog. Setting the minimum oplog retention period while the mongod is running overrides any values set on startup. You must update the value of the corresponding configuration file setting or command line option to persist those changes through a server restart.
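
A hedged sketch of adjusting the retention period at runtime (the size value shown is arbitrary):

db.adminCommand({
  replSetResizeOplog: 1,
  size: 16000,            // maximum oplog size in megabytes
  minRetentionHours: 1.5  // keep oplog entries for at least 90 minutes
})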

Important

The oplog can grow without constraint so as to retain oplog entries for the configured number of hours. This may result in reduction or exhaustion of system disk space due to a combination of high write volume and large retention period.


Starting in MongoDB 4.4, slow oplog application logs on replica set secondaries are affected by the slowOpSampleRate. In previous versions, MongoDB logs all slow oplog entries regardless of the sample rate.

slowOpSampleRate specifies the fraction of slow operations that should be profiled or logged.

Note

Requires featureCompatibilityVersion 4.4+

Each mongod in the replica set or sharded cluster must have featureCompatibilityVersion set to at least 4.4 to start index builds simultaneously across replica set members.

MongoDB 4.4 running featureCompatibilityVersion: "4.2" builds indexes on the primary before replicating the index build to secondaries.

Starting with MongoDB 4.4, index builds on a replica set or sharded cluster build simultaneously across all data-bearing replica set members. For sharded clusters, the index build occurs only on shards containing data for the collection being indexed. The primary requires a minimum number of data-bearing voting members (i.e. a commit quorum), including itself, to complete the build before marking the index as ready for use.

By default, index builds use a commit quorum of all data-bearing voting members. To start an index build with a non-default commit quorum, MongoDB 4.4 adds the commitQuorum parameter to createIndexes or its shell helpers db.collection.createIndex() and db.collection.createIndexes().
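
As a sketch (the collection, key, and index name are hypothetical), the shell helper accepts the commit quorum as a third argument:

db.restaurants.createIndex(
  { borough: 1 },
  { name: "borough_1" },
  "majority"              // commitQuorum: a majority of data-bearing voting members
)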

To modify the quorum required for an in-progress index build, MongoDB 4.4 introduces the new setIndexCommitQuorum command.
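
A hedged sketch of changing the commit quorum for the in-progress build started above:

db.runCommand({
  setIndexCommitQuorum: "restaurants",   // collection with the in-progress index build
  indexNames: [ "borough_1" ],
  commitQuorum: "votingMembers"          // require all data-bearing voting members
})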

See Index Builds in Replicated Environments for more information.

Starting in MongoDB 4.4, the replSetReconfig command waits until a majority of voting members install the replica configuration before returning success. A voting member is any replica member where members[n].votes is 1, including arbiters. First, the operation waits until the current configuration is committed before installing the new configuration on the primary. The operation then waits until a majority of voting members install the new configuration before returning successfully. See Reconfiguration Waits Until a Majority of Members Install the Replica Configuration for more information.

replSetReconfig waits indefinitely for a majority of voting members to install the configuration by default. MongoDB 4.4 also adds the optional maxTimeMS parameter to replSetReconfig for specifying the maximum amount of time to wait for the operation to return successfully.
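
For example, a sketch that caps the wait at five seconds (the member change shown is arbitrary):

cfg = rs.conf()
cfg.members[0].priority = 2                                    // example change; any valid reconfiguration works
db.adminCommand( { replSetReconfig: cfg, maxTimeMS: 5000 } )   // error if a majority has not installed it within 5s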

Starting in MongoDB 4.4, the replSetReconfig command allows adding or removing no more than 1 voting member at a time. To add or remove multiple voting members, issue a series of replSetReconfig or rs.reconfig() operations to add or remove one member at a time. See Reconfiguration Can Add or Remove No More than One Voting Member at a Time for more information.

The replSetGetConfig command can specify a new option commitmentStatus: true when run on the primary. When run with the option, the command includes in the output a commitmentStatus field. This output field indicates whether the replica set's previous reconfig has been committed, so that the replica set is ready to be reconfigured again. For more information, see the replSetGetConfig command.
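
A minimal sketch, run against the primary:

db.adminCommand( { replSetGetConfig: 1, commitmentStatus: true } )
// the reply's commitmentStatus field reports whether the previous reconfig has been committed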

MongoDB 4.4 adds the term field to the replica set configuration document. Replica set members use term and version to achieve consensus on the "newest" replica configuration. Setting featureCompatibilityVersion (fCV) : "4.4" implicitly performs a replSetReconfig to add the term field to the configuration document and blocks until the new configuration propagates to a majority of replica set members. Similarly, downgrading to fCV : "4.2" implicitly performs a reconfiguration to remove the term field.

Starting in MongoDB 4.4, you can specify the preferred initial sync source using the initialSyncSourceReadPreference parameter. You can only set this parameter on mongod startup, using either the setParameter configuration file setting or the --setParameter command line option.

initialSyncSourceReadPreference supports the following read preference modes: primary, primaryPreferred, secondary, secondaryPreferred, and nearest.

If the replica set has disabled chaining, the default initialSyncSourceReadPreference read preference mode is primary.

You cannot specify a tag set or maxStalenessSeconds to initialSyncSourceReadPreference.

Starting in version 4.4, MongoDB provides mirrored reads to pre-warm electable secondary members' cache with the most recently accessed data. With mirrored reads, the primary can mirror a subset of operations that it receives and send them to a subset of electable secondaries. Pre-warming the cache of a secondary can help restore performance more quickly after an election.

Note

The primary's response to the client is not affected by the mirrored reads. The mirrored reads are "fire-and-forget" operations by the primary; i.e., the primary does not await the response for the mirrored reads.

MongoDB 4.4 adds the following mirrored reads parameter. You can set the parameter at startup using the setParameter configuration file setting or the --setParameter command line option, or at runtime with the setParameter command:

Parameter
Description
mirrorReads

Specifies the samplingRate and maxTimeMS settings for mirrored reads:

{ samplingRate: <float>, maxTimeMS: <int> }

A samplingRate of 0 turns off mirrored reads.
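
For instance, a sketch of raising the sampling rate at runtime:

db.adminCommand({
  setParameter: 1,
  mirrorReads: { samplingRate: 0.10 }   // mirror roughly 10% of supported operations to electable secondaries
})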

The command serverStatus and its corresponding mongo shell method db.serverStatus() return mirroredReads if you specify the field's inclusion in the operation:

db.runCommand( { serverStatus: 1, mirroredReads: 1 } )

or

db.serverStatus( { mirroredReads: 1 } )

Starting in 4.4, MongoDB provides the refineCollectionShardKey command. With the new command, you can refine a collection's shard key by adding a suffix field or fields to the existing key. Refining a collection's shard key allows for a more fine-grained data distribution and can address situations where the existing key has led to jumbo (i.e. indivisible) chunks due to insufficient cardinality.

For example, you may have an existing orders collection with the shard key { customer_id: 1 }. You can change the shard key by adding a suffix order_id field to the shard key so that { customer_id: 1, order_id: 1 } becomes the new shard key, allowing data distribution by both customer_id and order_id fields.

To use the refineCollectionShardKey command, the sharded cluster must have feature compatibility version (fcv) of 4.4. For more information, see the refineCollectionShardKey command.
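
Continuing the example above, a hedged sketch run through mongos (it assumes an index covering { customer_id: 1, order_id: 1 } already exists on the orders collection in the test database):

db.adminCommand({
  refineCollectionShardKey: "test.orders",
  key: { customer_id: 1, order_id: 1 }   // existing key plus the new order_id suffix field
})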

Note

After you refine the shard key, it may be that not all documents in the collection have the suffix field(s). To populate the missing shard key field(s), see Missing Shard Key Fields.

Before refining the shard key, ensure that all or most documents in the collection have the suffix fields, if possible, to avoid having to populate the field afterwards.

In earlier versions, once you select a shard key, you cannot modify the shard key.

Important

Missing Shard Keys

With the ability to refine a shard key with a suffix, it may be that not all documents in the collection have the suffix fields. Starting in version 4.4, documents in a sharded collection can be missing the shard key fields. In earlier versions, shard key fields must exist in every document for a sharded collection. For details, see Missing Shard Key Fields.

To minimize latencies, mongos instances can, by default, use hedged reads. With hedged reads, the mongos instances route read operations to multiple members per queried shard and return results from the first responder per shard. To turn off a mongos instance's support for hedged reads, set the readHedgingMode parameter for the mongos.

Hedged reads are specified per operation as part of the read preference. Non-primary read preferences support hedged reads. Read preference nearest specifies hedged read by default.

For more information, see:

Parameter
Description
readHedgingMode
Enables or disables the mongos instance's support for hedged reads.
maxTimeMSForHedgedReads
Specifies the maximum time limit (in milliseconds) for the additional read sent to hedge a read operation.

To specify hedged read for a read preference, MongoDB 4.4 introduces the Hedged Read Option. To set using a MongoDB driver, refer to the driver read preference API documentation.

The following mongo shell methods can accept hedge options to enable hedged reads for the specified read preference:

  • cursor.readPref()

  • Mongo.setReadPref()

The command serverStatus and its corresponding mongo shell method db.serverStatus() return hedgingMetrics.
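
As a sketch (the collection and filter are hypothetical; the query is assumed to be issued through mongos), cursor.readPref() accepts the hedge document as a third argument:

db.restaurants.find({ cuisine: "Italian" })
  .readPref(
    "secondaryPreferred",   // non-primary read preference
    null,                   // no tag set
    { enabled: true }       // hedge option: send the read to an additional member per shard
  )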

MongoDB 4.4 provides the command balancerCollectionStatus and the mongo shell helper method sh.balancerCollectionStatus(), which return information about whether, as of the time the command is run, the chunks of a sharded collection are balanced (i.e. do not need to be moved) or still need to be moved. With the command, users can verify that initial chunk creation and migration has finished.

Starting with MongoDB 4.4, mongos adds the following new default startup behavior:

  • mongos preloads a sharded cluster's routing table on startup, rather than doing so on-demand for the first incoming client connection.

  • mongos prewarms its connection pool to shard hosts on startup, rather than doing so on-demand for incoming client connections.

This behavior results in faster servicing of initial client connections after a mongos instance is started or restarted. In particular, this allows sites that employ multiple mongos instances to restart them as necessary, or add new ones, without initial client requests to those instances needing to wait on connection establishment.

Both routing table preloading and connection pool prewarming are enabled by default.

MongoDB 4.4 adds the following parameters for controlling this behavior:

Running flushRouterConfig is no longer required after executing the movePrimary or dropDatabase commands. These two commands now automatically refresh a sharded cluster's routing table as needed when run. Manually issuing the flushRouterConfig command is still recommended in the cases described under flushRouterConfig Considerations.

Starting in MongoDB 4.4, you can shard a collection using a compound shard key with a single hashed field. Prior to 4.4, MongoDB did not support compound shard keys with a hashed field.

Compound hashed sharding supports features like zone sharding, where the prefix (i.e. first) non-hashed field or fields support zone ranges while the hashed field supports more even distribution of the sharded data. For example, the following operation shards a collection on a compound hashed shard key that supports zoned sharding:

sh.shardCollection(
  "examples.compoundHashedCollection",
  { "fieldA" : 1, "fieldB" : 1, "fieldC" : "hashed" }
)

Compound hashed sharding also supports shard keys with a hashed prefix for resolving data distribution issues related to monotonically increasing fields. For example, the following operation shards a collection on a compound hashed shard key where the hashed field is the shard key prefix:

sh.shardCollection(
  "examples.compoundHashedCollection",
  { "_id" : "hashed", "fieldA" : 1 }
)

Starting in MongoDB 4.4, the following changes improve chunk migrations and orphaned document cleanup resiliency during failover:

  • Chunk ranges awaiting cleanup after a chunk migration are now persisted in the config.rangeDeletions collection and replicated throughout the shard. In the event of a failover, the shard's new primary reads the documents in the config.rangeDeletions collection and resumes deleting the corresponding ranges. The document that describes a range awaiting cleanup is deleted from the config.rangeDeletions collection after the range is deleted.

  • The cleanupOrphaned command no longer deletes orphaned documents from a shard. Instead, cleanupOrphaned waits for orphaned documents that are scheduled for deletion from a shard to be deleted.

Set the disableResumableRangeDeleter parameter to true on a shard's primary to pause range deletion on the shard.

Starting in MongoDB 4.4, the config server primary, by default, checks for index inconsistencies across the shards for sharded collections. The command serverStatus returns the field shardedIndexConsistency to report on index inconsistencies when run on the config server primary.

To configure the index consistency checks, MongoDB provides the following parameters:

Parameter
Description
enableShardedIndexConsistencyCheck
Enable or disable the index consistency checks.
shardedIndexConsistencyCheckIntervalMS
The interval at which the config server's primary checks the index consistency of sharded collections.

Starting in MongoDB 4.4, you can have more than one removeShard operation in progress.

In earlier versions, removeShard returns an error if another removeShard operation is in progress.

Starting in version 4.4, MongoDB removes the 512-byte limit on the shard key size. For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.

Starting in 4.4, if the find or subsequent getMore commands return partial results due to the unavailability of the queried shard(s), the output includes a boolean flag partialResultsReturned.

For chunks that are too large to migrate, starting in MongoDB 4.4:

  • A new balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Ranges that Exceed Size Limit for details.

  • The moveChunk command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.

Starting in 4.4, if there is a stale chunk, the catalog cache is only refreshed when routers access a shard that previously had or currently has that chunk.

Prior to MongoDB 4.4, any stale chunk caused the entire chunk distribution for a collection to be marked as stale and forced all routers that contacted the shard to refresh their shard catalog cache. MongoDB 4.4 adds the enableFinerGrainedCatalogCacheRefresh startup parameter for disabling this finer-grained, targeted catalog cache refresh and reverting to the older refresh behavior. The enableFinerGrainedCatalogCacheRefresh parameter defaults to true.

Starting in version 4.4 (and 4.2.7), MongoDB automatically splits the system.sessions collection into at least 1024 chunks and distributes the chunks uniformly across shards in the cluster.

Starting in MongoDB 4.4, as part of making find() and findAndModify() projection consistent with aggregation's $project stage,

  • The find() and findAndModify() projection can accept aggregation expressions and aggregation syntax, including the use of literals and aggregation variables. With the use of aggregation expressions and syntax, you can project new fields or project existing fields with new values, as shown in the sketch after this list.

  • The find() and findAndModify() projection can specify embedded fields using the nested form; e.g. { field: { nestedfield: 1 } } as well as dot notation. In earlier versions, you can only use the dot notation.
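
A hedged sketch of the aggregation-style projection (the collection and field names are hypothetical):

db.inventory.find(
  { qty: { $gt: 0 } },
  {
    _id: 0,
    item: 1,
    discountedPrice: { $multiply: [ "$price", 0.9 ] }   // project a new field computed with an aggregation expression
  }
)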

For more information, see:

Starting in MongoDB 4.4, the $meta operator adds support for retrieving the indexKey metadata. The indexKey metadata is for debugging purposes only and not for application logic. See $meta for more information.

Starting in version 4.4, MongoDB makes the following { $meta: "textScore" } changes when used with db.collection.find():

  • You must specify the $text operator in the query predicate to use { $meta: "textScore" }.

  • You can sort the resulting documents by their search relevance, i.e. { $meta: "textScore" }, without also having to project the textScore.

    In earlier versions, to include { $meta: "textScore" } expression in the sort(), you must also include the same expression in the projection.

  • If you include the { $meta: "textScore" } expression in both the projection and sort, i.e. db.collection.find(<$text search>, <projection>).sort(<sort>), the projection and sort documents can have different field names for the expression.

    In previous versions of MongoDB, if you include the { $meta: "textScore" } in both the projection and sort, you must specify the same field name in both places.

For more information, see Text Score Metadata $meta: "textScore". For examples of "textScore" projections and sorts, see Text Search Score Examples.
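
For instance, a sketch that sorts by relevance without projecting the score (it assumes a text index exists on the collection):

db.articles.find(
  { $text: { $search: "coffee" } }      // the $text predicate is required to use textScore
).sort(
  { score: { $meta: "textScore" } }     // sort by relevance; no matching projection needed in 4.4
)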

See: Text Search Metadata { $meta: "textScore" } Query Requirement

Starting in MongoDB 4.4 with feature compatibility version (fcv) "4.4", you can create collections and indexes inside a multi-document transaction unless the transaction is a cross-shard write transaction.

When creating a collection inside a transaction:

When creating an index inside a transaction:

  • You can create an index on a non-existing collection. The collection is created as part of the operation.

  • You can create an index on a new empty collection created earlier in the same transaction.

  • The db.collection.createIndex() method fails if executed against a system collection.

For more details, see Create Collections and Indexes in a Transaction.

MongoDB 4.4 adds a new parameter shouldMultiDocTxnCreateCollectionAndIndexes which can enable (default) or disable collection and index creation inside a transaction. When setting the parameter for a sharded cluster, set the parameter on all shards.

For explicit creation of a collection or an index inside a transaction, the transaction read concern level must be "local". Explicit creation is through:

Starting in MongoDB 4.4, the sort() method now uses the same sort algorithm as the $sort aggregation stage. With this change, queries which perform a sort() on fields that contain duplicate values are much more likely to result in inconsistent sort orders for those values.

To guarantee sort consistency when using sort() on duplicate values, include an additional field in your sort that contains exclusively unique values.

This can be accomplished easily by adding the _id field to your sort.
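
For example, a sketch using a hypothetical collection:

db.restaurants.find().sort( { borough: 1, _id: 1 } )   // _id breaks ties between documents with the same borough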

See Sort Consistency for more information.

MongoDB Enterprise 4.4 provides a new mongokerberos tool for validating your platform's Kerberos configuration for use with MongoDB, and for testing end-to-end client authentication through Kerberos. When run, mongokerberos returns a report indicating any issues encountered, and provides potential advice for resolving them. mongokerberos is available in MongoDB Enterprise only.

Starting in version 4.4, MongoDB enables, by default, the use of OCSP (Online Certificate Status Protocol) to check for certificate revocation. The use of OCSP eliminates the need to periodically download a Certificate Revocation List (CRL) and restart the mongod / mongos with the updated CRL.

In versions 4.0 and 4.2, the use of OCSP is available only through the use of the system certificate store on Windows or macOS.

As part of its OCSP support, MongoDB 4.4 supports the following on Linux:

  • OCSP stapling. With OCSP stapling, mongod and mongos instances attach or "staple" the OCSP status response to their certificates when providing these certificates to clients during the TLS/SSL handshake. By including the OCSP status response with the certificates, OCSP stapling obviates the need for clients to make a separate request to retrieve the OCSP status of the provided certificates.

  • OCSP must-staple extension. OCSP must-staple is an extension that can be added to the server certificate that tells the client to expect an OCSP staple when it receives a certificate during the TLS/SSL handshake.

MongoDB 4.4 adds the following OCSP parameters. You can set these parameters at startup using the setParameter configuration file setting or the --setParameter command line option:

Parameter
Description
ocspEnabled
Enables or disables OCSP support.
ocspValidationRefreshPeriodSecs
Specifies the number of seconds to wait before refreshing the stapled OCSP status response.
tlsOCSPStaplingTimeoutSecs
Specifies the maximum number of seconds the mongod / mongos instance should wait to receive the OCSP status response for its certificates.
tlsOCSPVerifyTimeoutSecs
Specifies the maximum number of seconds that the mongod / mongos should wait for the OCSP response when verifying client certificates.

Starting in MongoDB 4.4, mongod / mongos logs a warning on connection if the presented x.509 certificate expires within 30 days of the mongod/mongos system clock. Specifically, the following connections to a mongod or mongos can trigger x.509 certificate expiry warnings:

The warning log message resembles the following:

<Timestamp> W NETWORK [connection] Peer certificate <Certificate Subject Name> expires...

Consider proactively renewing client x.509 certificates nearing expiration to ensure continued connectivity to the cluster.

MongoDB 4.4 adds the tlsX509ExpirationWarningThresholdDays parameter for controlling certificate expiration warning threshold. Set the parameter to 0 to disable the warning. For complete documentation, see tlsX509ExpirationWarningThresholdDays.

On CentOS 8 and RHEL 8, MongoDB 4.4 (as well as versions 4.2, 4.0, and 3.6) supports TLS 1.3.

Starting in MongoDB 4.4, a mongod, mongos, or mongoldap returns an error if one of the user to Distinguished Name (DN) mappings cannot be evaluated due to networking or authentication failures to the LDAP server.

The mongod, mongos, or mongoldap rejects the connection request and does not check the remaining mappings, if any.

To specify the user to DN mapping, see:

Starting in MongoDB 4.4, mongod / mongos instances now output all log messages in structured JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as "severity", and each corresponding value records the associated logging information for that field type, such as "informational".

This includes log output sent to the file, syslog, and stdout (standard out) log destinations, as well as the output of the getLog command.

Previously, log entries were output as plaintext.

The following log messages in JSON format indicate that a mongod is listening and ready for connections:

{"t":{"$date":"2020-05-18T20:18:13.533+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}}
{"t":{"$date":"2020-05-18T20:18:13.533+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27001,"ssl":"off"}}

Structured logging with key-value pairs allows for efficient log analysis by automated tools or log ingestion services, and makes programmatic log parsing easier and more powerful.

When working with MongoDB structured logging, the third-party jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.

jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.

For more information on structured logging, including a detailed examination of log entry components as well as command-line parsing examples, see Log Messages.

Starting in MongoDB 4.4, the ldapQueryPassword setParameter command accepts either a string or an array of strings. If set to an array, each password is tried until one succeeds. This can be used to perform a rollover of the LDAP account password without downtime for MongoDB.
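
A hedged sketch of supplying both the old and the new password during a credential rollover, assuming the parameter is set at runtime with setParameter (placeholder values shown):

db.adminCommand({
  setParameter: 1,
  ldapQueryPassword: [ "<oldPassword>", "<newPassword>" ]   // each password is tried until one succeeds
})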

MongoDB 4.4 adds support for the following platforms:

MongoDB 4.4 removes support for the following platforms:

  • Amazon Linux 2013.03

  • RHEL 6 / CentOS 6 / Oracle 6 on the s390x architecture

  • Windows 7 / Server 2008 R2

  • Windows 8 / Server 2012

  • Windows 8.1 / Server 2012 R2

  • macOS 10.12

See Platform Support for the full list of platforms and architectures supported in MongoDB 4.4.

Starting in MongoDB 4.4, the mongo shell supports using AWS IAM credentials to authenticate to a MongoDB Atlas cluster that has been configured for AWS IAM authentication.

Authenticating in this manner uses the new MONGODB-AWS authentication mechanism, and requires that you provide an AWS access key ID and a secret access key, which may be specified in the connection string or on the command-line via the --username and --password options.

Additionally, if you are using an AWS session token for authentication with temporary credentials when using an AssumeRole request, or when working with AWS resources that specify this value such as Lambda, you may provide that session token in the connection string using the AWS_SESSION_TOKEN authMechanismProperties value, or on the command-line via the --awsIamSessionToken option.

Alternatively, if the AWS access key ID, secret access key, or session token are defined on your platform using their respective AWS IAM environment variables the mongo shell uses these environment variable values to authenticate; you do not need to specify them in the connection string.

See Connection String Authentication Options for usage, and Connecting to an Atlas Cluster using MONGODB-AWS for examples.

Starting in MongoDB 4.4, the documentation for the following tools has been migrated to the MongoDB Database Tools project:

The MongoDB Database Tools use the Apache License, Version 2.0. See mongodb/mongo-tools for the source code.

Note

For documentation on previous versions of the listed tools, reference that version of the MongoDB server manual.

Quick links to older documentation:

MongoDB Enterprise 4.4 provides a new mongokerberos tool for validating your platform's Kerberos configuration for use with MongoDB, and for testing end-to-end client authentication through Kerberos. When run, mongokerberos returns a report indicating any issues encountered, and provides potential advice for resolving them. mongokerberos is available in MongoDB Enterprise only.

See the mongokerberos reference page for more information.

Starting in MongoDB 4.4, mongoreplay is removed from MongoDB packaging. mongoreplay and its related documentation are migrated to the mongodb-labs github project. Projects in mongodb-labs are experimental and not officially supported by MongoDB.

Quick links to older documentation

Starting in version 4.4, the Windows MSI installer for both Community and Enterprise editions does not include the MongoDB Database Tools (mongoimport, mongoexport, etc). To download and install the MongoDB Database Tools on Windows, see Installing the MongoDB Database Tools.

If you were relying on the MongoDB 4.2 or previous MSI installer to install the Database Tools along with the MongoDB Server, you must now download the Database Tools separately.

MongoDB 4.4 adds support for creating compound indexes with a single hashed field. MongoDB 4.2 and earlier only supported single field hashed indexes.

The following operation creates a compound hashed index on country and _id:

db.examples.createIndex( { "country" : 1, "_id" : "hashed" } )

Compound hashed indexes require featureCompatibilityVersion set to 4.4.

Starting in version 4.4, MongoDB adds the ability to hide or unhide indexes from the query planner. An index hidden from the query planner is not evaluated as part of query plan selection.

By hiding an index from the planner, users can evaluate the potential impact of dropping an index without having to drop the index. If the impact is negative, the user can unhide the index instead of having to recreate a dropped index. And because indexes are fully maintained while hidden, hidden indexes are immediately available for use once unhidden.

For details, see Hidden Indexes.

To support hidden indexes, MongoDB introduces:

  • The hidden option for db.collection.createIndex() and the createIndexes command.

  • The db.collection.hideIndex() and db.collection.unhideIndex() mongo shell helper methods.

  • The hidden field in the collMod command's index option, for hiding or unhiding an existing index.
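
As a sketch with a hypothetical collection and index name:

db.restaurants.hideIndex("borough_1")     // the query planner stops considering the index, but it is still maintained
db.restaurants.unhideIndex("borough_1")   // make the index visible to the planner again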

If an index specified to dropIndexes is still building, dropIndexes attempts to abort the in-progress build. Aborting an index build has the same effect as dropping the built index. Prior to MongoDB 4.4, dropIndexes would return an error if the collection had any in-progress index builds. This behavior also applies to the shell helpers db.collection.dropIndex() and db.collection.dropIndexes().

To drop a specific index out of a set of related in-progress builds, wait until the index builds complete and specify that index to dropIndexes or its shell helpers.

For more complete documentation, see:

Starting in MongoDB 4.4, the db.collection.drop() method and drop command abort any in-progress index builds on the target collection before dropping the collection. Prior to MongoDB 4.4, attempting to drop a collection with in-progress index builds results in an error, and the collection is not dropped.

For replica sets or shard replica sets, aborting an index on the primary does not simultaneously abort secondary index builds. MongoDB attempts to abort the in-progress builds for the specified indexes on the primary and if successful creates an associated abort oplog entry. Secondary members with replicated in-progress builds wait for a commit or abort oplog entry from the primary before either committing or aborting the index build.

Starting in MongoDB 4.4, the db.dropDatabase() method and dropDatabase command abort any in-progress index builds on collections in the target database before dropping the database. Aborting an index build has the same effect as dropping the built index. Prior to MongoDB 4.4, attempting to drop a database that contains a collection with an in-progress index build results in an error, and the database is not dropped.

MongoDB 4.4 deprecates the geoHaystack index and the geoSearch command. Use a 2d index with $geoNear or $geoWithin instead.

MongoDB removes the following command(s) and mongo shell helper(s):

  • cloneCollection (shell helper: db.cloneCollection()). Use mongoexport and mongoimport, or mongodump and mongorestore, instead.

  • planCacheListPlans (shell helper: PlanCache.getPlansByQuery()). Use the $planCacheStats aggregation stage instead.

  • planCacheListQueryShapes (shell helper: PlanCache.listQueryShapes()). Use the $planCacheStats aggregation stage instead.

Starting with MongoDB 4.4, mongod and mongos support TCP Fast Open (TFO) connections by default. TFO requires that both the client and the mongod/mongos host machines support and enable TFO:

Windows

The following Windows operating systems support TFO:

  • Microsoft Windows Server 2016 or later.

  • Microsoft Windows 10 Update 1607 or later.

macOS
macOS 10.11 (El Capitan) and later support TFO.
Linux

Linux operating systems running Linux Kernel 3.7 or later can support inbound TFO connections.

Linux operating systems running Linux Kernel 4.11 or later can support both inbound and outbound TFO connections.

Set the value of /proc/sys/net/ipv4/tcp_fastopen to enable support for inbound and/or outbound TFO connections:

  • Set to 1 to enable only outbound TFO connections

  • Set to 2 to enable only inbound TFO connections

  • Set to 3 to enable inbound and outbound TFO connections.

MongoDB 4.4 adds the following parameters for controlling TFO:

Parameter
Description
tcpFastOpenServer

Default: true (Enabled)

Enables or disables support for inbound TFO connections to the mongod/mongos.

tcpFastOpenClient

Default: true (Enabled)

Linux Operating System Only

Enables or disables support for outbound TFO connections from the mongod/mongos.

tcpFastOpenQueueSize

Default: 1024

Controls the size of the queue of pending TFO connections.

MongoDB 4.4 adds the following counters to the output of serverStatus and db.serverStatus():

Counter
Description
network.tcpFastOpen.kernelSetting

Linux only

Indicates kernel support for TFO.

network.tcpFastOpen.serverSupported
Indicates operating system support for incoming TFO connections.
network.tcpFastOpen.clientSupported
Indicates operating system support for outgoing TFO connections.
network.tcpFastOpen.accepted
Indicates the total number of accepted incoming TFO connections to the mongod / mongos since the mongod/mongos last started.

A complete discussion of TFO is outside the scope of this documentation. For more information on TFO, start with the following external resources:

If MongoDB cannot use an index or indexes to obtain the sort order for a given cursor.sort() operation, MongoDB must perform a blocking sort on the data. A blocking sort indicates that MongoDB must consume and process all input documents to the sort before returning results. Blocking sorts do not block concurrent operations on the collection or database.

Prior to MongoDB 4.4, MongoDB returned an error if a blocking sort operation required more than 32 megabytes of system memory. Starting in MongoDB 4.4, the memory limit for blocking sort operations increases to 100 megabytes. For blocking sort operations which require more than 100 megabytes of system memory, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4).

For more information on sorting and index use, see Sort and Index Use.

MongoDB 4.4 adds a new option allowDiskUse to the find command. With allowDiskUse: true, the operation can use temporary files on disk when processing a non-indexed ("blocking") sort operation that exceeds the 100 megabyte memory limit. Prior to MongoDB 4.4, a find operation with a blocking sort failed if it exceeded the memory limit while processing the sort.

For the db.collection.find() shell method with cursor.sort(), MongoDB 4.4 adds the cursor.allowDiskUse() cursor modifier.
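
For example, a sketch with a hypothetical, unindexed sort field:

db.sensorReadings.find()
  .sort( { temperature: -1 } )   // assume temperature is not indexed, so this is a blocking sort
  .allowDiskUse()                // permit temporary files on disk if the 100 megabyte memory limit is exceeded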

allowDiskUse and cursor.allowDiskUse() have no effect if MongoDB can satisfy the sort using an index, or if the blocking sort requires less than 100 megabytes of memory.

For instructions on enabling allowDiskUse for queries issued through a MongoDB driver, defer to the documentation for your preferred MongoDB 4.4-compatible driver.

Starting in MongoDB 4.4,

  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit for unsharded collections and views to 255 bytes, and to 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>),

  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of unsharded collections and views namespace remains 120 bytes and 100 bytes for sharded collection.

Starting in MongoDB 4.4,

Starting in MongoDB 4.4, compact only blocks the following metadata operations:

compact does not block MongoDB CRUD Operations for the database it is currently operating on.

Previously, compact blocked all operations for the database it was operating on, including MongoDB CRUD Operations, and was therefore only appropriate for use during scheduled maintenance periods.

Starting in MongoDB 4.4, the force flag forces compact to run on the primary in a replica set.

Previously, the force option, when set to true, enabled compact to run on the primary in a replica set; if set to false, compact returned an error when run on a primary.


Starting in MongoDB 4.4, the mongod --repair rebuilds all indexes for the following:

  • Collections with inconsistencies between the collection data and one or more indexes.

  • Salvaged and modified collections.

In earlier versions of MongoDB, the mongod --repair option rebuilt all indexes for all collections.

serverStatus returns flowControl.locksPerKiloOp instead of flowControl.locksPerOp.

serverStatus includes the following new fields in its output:

shardingStatistics.numHostsTargeted reports the number of shards targeted by CRUD operations and aggregation commands. It increments the relevant find, insert, update, delete or aggregate metric with each operation on a cluster.

replSetGetStatus returns the following new fields:

Starting in MongoDB 4.4, the mongo shell method db.auth(<username>, <password>) prompts for the password if you do not pass in the password or the passwordPrompt() method for the <password>.

Starting in MongoDB 4.4, you can specify a $natural sort when running a find operation against a view.

Starting in MongoDB 4.4 running on Linux:

  • When the mongod and mongos processes receive a SIGUSR2 signal, backtrace details are added to the logs for each process thread.

  • Backtrace details show the function calls for the process, which can be used for diagnostics and provided to MongoDB Support if required.

The backtrace functionality is available for these architectures:

  • x86_64

  • arm64 (starting in MongoDB 4.4.15, 5.0.10, and 6.0)

For more information, see Generate a Backtrace.

Starting in MongoDB 4.4, FTDC now reports utilization data for a mongod running in a container from the perspective of the container, as opposed to the host operating system. See Full Time Diagnostic Data Capture for more information.

Starting in MongoDB 4.4, mongod logs a startup warning if a platform's configured ulimit value for number of open files is under 64000. Previously, a warning would only be logged if this value was under 1000. See Recommended ulimit Settings for more information.

MongoDB 4.4 adds the replanReason field to database profiler output and diagnostic log messages. The replanReason field contains the reason the query system evicted a cached plan.

The dbStats command and its mongo shell helper db.stats() return:

The collStats command, its mongo shell helper db.collection.stats(), and the $collStats aggregation stage return:

Starting in MongoDB 4.4, the following database commands can accept a hint argument to specify the index to use:

  • findAndModify

  • update

  • delete
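
A sketch of hinting an index on the update command (the collection, filter, and index are hypothetical; an index on status is assumed to exist):

db.runCommand({
  update: "members",
  updates: [
    {
      q: { points: { $lte: 20 }, status: "P" },
      u: { $set: { misc1: "Need to activate" } },
      multi: true,
      hint: { status: 1 }        // force the query planner to use the { status: 1 } index
    }
  ]
})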

See:

Starting in MongoDB 4.4, MongoDB allows JavaScript execution on mongos instances. To disable JavaScript execution on a mongos instance, either:

  • Start the mongos with the --noscripting command line option, or

  • Set the security.javascriptEnabled configuration file setting to false.

Earlier versions of MongoDB do not allow JavaScript execution on mongos instances.

Note

Requires featureCompatibilityVersion 4.4+

Each mongod in the replica set or sharded cluster must have featureCompatibilityVersion set to at least 4.4 to configure global default read and write concern.

Starting in MongoDB 4.4, replica sets and sharded clusters support configuring global default read and write concern settings. Clients which do not explicitly specify a given read or write concern setting inherit the corresponding global default setting.

To configure the global default read or write concern, MongoDB adds the setDefaultRWConcern administrative command. For replica sets, issue the command against the primary member. For sharded clusters, issue the command from a mongos.

To retrieve the global default read or write concern settings, MongoDB adds the getDefaultRWConcern administrative command.
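
For instance, a sketch that sets majority defaults for both concerns (the values are illustrative):

db.adminCommand({
  setDefaultRWConcern: 1,
  defaultReadConcern: { level: "majority" },
  defaultWriteConcern: { w: "majority" }
})

// Retrieve the currently configured defaults:
db.adminCommand( { getDefaultRWConcern: 1 } )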

Starting in MongoDB 4.4, read concern objects may include a provenance field, indicating where the read concern originated.

The following table shows the possible read concern provenance values and their significance:

Provenance
Description
clientSupplied
The read concern was specified in the application.
customDefault
The read concern originated from a custom defined default value. See setDefaultRWConcern.
implicitDefault
The read concern originated from the server in absence of all other read concern specifications.

If a read operation is logged or profiled, the operation entry contains the read concern object, including the provenance field.

MongoDB does not recommend specifying the provenance field in requests to the server. This field should only be used for diagnostic purposes.

Starting in MongoDB 4.4, write concern objects may include a provenance field, indicating where the write concern originated.

The following table shows the possible write concern provenance values and their significance:

Provenance
Description
clientSupplied
The write concern was specified in the application.
customDefault
The write concern originated from a custom defined default value. See setDefaultRWConcern.
getLastErrorDefaults
The write concern originated from the replica set's settings.getLastErrorDefaults field.
implicitDefault
The write concern originated from the server in absence of all other write concern specifications.

If a write operation is logged or profiled, the operation entry contains the write concern object, including the provenance field.

MongoDB does not recommend specifying the provenance field in requests to the server. This field should only be used for diagnostic purposes.

MongoDB 4.4 Enterprise introduces two new configuration settings to enhance the initial connection to a KMIP server as part of encryption at rest key management:

  • security.kmip.connectRetries controls the number of times the mongod retries a failed initial connection to the KMIP server.

  • security.kmip.connectTimeoutMS controls the timeout, in milliseconds, to wait for the initial response from the KMIP server before giving up or retrying.

These settings are available in MongoDB Enterprise only.

The new processUmask startup option for mongod allows you to set permissions through umask for groups and other users when honorSystemUmask is set to false.

Starting with MongoDB 4.4, the mapReduce command and the db.collection.mapReduce() shell method ignore the verbose option.

Starting with MongoDB 4.4, you can use the explain command or the db.collection.explain() shell method to preview the results of mapReduce or db.collection.mapReduce().

Starting in version 4.4: