
aggregate

Definition

aggregate

Performs an aggregation operation using the aggregation pipeline. The pipeline allows users to process data from a collection or other source with a sequence of stage-based manipulations.

Tip

In the mongo shell, this command can also be run through the db.aggregate() and db.collection.aggregate() helper methods, or with the watch() helper method.

Helper methods are convenient for mongo shell users, but they may not return the same level of information as database commands. If you do not need the convenience, or if you require the additional return fields, use the database command.
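For illustration, the raw command returns a single document that wraps the first batch of results in a cursor subdocument (the collection name and namespace below are hypothetical):

db.runCommand( {
   aggregate: "articles",
   pipeline: [ { $match: { author: "abc123" } } ],
   cursor: { }
} )

// Returns a document of roughly this shape:
// {
//    cursor: {
//       firstBatch: [ /* matching documents */ ],
//       id: NumberLong(0),
//       ns: "test.articles"
//    },
//    ok: 1
// }

The db.collection.aggregate() helper instead returns a cursor that you can iterate directly.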

Syntax

The command has the following syntax:

Changed in version 3.6.

{
  aggregate: "<collection>" || 1,
  pipeline: [ <stage>, <...> ],
  explain: <boolean>,
  allowDiskUse: <boolean>,
  cursor: <document>,
  maxTimeMS: <int>,
  bypassDocumentValidation: <boolean>,
  readConcern: <document>,
  collation: <document>,
  hint: <string or document>,
  comment: <string>,
  writeConcern: <document>
}

Command Fields

The aggregate command takes the following fields as arguments:

Field Type Description
aggregate string

The name of the collection or view that acts as the input for the aggregation pipeline. Use 1 for collection-agnostic commands.

pipeline array

An array of aggregation pipeline stages that process and transform the document stream as part of the aggregation pipeline.
explain boolean

Optional. Specifies whether to return information on the processing of the pipeline.

Not available in multi-document transactions.

allowDiskUse boolean

Optional. Enables writing to temporary files. When set to true, most aggregation stages can write data to the _tmp subdirectory in the dbPath directory with the following exceptions:

  • $graphLookup stage
  • $addToSet accumulator expression used in the $group stage (Starting in version 4.2.3, 4.0.14, 3.6.17)
  • $push accumulator expression used in the $group stage (Starting in version 4.2.3, 4.0.14, 3.6.17)

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

cursor document

Specify a document that contains options that control the creation of the cursor object.

Changed in version 3.6: MongoDB 3.6 removes the use of the aggregate command without the cursor option unless the command includes the explain option. Unless you include the explain option, you must specify the cursor option.

  • To indicate a cursor with the default batch size, specify cursor: {}.
  • To indicate a cursor with a non-default batch size, use cursor: { batchSize: <num> }.
maxTimeMS non-negative integer

Optional. Specifies a time limit in milliseconds for processing operations on a cursor. If you do not specify a value for maxTimeMS, operations will not time out. A value of 0 explicitly specifies the default unbounded behavior.

MongoDB terminates operations that exceed their allotted time limit using the same mechanism as db.killOp(). MongoDB only terminates an operation at one of its designated interrupt points.

bypassDocumentValidation boolean

Optional. Applicable only if you specify the $out or $merge aggregation stages.

Enables aggregate to bypass document validation during the operation. This lets you insert documents that do not meet the validation requirements.

New in version 3.2.

readConcern document

Optional. Specifies the read concern.

Starting in MongoDB 3.6, the readConcern option has the following syntax: readConcern: { level: <value> }

Possible read concern levels are "local", "available", "majority", and "linearizable".

For more information on the read concern levels, see Read Concern Levels.

Starting in MongoDB 4.2, the $out stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out stage in the pipeline.

The $merge stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $merge stage in the pipeline.

collation document

Optional. Specifies the collation to use for the operation.

Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.

The collation option has the following syntax:

collation: {
   locale: <string>,
   caseLevel: <boolean>,
   caseFirst: <string>,
   strength: <int>,
   numericOrdering: <boolean>,
   alternate: <string>,
   maxVariable: <string>,
   backwards: <boolean>
}

When specifying collation, the locale field is mandatory; all other collation fields are optional. For descriptions of the fields, see Collation Document.

If the collation is unspecified but the collection has a default collation (see db.createCollection()), the operation uses the collation specified for the collection.

If no collation is specified for the collection or for the operations, MongoDB uses the simple binary comparison used in prior versions for string comparisons.

You cannot specify multiple collations for an operation. For example, you cannot specify different collations per field, or if performing a find with a sort, you cannot use one collation for the find and another for the sort.

New in version 3.4.

hint string or document

Optional. The index to use for the aggregation. The index is on the initial collection/view against which the aggregation is run.

Specify the index either by the index name or by the index specification document.

Note

The hint does not apply to $lookup and $graphLookup stages.

New in version 3.6.

comment string

Optional. Users can specify an arbitrary string to help trace the operation through the database profiler, currentOp, and logs.

New in version 3.6.

writeConcern document

Optional. A document that expresses the write concern to use with the $out or $merge stage.

Omit to use the default write concern with the $out or $merge stage.

MongoDB 3.6 removes the use of the aggregate command without the cursor option unless the command includes the explain option. Unless you include the explain option, you must specify the cursor option.

  • To indicate a cursor with the default batch size, specify cursor: {}.
  • To indicate a cursor with a non-default batch size, use cursor: { batchSize: <num> }.
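
As an illustrative sketch (the collection, field names, and option values are hypothetical), several of the optional fields can be combined in a single command invocation:

db.runCommand( {
   aggregate: "orders",
   pipeline: [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
   ],
   allowDiskUse: true,              // permit stages to spill to temporary files
   maxTimeMS: 60000,                // abort processing after 60 seconds
   comment: "totals by customer",   // visible in the profiler, currentOp, and logs
   cursor: { batchSize: 100 }       // required unless explain is specified
} )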

For more information about the aggregation pipeline, see Aggregation Pipeline, Aggregation Reference, and Aggregation Pipeline Limits.

Sessions

New in version 4.0.

For cursors created inside a session, you cannot call getMore outside the session.

Similarly, for cursors created outside of a session, you cannot call getMore inside a session.

Session Idle Timeout

MongoDB drivers and the mongo shell associate all operations with a server session, with the exception of unacknowledged write operations. For operations not explicitly associated with a session (i.e. using Mongo.startSession()), MongoDB drivers and the mongo shell create an implicit session and associate it with the operation.

If a session is idle for longer than 30 minutes, the MongoDB server marks that session as expired and may close it at any time. When the MongoDB server closes the session, it also kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout or a maxTimeMS greater than 30 minutes.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. See Session Idle Timeout for more information.
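
A rough sketch of this pattern in the mongo shell (the database name, collection name, and refresh interval are illustrative):

// Run the aggregation inside an explicit session and refresh the
// session periodically while iterating the cursor.
var session = db.getMongo().startSession()
var sessionDb = session.getDatabase("test")

var cursor = sessionDb.orders.aggregate(
   [ { $match: { status: "A" } } ],
   { cursor: { batchSize: 100 } }
)

var lastRefresh = new Date()
while ( cursor.hasNext() ) {
   // Refresh roughly every 5 minutes so the server does not expire
   // the session (and kill the cursor) after 30 idle minutes.
   if ( ( new Date() - lastRefresh ) / 1000 > 300 ) {
      db.adminCommand( { refreshSessions: [ session.getSessionId() ] } )
      lastRefresh = new Date()
   }
   printjson( cursor.next() )
}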

Transactions

aggregate can be used inside multi-document transactions.

However, certain stages are not allowed within transactions, including $out, $merge, and $currentOp.

You also cannot specify the explain option.

  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.
  • For cursors created in a transaction, you cannot call getMore outside the transaction.

Important

In most cases, a multi-document transaction incurs a greater performance cost than single-document writes, and the availability of multi-document transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for multi-document transactions.

For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.
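
As a minimal sketch (the database, collection, and field names are hypothetical), an aggregation inside a multi-document transaction in the mongo shell might look like the following:

// Start a session and a transaction, run the aggregation, then commit.
var session = db.getMongo().startSession()
session.startTransaction( { readConcern: { level: "snapshot" }, writeConcern: { w: "majority" } } )

try {
   session.getDatabase("test").getCollection("orders").aggregate( [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
   ] ).forEach( printjson )
   session.commitTransaction()
} catch (error) {
   session.abortTransaction()
   throw error
} finally {
   session.endSession()
}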

Client Disconnection

For aggregate operations that do not include the $out or $merge stages:

Starting in MongoDB 4.2, if the client that issued aggregate disconnects before the operation completes, MongoDB marks aggregate for termination using killOp.

Example

Changed in version 3.6: MongoDB 3.6 removes the use of the aggregate command without the cursor option unless the command includes the explain option. Unless you include the explain option, you must specify the cursor option.

  • To indicate a cursor with the default batch size, specify cursor: {}.
  • To indicate a cursor with a non-default batch size, use cursor: { batchSize: <num> }.

Rather than run the aggregate command directly, most users should use the db.collection.aggregate() helper provided in the mongo shell or the equivalent helper in their driver. In MongoDB 2.6 and later, the db.collection.aggregate() helper always returns a cursor.

Except for the first two examples, which demonstrate the command syntax, the examples on this page use the db.collection.aggregate() helper.

Aggregate Data with Multi-Stage Pipeline

A collection articles contains documents such as the following:

{
   _id: ObjectId("52769ea0f3dc6ead47c9a1b2"),
   author: "abc123",
   title: "zzz",
   tags: [ "programming", "database", "mongodb" ]
}

The following example performs an aggregate operation on the articles collection to calculate the count of each distinct element in the tags array that appears in the collection.

db.runCommand( {
   aggregate: "articles",
   pipeline: [
      { $project: { tags: 1 } },
      { $unwind: "$tags" },
      { $group: { _id: "$tags", count: { $sum : 1 } } }
   ],
   cursor: { }
} )

In the mongo shell, this operation can use the db.collection.aggregate() helper as in the following:

db.articles.aggregate( [
   { $project: { tags: 1 } },
   { $unwind: "$tags" },
   { $group: { _id: "$tags", count: { $sum : 1 } } }
] )

Use $currentOp on an Admin Database

The following example runs a pipeline with two stages on the admin database. The first stage runs the $currentOp operation and the second stage filters the results of that operation.

db.adminCommand( {
   aggregate : 1,
   pipeline : [ {
      $currentOp : { allUsers : true, idleConnections : true } }, {
      $match : { shard : "shard01" }
      }
   ],
   cursor : { }
} )

Note

The aggregate command does not specify a collection and instead takes the form {aggregate: 1}. This is because the initial $currentOp stage does not draw input from a collection. It produces its own data that the rest of the pipeline uses.

The db.aggregate() helper was added to assist in running collectionless aggregations such as this one. The above aggregation can also be run with that helper, as shown below.
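
For example, the $currentOp aggregation above could be run with the helper along the following lines:

db.getSiblingDB("admin").aggregate( [
   { $currentOp: { allUsers: true, idleConnections: true } },
   { $match: { shard: "shard01" } }
] )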

Return Information on the Aggregation Operation

The following aggregation operation sets the optional field explain to true to return information about the aggregation operation.

db.orders.aggregate([
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } }
   ],
   { explain: true }
)

Note

The explain output is subject to change between releases.

See also

db.collection.aggregate() method

Aggregate Data using External Sort

Each individual pipeline stage has a limit of 100 megabytes of RAM. By default, if a stage exceeds this limit, MongoDB produces an error. To allow pipeline processing to take up more space, set the allowDiskUse option to true to enable writing data to temporary files, as in the following example:

db.stocks.aggregate( [
      { $sort : { cusip : 1, date: 1 } }
   ],
   { allowDiskUse: true }
)

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Aggregate Data Specifying Batch Size

To specify an initial batch size, specify the batchSize in the cursor field, as in the following example:

db.orders.aggregate( [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } },
      { $limit: 2 }
   ],
   { cursor: { batchSize: 0 } }
)

The { batchSize: 0 } document specifies the initial batch size only. Specify subsequent batch sizes to OP_GET_MORE operations as with other MongoDB cursors. A batchSize of 0 means an empty first batch and is useful if you want to quickly get back a cursor or failure message without doing significant server-side work.
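
For instance, the following sketch iterates such a cursor in the mongo shell; after the empty initial batch, subsequent batches are fetched as the cursor is iterated:

var cursor = db.orders.aggregate(
   [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
   ],
   { cursor: { batchSize: 0 } }   // empty first batch; the cursor is returned quickly
)

while ( cursor.hasNext() ) {
   printjson( cursor.next() )
}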

Specify a Collation

New in version 3.4.

Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.

A collection myColl has the following documents:

{ _id: 1, category: "café", status: "A" }
{ _id: 2, category: "cafe", status: "a" }
{ _id: 3, category: "cafE", status: "a" }

The following aggregation operation includes the collation option:

db.myColl.aggregate(
   [ { $match: { status: "A" } }, { $group: { _id: "$category", count: { $sum: 1 } } } ],
   { collation: { locale: "fr", strength: 1 } }
);

For descriptions of the collation fields, see Collation Document.

Hint an Index

New in version 3.6.

Create a collection foodColl with the following documents:

db.foodColl.insert([
   { _id: 1, category: "cake", type: "chocolate", qty: 10 },
   { _id: 2, category: "cake", type: "ice cream", qty: 25 },
   { _id: 3, category: "pie", type: "boston cream", qty: 20 },
   { _id: 4, category: "pie", type: "blueberry", qty: 15 }
])

Create the following indexes:

db.foodColl.createIndex( { qty: 1, type: 1 } );
db.foodColl.createIndex( { qty: 1, category: 1 } );

The following aggregation operation includes the hint option to force the usage of the specified index:

db.foodColl.aggregate(
   [ { $sort: { qty: 1 }}, { $match: { category: "cake", qty: 10  } }, { $sort: { type: -1 } } ],
   { hint: { qty: 1, category: 1 } }
)

Override Default Read Concern

To override the default read concern level, use the readConcern option. The getMore command uses the readConcern level specified in the originating aggregate command.

You cannot use the $out or the $merge stage in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include either stage in the pipeline.

The following operation on a replica set specifies a read concern of "majority" to read the most recent copy of the data confirmed as having been written to a majority of the nodes.

Important

  • To use read concern level of "majority", replica sets must use the WiredTiger storage engine.

    You can disable read concern "majority" for a deployment with a three-member primary-secondary-arbiter (PSA) architecture; however, this has implications for change streams (in MongoDB 4.0 and earlier only) and transactions on sharded clusters. For more information, see Disable Read Concern Majority.

  • Starting in MongoDB 4.2, you can specify read concern level "majority" for an aggregation that includes an $out stage.

  • Regardless of the read concern level, the most recent data on a node may not reflect the most recent version of the data in the system.

db.restaurants.aggregate(
   [ { $match: { rating: { $lt: 5 } } } ],
   { readConcern: { level: "majority" } }
)

To ensure that a single thread can read its own writes, use "majority" read concern and "majority" write concern against the primary of the replica set.
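
A minimal illustration of that pattern (the document values are hypothetical):

// Write with "majority" write concern, then read the result back
// with "majority" read concern on the primary.
db.restaurants.insertOne(
   { _id: 100, name: "Sample Bistro", rating: 4 },
   { writeConcern: { w: "majority" } }
)

db.restaurants.aggregate(
   [ { $match: { rating: { $lt: 5 } } } ],
   { readConcern: { level: "majority" } }
)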