Definition
serverStatus
The serverStatus command returns a document that provides an overview of the database's state. Monitoring applications can run this command at a regular interval to collect statistics about the instance.
Compatibility
This command is available in deployments hosted in the following environments:
MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud
Note
This command is supported in all MongoDB Atlas clusters. For information on Atlas support for all commands, see Unsupported Commands.
MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB
Syntax
The command has the following syntax:
db.runCommand( { serverStatus: 1 } )
The value (i.e. 1 above) does not affect the operation of the command. The db.serverStatus() command returns a large amount of data. To return a specific object or field from the output, append the object or field name to the command.
For example:
db.runCommand({ serverStatus: 1}).metrics
db.runCommand({ serverStatus: 1}).metrics.commands
db.runCommand({ serverStatus: 1}).metrics.commands.update
mongosh provides the db.serverStatus()
wrapper for the serverStatus command.
Tip
Much of the output of serverStatus is also displayed
dynamically by mongostat. See the
mongostat command for more information.
Behavior
By default, serverStatus excludes in its output:
some content in the repl document.
mirroredReads document.
To include fields that are excluded by default, specify the top-level
field and set it to 1 in the command. To exclude fields that are included
by default, specify the field and set to 0. You can specify either top-level
or embedded fields.
For example, the following operation excludes the repl, metrics, and locks information in the output.
db.runCommand( { serverStatus: 1, repl: 0, metrics: 0, locks: 0 } )
For example, the following operation excludes the embedded histograms field in the output.
db.runCommand( { serverStatus: 1, metrics: { query: { multiPlanner: { histograms: false } } } } )
The following example includes all repl information in the output:
db.runCommand( { serverStatus: 1, repl: 1 } )
Initialization
The statistics reported by serverStatus are reset when the
mongod server is restarted.
This command will always return a value, even on a fresh database. The
related command db.serverStatus() does not always return a
value unless a counter has started to increment for a particular
metric.
After you run an update query, db.serverStatus() and db.runCommand( { serverStatus: 1 } ) both return the same values, such as:
{ arrayFilters : Long("0"), failed : Long("0"), pipeline : Long("0"), total : Long("1") }
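For example, a minimal mongosh sketch that produces such a counter document; the test.widgets collection is hypothetical:
db.widgets.updateOne( { _id: 1 }, { $set: { color: "blue" } }, { upsert: true } )  // run one update so the counters increment

// Both forms read the same counter document.
db.serverStatus().metrics.commands.update
db.runCommand( { serverStatus: 1 } ).metrics.commands.update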
Include mirroredReads
By default, the mirroredReads information is not included in
the output. To return mirroredReads information, you must
explicitly specify the inclusion:
db.runCommand( { serverStatus: 1, mirroredReads: 1 } )
Output
Note
The output fields vary depending on the version of MongoDB, the underlying operating system platform, the storage engine, and the kind of node, including mongos, mongod, or replica set member.
For the serverStatus output specific to the version of
your MongoDB, refer to the appropriate version of the MongoDB Manual.
asserts
asserts: {
   regular: <num>,
   warning: <num>,
   msg: <num>,
   user: <num>,
   rollovers: <num>
},
asserts
A document that reports on the number of assertions raised since the MongoDB process started. Assertions are internal checks for errors that occur while the database is operating and can help diagnose issues with the MongoDB server. Non-zero asserts values indicate assertion errors, which are uncommon and not an immediate cause for concern. Errors that generate asserts can be recorded in the log file or returned directly to a client application for more information.

asserts.regular
The number of regular assertions raised since the MongoDB process started. Examine the MongoDB log for more information.

asserts.msg
The number of message assertions raised since the MongoDB process started. Examine the log file for more information about these messages.

asserts.user
The number of "user asserts" that have occurred since the last time the MongoDB process started. These are errors that a user may generate, such as running out of disk space or using a duplicate key. You can prevent these assertions by fixing a problem with your application or deployment. Server logs may have limited information about "user asserts." To learn more about the source of "user asserts," check the application logs for application errors.

asserts.rollovers
The number of times that the assert counters have rolled over since the last time the MongoDB process started. The counters roll over to zero after 2^30 assertions. Use this value to provide context for the other values in the asserts data structure.
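Because the counters reset on restart and roll over at 2^30, monitoring tools typically track the change between samples rather than raw totals. A minimal mongosh sketch of that pattern; the 60-second interval is illustrative:
// Sample the asserts document twice and report the deltas.
const first = db.serverStatus().asserts
sleep( 60 * 1000 )  // mongosh built-in; waits 60 seconds
const second = db.serverStatus().asserts
printjson( {
   regular : second.regular - first.regular,
   warning : second.warning - first.warning,
   msg : second.msg - first.msg,
   user : second.user - first.user
} )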
bucketCatalog
bucketCatalog: {
   numBuckets: <num>,
   numOpenBuckets: <num>,
   numIdleBuckets: <num>,
   memoryUsage: <num>,
   numBucketInserts: <num>,
   numBucketUpdates: <num>,
   numBucketsOpenedDueToMetadata: <num>,
   numBucketsClosedDueToCount: <num>,
   numBucketsClosedDueToSchemaChange: <num>,
   numBucketsClosedDueToSize: <num>,
   numBucketsClosedDueToTimeForward: <num>,
   numBucketsClosedDueToTimeBackward: <num>,
   numBucketsClosedDueToMemoryThreshold: <num>,
   numCommits: <num>,
   numMeasurementsGroupCommitted: <num>,
   numWaits: <num>,
   numMeasurementsCommitted: <num>,
   avgNumMeasurementsPerCommit: <num>,
   numBucketsClosedDueToReopening: <num>,
   numBucketsArchivedDueToMemoryThreshold: <num>,
   numBucketsArchivedDueToTimeBackward: <num>,
   numBucketsReopened: <num>,
   numBucketsKeptOpenDueToLargeMeasurements: <num>,
   numBucketsClosedDueToCachePressure: <num>,
   numBucketsFrozen: <num>,
   numCompressedBucketsConvertedToUnsorted: <num>,
   numBucketsFetched: <num>,
   numBucketsQueried: <num>,
   numBucketFetchesFailed: <num>,
   numBucketQueriesFailed: <num>,
   numBucketReopeningsFailed: <num>,
   numDuplicateBucketsReopened: <num>,
   stateManagement: {
      bucketsManaged: <num>,
      currentEra: <num>,
      erasWithRemainingBuckets: <num>,
      trackedClearOperations: <num>
   }
}
New in version 5.0.
A document that reports metrics related to the internal storage of time series collections.
The bucketCatalog returns the following metrics:
| Metric | Description |
|---|---|
| numBuckets | The total number of tracked buckets. Expected to be equal to the sum of numOpenBuckets and numArchivedBuckets. |
| numOpenBuckets | The number of tracked buckets with a full representation stored in-cache, ready to receive new documents. |
| numIdleBuckets | The number of buckets that are open and currently without an uncommitted document insertion pending. A subset of numOpenBuckets. |
| numArchivedBuckets | The number of tracked buckets with a minimal representation stored in-cache that can be efficiently reopened to receive new documents. |
| memoryUsage | The number of bytes used by internal bucketing data structures. |
| numBucketInserts | The number of new buckets created. |
| numBucketUpdates | The number of times an existing bucket was updated to include additional documents. |
| numBucketsOpenedDueToMetadata | The number of buckets opened because a document arrived with a metaField value that did not match any open bucket. |
| numBucketsClosedDueToCount | The number of buckets closed due to reaching their document count limit. |
| numBucketsClosedDueToSchemaChange | The number of buckets closed because the schema of an incoming document was incompatible with that of the documents in the open bucket. |
| numBucketsClosedDueToSize | The number of buckets closed because an incoming document would make the bucket exceed its size limit. |
| numBucketsClosedDueToTimeForward | The number of buckets closed because a document arrived with a timestamp past the end of the bucket's time span. |
| numBucketsClosedDueToTimeBackward | The number of buckets closed because a document arrived with a timestamp before the start of the bucket's time span. |
| numBucketsClosedDueToMemoryThreshold | The number of buckets closed because the set of active buckets didn't fit within the allowed bucket catalog cache size. |
| numCommits | The number of bucket-level commits to the time series collection. |
| numMeasurementsGroupCommitted | The number of commits that included measurements from concurrent insert commands. |
| numWaits | The number of times an operation waited on another thread to either reopen a bucket or finish a group commit. |
| numMeasurementsCommitted | The number of documents committed to the time series collection. |
| avgNumMeasurementsPerCommit | The average number of documents per commit. |
| numBucketsClosedDueToReopening | The number of buckets closed because a suitable bucket was re-opened instead. |
| numBucketsArchivedDueToMemoryThreshold | The number of buckets archived because the set of active buckets didn't fit within the allowed bucket catalog cache size. |
| numBucketsArchivedDueToTimeBackward | The number of buckets archived because a document arrived with a timestamp before the start of the bucket's time span. |
| numBucketsReopened | The number of buckets re-opened because a document arrived that didn't match any open buckets, but did match an existing non-full bucket. |
| numBucketsKeptOpenDueToLargeMeasurements | The number of buckets that would have been closed due to size, but were kept open because they did not yet contain the minimum number of documents required to achieve reasonable compression. |
| numBucketsClosedDueToCachePressure | The number of buckets closed because their size exceeds the bucket catalog's dynamic bucket size limit, which is derived from the available storage engine cache size and the number of active buckets. |
| numBucketsFrozen | The number of frozen buckets. Buckets are frozen if attempting to compress the bucket would corrupt its contents. |
| numCompressedBucketsConvertedToUnsorted | The number of compressed buckets found to contain documents not sorted by their respective timeField values. |
| numBucketsFetched | The number of archived buckets fetched to check if they were suitable for re-opening. |
| numBucketsQueried | The total number of buckets queried to see if they could hold an incoming document. |
| numBucketFetchesFailed | The number of archived buckets fetched that were not suitable for re-opening. |
| numBucketQueriesFailed | The number of queries for a suitable open bucket that failed due to lack of candidate availability. |
| numBucketReopeningsFailed | The number of attempted bucket reopenings that failed for reasons including conflicts with concurrent operations, malformed buckets, and more. |
| numDuplicateBucketsReopened | The number of re-opened buckets that are duplicates of currently open buckets. |
| stateManagement | A document that tracks bucket catalog state information. |
| stateManagement.bucketsManaged | The total number of buckets that are being tracked for conflict management. This includes open buckets in the bucket catalog as well as any buckets that are being directly written to, including by update and delete commands. |
| stateManagement.currentEra | The current era of the bucket catalog. The bucket catalog starts at era 0 and increments when a bucket is cleared. Attempting to insert into a bucket will either cause it to be removed if it was cleared, or update it to the current era. |
| stateManagement.erasWithRemainingBuckets | The number of eras with tracked buckets. |
| stateManagement.trackedClearOperations | The number of times a set of buckets has been cleared, but the removal of those buckets was deferred. This can happen due to events such as dropping a collection, moving a chunk in a sharded collection, or an election. |
You can also use the $collStats aggregation pipeline stage to find time series
metrics. To learn more, see storageStats Output on Time Series Collections.
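For example, a minimal sketch of the $collStats form; the weather time series collection is hypothetical:
db.weather.aggregate( [
   { $collStats: { storageStats: { } } }
] )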
catalogStats
New in version 5.1.
catalogStats: {
   collections: <num>,
   capped: <num>,
   views: <num>,
   timeseries: <num>,
   internalCollections: <num>,
   internalViews: <num>,
   systemProfile: <num>
}
catalogStats.internalCollections
The total number of system collections (collections in the config, admin, or local databases).

catalogStats.internalViews
The total number of views on system collections (collections in the config, admin, or local databases).

catalogStats.systemProfile
The total number of system.profile collections on all databases.
changeStreamPreImages
New in version 5.0.
changeStreamPreImages : {
   purgingJob : {
      totalPass : <num>,
      docsDeleted : <num>,
      bytesDeleted : <num>,
      scannedCollections : <num>,
      scannedInternalCollections : <num>,
      maxTimestampEligibleForTruncate : <timestamp>,
      maxStartWallTimeMillis : <num>,
      timeElapsedMillis : <num>
   },
   expireAfterSeconds : <num>
}
A document that reports metrics related to change stream pre-images.
changeStreamPreImages.purgingJob
New in version 7.1.

A document that reports metrics related to the purging jobs for change stream pre-images. Purging jobs are background processes that the system uses to remove pre-images asynchronously.

The changeStreamPreImages.purgingJob field returns the following metrics:

| Metric | Description |
|---|---|
| totalPass | Total number of deletion passes completed by the purging job. |
| docsDeleted | Cumulative number of pre-image documents deleted by the purging job. |
| bytesDeleted | Cumulative size in bytes of all documents deleted from all pre-image collections by the purging job. |
| scannedCollections | Cumulative number of pre-image collections scanned by the purging job. In single-tenant environments, this number is the same as totalPass since each tenant has one pre-image collection. |
| scannedInternalCollections | Cumulative number of internal pre-image collections scanned by the purging job. Internal collections are the collections within the pre-image collections stored in config.system.preimages. |
| maxTimestampEligibleForTruncate | Most recent timestamp up to which old pre-images can be truncated to reduce storage space. Pre-images older than maxTimestampEligibleForTruncate can be truncated. New in version 8.1. |
| maxStartWallTimeMillis | Maximum wall time in milliseconds from the first document of each pre-image collection. |
| timeElapsedMillis | Cumulative time in milliseconds of all deletion passes by the purging job. |

changeStreamPreImages.expireAfterSeconds
New in version 7.1.

Amount of time in seconds that MongoDB retains pre-images. If expireAfterSeconds is not defined, this metric does not appear in the serverStatus output.
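For context, pre-image retention is configured through the changeStreamOptions cluster parameter. A minimal sketch, with an illustrative 600-second retention; run it against the admin database:
db.adminCommand( {
   setClusterParameter: {
      changeStreamOptions: { preAndPostImages: { expireAfterSeconds: 600 } }
   }
} )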
connections
connections : {
   current : <num>,
   available : <num>,
   totalCreated : <num>,
   rejected : <num>, // Added in MongoDB 6.3
   active : <num>,
   threaded : <num>,
   exhaustIsMaster : <num>,
   exhaustHello : <num>,
   awaitingTopologyChanges : <num>,
   loadBalanced : <num>,
   queuedForEstablishment : <num>, // Added in MongoDB 8.2 (also available in 8.1.1, 8.0.12, and 7.0.23)
   establishmentRateLimit : { // Added in MongoDB 8.2 (also available in 8.1.1, 8.0.12, and 7.0.23)
      rejected: <num>,
      exempted: <num>,
      interruptedDueToClientDisconnect: <num>
   }
}
connections
A document that reports on the status of the connections. Use these values to assess the current load and capacity requirements of the server.

connections.current
The number of incoming connections from clients to the database server. This number includes the current shell session. Consider the value of connections.available to add more context to this datum.

The value includes all incoming connections, including any shell connections or connections from other servers, such as replica set members or mongos instances.

connections.available
The number of unused incoming connections available. Consider this value in combination with the value of connections.current to understand the connection load on the database, and see the UNIX ulimit Settings for Self-Managed Deployments document for more information about system thresholds on available connections.

connections.totalCreated
Count of all incoming connections created to the server. This number includes connections that have since closed.

connections.rejected
New in version 6.3.

The number of incoming connections the server rejected because the server doesn't have the capacity to accept additional connections or because the net.maxIncomingConnections setting was reached.
connections.queuedForEstablishment
New in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections currently queued and waiting for establishment. This metric is relevant when connection establishment rate limiting is enabled using the ingressConnectionEstablishmentRateLimiterEnabled parameter.

connections.establishmentRateLimit
New in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)

A document that contains metrics related to the ingress connection establishment rate limiter. These metrics provide insights into how the rate limiter handles connection requests when ingressConnectionEstablishmentRateLimiterEnabled is set to true. For more information on rate limiting, see Configure the Ingress Connection Establishment Rate Limiter.

connections.establishmentRateLimit.rejected
New in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections the server rejects due to connection establishment rate limiting. This metric shows how many connection attempts the server rejected because they exceeded the rate limits set by the ingress connection establishment rate limiter parameters.

connections.establishmentRateLimit.exempted
New in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections that bypassed the rate limiter because they originated from IP addresses or CIDR ranges specified in the ingressConnectionEstablishmentRateLimiterBypass parameter. The server does not rate limit these connections and establishes them immediately, regardless of current queue size or rate limits.

connections.establishmentRateLimit.interruptedDueToClientDisconnect
New in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections that were interrupted while waiting in the establishment queue because the client disconnected before establishment could complete. A high value for this metric sometimes indicates that the client's connectTimeoutMS setting is too short relative to the queue wait time, which is affected by ingressConnectionEstablishmentMaxQueueDepth and ingressConnectionEstablishmentRatePerSec. If this value is high, consider adjusting these parameters using the following formula: maxQueueDepth < (establishmentRatePerSec / 1000) * (connectTimeoutMs - avgEstablishmentTimeMs).
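A mongosh sketch that evaluates that inequality against the live parameter values. The connectTimeoutMs and avgEstablishmentTimeMs values are placeholders you must supply from your driver settings and your own measurements:
// Read the rate limiter parameters from the server.
const params = db.adminCommand( {
   getParameter: 1,
   ingressConnectionEstablishmentMaxQueueDepth: 1,
   ingressConnectionEstablishmentRatePerSec: 1
} )

const connectTimeoutMs = 10000      // placeholder: your driver's connectTimeoutMS
const avgEstablishmentTimeMs = 5    // placeholder: measure in your deployment

// maxQueueDepth < (establishmentRatePerSec / 1000) * (connectTimeoutMs - avgEstablishmentTimeMs)
const budget = ( params.ingressConnectionEstablishmentRatePerSec / 1000 ) *
               ( connectTimeoutMs - avgEstablishmentTimeMs )
print( params.ingressConnectionEstablishmentMaxQueueDepth < budget
   ? "queue depth fits the timeout budget"
   : "consider lowering maxQueueDepth or raising connectTimeoutMS" )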
connections.active
The number of active client connections to the server. Active client connections refers to client connections that currently have operations in progress.

connections.threaded
The number of incoming connections from clients that are assigned to threads that service client requests.

New in version 5.0.

connections.exhaustIsMaster
The number of connections whose last request was an isMaster request with exhaustAllowed.

Note
If you are running MongoDB 5.0 or later, do not use the isMaster command. Instead, use hello.

connections.exhaustHello
The number of connections whose last request was a hello request with exhaustAllowed.

New in version 5.0.
defaultRWConcern
The defaultRWConcern section provides information on the local copy
of the global default read or write concern settings. The data may be
stale or out of date. See getDefaultRWConcern for more
information.
defaultRWConcern : {
   defaultReadConcern : {
      level : <string>
   },
   defaultWriteConcern : {
      w : <string> | <int>,
      wtimeout : <int>,
      j : <bool>
   },
   defaultWriteConcernSource: <string>,
   defaultReadConcernSource: <string>,
   updateOpTime : Timestamp,
   updateWallClockTime : Date,
   localUpdateWallClockTime : Date
}
defaultRWConcern.defaultReadConcern
The last known global default read concern setting.

If serverStatus does not return this field, the global default read concern has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultReadConcern.level
The last known global default read concern level setting.

If serverStatus does not return this field, the global default for this setting has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultWriteConcern
The last known global default write concern setting.

If serverStatus does not return this field, the global default write concern has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultWriteConcern.w
The last known global default w setting.

If serverStatus does not return this field, the global default for this setting has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultWriteConcern.wtimeout
The last known global default wtimeout setting.

If serverStatus does not return this field, the global default for this setting has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultWriteConcernSource
The source of the default write concern. By default, the value is "implicit". Once you set the default write concern with setDefaultRWConcern, the value becomes "global".

New in version 5.0.

defaultRWConcern.defaultReadConcernSource
The source of the default read concern. By default, the value is "implicit". Once you set the default read concern with setDefaultRWConcern, the value becomes "global".

New in version 5.0.

defaultRWConcern.updateOpTime
The timestamp when the instance last updated its copy of any global read or write concern settings. If the defaultRWConcern.defaultReadConcern and defaultRWConcern.defaultWriteConcern fields are absent, this field indicates the timestamp when the defaults were last unset.

defaultRWConcern.updateWallClockTime
The wall clock time when the instance last updated its copy of any global read or write concern settings. If the defaultRWConcern.defaultReadConcern and defaultRWConcern.defaultWriteConcern fields are absent, this field indicates the time when the defaults were last unset.

defaultRWConcern.localUpdateWallClockTime
The local system wall clock time when the instance last updated its copy of any global read or write concern setting. If this field is the only field under defaultRWConcern, the instance has never had knowledge of a global default read or write concern setting.
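For example, a minimal sketch that sets a global default write concern and then reads back the instance's local copy; run against the admin database with a role that allows setDefaultRWConcern:
db.adminCommand( {
   setDefaultRWConcern: 1,
   defaultWriteConcern: { w: "majority" }
} )

db.runCommand( { serverStatus: 1 } ).defaultRWConcern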
electionMetrics
The electionMetrics section provides information on elections
called by this mongod instance in a bid to become the
primary:
electionMetrics : {
   stepUpCmd : { called : Long("<num>"), successful : Long("<num>") },
   priorityTakeover : { called : Long("<num>"), successful : Long("<num>") },
   catchUpTakeover : { called : Long("<num>"), successful : Long("<num>") },
   electionTimeout : { called : Long("<num>"), successful : Long("<num>") },
   freezeTimeout : { called : Long("<num>"), successful : Long("<num>") },
   numStepDownsCausedByHigherTerm : Long("<num>"),
   numCatchUps : Long("<num>"),
   numCatchUpsSucceeded : Long("<num>"),
   numCatchUpsAlreadyCaughtUp : Long("<num>"),
   numCatchUpsSkipped : Long("<num>"),
   numCatchUpsTimedOut : Long("<num>"),
   numCatchUpsFailedWithError : Long("<num>"),
   numCatchUpsFailedWithNewTerm : Long("<num>"),
   numCatchUpsFailedWithReplSetAbortPrimaryCatchUpCmd : Long("<num>"),
   averageCatchUpOps : <double>
}
electionMetrics.stepUpCmd
Metrics on elections that were called by the mongod instance as part of an election handoff when the primary stepped down.

The stepUpCmd includes both the number of elections called and the number of elections that succeeded.

electionMetrics.priorityTakeover
Metrics on elections that were called by the mongod instance because its priority is higher than the primary's.

The electionMetrics.priorityTakeover includes both the number of elections called and the number of elections that succeeded.

electionMetrics.catchUpTakeover
Metrics on elections called by the mongod instance because it is more current than the primary.

The catchUpTakeover includes both the number of elections called and the number of elections that succeeded.

electionMetrics.electionTimeout
Metrics on elections called by the mongod instance because it has not been able to reach the primary within settings.electionTimeoutMillis.

The electionTimeout includes both the number of elections called and the number of elections that succeeded.

electionMetrics.freezeTimeout
Metrics on elections called by the mongod instance after its freeze period (during which the member cannot seek an election) has expired.

The electionMetrics.freezeTimeout includes both the number of elections called and the number of elections that succeeded.

electionMetrics.numStepDownsCausedByHigherTerm
Number of times the mongod instance stepped down because it saw a higher term (specifically, other member(s) participated in additional elections).

electionMetrics.numCatchUps
Number of elections where the mongod instance as the newly-elected primary had to catch up to the highest known oplog entry.

electionMetrics.numCatchUpsSucceeded
Number of times the mongod instance as the newly-elected primary successfully caught up to the highest known oplog entry.

electionMetrics.numCatchUpsAlreadyCaughtUp
Number of times the mongod instance as the newly-elected primary concluded its catchup process because it was already caught up when elected.

electionMetrics.numCatchUpsSkipped
Number of times the mongod instance as the newly-elected primary skipped the catchup process.

electionMetrics.numCatchUpsTimedOut
Number of times the mongod instance as the newly-elected primary concluded its catchup process because of the settings.catchUpTimeoutMillis limit.

electionMetrics.numCatchUpsFailedWithError
Number of times the newly-elected primary's catchup process failed with an error.

electionMetrics.numCatchUpsFailedWithNewTerm
Number of times the newly-elected primary's catchup process concluded because another member(s) had a higher term (specifically, other member(s) participated in additional elections).

electionMetrics.numCatchUpsFailedWithReplSetAbortPrimaryCatchUpCmd
Number of times the newly-elected primary's catchup process concluded because the mongod received the replSetAbortPrimaryCatchUp command.
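Each election-reason subdocument pairs called with successful, so a per-reason summary can be derived. A minimal mongosh sketch:
const em = db.serverStatus().electionMetrics
const reasons = [ "stepUpCmd", "priorityTakeover", "catchUpTakeover",
                  "electionTimeout", "freezeTimeout" ]
for ( const reason of reasons ) {
   // called and successful are Long() values; they print cleanly when interpolated.
   print( `${reason}: ${em[reason].successful} of ${em[reason].called} elections succeeded` )
}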
extra_info
extra_info : {
   note : 'fields vary by platform',
   page_faults : <num>
},
extra_info.page_faults
The total number of page faults. The extra_info.page_faults counter may increase dramatically during moments of poor performance and may correlate with limited memory environments and larger data sets. Limited and sporadic page faults do not necessarily indicate an issue.

Windows differentiates "hard" page faults involving disk I/O from "soft" page faults that only require moving pages in memory. MongoDB counts both hard and soft page faults in this statistic.
flowControl
flowControl : {
   enabled : <boolean>,
   targetRateLimit : <int>,
   timeAcquiringMicros : Long("<num>"),
   locksPerKiloOp : <double>,
   sustainerRate : <int>,
   isLagged : <boolean>,
   isLaggedCount : <int>,
   isLaggedTimeMicros : Long("<num>")
},
flowControl
A document that returns statistics on the Flow Control. With flow control enabled, as the majority commit point lag grows close to the flowControlTargetLagSeconds, writes on the primary must obtain tickets before taking locks. As such, the metrics returned are meaningful when run on the primary.

flowControl.enabled
A boolean that indicates whether Flow Control is enabled (true) or disabled (false).

See also enableFlowControl.

flowControl.targetRateLimit
When run on the primary, the maximum number of tickets that can be acquired per second.

When run on a secondary, the returned number is a placeholder.

flowControl.timeAcquiringMicros
When run on the primary, the total time write operations have waited to acquire a ticket.

When run on a secondary, the returned number is a placeholder.

flowControl.locksPerKiloOp
When run on the primary, an approximation of the number of locks taken per 1000 operations.

When run on a secondary, the returned number is a placeholder.

flowControl.sustainerRate
When run on the primary, an approximation of operations applied per second by the secondary that is sustaining the commit point.

When run on a secondary, the returned number is a placeholder.

flowControl.isLagged
When run on the primary, a boolean that indicates whether flow control has engaged. Flow control engages when the majority committed lag is greater than some percentage of the configured flowControlTargetLagSeconds.

Replication lag can occur without engaging flow control. An unresponsive secondary might lag without the replica set receiving sufficient load to engage flow control, leaving the flowControl.isLagged value at false.

For additional information, see Flow Control.

flowControl.isLaggedCount
When run on the primary, the number of times flow control has engaged since the last restart. Flow control engages when the majority committed lag is greater than some percentage of the flowControlTargetLagSeconds.

When run on a secondary, the returned number is a placeholder.

flowControl.isLaggedTimeMicros
When run on the primary, the amount of time flow control has spent being engaged since the last restart. Flow control engages when the majority committed lag is greater than some percentage of the flowControlTargetLagSeconds.

When run on a secondary, the returned number is a placeholder.
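A minimal sketch that surfaces the engagement counters; as noted above, the values are only meaningful when read from the primary:
const fc = db.serverStatus().flowControl
printjson( {
   enabled : fc.enabled,
   currentlyEngaged : fc.isLagged,
   timesEngagedSinceRestart : fc.isLaggedCount,
   totalTicketWaitMicros : fc.timeAcquiringMicros
} )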
globalLock
globalLock : {
   totalTime : Long("<num>"),
   currentQueue : {
      total : <num>,
      readers : <num>,
      writers : <num>
   },
   activeClients : {
      total : <num>,
      readers : <num>,
      writers : <num>
   }
},
globalLock
A document that reports on the database's lock state.

Generally, the locks document provides more detailed data on lock uses.

globalLock.totalTime
The time, in microseconds, since the database last started and created the globalLock. This is approximately equivalent to the total server uptime.

globalLock.currentQueue
A document that provides information concerning the number of operations queued because of a lock.

globalLock.currentQueue.total
The total number of operations queued waiting for the lock (i.e., the sum of globalLock.currentQueue.readers and globalLock.currentQueue.writers).

A consistently small queue, particularly of shorter operations, should cause no concern. The globalLock.activeClients readers and writers information provides context for this data.

globalLock.currentQueue.readers
The number of operations that are currently queued and waiting for the read lock. A consistently small read queue, particularly of shorter operations, should cause no concern.

globalLock.currentQueue.writers
The number of operations that are currently queued and waiting for the write lock. A consistently small write queue, particularly of shorter operations, is no cause for concern.

globalLock.activeClients
A document that provides information about the number of connected clients and the read and write operations performed by these clients.

Use this data to provide context for the globalLock.currentQueue data.

globalLock.activeClients.total
The total number of internal client connections to the database, including system threads as well as queued readers and writers. This metric will be higher than the total of activeClients.readers and activeClients.writers due to the inclusion of system threads.
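A minimal sketch that reads the queue and active-client counters side by side, since each provides context for the other:
const gl = db.serverStatus().globalLock
printjson( {
   queuedReaders : gl.currentQueue.readers,
   queuedWriters : gl.currentQueue.writers,
   activeReaders : gl.activeClients.readers,
   activeWriters : gl.activeClients.writers
} )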
indexBuilds
indexBuilds : {
   total : <num>,
   killedDueToInsufficientDiskSpace : <num>,
   failedDueToDataCorruption : <num>
},
indexBuilds
Provides metrics on index builds after the server last started.

indexBuilds.killedDueToInsufficientDiskSpace
Total number of index builds that were ended because of insufficient disk space. Starting in MongoDB 7.1, you can set the minimum amount of disk space required for building indexes using the indexBuildMinAvailableDiskSpaceMB parameter.

New in version 7.1.
indexBulkBuilder
indexBulkBuilder: {
   count: <long>,
   resumed: <long>,
   filesOpenedForExternalSort: <long>,
   filesClosedForExternalSort: <long>,
   spilledRanges: <long>,
   bytesSpilledUncompressed: <long>,
   bytesSpilled: <long>,
   numSorted: <long>,
   bytesSorted: <long>,
   memUsage: <long>
}
indexBulkBuilder
Provides metrics for index bulk builder operations. Use these metrics to diagnose index build issues with createIndexes, collection cloning during initial sync, index builds that resume after startup, and statistics on disk usage by the external sorter.

indexBulkBuilder.bytesSpilled
New in version 6.0.4.

The number of bytes written to disk by the external sorter.

indexBulkBuilder.bytesSpilledUncompressed
New in version 6.0.4.

The number of bytes to be written to disk by the external sorter before compression.

indexBulkBuilder.filesClosedForExternalSort
The number of times the external sorter closed a file handle to spill data to disk. Combine this value with filesOpenedForExternalSort to determine the number of open file handles in use by the external sorter.

indexBulkBuilder.filesOpenedForExternalSort
The number of times the external sorter opened a file handle to spill data to disk. Combine this value with filesClosedForExternalSort to determine the number of open file handles in use by the external sorter.

indexBulkBuilder.resumed
The number of times the bulk builder was created for a resumable index build.

indexBulkBuilder.spilledRanges
New in version 6.0.4.

The number of times the external sorter spilled to disk.
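A minimal sketch of the file-handle arithmetic described above:
const bulk = db.serverStatus().indexBulkBuilder
// Open file handles currently in use by the external sorter.
print( Number( bulk.filesOpenedForExternalSort ) - Number( bulk.filesClosedForExternalSort ) )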
indexStats
indexStats: {
   count: Long("<num>"),
   features: {
      '2d': { count: Long("<num>"), accesses: Long("<num>") },
      '2dsphere': { count: Long("<num>"), accesses: Long("<num>") },
      '2dsphere_bucket': { count: Long("<num>"), accesses: Long("<num>") },
      collation: { count: Long("<num>"), accesses: Long("<num>") },
      compound: { count: Long("<num>"), accesses: Long("<num>") },
      hashed: { count: Long("<num>"), accesses: Long("<num>") },
      id: { count: Long("<num>"), accesses: Long("<num>") },
      normal: { count: Long("<num>"), accesses: Long("<num>") },
      partial: { count: Long("<num>"), accesses: Long("<num>") },
      prepareUnique: { count: Long("<num>"), accesses: Long("<num>") }, // Added in 8.1 (and 8.0.4 and 7.0.14)
      single: { count: Long("<num>"), accesses: Long("<num>") },
      sparse: { count: Long("<num>"), accesses: Long("<num>") },
      text: { count: Long("<num>"), accesses: Long("<num>") },
      ttl: { count: Long("<num>"), accesses: Long("<num>") },
      unique: { count: Long("<num>"), accesses: Long("<num>") },
      wildcard: { count: Long("<num>"), accesses: Long("<num>") }
   }
}
indexStats
A document that reports statistics on all indexes on databases and collections in non-system namespaces only. indexStats does not report statistics on indexes in the admin, local, and config databases.

New in version 6.0.

indexStats.features
A document that provides counters for each index type and the number of accesses on each index. Each index type under indexStats.features has a count field that counts the total number of indexes for that type, and an accesses field that counts the number of accesses on that index.

New in version 6.0.
Instance Information
host : <string>,
advisoryHostFQDNs : <array>,
version : <string>,
process : <'mongod'|'mongos'>,
service : <'router'|'shard'>,
pid : Long("<num>"),
uptime : <num>,
uptimeMillis : Long("<num>"),
uptimeEstimate : Long("<num>"),
localTime : ISODate("<Date>"),
host
The system's hostname. In Unix/Linux systems, this should be the same as the output of the hostname command.

service
The role of the current MongoDB process. Possible values are router or shard.

New in version 8.0.
locks
locks : {
   <type> : {
      acquireCount : { <mode> : Long("<num>"), ... },
      acquireWaitCount : { <mode> : Long("<num>"), ... },
      timeAcquiringMicros : { <mode> : Long("<num>"), ... },
      deadlockCount : { <mode> : Long("<num>"), ... }
   },
   ...
}
locks
A document that reports, for each lock <type>, data on lock <modes>.

The possible lock <types> are:

| Lock Type | Description |
|---|---|
| ParallelBatchWriterMode | Represents a lock for parallel batch writer mode. In earlier versions, PBWM information was reported as part of the Global lock information. |
| ReplicationStateTransition | Represents the lock taken for replica set member state transitions. |
| Global | Represents global lock. |
| Database | Represents database lock. |
| Collection | Represents collection lock. |
| Mutex | Represents mutex. |
| Metadata | Represents metadata lock. |
| DDLDatabase | Represents a DDL database lock. New in version 7.1. |
| DDLCollection | Represents a DDL collection lock. New in version 7.1. |
| oplog | Represents lock on the oplog. |

The possible <modes> are:

| Lock Mode | Description |
|---|---|
| R | Represents Shared (S) lock. |
| W | Represents Exclusive (X) lock. |
| r | Represents Intent Shared (IS) lock. |
| w | Represents Intent Exclusive (IX) lock. |

All values are of the Long() type.

locks.<type>.acquireWaitCount
Number of times the locks.<type>.acquireCount lock acquisitions encountered waits because the locks were held in a conflicting mode.

locks.<type>.timeAcquiringMicros
Cumulative wait time in microseconds for the lock acquisitions.

locks.<type>.timeAcquiringMicros divided by locks.<type>.acquireWaitCount gives an approximate average wait time for the particular lock mode.
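A minimal sketch of that division for the Global lock's exclusive (W) mode; the guard is needed because a mode does not appear in the output until it has been used:
const g = db.serverStatus().locks.Global
if ( g && g.acquireWaitCount && g.acquireWaitCount.W ) {
   // Approximate average wait, in microseconds, per waited acquisition.
   print( Number( g.timeAcquiringMicros.W ) / Number( g.acquireWaitCount.W ) )
}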
logicalSessionRecordCache
logicalSessionRecordCache : {
   activeSessionsCount : <num>,
   sessionsCollectionJobCount : <num>,
   lastSessionsCollectionJobDurationMillis : <num>,
   lastSessionsCollectionJobTimestamp : <Date>,
   lastSessionsCollectionJobEntriesRefreshed : <num>,
   lastSessionsCollectionJobEntriesEnded : <num>,
   lastSessionsCollectionJobCursorsClosed : <num>,
   transactionReaperJobCount : <num>,
   lastTransactionReaperJobDurationMillis : <num>,
   lastTransactionReaperJobTimestamp : <Date>,
   lastTransactionReaperJobEntriesCleanedUp : <num>,
   sessionCatalogSize : <num>
},
logicalSessionRecordCache
Provides metrics around the caching of server sessions.

logicalSessionRecordCache.activeSessionsCount
The number of all active local sessions cached in memory by the mongod or mongos instance since the last refresh period.

logicalSessionRecordCache.sessionsCollectionJobCount
The number that tracks the number of times the refresh process has run on the config.system.sessions collection.

logicalSessionRecordCache.lastSessionsCollectionJobDurationMillis
The length in milliseconds of the last refresh.

logicalSessionRecordCache.lastSessionsCollectionJobTimestamp
The time at which the last refresh occurred.

logicalSessionRecordCache.lastSessionsCollectionJobEntriesRefreshed
The number of sessions that were refreshed during the last refresh.

logicalSessionRecordCache.lastSessionsCollectionJobEntriesEnded
The number of sessions that ended during the last refresh.

logicalSessionRecordCache.lastSessionsCollectionJobCursorsClosed
The number of cursors that were closed during the last config.system.sessions collection refresh.

logicalSessionRecordCache.transactionReaperJobCount
The number that tracks the number of times the transaction record cleanup process has run on the config.transactions collection.

logicalSessionRecordCache.lastTransactionReaperJobDurationMillis
The length (in milliseconds) of the last transaction record cleanup.

logicalSessionRecordCache.lastTransactionReaperJobTimestamp
The time of the last transaction record cleanup.

logicalSessionRecordCache.lastTransactionReaperJobEntriesCleanedUp
The number of entries in the config.transactions collection that were deleted during the last transaction record cleanup.

logicalSessionRecordCache.sessionCatalogSize
For a mongod instance: the size of its in-memory cache of the config.transactions entries. This corresponds to retryable writes or transactions whose sessions have not expired within the localLogicalSessionTimeoutMinutes.

For a mongos instance: the number of the in-memory cache of its sessions that have had transactions within the most recent localLogicalSessionTimeoutMinutes interval.
mem
mem : {
   bits : <int>,
   resident : <int>,
   virtual : <int>,
   supported : <boolean>
},
mem
A document that reports on the system architecture of the mongod and current memory use.

mem.bits
A number, either 64 or 32, that indicates whether the MongoDB instance is compiled for 64-bit or 32-bit architecture.

mem.resident
The value of mem.resident is roughly equivalent to the amount of RAM, in mebibytes (MiB), currently used by the database process. During normal use, this value tends to grow. In dedicated database servers, this number tends to approach the total amount of system memory.

mem.virtual
mem.virtual displays the quantity, in mebibytes (MiB), of virtual memory used by the mongod process.

mem.supported
A boolean that indicates whether the underlying system supports extended memory information. If this value is false and the system does not support extended memory information, then other mem values may not be accessible to the database server.

mem.note
The field mem.note appears if mem.supported is false.

The mem.note field contains the text: 'not all mem info support on this platform'.
metrics
metrics : {
   abortExpiredTransactions: {
      passes: <integer>,
      successfulKills: <integer>,
      timedOutKills: <integer>
   },
   apiVersions: {
      <appName1>: <string>,
      <appName2>: <string>,
      <appName3>: <string>
   },
   aggStageCounters : {
      <aggregation stage> : Long("<num>")
   },
   changeStreams: {
      largeEventsFailed: Long("<num>"),
      largeEventsSplit: Long("<num>"),
      showExpandedEvents: Long("<num>")
   },
   commands: {
      <command>: {
         failed: Long("<num>"),
         validator: {
            total: Long("<num>"),
            failed: Long("<num>"),
            jsonSchema: Long("<num>")
         },
         total: Long("<num>"),
         rejected: Long("<num>")
      }
   },
   cursor : {
      moreThanOneBatch : Long("<num>"),
      timedOut : Long("<num>"),
      totalOpened : Long("<num>"),
      lifespan : {
         greaterThanOrEqual10Minutes : Long("<num>"),
         lessThan10Minutes : Long("<num>"),
         lessThan15Seconds : Long("<num>"),
         lessThan1Minute : Long("<num>"),
         lessThan1Second : Long("<num>"),
         lessThan30Seconds : Long("<num>"),
         lessThan5Seconds : Long("<num>")
      },
      open : {
         noTimeout : Long("<num>"),
         pinned : Long("<num>"),
         multiTarget : Long("<num>"),
         singleTarget : Long("<num>"),
         total : Long("<num>")
      }
   },
   document : {
      deleted : Long("<num>"),
      inserted : Long("<num>"),
      returned : Long("<num>"),
      updated : Long("<num>")
   },
   dotsAndDollarsFields : {
      inserts : Long("<num>"),
      updates : Long("<num>")
   },
   getLastError : {
      wtime : { num : <num>, totalMillis : <num> },
      wtimeouts : Long("<num>"),
      default : { unsatisfiable : Long("<num>"), wtimeouts : Long("<num>") }
   },
   mongos : {
      cursor : {
         moreThanOneBatch : Long("<num>"),
         totalOpened : Long("<num>")
      }
   },
   network : { // Added in MongoDB 6.3
      totalEgressConnectionEstablishmentTimeMillis : Long("<num>"),
      totalIngressTLSConnections : Long("<num>"),
      totalIngressTLSHandshakeTimeMillis : Long("<num>"),
      totalTimeForEgressConnectionAcquiredToWireMicros : Long("<num>"),
      totalTimeToFirstNonAuthCommandMillis : Long("<num>"),
      averageTimeToCompletedTLSHandshakeMicros: Long("<num>"), // Added in MongoDB 8.2
      averageTimeToCompletedHelloMicros: Long("<num>"), // Added in MongoDB 8.2
      averageTimeToCompletedAuthMicros: Long("<num>") // Added in MongoDB 8.2
   },
   operation : {
      killedDueToClientDisconnect : Long("<num>"), // Added in MongoDB 7.1
      killedDueToDefaultMaxTimeMSExpired : Long("<num>"),
      killedDueToMaxTimeMSExpired : Long("<num>"), // Added in MongoDB 7.2
      killedDueToRangeDeletion: Long("<num>"), // Added in MongoDB 8.2
      numConnectionNetworkTimeouts : Long("<num>"), // Added in MongoDB 6.3
      scanAndOrder : Long("<num>"),
      totalTimeWaitingBeforeConnectionTimeoutMillis : Long("<num>"), // Added in MongoDB 6.3
      unsendableCompletedResponses : Long("<num>"), // Added in MongoDB 7.1
      writeConflicts : Long("<num>")
   },
   operatorCounters : {
      expressions : { <command> : Long("<num>") },
      match : { <command> : Long("<num>") }
   },
   query: {
      allowDiskUseFalse: Long("<num>"),
      updateOneOpStyleBroadcastWithExactIDCount: Long("<num>"),
      bucketAuto: {
         spilledBytes: Long("<num>"),
         spilledDataStorageSize: Long("<num>"),
         spilledRecords: Long("<num>"),
         spills: Long("<num>")
      },
      lookup: {
         hashLookup: Long("<num>"),
         hashLookupSpillToDisk: Long("<num>"),
         indexedLoopJoin: Long("<num>"),
         nestedLoopJoin: Long("<num>")
      },
      multiPlanner: {
         classicCount: Long("<num>"),
         classicMicros: Long("<num>"),
         classicWorks: Long("<num>"),
         sbeCount: Long("<num>"),
         sbeMicros: Long("<num>"),
         sbeNumReads: Long("<num>"),
         histograms: {
            classicMicros: [
               { lowerBound: Long("0"), count: Long("<num>") },
               { < Additional histogram groups not shown. > },
               { lowerBound: Long("1073741824"), count: Long("<num>") }
            ],
            classicNumPlans: [
               { lowerBound: Long("0"), count: Long("<num>") },
               { < Additional histogram groups not shown. > },
               { lowerBound: Long("32"), count: Long("<num>") }
            ],
            classicWorks: [
               { lowerBound: Long("0"), count: Long("<num>") },
               { < Additional histogram groups not shown. > },
               { lowerBound: Long("32768"), count: Long("<num>") }
            ],
            sbeMicros: [
               { lowerBound: Long("0"), count: Long("<num>") },
               { < Additional histogram groups not shown. > },
               { lowerBound: Long("1073741824"), count: Long("<num>") }
            ],
            sbeNumPlans: [
               { lowerBound: Long("0"), count: Long("<num>") },
               { < Additional histogram groups not shown. > },
               { lowerBound: Long("32"), count: Long("<num>") }
            ],
            sbeNumReads: [
               { lowerBound: Long("0"), count: Long("<num>") },
               { < Additional histogram groups not shown. > },
               { lowerBound: Long("32768"), count: Long("<num>") }
            ]
         }
      },
      planCache: {
         classic: { hits: Long("<num>"), misses: Long("<num>"), replanned: Long("<num>") },
         sbe: { hits: Long("<num>"), misses: Long("<num>"), replanned: Long("<num>") }
      },
      queryFramework: {
         aggregate: {
            classicHybrid: Long("<num>"),
            classicOnly: Long("<num>"),
            cqf: Long("<num>"),
            sbeHybrid: Long("<num>"),
            sbeOnly: Long("<num>")
         },
         find: {
            classic: Long("<num>"),
            cqf: Long("<num>"),
            sbe: Long("<num>")
         }
      }
   },
   queryExecutor: {
      scanned : Long("<num>"),
      scannedObjects : Long("<num>"),
      collectionScans : {
         nonTailable : Long("<num>"),
         total : Long("<num>")
      },
      profiler : {
         collectionScans : {
            nonTailable : Long("<num>"),
            tailable : Long("<num>"),
            total : Long("<num>")
         }
      }
   },
   record : {
      moves : Long("<num>")
   },
   repl : {
      executor : {
         pool : { inProgressCount : <num> },
         queues : { networkInProgress : <num>, sleepers : <num> },
         unsignaledEvents : <num>,
         shuttingDown : <boolean>,
         networkInterface : <string>
      },
      apply : {
         attemptsToBecomeSecondary : Long("<num>"),
         batchSize: <num>,
         batches : { num : <num>, totalMillis : <num> },
         ops : Long("<num>")
      },
      write : {
         batchSize: <num>,
         batches : { num : <num>, totalMillis : <num> }
      },
      buffer : {
         write: {
            count : Long("<num>"),
            maxSizeBytes : Long("<num>"),
            sizeBytes : Long("<num>")
         },
         apply: {
            count : Long("<num>"),
            sizeBytes : Long("<num>"),
            maxSizeBytes : Long("<num>"),
            maxCount: Long("<num>")
         }
      },
      initialSync : {
         completed : Long("<num>"),
         failedAttempts : Long("<num>"),
         failures : Long("<num>")
      },
      network : {
         bytes : Long("<num>"),
         getmores : { num : <num>, totalMillis : <num> },
         notPrimaryLegacyUnacknowledgedWrites : Long("<num>"),
         notPrimaryUnacknowledgedWrites : Long("<num>"),
         oplogGetMoresProcessed : { num : <num>, totalMillis : <num> },
         ops : Long("<num>"),
         readersCreated : Long("<num>"),
         replSetUpdatePosition : { num : Long("<num>") }
      },
      reconfig : {
         numAutoReconfigsForRemovalOfNewlyAddedFields : Long("<num>")
      },
      stateTransition : {
         lastStateTransition : <string>,
         totalOperationsKilled : Long("<num>"),
         totalOperationsRunning : Long("<num>")
      },
      syncSource : {
         numSelections : Long("<num>"),
         numTimesChoseSame : Long("<num>"),
         numTimesChoseDifferent : Long("<num>"),
         numTimesCouldNotFind : Long("<num>")
      },
      waiters : {
         opTime : Long("<num>"),
         replication : Long("<num>"),
         replCoordMutexTotalWaitTimeInOplogServerStatusMillis: Long("<num>")
      }
   },
   storage : {
      freelist : {
         search : {
            bucketExhausted : <num>,
            requests : <num>,
            scanned : <num>
         }
      }
   },
   ttl : {
      deletedDocuments : Long("<num>"),
      passes : Long("<num>"),
      subPasses : Long("<num>")
   }
}
metrics
A document that returns various statistics that reflect the current use and state of a running mongod instance.

metrics.abortExpiredTransactions
Document that returns statistics on the current state of the abortExpiredTransactions thread.

metrics.abortExpiredTransactions.passes
Indicates the number of successful passes aborting transactions older than the transactionLifetimeLimitSeconds parameter.

If the passes value stops incrementing, it indicates that the abortExpiredTransactions thread may be stuck.

metrics.abortExpiredTransactions.successfulKills
Number of expired transactions successfully ended by MongoDB.

A session is checked out from a session pool to run database operations. AbortExpiredTransactionsSessionCheckoutTimeout sets the maximum number of milliseconds for a session to be checked out when attempting to end an expired transaction.

If the expired transaction is successfully ended, MongoDB increments metrics.abortExpiredTransactions.successfulKills. If the transaction isn't successfully ended because it timed out when attempting to check out a session, MongoDB increments metrics.abortExpiredTransactions.timedOutKills.

New in version 8.1: (also available in 8.0.13)

metrics.abortExpiredTransactions.timedOutKills
Number of expired transactions unsuccessfully ended by MongoDB because it timed out when attempting to check out a session.

A session is checked out from a session pool to run database operations. AbortExpiredTransactionsSessionCheckoutTimeout sets the maximum number of milliseconds for a session to be checked out when attempting to end an expired transaction.

If the expired transaction is successfully ended, MongoDB increments metrics.abortExpiredTransactions.successfulKills. If the transaction isn't successfully ended because it timed out when attempting to check out a session, MongoDB increments metrics.abortExpiredTransactions.timedOutKills.

New in version 8.1: (also available in 8.0.13)
metrics.aggStageCounters
A document that reports on the use of aggregation pipeline stages. The fields in metrics.aggStageCounters are the names of aggregation pipeline stages. For each pipeline stage, serverStatus reports the number of times that stage has been executed.

Updated in version 5.2 (and 5.0.6).

metrics.apiVersions
A document that contains:

The name of each client application

The Stable API version that each application was configured with within the last 24-hour period

Consider the following when viewing metrics.apiVersions:

The possible returned values for each appname are:

default: The command was issued without a Stable API version specified.

1: The command was issued with Stable API version 1.

Note
You may see both return values for an appname because you can specify a Stable API version at the command level. Some of your commands may have been issued with no Stable API version, while others were issued with version 1.

API version metrics are retained for 24 hours. If no commands are issued with a specific API version from an application in the past 24 hours, that appname and API version will be removed from the metrics. This also applies to the default API version metric.

Set the appname when connecting to a MongoDB instance by specifying the appname in the connection URI. ?appName=ZZZ sets the appname to ZZZ.

Drivers accessing the Stable API can set a default appname.

If no appname is configured, a default value will be automatically populated based on the product. For example, for a MongoDB Compass connection with no appname in the URI, the metric returns: 'MongoDB Compass': [ 'default' ].

New in version 5.0.
metrics.operatorCounters
A document that reports on the use of aggregation pipeline operators and expressions.

metrics.operatorCounters.expressions
A document with a number that indicates how often Expressions ran.

To get metrics for a specific operator, such as the greater-than operator ($gt), append the operator to the command:

db.runCommand( { serverStatus: 1 } ).metrics.operatorCounters.expressions.$gt

New in version 5.0.

metrics.operatorCounters.match
A document with a number that indicates how often match expressions ran.

Match expression operators also increment as part of an aggregation pipeline $match stage. If the $match stage uses the $expr operator, the counter for $expr increments, but the component counters do not increment.

Consider the following query:

db.matchCount.aggregate( [
   { $match: { $expr: { $gt: [ "$_id", 0 ] } } }
] )

The counter for $expr increments when the query runs. The counter for $gt does not.
metrics.changeStreams
A document that reports information about change stream events larger than 16 MB.

New in version 7.0.

metrics.changeStreams.largeEventsFailed
The number of change stream events that caused a BSONObjectTooLarge exception because the event was larger than 16 MB. To prevent the exception, see $changeStreamSplitLargeEvent.

New in version 7.0: (Also available in 6.0.9 and 5.0.19)

metrics.changeStreams.largeEventsSplit
The number of change stream events larger than 16 MB that were split into smaller fragments. Events are only split if you use the $changeStreamSplitLargeEvent pipeline stage.

New in version 7.0: (Also available in 6.0.9)

metrics.changeStreams.showExpandedEvents
The number of change stream cursors with the showExpandedEvents option set to true.

The counter for showExpandedEvents increments when you:

Open a change stream cursor.

Run the explain command on a change stream cursor.

New in version 7.1.
metrics.commands
A document that reports on the use of database commands. The fields in metrics.commands are the names of database commands. For each command, serverStatus reports the total number of executions and the number of failed executions.

metrics.commands includes replSetStepDownWithForce (i.e. the replSetStepDown command with force: true) as well as the overall replSetStepDown. In earlier versions, the command reported only overall replSetStepDown metrics.

metrics.commands.<command>.failed
The number of times <command> failed on this mongod.

metrics.commands.<create or collMod>.validator
For the create and collMod commands, a document that reports on non-empty validator objects passed to the command to specify validation rules or expressions for the collection.

metrics.commands.<create or collMod>.validator.total
The number of times a non-empty validator object was passed as an option to the command on this mongod.

metrics.commands.<create or collMod>.validator.failed
The number of times a call to the command on this mongod failed with a non-empty validator object due to a schema validation error.

metrics.commands.<create or collMod>.validator.jsonSchema
The number of times a validator object with a $jsonSchema was passed as an option to the command on this mongod.

metrics.commands.<command>.total
The number of times <command> executed on this mongod.

metrics.commands.<command>.rejected
The number of times <command> was rejected on this mongod because the command or operation has an associated query setting where the reject field is true.

To set the reject field, use setQuerySettings.

New in version 8.0.

metrics.commands.update.pipeline
The number of times an aggregation pipeline was used to update documents on this mongod. Subtract this value from the total number of updates to get the number of updates made with document syntax.

The pipeline counter is only available for update and findAndModify operations.
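A minimal sketch of that subtraction:
const upd = db.serverStatus().metrics.commands.update
// Updates issued with document syntax rather than an aggregation pipeline.
print( Number( upd.total ) - Number( upd.pipeline ) )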
metrics.commands.findAndModify.pipeline
The number of times findAndModify() was used in an aggregation pipeline to update documents on this mongod.

The pipeline counter is only available for update and findAndModify operations.

metrics.commands.update.arrayFilters
The number of times an arrayFilter was used to update documents on this mongod.

The arrayFilters counter is only available for update and findAndModify operations.

metrics.commands.findAndModify.arrayFilters
The number of times an arrayFilter was used with findAndModify() to update documents on this mongod.

The arrayFilters counter is only available for update and findAndModify operations.
metrics.document
A document that reflects document access and modification patterns. Compare these values to the data in the opcounters document, which tracks the total number of operations.

metrics.document.updated
The total number of documents matched for update operations. This value is not necessarily the same as the number of documents modified by updates.

metrics.dotsAndDollarsFields
A document with a number that indicates how often insert or update operations ran using a dollar ($) prefixed name. The value does not report the exact number of operations.

When an upsert operation creates a new document, it is considered to be an insert rather than an update.

New in version 5.0.
metrics.getLastError
A document that reports on write concern use.

metrics.getLastError.wtime
A document that reports write concern operation counts with a w argument greater than 1.

metrics.getLastError.wtime.num
The total number of operations with a specified write concern (i.e. w) that wait for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1).

metrics.getLastError.wtime.totalMillis
The total amount of time in milliseconds that the mongod has spent performing write concern operations with a write concern (i.e. w) that waits for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1).

metrics.getLastError.wtimeouts
The number of times that write concern operations have timed out as a result of the wtimeout threshold. This number increments for both default and non-default write concern specifications.

metrics.getLastError.default
A document that reports on when a default write concern was used (meaning, a non-clientSupplied write concern). The possible origins of a default write concern are:

implicitDefault

customDefault

getLastErrorDefaults

Refer to the following table for information on each possible write concern origin, or provenance:

| Provenance | Description |
|---|---|
| clientSupplied | The write concern was specified in the application. |
| customDefault | The write concern originated from a custom defined default value. See setDefaultRWConcern. |
| getLastErrorDefaults | The write concern originated from the replica set's settings.getLastErrorDefaults field. |
| implicitDefault | The write concern originated from the server in absence of all other write concern specifications. |

metrics.getLastError.default.unsatisfiable
Number of times that a non-clientSupplied write concern returned the UnsatisfiableWriteConcern error code.
metrics.mongos
A document that contains metrics about mongos.

metrics.mongos.cursor
A document that contains metrics for cursors used by mongos.

metrics.mongos.cursor.moreThanOneBatch
The total number of cursors that have returned more than one batch since mongos started. Additional batches are retrieved using the getMore command.

New in version 5.0.

metrics.mongos.cursor.totalOpened
The total number of cursors that have been opened since mongos started, including cursors currently open. Differs from metrics.cursor.open.total, which is the number of currently open cursors only.

New in version 5.0.
metrics.network.totalEgressConnectionEstablishmentTimeMillis
New in version 6.3.
The total time in milliseconds to establish server connections.
metrics.network.totalIngressTLSConnections
New in version 6.3.
The total number of incoming connections to the server that use TLS. The number is cumulative and is the total after the server was started.
metrics.network.totalIngressTLSHandshakeTimeMillis
New in version 6.3.
The total time in milliseconds that incoming connections to the server have to wait for the TLS network handshake to complete. The number is cumulative and is the total after the server was started.
metrics.network.totalTimeForEgressConnectionAcquiredToWireMicros
New in version 6.3.
The total time in microseconds that operations wait between acquisition of a server connection and writing the bytes to send to the server over the network. The number is cumulative and is the total after the server was started.
metrics.network.totalTimeToFirstNonAuthCommandMillis
New in version 6.3.
The total time in milliseconds from accepting incoming connections to the server and receiving the first operation that isn't part of the connection authentication handshake. The number is cumulative and is the total after the server was started.
metrics.network.averageTimeToCompletedTLSHandshakeMicros
New in version 8.2: (also available in 8.1.1)
The average time in microseconds that it takes to complete a TLS handshake for incoming connections.
metrics.network.averageTimeToCompletedHelloMicros
New in version 8.2: (also available in 8.1.1)
The average time in microseconds between the beginning of connection establishment and the completion of the hello command. You can use this metric to tune the ingressConnectionEstablishmentMaxQueueDepth and ingressConnectionEstablishmentRatePerSec parameters to ensure that there is enough time allotted to complete connection establishment after exiting the queue.
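As an illustrative sketch, you can sample this metric and, if needed, adjust the queue depth. The setParameter call and the value 200 are assumptions for illustration only; verify the parameter names and whether they are runtime-settable on your server version:
// Sample the average hello completion time (microseconds)
db.runCommand( { serverStatus: 1 } ).metrics.network.averageTimeToCompletedHelloMicros
// Hypothetical tuning step: raise the establishment queue depth
db.adminCommand( { setParameter: 1, ingressConnectionEstablishmentMaxQueueDepth: 200 } )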
metrics.network.averageTimeToCompletedAuthMicros
New in version 8.2: (also available in 8.1.1)
The average time in microseconds that the SASL auth exchange takes to complete after the beginning of connection establishment.
metrics.operation
A document that holds counters for several types of update and query operations that MongoDB handles using special operation types.
metrics.operation.killedDueToClientDisconnect
New in version 7.1.
Total number of operations cancelled before completion because the client disconnected.
metrics.operation.killedDueToDefaultMaxTimeMSExpired
New in version 8.0.
Total number of operations that timed out due to the cluster-level default timeout, defaultMaxTimeMS.
metrics.operation.killedDueToMaxTimeMSExpired
New in version 7.2.
Total number of operations that timed out due to the operation-level timeout, cursor.maxTimeMS().
metrics.operation.killedDueToRangeDeletion
New in version 8.2.
Total number of operations terminated because of orphan range cleanup. To learn more, see terminateSecondaryReadsOnOrphanCleanup.
metrics.operation.numConnectionNetworkTimeouts
New in version 6.3.
Total number of operations that failed because of server connection acquisition time out errors.
metrics.operation.scanAndOrder
The total number of queries that return sorted results and cannot perform the sort operation using an index.
metrics.operation.totalTimeWaitingBeforeConnectionTimeoutMillis
New in version 6.3.
Total time in milliseconds that operations waited before failing because of server connection acquisition time out errors.
metrics.operation.unsendableCompletedResponses
New in version 7.1.
Total number of operations that completed server-side but did not send their response to the client because the connection between the client and server failed or disconnected.
metrics.query.bucketAuto.spilledBytes
The number of in-memory bytes spilled to disk by the $bucketAuto stage.
New in version 8.2.
metrics.query.bucketAuto.spilledDataStorageSize
The total disk space, in bytes, used by the spilled data from the $bucketAuto stage.
New in version 8.2.
metrics.query.bucketAuto.spilledRecords
The number of records spilled to disk by the $bucketAuto stage.
New in version 8.2.
metrics.query.bucketAuto.spills
The number of times the $bucketAuto stage spilled to disk.
New in version 8.2.
metrics.query.lookup
A document that provides detailed data on the use of the $lookup stage with the slot-based query execution engine. To learn more, see $lookup Optimization. These metrics are primarily intended for internal use by MongoDB.
New in version 6.1.
metrics.query.multiPlanner
Provides detailed query planning data for the slot-based query execution engine and the classic query engine. For more information on the slot-based query execution engine, see Slot-Based Query Execution Engine Pipeline Optimizations.
These metrics are primarily intended for internal use by MongoDB.
New in version 6.0.0 and 5.0.9.
metrics.query.sort.spillToDisk
The total number of writes to disk caused by sort stages.
New in version 6.2.
query.multiPlanner.classicMicros
Aggregates the total number of microseconds spent in the classic multiplanner.
query.multiPlanner.classicWorks
Aggregates the total number of "works" performed in the classic multiplanner.
query.multiPlanner.classicCount
Aggregates the total number of invocations of the classic multiplanner.
query.multiPlanner.sbeMicros
Aggregates the total number of microseconds spent in the slot-based execution engine multiplanner.
query.multiPlanner.sbeNumReads
Aggregates the total number of reads done in the slot-based execution engine multiplanner.
query.multiPlanner.sbeCount
Aggregates the total number of invocations of the slot-based execution engine multiplanner.
query.multiPlanner.histograms.classicMicros
A histogram measuring the number of microseconds spent in an invocation of the classic multiplanner.
query.multiPlanner.histograms.classicWorks
A histogram measuring the number of "works" performed during an invocation of the classic multiplanner.
query.multiPlanner.histograms.classicNumPlans
A histogram measuring the number of plans in the candidate set during an invocation of the classic multiplanner.
query.multiPlanner.histograms.sbeMicros
A histogram measuring the number of microseconds spent in an invocation of the slot-based execution engine multiplanner.
query.multiPlanner.histograms.sbeNumReads
A histogram measuring the number of reads during an invocation of the slot-based execution engine multiplanner.
query.multiPlanner.histograms.sbeNumPlans
A histogram measuring the number of plans in the candidate set during an invocation of the slot-based execution engine multiplanner.
query.planning.fastPath.express
The number of queries that use an optimized index scan plan consisting of one of the following plan stages:
EXPRESS_CLUSTERED_IXSCAN
EXPRESS_DELETE
EXPRESS_IXSCAN
EXPRESS_UPDATE
For more information on query plans, see Explain Results.
New in version 8.1.
query.planning.fastPath.idHack
The number of queries that contain the _id field. For these queries, MongoDB uses the default index on the _id field and skips all query plan analysis.
New in version 8.1.
query.queryFramework.aggregate
A document that reports on the number of aggregation operations run on each query framework. The subfields in query.queryFramework.aggregate indicate the number of times each framework was used to perform an aggregation operation.
query.queryFramework.find
A document that reports on the number of find operations run on each query framework. The subfields in query.queryFramework.find indicate the number of times each framework was used to perform a find operation.
metrics.queryExecutor.scanned
The total number of index items scanned during queries and query-plan evaluation. This counter is the same as totalKeysExamined in the output of explain().
metrics.queryExecutor.scannedObjects
The total number of documents scanned during queries and query-plan evaluation. This counter is the same as totalDocsExamined in the output of explain().
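One common use of these two counters is a rough index-efficiency check: a ratio of documents examined to index keys examined far above 1 suggests collection scans. A minimal sketch (the ratio is a heuristic, not an official metric):
const qe = db.runCommand( { serverStatus: 1 } ).metrics.queryExecutor;
// A ratio near 1 suggests index-driven access; much larger values suggest scans
print( Number(qe.scanned) > 0 ? Number(qe.scannedObjects) / Number(qe.scanned) : 0 )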
metrics.queryExecutor.collectionScans
A document that reports on the number of queries that performed a collection scan.
metrics.queryExecutor.collectionScans.nonTailable
The number of queries that performed a collection scan that did not use a tailable cursor.
metrics.queryExecutor.collectionScans.total
The total number of queries that performed a collection scan. The total consists of queries that did and did not use a tailable cursor.
metrics.queryExecutor.profiler.collectionScans.nonTailable
The number of queries that performed a collection scan on a profile collection that did not use a tailable cursor.
metrics.queryExecutor.profiler.collectionScans.tailable
The number of queries that performed a collection scan on a profile collection that used a tailable cursor.
metrics.queryExecutor.profiler.collectionScans.total
The total number of queries that performed a collection scan on a profile collection. This includes queries that used both tailable and non-tailable cursors.
metrics.record
A document that reports on data related to record allocation in the on-disk memory files.
metrics.repl
A document that reports metrics related to the replication process. The metrics.repl document appears on all mongod instances, even those that aren't members of replica sets.
metrics.repl.apply
A document that reports on the application of operations from the replication oplog.
metrics.repl.apply.batchSize
The total number of oplog operations applied. The metrics.repl.apply.batchSize is incremented with the number of operations in a batch at the batch boundaries instead of being incremented by one after each operation.
For finer granularity, see metrics.repl.apply.ops.
metrics.repl.apply.batches
metrics.repl.apply.batches reports on the oplog application process on secondary members of replica sets. See Multithreaded Replication for more information on the oplog application processes.
metrics.repl.apply.batches.totalMillis
The total amount of time in milliseconds the mongod has spent applying operations from the oplog.
metrics.repl.apply.ops
The total number of oplog operations applied. metrics.repl.apply.ops is incremented after each operation.
metrics.repl.write.batchSize
Total number of entries written to the oplog. This metric updates with the number of entries in each batch as the member finishes writing the batch to the oplog.
New in version 8.0.
metrics.repl.write.batches
Document that reports on the oplog writing process for secondary members.
New in version 8.0.
metrics.repl.write.batches.num
Total number of batches written across all databases.
New in version 8.0.
metrics.repl.write.batches.totalMillis
Total time in milliseconds the member has spent writing entries to the oplog.
New in version 8.0.
metrics.repl.buffer
MongoDB buffers oplog operations from the replication sync source buffer before applying oplog entries in a batch. metrics.repl.buffer provides a way to track oplog buffers. See Multithreaded Replication for more information on the oplog application process.
Changed in version 8.0.
Starting in MongoDB 8.0, secondaries now update the local oplog and apply changes to the database in parallel. For each batch of oplog entries, MongoDB uses two buffers:
The write buffer receives new oplog entries from the primary. The writer adds these entries to the local oplog and sends them to the applier.
The apply buffer receives new oplog entries from the writer. The applier uses these entries to update the local database.
This is a breaking change as it deprecates the older metrics.repl.buffer status metrics.
metrics.repl.buffer.apply
Provides information on the status of the oplog apply buffer.
New in version 8.0.
metrics.repl.buffer.apply.count
The current number of operations in the oplog apply buffer.
New in version 8.0.
metrics.repl.buffer.apply.maxCount
Maximum number of operations in the oplog apply buffer. mongod sets this value using a constant, which is not configurable.
New in version 8.0.
metrics.repl.buffer.apply.maxSizeBytes
Maximum size of the apply buffer. mongod sets this size using a constant, which is not configurable.
New in version 8.0.
metrics.repl.buffer.apply.sizeBytes
The current size of the contents of the oplog apply buffer.
New in version 8.0.
metrics.repl.buffer.count
Deprecated since version 8.0.
Starting in MongoDB 8.0, secondaries use separate buffers to write and apply oplog entries. For the current number of operations in the oplog buffers, see the apply.count or write.count status metrics.
metrics.repl.buffer.maxSizeBytes
Deprecated since version 8.0.
Starting in MongoDB 8.0, secondaries use separate buffers to write and apply oplog entries. For the maximum size of the buffers, see the apply.maxSizeBytes or write.maxSizeBytes status metrics.
metrics.repl.buffer.sizeBytes
Deprecated since version 8.0.
Starting in MongoDB 8.0, secondaries use separate buffers to write and apply oplog entries. For the current size of the oplog buffers, see the apply.sizeBytes or write.sizeBytes status metrics.
metrics.repl.buffer.write
Provides information on the status of the oplog write buffer.
New in version 8.0.
metrics.repl.buffer.write.count
The current number of operations in the oplog write buffer.
New in version 8.0.
metrics.repl.buffer.write.maxSizeBytes
Maximum size of the write buffer. mongod sets this value using a constant, which is not configurable.
New in version 8.0.
metrics.repl.buffer.write.sizeBytes
The current size of the contents of the oplog write buffer.
New in version 8.0.
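For example, on an 8.0+ secondary you might sample both buffers together; a minimal sketch using only the fields documented above:
const buf = db.runCommand( { serverStatus: 1 } ).metrics.repl.buffer;
// Sustained high counts can indicate the applier is falling behind the writer
printjson( { write: buf.write.count, apply: buf.apply.count } )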
metrics.repl.network
metrics.repl.network reports network use by the replication process.
metrics.repl.network.bytes
metrics.repl.network.bytes reports the total amount of data read from the replication sync source.
metrics.repl.network.getmores
metrics.repl.network.getmores reports on the getmore operations, which are requests for additional results from the oplog cursor as part of the oplog replication process.
metrics.repl.network.getmores.num
metrics.repl.network.getmores.num reports the total number of getmore operations, which are operations that request an additional set of operations from the replication sync source.
metrics.repl.network.getmores.totalMillis
metrics.repl.network.getmores.totalMillis reports the total amount of time required to collect data from getmore operations.
Note
This number can be quite large, as MongoDB will wait for more data even if the getmore operation does not initially return data.
metrics.repl.network.getmores.numEmptyBatches
The number of empty oplog batches a secondary receives from its sync source. A secondary receives an empty batch if it is fully synced with its source and either:
The getmore times out waiting for more data, or
The sync source's majority commit point has advanced since the last batch sent to this secondary.
For a primary, if the instance was previously a secondary, the number reports on the empty batches received when it was a secondary. Otherwise, for a primary, this number is 0.
metrics.repl.network.notPrimaryLegacyUnacknowledgedWrites
The number of unacknowledged (w: 0) legacy write operations (see Opcodes) that failed because the current mongod is not in PRIMARY state.
metrics.repl.network.notPrimaryUnacknowledgedWrites
The number of unacknowledged (w: 0) write operations that failed because the current mongod is not in PRIMARY state.
metrics.repl.network.oplogGetMoresProcessed
A document that reports the number of getMore commands to fetch the oplog that a node processed as a sync source.
metrics.repl.network.oplogGetMoresProcessed.num
The number of getMore commands to fetch the oplog that a node processed as a sync source.
metrics.repl.network.oplogGetMoresProcessed.totalMillis
The time, in milliseconds, that a node spent processing the getMore commands counted in metrics.repl.network.oplogGetMoresProcessed.num.
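Dividing totalMillis by num gives the average time this node spends serving each oplog getMore as a sync source; for example:
const g = db.runCommand( { serverStatus: 1 } ).metrics.repl.network.oplogGetMoresProcessed;
// Average milliseconds per oplog getMore served (guard against division by zero)
print( g.num > 0 ? Number(g.totalMillis) / Number(g.num) : 0 )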
metrics.repl.network.readersCreated
The total number of oplog query processes created. MongoDB will create a new oplog query any time an error occurs in the connection, including a timeout, or a network operation. Furthermore, metrics.repl.network.readersCreated will increment every time MongoDB selects a new source for replication.
metrics.repl.network.replSetUpdatePosition
A document that reports the number of replSetUpdatePosition commands a node sent to its sync source.
metrics.repl.network.replSetUpdatePosition.num
The number of replSetUpdatePosition commands a node sent to its sync source. replSetUpdatePosition commands are internal replication commands that communicate replication progress from nodes to their sync sources.
Note
Replica set members in the STARTUP2 state do not send the replSetUpdatePosition command to their sync source.
metrics.repl.reconfig
A document containing the number of times that member newlyAdded fields were automatically removed by the primary. When a member is first added to the replica set, the member's newlyAdded field is set to true.
New in version 5.0.
metrics.repl.reconfig.numAutoReconfigsForRemovalOfNewlyAddedFields
The number of times that newlyAdded member fields were automatically removed by the primary. When a member is first added to the replica set, the member's newlyAdded field is set to true. After the primary receives the member's heartbeat response indicating the member state is SECONDARY, RECOVERING, or ROLLBACK, the primary automatically removes the member's newlyAdded field. The newlyAdded fields are stored in the local.system.replset collection.
New in version 5.0.
metrics.repl.stateTransition
Information on user operations when the member undergoes one of the following transitions that can stop user operations:
The member steps up to become a primary.
The member steps down to become a secondary.
The member is actively performing a rollback.
metrics.repl.stateTransition.lastStateTransition
The transition being reported:
State Change: Description
"stepUp": The member steps up to become a primary.
"stepDown": The member steps down to become a secondary.
"rollback": The member is actively performing a rollback.
"": The member has not undergone any state changes.
metrics.repl.stateTransition.totalOperationsKilled
The total number of operations stopped during the mongod instance's state change.
New in version 7.3: totalOperationsKilled replaces userOperationsKilled.
metrics.repl.stateTransition.totalOperationsRunning
The total number of operations that remained running during the mongod instance's state change.
New in version 7.3: totalOperationsRunning replaces userOperationsRunning.
metrics.repl.stateTransition.userOperationsKilled
Deprecated since version 7.3: totalOperationsKilled replaces userOperationsKilled.
metrics.repl.stateTransition.userOperationsRunning
Deprecated since version 7.3: totalOperationsRunning replaces userOperationsRunning.
metrics.repl.syncSource
Information on a replica set node's sync source selection process.
metrics.repl.syncSource.numSelections
Number of times a node attempted to choose a node to sync from among the available sync source options. A node attempts to choose a node to sync from if, for example, the sync source is re-evaluated or the node receives an error from its current sync source.
metrics.repl.syncSource.numTimesChoseSame
Number of times a node kept its original sync source after re-evaluating if its current sync source was optimal.
metrics.repl.syncSource.numTimesChoseDifferent
Number of times a node chose a new sync source after re-evaluating if its current sync source was optimal.
metrics.repl.syncSource.numTimesCouldNotFind
Number of times a node could not find an available sync source when attempting to choose a node to sync from.
metrics.repl.timestamps.oldestTimestamp
The timestamp for the oldest snapshot. A snapshot is a copy of the data in a mongod instance at a specific point in time.
New in version 8.1.
metrics.repl.waiters.replication
The number of threads waiting for replicated or journaled write concern acknowledgments.
New in version 7.3.
metrics.repl.waiters.opTime
The number of threads queued for local replication optime assignments.
New in version 7.3.
metrics.repl.waiters.replCoordMutexTotalWaitTimeInOplogServerStatusMillis
The total wait time in milliseconds to acquire the replication coordinator mutex. MongoDB measures this time when it generates the server status oplog section. This metric helps you identify potential replication performance issues related to mutex contention.
New in version 8.2.
metrics.storage.freelist.search.bucketExhausted
The number of times that mongod has examined the free list without finding a large record allocation.
metrics.storage.freelist.search.requests
The number of times mongod has searched for available record allocations.
metrics.storage.freelist.search.scanned
The number of available record allocations mongod has searched.
metrics.ttl
A document that reports on the operation and resource use of the TTL index process.
metrics.ttl.deletedDocuments
The total number of documents deleted from collections with a TTL index.
metrics.ttl.invalidTTLIndexSkips
Number of TTL deletes skipped due to a TTL secondary index being present, but not valid for TTL deletion. 0 indicates all secondary TTL indexes are eligible for TTL deletion. A non-zero value indicates there is an invalid secondary TTL index. If there is an invalid secondary TTL index, you must manually modify the secondary index to use automatic TTL deletion.
New in version 8.1.
metrics.ttl.passesNumber of passes performed by the TTL background process to check for expired documents. A pass is complete when the TTL monitor has deleted as many candidate documents as it can find from all TTL indexes. For more information on the TTL index deletion process, see Deletion Process.
metrics.ttl.subPassesNumber of sub-passes performed by the TTL background process to check for expired documents. For more information on the TTL index deletion process, see Deletion Process.
metrics.cursor.moreThanOneBatch
The total number of cursors that have returned more than one batch since the server process started. Additional batches are retrieved using the getMore command.
New in version 5.0.
metrics.cursor.timedOut
The total number of cursors that have timed out since the server process started. If this number is large or growing at a regular rate, this may indicate an application error.
metrics.cursor.totalOpened
The total number of cursors that have been opened since the server process started, including cursors currently open. Differs from metrics.cursor.open.total, which is the number of currently open cursors only.
New in version 5.0.
metrics.cursor.lifespan
A document that reports the number of cursors that have lifespans within specified time periods. The cursor lifespan is the time period from when the cursor is created to when the cursor is killed using the killCursors command or the cursor has no remaining objects in the batch.
The lifespan time periods are:
< 1 second
>= 1 second to < 5 seconds
>= 5 seconds to < 15 seconds
>= 15 seconds to < 30 seconds
>= 30 seconds to < 1 minute
>= 1 minute to < 10 minutes
>= 10 minutes
New in version 5.0.
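For a quick look at every bucket at once, you can print the whole document:
printjson( db.runCommand( { serverStatus: 1 } ).metrics.cursor.lifespan )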
metrics.cursor.lifespan.greaterThanOrEqual10Minutes
The number of cursors with a lifespan >= 10 minutes.
New in version 5.0.
metrics.cursor.lifespan.lessThan10Minutes
The number of cursors with a lifespan >= 1 minute to < 10 minutes.
New in version 5.0.
metrics.cursor.lifespan.lessThan15Seconds
The number of cursors with a lifespan >= 5 seconds to < 15 seconds.
New in version 5.0.
metrics.cursor.lifespan.lessThan1Minute
The number of cursors with a lifespan >= 30 seconds to < 1 minute.
New in version 5.0.
metrics.cursor.lifespan.lessThan1Second
The number of cursors with a lifespan < 1 second.
New in version 5.0.
metrics.cursor.lifespan.lessThan30Seconds
The number of cursors with a lifespan >= 15 seconds to < 30 seconds.
New in version 5.0.
metrics.cursor.lifespan.lessThan5Seconds
The number of cursors with a lifespan >= 1 second to < 5 seconds.
New in version 5.0.
metrics.cursor.open.noTimeout
The number of open cursors with the option DBQuery.Option.noTimeout set to prevent timeout after a period of inactivity.
metrics.cursor.open.total
The number of cursors that MongoDB is maintaining for clients. Because MongoDB exhausts unused cursors, this value is typically small or zero. However, if there is a queue, stale tailable cursors, or a large number of operations, this value may increase.
metrics.cursor.open.singleTarget
The total number of cursors that only target a single shard. Only mongos instances report metrics.cursor.open.singleTarget values.
metrics.cursor.open.multiTarget
The total number of cursors that target more than one shard. Only mongos instances report metrics.cursor.open.multiTarget values.
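A quick way to correlate these counters, for instance when investigating cursor leaks, is to sample them together:
const c = db.runCommand( { serverStatus: 1 } ).metrics.cursor;
// A growing open.total alongside rising timedOut can point to cursors the application never exhausts
printjson( { open: c.open.total, noTimeout: c.open.noTimeout, timedOut: c.timedOut } )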
mirroredReads
Available on mongod only.
"mirroredReads" : { "seen" : <num>, "sent" : <num> },
mirroredReads
Available on mongod only.
A document that reports on mirrored reads. To return mirroredReads information, you must explicitly specify the inclusion:
db.runCommand( { serverStatus: 1, mirroredReads: 1 } )
mirroredReads.processedAsSecondary
New in version 6.2.
The number of mirrored reads processed by this member while a secondary.
Tip
See the mirrorReads parameter.
mirroredReads.seen
The number of operations that support mirroring received by this member.
Tip
See the mirrorReads parameter.
mirroredReads.sent
The number of mirrored reads sent by this member when primary. For example, if a read is mirrored and sent to two secondaries, the number of mirrored reads is 2.
Tip
See the mirrorReads parameter.
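Because each mirrored read can fan out to several secondaries, sent can exceed seen. An illustrative sketch that computes the average fan-out on a primary (a heuristic, not an official metric):
const m = db.runCommand( { serverStatus: 1, mirroredReads: 1 } ).mirroredReads;
// Average number of secondaries each mirrorable operation was mirrored to
print( m.seen > 0 ? Number(m.sent) / Number(m.seen) : 0 )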
network
network : {
   egress : {
      bytesIn : Long("<num>"),
      bytesOut : Long("<num>"),
      physicalBytesIn : Long("<num>"),
      physicalBytesOut : Long("<num>"),
      numRequests : Long("<num>")
   },
   bytesIn : Long("<num>"),
   bytesOut : Long("<num>"),
   physicalBytesIn : Long("<num>"),
   physicalBytesOut : Long("<num>"),
   numSlowDNSOperations : Long("<num>"),
   numSlowSSLOperations : Long("<num>"),
   numRequests : Long("<num>"),
   tcpFastOpen : {
      kernelSetting : Long("<num>"),
      serverSupported : <bool>,
      clientSupported : <bool>,
      accepted : Long("<num>")
   },
   compression : {
      snappy : {
         compressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") },
         decompressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") }
      },
      zstd : {
         compressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") },
         decompressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") }
      },
      zlib : {
         compressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") },
         decompressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") }
      }
   },
   serviceExecutors : {
      passthrough : {
         threadsRunning : <num>,
         clientsInTotal : <num>,
         clientsRunning : <num>,
         clientsWaitingForData : <num>
      },
      fixed : {
         threadsRunning : <num>,
         clientsInTotal : <num>,
         clientsRunning : <num>,
         clientsWaitingForData : <num>
      }
   },
   listenerProcessingTime : { durationMicros : <num> } // Added in MongoDB 6.3
}
network
A document that reports data on MongoDB's network use. These statistics measure ingress and egress connections, specifically the traffic seen by the mongod or mongos over network connections initiated by clients or other mongod or mongos instances.
network.egress
Reports data on the traffic from egress connections initiated by this mongod or mongos instance. In most cases, the egress connections are mongos communicating with mongod for sharding, or mongod communicating with mongod for replication. It is also possible that mongod or mongos are communicating with external services, such as mongot.
network.egress.bytesIn
The total number of logical bytes that mongod or mongos instances have received over network connections that they have initiated to other nodes/services. Logical bytes are the exact number of bytes that a given file contains.
network.egress.bytesOut
The total number of logical bytes that mongod or mongos instances have sent over network connections that they have initiated to other nodes/services. Logical bytes correspond to the number of bytes that a given file contains.
network.egress.physicalBytesIn
The total number of physical bytes that mongod or mongos instances have received over network connections that they have initiated to other nodes/services. Physical bytes are the number of bytes that actually reside on disk.
network.egress.physicalBytesOut
The total number of physical bytes that mongod or mongos instances have sent over network connections that they have initiated to other nodes/services. Physical bytes are the number of bytes that actually reside on disk.
network.egress.numRequests
The total number of distinct requests that mongod or mongos have sent and received responses to. Use this value to provide context for the network.egress.bytesIn and network.egress.bytesOut values to ensure that MongoDB's network utilization is consistent with expectations and application use.
network.bytesIn
The total number of logical bytes that the server has received over network connections initiated by clients or other mongod or mongos instances. Logical bytes are the exact number of bytes that a given file contains.
network.bytesOut
The total number of logical bytes that the server has sent over network connections initiated by clients or other mongod or mongos instances. Logical bytes correspond to the number of bytes that a given file contains.
network.physicalBytesIn
The total number of physical bytes that the server has received over network connections initiated by clients or other mongod or mongos instances. Physical bytes are the number of bytes that actually reside on disk.
network.physicalBytesOut
The total number of physical bytes that the server has sent over network connections initiated by clients or other mongod or mongos instances. Physical bytes are the number of bytes that actually reside on disk.
network.numSlowDNSOperations
The total number of DNS resolution operations which took longer than 1 second.
network.numSlowSSLOperations
The total number of SSL handshake operations which took longer than 1 second.
network.numRequests
The total number of distinct requests that the server has received. Use this value to provide context for the network.bytesIn and network.bytesOut values to ensure that MongoDB's network utilization is consistent with expectations and application use.
network.tcpFastOpen
A document that reports data on MongoDB's support and use of TCP Fast Open (TFO) connections.
network.tcpFastOpen.kernelSetting
Linux only
Returns the value of /proc/sys/net/ipv4/tcp_fastopen:
0 - TCP Fast Open is disabled on the system.
1 - TCP Fast Open is enabled for outgoing connections.
2 - TCP Fast Open is enabled for incoming connections.
3 - TCP Fast Open is enabled for incoming and outgoing connections.
network.tcpFastOpen.serverSupported
Returns true if the host operating system supports inbound TCP Fast Open (TFO) connections. Returns false if the host operating system does not support inbound TCP Fast Open (TFO) connections.
network.tcpFastOpen.clientSupported
Returns true if the host operating system supports outbound TCP Fast Open (TFO) connections. Returns false if the host operating system does not support outbound TCP Fast Open (TFO) connections.
network.tcpFastOpen.accepted
The total number of accepted incoming TCP Fast Open (TFO) connections to the mongod or mongos since the mongod or mongos last started.
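For example, to confirm whether TFO is supported and being used on this node:
printjson( db.runCommand( { serverStatus: 1 } ).network.tcpFastOpen )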
network.compression
A document that reports on the amount of data compressed and decompressed by each network compressor library.
network.compression.snappy
A document that returns statistics on the number of bytes that have been compressed and decompressed with the snappy library.
network.compression.zstd
A document that returns statistics on the number of bytes that have been compressed and decompressed with the zstd library.
network.compression.zlib
A document that returns statistics on the number of bytes that have been compressed and decompressed with the zlib library.
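Each library reports compressor and decompressor sub-documents with bytesIn and bytesOut (see the network schema above). A sketch computing an outbound compression ratio for snappy, assuming snappy is the negotiated compressor and reading bytesIn as the compressor's uncompressed input:
const snappy = db.runCommand( { serverStatus: 1 } ).network.compression.snappy;
// Ratio of uncompressed input to compressed output; higher means better compression
print( Number(snappy.compressor.bytesOut) > 0 ? Number(snappy.compressor.bytesIn) / Number(snappy.compressor.bytesOut) : 0 )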
network.serviceExecutors
New in version 5.0.
A document that reports data on the service executors, which run operations for client requests.
network.serviceExecutors.passthrough
New in version 5.0.
A document that reports data about the threads and clients for the passthrough service executor. The passthrough service executor creates a new thread for each client and destroys the thread after the client ends.
network.serviceExecutors.passthrough.threadsRunning
New in version 5.0.
Number of threads running in the passthrough service executor.
network.serviceExecutors.passthrough.clientsInTotal
New in version 5.0.
Total number of clients allocated to the passthrough service executor. A client can be allocated to the passthrough service executor and not currently running requests.
network.serviceExecutors.passthrough.clientsRunning
New in version 5.0.
Number of clients currently using the passthrough service executor to run requests.
network.serviceExecutors.passthrough.clientsWaitingForData
New in version 5.0.
Number of clients using the passthrough service executor that are waiting for incoming data from the network.
network.serviceExecutors.fixed
New in version 5.0.
A document that reports data about the threads and clients for the fixed service executor. The fixed service executor has a fixed number of threads. A thread is temporarily assigned to a client and the thread is preserved after the client ends.
network.serviceExecutors.fixed.threadsRunning
New in version 5.0.
Number of threads running in the fixed service executor.
network.serviceExecutors.fixed.clientsInTotal
New in version 5.0.
Total number of clients allocated to the fixed service executor. A client can be allocated to the fixed service executor and not currently running requests.
network.serviceExecutors.fixed.clientsRunning
New in version 5.0.
Number of clients currently using the fixed service executor to run requests.
network.serviceExecutors.fixed.clientsWaitingForData
New in version 5.0.
Number of clients using the fixed service executor that are waiting for incoming data from the network.
opLatencies
opLatencies : { reads : <document>, writes : <document>, commands : <document>, transactions : <document> },
opLatencies
A document containing operation latencies for the instance as a whole. See latencyStats Document for a description of this document.
Starting in MongoDB 6.2, the opLatencies metric reports for both mongod and mongos instances. Latencies reported by mongos include operation latency time and communication time between the mongod and mongos instances.
To include the histogram in the opLatencies output, run the following command:
db.runCommand( { serverStatus: 1, opLatencies: { histograms: true } } ).opLatencies
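Each of these sub-documents is a latencyStats document with a cumulative latency total (in microseconds) and an ops count, so an average per-operation latency can be derived; a minimal sketch for reads, assuming those two field names from the latencyStats Document reference:
const reads = db.runCommand( { serverStatus: 1 } ).opLatencies.reads;
// Average read latency in microseconds since startup
print( reads.ops > 0 ? Number(reads.latency) / Number(reads.ops) : 0 )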
opWorkingTime
opWorkingTime : { commands : <document>, reads : <document>, writes : <document>, transactions : <document> }
opWorkingTime
Document that includes information on operation execution for the instance. See latencyStats Document for a description of this document.
The fields under opWorkingTime are measured in workingMillis, which is the amount of time that MongoDB spends working on that operation. This means that factors such as waiting for locks and flow control don't affect opWorkingTime.
To include the histogram in the opWorkingTime output, run the following command:
db.runCommand( { serverStatus: 1, opWorkingTime: { histogram: true } } ).opWorkingTime
New in version 8.0.
opWorkingTime.commands
Document that reports execution statistics for database commands.
New in version 8.0.
opWorkingTime.reads
Document that reports execution statistics for read operations.
New in version 8.0.
opReadConcernCounters
Only for mongod instances
opReadConcernCounters : { available : Long("<num>"), linearizable : Long("<num>"), local : Long("<num>"), majority : Long("<num>"), snapshot : Long("<num>"), none : Long("<num>") }
opReadConcernCounters
Removed in version 5.0. Replaced by readConcernCounters.
A document that reports on the read concern level specified by query operations to the mongod instance since it last started.
Specified read concern: Description
"available": Number of query operations that specified read concern level "available".
"linearizable": Number of query operations that specified read concern level "linearizable".
"local": Number of query operations that specified read concern level "local".
"majority": Number of query operations that specified read concern level "majority".
"snapshot": Number of query operations that specified read concern level "snapshot".
"none": Number of query operations that did not specify a read concern level and instead used the default read concern level.
The sum of the opReadConcernCounters equals opcounters.query.
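On versions that still report this section (it was removed in 5.0), that relationship can be checked directly; a sketch:
const s = db.runCommand( { serverStatus: 1 } );
// Sum every read concern counter and compare with opcounters.query
const sum = Object.values( s.opReadConcernCounters ).reduce( ( a, v ) => a + Number( v ), 0 );
print( sum === Number( s.opcounters.query ) )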
opWriteConcernCounters
Only for mongod instances
opWriteConcernCounters : {
   insert : {
      wmajority : Long("<num>"),
      wnum : { <num> : Long("<num>"), ... },
      wtag : { <tag1> : Long("<num>"), ... },
      none : Long("<num>"),
      noneInfo : {
         CWWC : {
            wmajority : Long("<num>"),
            wnum : { <num> : Long("<num>"), ... },
            wtag : { <tag1> : Long("<num>"), ... }
         },
         implicitDefault : {
            wmajority : Long("<num>"),
            wnum : { <num> : Long("<num>"), ... }
         }
      }
   },
   update : {
      wmajority : Long("<num>"),
      wnum : { <num> : Long("<num>"), ... },
      wtag : { <tag1> : Long("<num>"), ... },
      none : Long("<num>"),
      noneInfo : {
         CWWC : {
            wmajority : Long("<num>"),
            wnum : { <num> : Long("<num>"), ... },
            wtag : { <tag1> : Long("<num>"), ... }
         },
         implicitDefault : {
            wmajority : Long("<num>"),
            wnum : { <num> : Long("<num>"), ... }
         }
      }
   },
   delete : {
      wmajority : Long("<num>"),
      wnum : { <num> : Long("<num>"), ... },
      wtag : { <tag1> : Long("<num>"), ... },
      none : Long("<num>"),
      noneInfo : {
         CWWC : {
            wmajority : Long("<num>"),
            wnum : { <num> : Long("<num>"), ... },
            wtag : { <tag1> : Long("<num>"), ... }
         },
         implicitDefault : {
            wmajority : Long("<num>"),
            wnum : { <num> : Long("<num>"), ... }
         }
      }
   }
}
opWriteConcernCounters
A document that reports on the write concerns specified by write operations to the mongod instance since it last started.
More specifically, opWriteConcernCounters reports on the w: <value> specified by the write operations. The journal flag option (j) and the timeout option (wtimeout) of the write concerns do not affect the count. The count is incremented even if the operation times out.
Note
Only available when the reportOpWriteConcernCountersInServerStatus parameter is set to true (false by default).
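For example, enabling the counters at startup might look like this (a sketch; confirm for your version whether the parameter can also be set at runtime):
mongod --setParameter reportOpWriteConcernCountersInServerStatus=true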
opWriteConcernCounters.insert
A document that reports on the w: <value> specified by insert operations to the mongod instance since it last started:
Note
Only available when the reportOpWriteConcernCountersInServerStatus parameter is set to true (false by default).
insert : {
   wmajority : Long("<num>"),
   wnum : { <num> : Long("<num>"), ... },
   wtag : { <tag1> : Long("<num>"), ... },
   none : Long("<num>"),
   noneInfo : {
      CWWC : { wmajority : Long("<num>"), wnum : {}, wtag : {} },
      implicitDefault : { wmajority : Long("<num>"), wnum : {} }
   }
},
Specified w: Description
"wmajority": Number of insert operations that specified w: "majority".
"wnum": Number of insert operations that specified w: <num>. The counts are grouped by the specific <num>.
"wtag": Number of insert operations that specified w: <tag>. The counts are grouped by the specific <tag>.
"none": Number of insert operations that did not specify a w value. These operations use the default w value of "majority".
"noneInfo": Number of non-transaction query operations that use default write concerns. The metrics track usage of the cluster wide write concern (the global default write concern) and the implicit default write concern. The sum of the values in opWriteConcernCounters.noneInfo should equal the value of opWriteConcernCounters.none.
The sum of the opWriteConcernCounters.insert values equals opcounters.insert.
opWriteConcernCounters.update
A document that reports on the w: <value> specified by update operations to the mongod instance since it last started:
Note
Only available when the reportOpWriteConcernCountersInServerStatus parameter is set to true (false by default).
update : {
   wmajority : Long("<num>"),
   wnum : { <num> : Long("<num>"), ... },
   wtag : { <tag1> : Long("<num>"), ... },
   none : Long("<num>"),
   noneInfo : {
      CWWC : { wmajority : Long("<num>"), wnum : {}, wtag : {} },
      implicitDefault : { wmajority : Long("<num>"), wnum : {} }
   }
},
Specified w: Description
"wmajority": Number of update operations that specified w: "majority".
"wnum": Number of update operations that specified w: <num>. The counts are grouped by the specific <num>.
"wtag": Number of update operations that specified w: <tag>. The counts are grouped by the specific <tag>.
"none": Number of update operations that did not specify a w value. These operations use the default w value of 1.
"noneInfo": Number of non-transaction query operations that use default write concerns. The metrics track usage of the cluster wide write concern (the global default write concern) and the implicit default write concern. The sum of the values in opWriteConcernCounters.noneInfo should equal the value of opWriteConcernCounters.none.
The sum of the opWriteConcernCounters.update values equals opcounters.update.
opWriteConcernCounters.delete
A document that reports on the w: <value> specified by delete operations to the mongod instance since it last started:
Note
Only available when the reportOpWriteConcernCountersInServerStatus parameter is set to true (false by default).
delete : {
   wmajority : Long("<num>"),
   wnum : { <num> : Long("<num>"), ... },
   wtag : { <tag1> : Long("<num>"), ... },
   none : Long("<num>"),
   noneInfo : {
      CWWC : { wmajority : Long("<num>"), wnum : {}, wtag : {} },
      implicitDefault : { wmajority : Long("<num>"), wnum : {} }
   }
}
Specified w: Description
"wmajority": Number of delete operations that specified w: "majority".
"wnum": Number of delete operations that specified w: <num>. The counts are grouped by the specific <num>.
"wtag": Number of delete operations that specified w: <tag>. The counts are grouped by the specific <tag>.
"none": Number of delete operations that did not specify a w value. These operations use the default w value of 1.
"noneInfo": Number of non-transaction query operations that use default write concerns. The metrics track usage of the cluster wide write concern (the global default write concern) and the implicit default write concern. The sum of the values in opWriteConcernCounters.noneInfo should equal the value of opWriteConcernCounters.none.
The sum of the opWriteConcernCounters.delete values equals opcounters.delete.
opcounters
opcounters : { insert : Long("<num>"), query : Long("<num>"), update : Long("<num>"), delete : Long("<num>"), getmore : Long("<num>"), command : Long("<num>"), },
opcounters
A document that reports on database operations by type since the mongod instance last started.
These numbers will grow over time until next restart. Analyze these values over time to track database utilization.
Note
The data in opcounters treats operations that affect multiple documents, such as bulk insert or multi-update operations, as a single operation. See metrics.document for more granular document-level operation tracking.
Additionally, these values reflect received operations, and increment even when operations are not successful.
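To see that distinction in practice, compare the operation-level and document-level counters around a multi-document update; db.test here is a placeholder collection:
db.test.updateMany( {}, { $inc: { n: 1 } } )  // one operation that may match many documents
const s = db.runCommand( { serverStatus: 1 } );
// Cumulative counters: the updateMany adds 1 to opcounters.update and one per matched document to metrics.document.updated
printjson( { operations: s.opcounters.update, documents: s.metrics.document.updated } )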
opcounters.insert
The total number of insert operations received since the mongod instance last started.
opcounters.query
The total number of queries received since the mongod instance last started. Starting in MongoDB 7.1, aggregations count as query operations and increment this value.
opcounters.update
The total number of update operations received since the mongod instance last started.
opcounters.delete
The total number of delete operations since the mongod instance last started.
opcounters.getmore
The total number of getMore operations since the mongod instance last started. This counter can be high even if the query count is low. Secondary nodes send getMore operations as part of the replication process.
opcounters.command
The total number of commands issued to the database since the mongod instance last started. opcounters.command counts all commands except operations that the other opcounters fields count separately, such as inserts, updates, deletes, queries, and getMore operations.
opcounters.deprecated
opQuery counts the number of requests for opcodes that are deprecated in MongoDB 5.0 but are temporarily supported. This section only appears in the db.serverStatus() output when a deprecated opcode has been used. The counter is reset when mongod starts.
deprecated: {
   opQuery: Long("<num>")
}
opcountersRepl
The returned opcountersRepl.* values are type NumberLong.
opcountersRepl : { insert : Long("<num>"), query : Long("<num>"), update : Long("<num>"), delete : Long("<num>"), getmore : Long("<num>"), command : Long("<num>"), },
opcountersRepl
A document that reports on database replication operations by type since the mongod instance last started.
These values only appear when the current host is a member of a replica set.
These values will differ from the opcounters values because of how MongoDB serializes operations during replication. See Replication for more information on replication.
These numbers will grow over time in response to database use until next restart. Analyze these values over time to track database utilization.
opcountersRepl.insert
The total number of replicated insert operations since the mongod instance last started.
opcountersRepl.query
The total number of replicated queries since the mongod instance last started.
opcountersRepl.update
The total number of replicated update operations since the mongod instance last started.
opcountersRepl.delete
The total number of replicated delete operations since the mongod instance last started.
opcountersRepl.getmore
The total number of getMore operations since the mongod instance last started. This counter can be high even if the query count is low. Secondary nodes send getMore operations as part of the replication process.
opcountersRepl.command
The total number of replicated commands issued to the database since the mongod instance last started.
oplogTruncation
oplogTruncation : {
   totalTimeProcessingMicros : Long("<num>"),
   processingMethod : <string>,
   oplogMinRetentionHours : <double>,
   totalTimeTruncatingMicros : Long("<num>"),
   truncateCount : Long("<num>")
},
oplogTruncation
A document that reports on oplog truncations.
The field only appears when the current instance is a member of a replica set and uses either the WiredTiger Storage Engine or In-Memory Storage Engine for Self-Managed Deployments.
Available in the WiredTiger Storage Engine.
oplogTruncation.totalTimeProcessingMicros
The total time taken, in microseconds, to scan or sample the oplog to determine the oplog truncation points. totalTimeProcessingMicros is only meaningful if the mongod instance started on existing data files (i.e. not meaningful for In-Memory Storage Engine for Self-Managed Deployments). See oplogTruncation.processingMethod.
Available in the WiredTiger Storage Engine.
oplogTruncation.processingMethod
The method used at start up to determine the oplog truncation points. The value can be either "sampling" or "scanning". processingMethod is only meaningful if the mongod instance started on existing data files (i.e. not meaningful for In-Memory Storage Engine for Self-Managed Deployments).
Available in the WiredTiger Storage Engine.
oplogTruncation.oplogMinRetentionHours
The minimum retention period for the oplog in hours. If the oplog has exceeded the oplog size, the mongod only truncates oplog entries older than the configured retention value.
Only visible if the mongod is a member of a replica set and:
The mongod was started with the --oplogMinRetentionHours command line option or the storage.oplogMinRetentionHours configuration file option, or
The minimum retention period was configured after startup using replSetResizeOplog.
oplogTruncation.totalTimeTruncatingMicros
The cumulative time spent, in microseconds, performing oplog truncations.
Available in the WiredTiger Storage Engine.
oplogTruncation.truncateCount
The cumulative number of oplog truncations.
Available in the WiredTiger Storage Engine.
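Dividing the cumulative truncation time by the truncation count gives the average cost of a truncation pass, for example:
const t = db.runCommand( { serverStatus: 1 } ).oplogTruncation;
// Average microseconds per oplog truncation (guard against division by zero)
print( t.truncateCount > 0 ? Number(t.totalTimeTruncatingMicros) / Number(t.truncateCount) : 0 )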
planCache
New in version 7.0.
planCache : { totalQueryShapes : Long("<num>"), totalSizeEstimateBytes : Long("<num>"), classic : { hits : Long("<num>"), misses : Long("<num>"), replanned : Long("<num>"), replanned_plan_is_cached_plan : Long("<num>"), skipped : Long("<num>") }, sbe : { hits : Long("<num>"), misses: Long("<num>"), replanned : Long("<num>"), replanned_plan_is_cached_plan : Long("<num>"), skipped : Long("<num>") } }
planCache.totalQueryShapes
Approximate number of plan cache query shapes.
Prior to version 7.2, information on the number of plan cache query shapes was stored in the query.planCacheTotalQueryShapes field.
New in version 7.2.
planCache.totalSizeEstimateBytes
Total size of the plan cache in bytes.
Prior to version 7.2, information on the plan cache size was stored in the query.planCacheTotalSizeEstimateBytes field.
New in version 7.2.
planCache.classic.hits
Number of classic execution engine query plans found in the query cache and reused to avoid the query planning phase.
planCache.classic.misses
Number of classic execution engine query plans which were not found in the query cache and went through the query planning phase.
planCache.classic.replanned
Number of classic execution engine query plans that were discarded and re-optimized.
New in version 8.0: (Also available in 7.0.22)
planCache.classic.replanned_plan_is_cached_plan
Number of times the server performed a replan operation for the classic execution engine that produced a plan identical to one already in the query cache.
New in version 8.2.
planCache.classic.skipped
Number of classic execution engine query plans that were not found in the query cache because the query is ineligible for caching.
New in version 7.3.
planCache.sbe.hits
Number of slot-based execution engine query plans found in the query cache and reused to avoid the query planning phase.
planCache.sbe.misses
Number of slot-based execution engine plans which were not found in the query cache and went through the query planning phase.
planCache.sbe.replanned
Number of slot-based execution engine query plans that were discarded and re-optimized.
New in version 8.0: (Also available in 7.0.22)
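A common derived figure is the plan cache hit ratio per engine; a minimal sketch using the fields above:
const pc = db.runCommand( { serverStatus: 1 } ).planCache;
// Hit ratio per engine; the || 1 avoids dividing by zero on an idle server
const ratio = (e) => Number(e.hits) / ( ( Number(e.hits) + Number(e.misses) ) || 1 );
printjson( { classic: ratio(pc.classic), sbe: ratio(pc.sbe) } )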
profiler
profiler: { totalWrites: <integer>, activeWriters: <integer> }
profiler.totalWrites
Total number of writes to profile collections on all databases.
queryStats
New in version 7.1.
queryStats: { numEvicted: Long("<num>"), numRateLimitedRequests: Long("<num>"), queryStatsStoreSizeEstimateBytes: Long("<num>"), numQueryStatsStoreWriteErrors: Long("<num>"), numHmacApplicationErrors: Long("<num>") },
queryStats
A document that contains metrics for the $queryStats aggregation stage.
queryStats.numEvicted
Number of queries that the $queryStats virtual collection has evicted due to space constraints.
queryStats.numRateLimitedRequests
Number of times that query stats were not recorded for a query due to rate limiting.
queryStats.queryStatsStoreSizeEstimateBytes
Current estimated size of objects in the $queryStats virtual collection.
queryAnalyzers
New in version 7.0.
queryAnalyzers: { activeCollections: <integer>, totalCollections: <integer>, totalSampledReadsCount: <integer>, totalSampledWritesCount: <integer>, totalSampledReadsBytes: <integer>, totalSampledWritesBytes: <integer> }
queryAnalyzers.activeCollections
Number of collections the query analyzer actively samples.
queues
As an operation proceeds through its stages, it may enter a queue if the number of concurrent operations at the current stage exceeds a maximum threshold. This prevents excessive resource contention and provides observability into the state of the database.
New in version 8.0.
queues: {
   execution: {
      write: {
         out: Long("<num>"),
         available: Long("<num>"),
         totalTickets: Long("<num>"),
         exempt: { addedToQueue: Long("<num>"), removedFromQueue: Long("<num>"), queueLength: Long("<num>"), startedProcessing: Long("<num>"), processing: Long("<num>"), finishedProcessing: Long("<num>"), totalTimeProcessingMicros: Long("<num>"), canceled: Long("<num>"), newAdmissions: Long("<num>"), totalTimeQueuedMicros: Long("<num>") },
         normalPriority: { addedToQueue: Long("<num>"), removedFromQueue: Long("<num>"), queueLength: Long("<num>"), startedProcessing: Long("<num>"), processing: Long("<num>"), finishedProcessing: Long("<num>"), totalTimeProcessingMicros: Long("<num>"), canceled: Long("<num>"), newAdmissions: Long("<num>"), totalTimeQueuedMicros: Long("<num>") }
      },
      read: {
         out: Long("<num>"),
         available: Long("<num>"),
         totalTickets: Long("<num>"),
         exempt: { addedToQueue: Long("<num>"), removedFromQueue: Long("<num>"), queueLength: Long("<num>"), startedProcessing: Long("<num>"), processing: Long("<num>"), finishedProcessing: Long("<num>"), totalTimeProcessingMicros: Long("<num>"), canceled: Long("<num>"), newAdmissions: Long("<num>"), totalTimeQueuedMicros: Long("<num>") },
         normalPriority: { addedToQueue: Long("<num>"), removedFromQueue: Long("<num>"), queueLength: Long("<num>"), startedProcessing: Long("<num>"), processing: Long("<num>"), finishedProcessing: Long("<num>"), totalTimeProcessingMicros: Long("<num>"), canceled: Long("<num>"), newAdmissions: Long("<num>"), totalTimeQueuedMicros: Long("<num>") }
      },
      monitor: {
         timesDecreased: Long("<num>"),
         timesIncreased: Long("<num>"),
         totalAmountDecreased: Long("<num>"),
         totalAmountIncreased: Long("<num>"),
         resizeDurationMicros: Long("<num>")
      }
   },
   ingress: {
      out: Long("<num>"),
      available: Long("<num>"),
      totalTickets: Long("<num>"),
      exempt: { addedToQueue: Long("<num>"), removedFromQueue: Long("<num>"), queueLength: Long("<num>"), startedProcessing: Long("<num>"), processing: Long("<num>"), finishedProcessing: Long("<num>"), totalTimeProcessingMicros: Long("<num>"), canceled: Long("<num>"), newAdmissions: Long("<num>"), totalTimeQueuedMicros: Long("<num>") },
      normalPriority: { addedToQueue: Long("<num>"), removedFromQueue: Long("<num>"), queueLength: Long("<num>"), startedProcessing: Long("<num>"), processing: Long("<num>"), finishedProcessing: Long("<num>"), totalTimeProcessingMicros: Long("<num>"), canceled: Long("<num>"), newAdmissions: Long("<num>"), totalTimeQueuedMicros: Long("<num>") }
   },
   ingressSessionEstablishment: { // Added in MongoDB 8.2
      addedToQueue: Long("<num>"),
      removedFromQueue: Long("<num>"),
      interruptedInQueue: Long("<num>"),
      rejectedAdmissions: Long("<num>"),
      exemptedAdmissions: Long("<num>"),
      successfulAdmissions: Long("<num>"),
      attemptedAdmissions: Long("<num>"),
      averageTimeQueuedMicros: Long("<num>"),
      totalAvailableTokens: Long("<num>")
   }
}
queues.execution
New in version 8.0.
A document that returns monitoring and queue information for operations waiting to be scheduled for execution within the storage layer (concurrent transactions).
These settings are MongoDB-specific. To change the settings for concurrent read and write transactions (read and write tickets), see storageEngineConcurrentReadTransactions and storageEngineConcurrentWriteTransactions.
Important
Starting in version 7.0, MongoDB uses a default algorithm to dynamically adjust the maximum number of concurrent storage engine transactions (including both read and write tickets) to optimize database throughput during overload.
The following table summarizes how to identify overload scenarios for MongoDB post-7.0 and for earlier releases:
Version: Diagnosing Overload Scenarios
7.0 and later:
A large number of queued operations that persists for a prolonged period of time likely indicates an overload.
A concurrent storage engine transaction (ticket) availability of 0 for a prolonged period of time does not indicate an overload.
6.0 and earlier:
A large number of queued operations that persists for a prolonged period of time likely indicates an overload.
A concurrent storage engine transaction (ticket) availability of 0 for a prolonged period of time likely indicates an overload.
queues.execution.write
A document that returns Queue Information for concurrent write transactions (write tickets) allowed into the WiredTiger storage engine.
queues.execution.read
A document that returns Queue Information for concurrent read transactions (read tickets) allowed into the WiredTiger storage engine.
queues.execution.monitor
A document that returns monitoring metrics for adjustments that the system has made to the number of allowed concurrent transactions (tickets).
queues.execution.monitor.timesDecreased
The number of times the queue size was decreased.
queues.execution.monitor.timesIncreased
The number of times the queue size was increased.
queues.execution.monitor.totalAmountDecreased
The total number of operations the queue decreased by.
queues.execution.monitor.totalAmountIncreased
The total number of operations the queue increased by.
queues.execution.monitor.resizeDurationMicros
The cumulative time in microseconds that the system spent resizing the queue.
queues.ingress
New in version 8.0.
A document that returns Queue Information for ingress admission control. Use these values to protect and mitigate against resource overload by limiting the number of operations waiting for entry to the database from the network.
The maximum number of allowed concurrent operations is constrained by ingressAdmissionControllerTicketPoolSize.
queues.ingressSessionEstablishmentNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
A document that contains information about the ingress session establishment queue. This includes metrics related to connections established and processed through the connection establishment rate limiter.
queues.ingressSessionEstablishment.addedToQueueNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The number of incoming connections that the server adds to the ingress session establishment queue. This metric tracks connections that are processed through the rate limiter queue when the rate limiter is enabled.
queues.ingressSessionEstablishment.removedFromQueueNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The number of incoming connections that the server removes from the ingress session establishment queue after acquiring a connection establishment token. This metric tracks connections that have completed their wait in the rate limiter queue.
queues.ingressSessionEstablishment.interruptedInQueueNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The number of incoming connections that halt while waiting in the queue, typically due to client disconnects or server shutdown.
queues.ingressSessionEstablishment.rejectedAdmissionsNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The number of incoming connection attempts that the server rejects because the queue depth exceeded the
ingressConnectionEstablishmentMaxQueueDepthlimit. When this happens, the server immediately closes the connection rather than queuing it.
queues.ingressSessionEstablishment.exemptedAdmissionsNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The number of incoming connection attempts that bypass the rate limiter due to being on the
ingressConnectionEstablishmentRateLimiterBypasslist. Connections from IP addresses or CIDR ranges specified iningressConnectionEstablishmentRateLimiterBypassare not subject to rate limiting.
queues.ingressSessionEstablishment.successfulAdmissionsNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The total number of incoming connection attempts that the rate limiter successfully processes, either immediately or after waiting in the queue.
queues.ingressSessionEstablishment.attemptedAdmissionsNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The total number of incoming connection attempts on the rate limiter.
queues.ingressSessionEstablishment.averageTimeQueuedMicrosNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The average time in microseconds that connections spend waiting in the queue before the server processes them. This metric uses an exponentially-weighted moving average formula and can be used to tune the
ingressConnectionEstablishmentMaxQueueDepth. The value roughly equals (maxQueueDepth / establishRatePerSec) * 1e6.
queues.ingressSessionEstablishment.totalAvailableTokensNew in version 8.2: (also available in 8.1.1, 8.0.12, and 7.0.23)
The current number of available tokens in the token bucket. This represents the capacity for immediately processing new connections without queuing. When this value is
0, new connections must wait in the queue, or are rejected if the queue is full.
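Taken together, these counters show how aggressively the rate limiter is acting. A minimal sketch that derives a rejection rate from the fields above:
const s = db.serverStatus().queues.ingressSessionEstablishment;
// Fraction of connection attempts rejected because the queue was full
const attempted = s.attemptedAdmissions.toNumber();
const rejectRate = attempted > 0 ? s.rejectedAdmissions.toNumber() / attempted : 0;
print( `rejected ${ (rejectRate * 100).toFixed(2) }% of ${ attempted } attempted admissions` )
print( `average queue wait: ${ s.averageTimeQueuedMicros } micros` )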
Queue Information
out: Long("<num>"), available: Long("<num>"), totalTickets: Long("<num>"), exempt: { addedToQueue: Long("<num>"), removedFromQueue: Long("<num>"), queueLength: Long("<num>"), startedProcessing: Long("<num>"), processing: Long("<num>"), finishedProcessing: Long("<num>"), totalTimeProcessingMicros: Long("<num>"), canceled: Long("<num>"), newAdmissions: Long("<num>"), totalTimeQueuedMicros: Long("<num>") }, normalPriority: { addedToQueue: Long("<num>"), removedFromQueue: Long("<num>"), queueLength: Long("<num>"), startedProcessing: Long("<num>"), processing: Long("<num>"), finishedProcessing: Long("<num>"), totalTimeProcessingMicros: Long("<num>"), canceled: Long("<num>"), newAdmissions: Long("<num>"), totalTimeQueuedMicros: Long("<num>") }
querySettings
New in version 8.0.
querySettings: { count: <num>, rejectCount: <num>, size: <num> }
querySettingsDocument with configuration counts and usage for query settings.
Starting in MongoDB 8.0, use query settings instead of adding index filters. Index filters are deprecated starting in MongoDB 8.0.
Query settings have more functionality than index filters. Also, index filters aren't persistent and you cannot easily create index filters for all cluster nodes. To add query settings and explore examples, see
setQuerySettings.
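For example, the following sketch reads the counters and then rejects a query shape. The collection name and filter are hypothetical; see setQuerySettings for the full syntax:
// Current query settings usage on this node
printjson( db.runCommand( { serverStatus: 1 } ).querySettings )
// Reject a hypothetical query shape cluster-wide
db.adminCommand( {
   setQuerySettings: { find: "orders", filter: { status: "archived" }, $db: "test" },
   settings: { reject: true }
} )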
readConcernCounters
New in version 5.0.
readConcernCounters : { nonTransactionOps : { none : Long("<num>"), noneInfo : { CWRC : { local : Long("<num>"), available : Long("<num>"), majority : Long("<num>") }, implicitDefault : { local : Long("<num>"), available : Long("<num>") } }, local : Long("<num>"), available : Long("<num>"), majority : Long("<num>"), snapshot : { withClusterTime : Long("<num>"), withoutClusterTime : Long("<num>") }, linearizable : Long("<num>") }, transactionOps : { none : Long("<num>"), noneInfo : { CWRC : { local : Long("<num>"), available : Long("<num>"), majority : Long("<num>") }, implicitDefault : { local : Long("<num>"), available : Long("<num>") } }, local : Long("<num>"), majority : Long("<num>"), snapshot : { withClusterTime : Long("<num>"), withoutClusterTime : Long("<num>") } } },
readConcernCountersA document that reports on the read concern level specified by query operations. This document contains the
readConcernCounters.nonTransactionOpsandreadConcernCounters.transactionOpsdocuments.
readConcernCounters.nonTransactionOpsA document that reports on the read concern level specified by non-transaction query operations performed after the database server last started.
readConcernCounters.nonTransactionOps.noneNumber of non-transaction query operations that did not specify a read concern level and instead used either:
the default read concern level, or
the global default read concern configuration if it was set by the
setDefaultRWConcerncommand.
readConcernCounters.nonTransactionOps.noneInfoThe number of non-transaction query operations that use the global default read concern and an implicit-default read concern.
The sum of the values in
readConcernCounters.nonTransactionOps.noneInfoshould equal the value ofreadConcernCounters.nonTransactionOps.none.
readConcernCounters.nonTransactionOps.localNumber of non-transaction query operations that specified the
"local"read concern level.
readConcernCounters.nonTransactionOps.availableNumber of non-transaction query operations that specified the
"available"read concern level.
readConcernCounters.nonTransactionOps.majorityNumber of non-transaction query operations that specified the
"majority"read concern level.
readConcernCounters.nonTransactionOps.snapshotDocument containing non-transaction query operations that specified the
"snapshot"read concern level.
readConcernCounters.nonTransactionOps.snapshot.withClusterTimeNumber of non-transaction query operations that specified the
"snapshot"read concern level and the cluster time, which specified a point in time.
readConcernCounters.nonTransactionOps.snapshot.withoutClusterTimeNumber of non-transaction query operations that specified the
"snapshot"read concern level without the cluster time, which means a point in time was omitted and the server will read the most recently committed snapshot available to the node.
readConcernCounters.nonTransactionOps.linearizableNumber of non-transaction query operations that specified the
"linearizable"read concern level.
readConcernCounters.transactionOpsA document that reports on the read concern level specified by transaction query operations performed after the database server last started.
readConcernCounters.transactionOps.noneNumber of transaction query operations that did not specify a read concern level and instead used the default read concern level or the global default read or write concern configuration added with the
setDefaultRWConcerncommand.
readConcernCounters.transactionOps.noneInfoInformation about the global default read concern and implicit-default read concern used by transaction query operations.
readConcernCounters.transactionOps.localNumber of transaction query operations that specified the
"local"read concern level.
readConcernCounters.transactionOps.availableNumber of transaction query operations that specified the
"available"read concern level.
readConcernCounters.transactionOps.majorityNumber of transaction query operations that specified the
"majority"read concern level.
readConcernCounters.transactionOps.snapshotDocument containing transaction query operations that specified the
"snapshot"read concern level.
readConcernCounters.transactionOps.snapshot.withClusterTimeNumber of transaction query operations that specified the
"snapshot"read concern level and the cluster time, which specified a point in time.
readConcernCounters.transactionOps.snapshot.withoutClusterTimeNumber of transaction query operations that specified the
"snapshot"read concern level without the cluster time, which means a point in time was omitted and the server will read the most recently committed snapshot available to the node.
readPreferenceCounters
Available starting in MongoDB 7.2 (and 7.0.3, 6.0.11).
Available on mongod only.
readPreferenceCounters : { executedOnPrimary : { primary : { internal : Long("<num>"), external : Long("<num>") }, primaryPreferred : { internal : Long("<num>"), external : Long("<num>") }, secondary : { internal : Long("<num>"), external : Long("<num>") }, secondaryPreferred : { internal : Long("<num>"), external : Long("<num>") }, nearest : { internal : Long("<num>"), external : Long("<num>") }, tagged : { internal : Long("<num>"), external : Long("<num>") } }, executedOnSecondary : { primary : { internal : Long("<num>"), external : Long("<num>") }, primaryPreferred : { internal : Long("<num>"), external : Long("<num>") }, secondary : { internal : Long("<num>"), external : Long("<num>") }, secondaryPreferred : { internal : Long("<num>"), external : Long("<num>") }, nearest : { internal : Long("<num>"), external : Long("<num>") }, tagged : { internal : Long("<num>"), external : Long("<num>") } } }
readPreferenceCountersAvailable on mongod only.
A document that reports the number of operations received by this
mongodnode with the specified read preference.The
taggedsub-field refers to any read preference passed in with a tag.
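For example, to verify that secondary-targeted reads are landing on secondaries, compare the two sub-documents. A sketch using the fields shown above:
const rpc = db.serverStatus().readPreferenceCounters;
// secondaryPreferred reads executed on the primary may indicate no eligible secondary
printjson( { onPrimary: rpc.executedOnPrimary.secondaryPreferred, onSecondary: rpc.executedOnSecondary.secondaryPreferred } )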
repl
repl : { hosts : [ <string>, <string>, <string> ], setName : <string>, setVersion : <num>, isWritablePrimary : <boolean>, secondary : <boolean>, primary : <hostname>, me : <hostname>, electionId : ObjectId(""), userWriteBlockReason : <num>, userWriteBlockModeCounters: { Unspecified: <num>, ClusterToClusterMigrationInProgress: <num>, DiskUseThresholdExceeded: <num> }, primaryOnlyServices: { ReshardingRecipientService: { state: <string>, numInstances: <num> }, RenameCollectionParticipantService: { state: <string>, numInstances: <num> }, ShardingDDLCoordinator: { state: <string>, numInstances: <num> }, ReshardingDonorService: { state: <string>, numInstances: <num> } }, rbid : <num>, replicationProgress : [ { rid : <ObjectId>, optime : { ts: <timestamp>, term: <num> }, host : <hostname>, memberId : <num> }, ... ] timestamps : { oldestTimestamp: <timestamp> } }
replA document that reports on the replica set configuration.
replonly appears when the current host is a member of a replica set. See Replication for more information on replication.
repl.setNameA string with the name of the current replica set. This value reflects the
--replSetcommand line argument, orreplSetNamevalue in the configuration file.
repl.isWritablePrimaryA boolean that indicates whether the current node is the primary of the replica set.
repl.secondaryA boolean that indicates whether the current node is a secondary member of the replica set.
repl.primaryThe hostname and port information (
"host:port") of the current primary member of the replica set.
repl.userWriteBlockReasonA numeric value that represents the reason why user writes are blocked. This field is relevant only when you set
userWriteBlockModeto2to enable write-blocking.Possible values are:
0: Unspecified
1: ClusterToClusterMigrationInProgress
2: DiskUseThresholdExceeded
This field corresponds to the
reasonparameter specified in thesetUserWriteBlockModecommand when write-blocking is enabled.
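For example, the following sketch enables write blocking and then reads the reason back. Only the basic form is shown; the optional reason argument depends on your server version:
// Block user writes (run against the admin database)
db.adminCommand( { setUserWriteBlockMode: 1, global: true } )
// Confirm via serverStatus; the repl section must be requested
db.runCommand( { serverStatus: 1, repl: 1 } ).repl.userWriteBlockReason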
repl.userWriteBlockModeCountersA document that contains counters tracking the number of times write-blocking is enabled with different reasons since the server started.
repl.userWriteBlockModeCounters.UnspecifiedThe number of times write-blocking is enabled with the reason
Unspecifiedsince the server started.
repl.userWriteBlockModeCounters.ClusterToClusterMigrationInProgressThe number of times write-blocking is enabled with the reason
ClusterToClusterMigrationInProgresssince the server started.
repl.userWriteBlockModeCounters.DiskUseThresholdExceededThe number of times write-blocking is enabled with the reason
DiskUseThresholdExceededsince the server started.
repl.primaryOnlyServicesDocument that contains the number and status of instances of each primary service active on the server. Primary services can only start when a server is primary but can continue running to completion after the server changes state.
New in version 5.0.
repl.primaryOnlyServices.ReshardingRecipientServiceDocument that contains the state and number of instances of the
ReshardingRecipientService.Recipients are the shards that would own the chunks as a result of the resharding operation, according to the new shard key and zones.
The resharding coordinator instructs each donor and recipient shard primary to rename the temporary sharded collection. The temporary collection becomes the new resharded collection.
New in version 5.0.
repl.primaryOnlyServices.RenameCollectionParticipantServiceDocument that contains the state and number of instances of the
RenameCollectionParticipantService.The
RenameCollectionParticipantServiceensures that, after a shard receives a renameCollection request, the shard is able to resume the local rename in case of system failure.New in version 5.0.
repl.primaryOnlyServices.ShardingDDLCoordinatorDocument that contains the state and number of instances of the
ShardingDDLCoordinator.The
ShardingDDLCoordinatorservice manages DDL operations for primary databases such as: create database, drop database, renameCollection.The
ShardingDDLCoordinatorensures that one DDL operation for each database can happen at any one specific point in time within a sharded cluster.New in version 5.0.
repl.primaryOnlyServices.ReshardingDonorServiceDocument that contains the state and number of instances of the
ReshardingDonorService.Donors are the shards that own chunks of the sharded collection before the rename operation completes.
The resharding coordinator instructs each donor and recipient shard primary to rename the temporary sharded collection. The temporary collection becomes the new resharded collection.
New in version 5.0.
repl.rbidRollback identifier. Used to determine if a rollback has happened for this
mongodinstance.
repl.replicationProgressAn array with one document for each member of the replica set that reports replication progress to this member. Typically this is the primary, or secondaries if using chained replication.
To include this output, you must pass the
reploption to theserverStatus, as in the following:
db.serverStatus({ "repl": 1 })
db.runCommand({ "serverStatus": 1, "repl": 1 })
The content of the
repl.replicationProgresssection depends on the source of each member's replication. This section supports internal operation and is for internal and diagnostic use only.
repl.replicationProgress[n].ridAn ObjectId used as an ID for the members of the replica set. For internal use only.
repl.replicationProgress[n].optimeInformation regarding the last operation from the oplog that the member applied, as reported from this member.
security
security : { authentication : { saslSupportedMechsReceived : <num>, mechanisms : { MONGODB-X509 : { speculativeAuthenticate : { received : Long("<num>"), successful : Long("<num>") }, authenticate : { received : Long("<num>"), successful : Long("<num>") } }, SCRAM-SHA-1 : { speculativeAuthenticate : { received : Long("<num>"), successful : Long("<num>") }, authenticate : { received : Long("<num>"), successful : Long("<num>") } }, SCRAM-SHA-256 : { speculativeAuthenticate : { received : Long("<num>"), successful : Long("<num>") }, authenticate : { received : Long("<num>"), successful : Long("<num>") } } } }, SSLServerSubjectName: <string>, SSLServerHasCertificateAuthority: <boolean>, SSLServerCertificateExpirationDate: <date> },
securityA document that reports on:
The number of times a given authentication mechanism has been used to authenticate against the
mongodormongosinstance.The
mongod/mongosinstance's TLS/SSL certificate. (Only appears formongodormongosinstance with support for TLS)
security.authentication.saslSupportedMechsReceivedNew in version 5.0.
The number of times a
hellorequest includes a validhello.saslSupportedMechsfield.
security.authentication.mechanismsA document that reports on the number of times a given authentication mechanism has been used to authenticate against the
mongodormongosinstance. The values in the document distinguish standard authentication and speculative authentication. [1]Note
The fields in the
mechanismsdocument depend on the configuration of theauthenticationMechanismsparameter. Themechanismsdocument includes a field for each authentication mechanism supported by yourmongodormongosinstance.The following example shows the shape of the
mechanismsdocument for a deployment that only supports X.509 authentication.
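A representative shape, reconstructed from the security output skeleton above:
mechanisms : { MONGODB-X509 : { speculativeAuthenticate : { received : Long("<num>"), successful : Long("<num>") }, authenticate : { received : Long("<num>"), successful : Long("<num>") } } }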
security.authentication.mechanisms.MONGODB-X509A document that reports on the number of times X.509 has been used to authenticate against the
mongodormongosinstance.Includes total number of
X.509authentication attempts and the subset of those attempts which were speculative. [1]
security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate.receivedNumber of speculative authentication attempts received using X.509. Includes both successful and failed speculative authentication attempts. [1]
security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate.successfulNumber of successful speculative authentication attempts received using X.509. [1]
security.authentication.mechanisms.MONGODB-X509.authenticate.receivedNumber of successful and failed authentication attempts received using X.509. This value includes speculative authentication attempts received using X.509.
security.authentication.mechanisms.MONGODB-X509.authenticate.successfulNumber of successful authentication attempts received using X.509. This value includes successful speculative authentication attempts which used X.509.
[1] Speculative authentication minimizes the number of network round trips during the authentication process to optimize performance.
security.SSLServerSubjectNameThe subject name associated with the
mongod/mongosinstance's TLS/SSL certificate.
sharding
{ configsvrConnectionString : 'csRS/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019', lastSeenConfigServerOpTime : { ts : <timestamp>, t : Long("<num>") }, maxChunkSizeInBytes : Long("<num>") }
shardingA document with data regarding the sharded cluster. The
lastSeenConfigServerOpTimeis present only for amongosor a shard member, not for a config server.
sharding.lastSeenConfigServerOpTimeThe latest optime of the CSRS primary that the
mongosor the shard member has seen. The optime document includes:
ts, the Timestamp of the operation.
t, the term in which the operation was originally generated on the primary.
The
lastSeenConfigServerOpTimeis present only if the sharded cluster uses CSRS.
sharding.maxChunkSizeInBytesThe maximum size limit for a range to migrate. If this value has been updated recently on the config server, the
maxChunkSizeInBytesmay not reflect the most recent value.
shardingStatistics
When run on a member of a shard:
shardingStatistics : { countStaleConfigErrors : Long("<num>"), countDonorMoveChunkStarted : Long("<num>"), countDonorMoveChunkCommitted : Long("<num>"), countDonorMoveChunkAborted : Long("<num>"), totalDonorMoveChunkTimeMillis : Long("<num>"), totalDonorChunkCloneTimeMillis : Long("<num>"), totalCriticalSectionCommitTimeMillis : Long("<num>"), totalCriticalSectionTimeMillis : Long("<num>"), countDocsClonedOnRecipient : Long("<num>"), countBytesClonedOnRecipient : Long("<num>"), countDocsClonedOnCatchUpOnRecipient : Long("<num>"), countBytesClonedOnCatchUpOnRecipient : Long("<num>"), countDocsClonedOnDonor : Long("<num>"), countRecipientMoveChunkStarted : Long("<num>"), countDocsDeletedByRangeDeleter : Long("<num>"), countDonorMoveChunkLockTimeout : Long("<num>"), unfinishedMigrationFromPreviousPrimary : Long("<num>"), chunkMigrationConcurrency : Long("<num>"), countTransitionToDedicatedConfigServerStarted : Long("<num>"), // Added in MongoDB 8.0 countTransitionToDedicatedConfigServerCompleted : Long("<num>"), // Added in MongoDB 8.0 countTransitionFromDedicatedConfigServerCompleted : Long("<num>"), // Added in MongoDB 8.0 catalogCache : { numDatabaseEntries : Long("<num>"), numCollectionEntries : Long("<num>"), countStaleConfigErrors : Long("<num>"), totalRefreshWaitTimeMicros : Long("<num>"), numActiveIncrementalRefreshes : Long("<num>"), countIncrementalRefreshesStarted : Long("<num>"), numActiveFullRefreshes : Long("<num>"), countFullRefreshesStarted : Long("<num>"), countFailedRefreshes : Long("<num>") }, rangeDeleterTasks : <num>, configServerInShardCache : <boolean>, // Added in MongoDB 8.0 resharding : { countStarted : Long("1"), countSucceeded : Long("1"), countFailed : Long("0"), countCanceled : Long("0"), lastOpEndingChunkImbalance : Long("0"), active : { documentsCopied : Long("0"), bytesCopied : Long("0"), countWritesToStashCollections : Long("0"), countWritesDuringCriticalSection : Long("0"), countReadsDuringCriticalSection : Long("0"), oplogEntriesFetched : Long("0"), oplogEntriesApplied : Long("0"), insertsApplied : Long("0"), updatesApplied : Long("0"), deletesApplied : Long("0") }, oldestActive : { coordinatorAllShardsHighestRemainingOperationTimeEstimatedMillis : Long("0"), coordinatorAllShardsLowestRemainingOperationTimeEstimatedMillis : Long("0"), recipientRemainingOperationTimeEstimatedMillis : Long("0") }, latencies : { collectionCloningTotalRemoteBatchRetrievalTimeMillis : Long("0"), collectionCloningTotalRemoteBatchesRetrieved : Long("0"), collectionCloningTotalLocalInsertTimeMillis : Long("0"), collectionCloningTotalLocalInserts : Long("0"), oplogFetchingTotalRemoteBatchRetrievalTimeMillis : Long("0"), oplogFetchingTotalRemoteBatchesRetrieved : Long("0"), oplogFetchingTotalLocalInsertTimeMillis : Long("0"), oplogFetchingTotalLocalInserts : Long("0"), oplogApplyingTotalLocalBatchRetrievalTimeMillis : Long("0"), oplogApplyingTotalLocalBatchesRetrieved : Long("0"), oplogApplyingTotalLocalBatchApplyTimeMillis : Long("0"), oplogApplyingTotalLocalBatchesApplied : Long("0") }, currentInSteps : { countInstancesInCoordinatorState1Initializing : Long("0"), countInstancesInCoordinatorState2PreparingToDonate : Long("0"), countInstancesInCoordinatorState3Cloning : Long("0"), countInstancesInCoordinatorState4Applying : Long("0"), countInstancesInCoordinatorState5BlockingWrites : Long("0"), countInstancesInCoordinatorState6Aborting : Long("0"), countInstancesInCoordinatorState7Committing : Long("-1"), countInstancesInRecipientState1AwaitingFetchTimestamp : Long("0"), 
countInstancesInRecipientState2CreatingCollection : Long("0"), countInstancesInRecipientState3Cloning : Long("0"), countInstancesInRecipientState4Applying : Long("0"), countInstancesInRecipientState5Error : Long("0"), countInstancesInRecipientState6StrictConsistency : Long("0"), countInstancesInRecipientState7Done : Long("0"), countInstancesInDonorState1PreparingToDonate : Long("0"), countInstancesInDonorState2DonatingInitialData : Long("0"), countInstancesInDonorState3DonatingOplogEntries : Long("0"), countInstancesInDonorState4PreparingToBlockWrites : Long("0"), countInstancesInDonorState5Error : Long("0"), countInstancesInDonorState6BlockingWrites : Long("0"), countInstancesInDonorState7Done : Long("0") } } } },
When run on a mongos:
shardingStatistics : { numHostsTargeted: { find : { allShards: Long("<num>"), manyShards: Long("<num>"), oneShard: Long("<num>"), unsharded: Long("<num>") }, insert: { allShards: Long("<num>"), manyShards: Long("<num>"), oneShard: Long("<num>"), unsharded: Long("<num>") }, update: { allShards: Long("<num>"), manyShards: Long("<num>"), oneShard: Long("<num>"), unsharded: Long("<num>") }, delete: { allShards: Long("<num>"), manyShards: Long("<num>"), oneShard: Long("<num>"), unsharded: Long("<num>") }, aggregate: { allShards: Long("<num>"), manyShards: Long("<num>"), oneShard: Long("<num>"), unsharded: Long("<num>") } } }, catalogCache : { numDatabaseEntries : Long("<num>"), numCollectionEntries : Long("<num>"), countStaleConfigErrors : Long("<num>"), totalRefreshWaitTimeMicros : Long("<num>"), numActiveIncrementalRefreshes : Long("<num>"), countIncrementalRefreshesStarted : Long("<num>"), numActiveFullRefreshes : Long("<num>"), countFullRefreshesStarted : Long("<num>"), countFailedRefreshes : Long("<num>") }, configServerInShardCache : <boolean> // Added in MongoDB 8.0 }
shardingStatistics.countStaleConfigErrorsThe total number of times that threads hit a stale config exception. Since a stale config exception triggers a refresh of the metadata, this number is roughly proportional to the number of metadata refreshes.
Only present when run on a shard.
shardingStatistics.countDonorMoveChunkStartedThe total number of times that MongoDB starts the
moveChunkcommand ormoveRangecommand on the primary node of the shard as part of the range migration procedure. This increasing number does not consider whether the chunk migrations succeed or not.Only present when run on a shard.
shardingStatistics.countDonorMoveChunkCommittedThe total number of chunk migrations that MongoDB commits on the primary node of the shard.
The chunk migration is performed by
moveChunkandmoveRangecommands in a range migration procedure.Only available on a shard.
Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).
shardingStatistics.countDonorMoveChunkAbortedThe total number of chunk migrations that MongoDB aborts on the primary node of the shard.
The chunk migration is performed by
moveChunkandmoveRangecommands in a range migration procedure.Only available on a shard.
Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).
shardingStatistics.totalDonorMoveChunkTimeMillisCumulative time in milliseconds to move chunks from the current shard to another shard. For each chunk migration, the time starts when a
moveRangeormoveChunkcommand starts, and ends when the chunk is moved to another shard in a range migration procedure.Only available on a shard.
Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).
shardingStatistics.totalDonorChunkCloneTimeMillisThe cumulative time, in milliseconds, that the clone phase of the range migration procedure takes on the primary node of the shard. Specifically, for each migration on this shard, the tracked time starts with the
moveRangeandmoveChunkcommands and ends before the destination shard enters acatchupphase to apply changes that occurred during the range migration procedure.Only present when run on a shard.
shardingStatistics.totalCriticalSectionCommitTimeMillisThe cumulative time, in milliseconds, that the update metadata phase of the range migrations procedure takes on the primary node of the shard. During the update metadata phase, MongoDB blocks all operations on the collection.
Only present when run on a shard.
shardingStatistics.totalCriticalSectionTimeMillisThe cumulative time, in milliseconds, that the catch-up phase and the update metadata phase of the range migration procedure takes on the primary node of the shard.
To calculate the duration of the catch-up phase, subtract
totalCriticalSectionCommitTimeMillisfromtotalCriticalSectionTimeMillis:totalCriticalSectionTimeMillis - totalCriticalSectionCommitTimeMillis Only present when run on a shard.
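In mongosh, that calculation looks like the following sketch, run on a shard member:
const ss = db.serverStatus().shardingStatistics;
// Catch-up phase duration = total critical section time minus the commit portion
const catchUpMillis = ss.totalCriticalSectionTimeMillis.toNumber() - ss.totalCriticalSectionCommitTimeMillis.toNumber();
print( `catch-up phase: ${ catchUpMillis } ms` )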
shardingStatistics.countDocsClonedOnRecipientThe cumulative, always-increasing count of documents that MongoDB clones on the primary node of the recipient shard.
Only present when run on a shard.
shardingStatistics.countBytesClonedOnRecipientThe cumulative number of bytes that MongoDB clones on the primary node of the recipient shard during the range migration procedure.
For details about data synchronization, see Replica Set Data Synchronization.
Only available on a shard.
Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).
shardingStatistics.countDocsClonedOnCatchUpOnRecipientThe cumulative number of documents that MongoDB clones on the primary node of the recipient shard during the catch-up phase of the range migration procedure.
For details about data synchronization, see Replica Set Data Synchronization.
Only available on a shard.
Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).
shardingStatistics.countBytesClonedOnCatchUpOnRecipientThe cumulative number of bytes that MongoDB clones on the primary node of the recipient shard during the catch-up phase of the range migration procedure.
For details about data synchronization, see Replica Set Data Synchronization.
Only available on a shard.
Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).
shardingStatistics.countDocsClonedOnDonorThe cumulative, always-increasing count of documents that MongoDB clones on the primary node of the donor shard.
Only present when run on a shard.
shardingStatistics.countRecipientMoveChunkStartedCumulative, always-increasing count of chunks this member, acting as the primary of the recipient shard, has started to receive (whether the move has succeeded or not).
Only present when run on a shard.
shardingStatistics.countDocsDeletedByRangeDeleterThe cumulative, always-increasing count of documents that MongoDB deletes on the primary node of the donor shard during chunk migration.
Only present when run on a shard.
Changed in version 7.1.
shardingStatistics.countDonorMoveChunkLockTimeoutThe cumulative, always-increasing count of chunk migrations that MongoDB aborts on the primary node of the donor shard due to lock acquisition timeouts.
Only present when run on a shard.
shardingStatistics.unfinishedMigrationFromPreviousPrimaryThe number of unfinished migrations left by the previous primary after an election. This value is only updated after the newly-elected
mongodcompletes the transition to primary.Only present when run on a shard.
shardingStatistics.chunkMigrationConcurrencyThe number of threads on the source shard and the receiving shard for performing chunk migration operations.
Only present when run on a shard.
Available starting in MongoDB 6.3 (and 5.0.15).
shardingStatistics.catalogCacheA document with statistics about the cluster's routing information cache.
shardingStatistics.catalogCache.numDatabaseEntriesThe total number of database entries that are currently in the catalog cache.
shardingStatistics.catalogCache.numCollectionEntriesThe total number of collection entries (across all databases) that are currently in the catalog cache.
shardingStatistics.catalogCache.countStaleConfigErrorsThe total number of times that threads hit a stale config exception. A stale config exception triggers a refresh of the metadata.
shardingStatistics.catalogCache.totalRefreshWaitTimeMicrosThe cumulative time, in microseconds, that threads had to wait for a refresh of the metadata.
shardingStatistics.catalogCache.numActiveIncrementalRefreshesThe number of incremental catalog cache refreshes that are currently waiting to complete.
shardingStatistics.catalogCache.countIncrementalRefreshesStartedThe cumulative number of incremental refreshes that have started.
shardingStatistics.catalogCache.numActiveFullRefreshesThe number of full catalog cache refreshes that are currently waiting to complete.
shardingStatistics.catalogCache.countFullRefreshesStartedThe cumulative number of full refreshes that have started.
shardingStatistics.catalogCache.countFailedRefreshesThe cumulative number of full or incremental refreshes that have failed.
shardingStatistics.countTransitionToDedicatedConfigServerStartedNumber of times the
transitionToDedicatedConfigServercommand has started.Only present when run on a config server node.
New in version 8.0.
shardingStatistics.countTransitionToDedicatedConfigServerCompletedNumber of times the
transitionToDedicatedConfigServercommand has finished.Only present when run on a config server node.
New in version 8.0.
shardingStatistics.countTransitionFromDedicatedConfigServerCompletedNumber of times the
transitionFromDedicatedConfigServercommand has finished.Only present when run on a config server node.
New in version 8.0.
shardingStatistics.rangeDeleterTasksThe current total of the queued chunk range deletion tasks that are ready to run or are running as part of the range migration procedure.
Inspect the documents in the
config.rangeDeletionscollection for information about the chunk ranges pending deletion from a shard after a chunk migration.Only present when run on a shard member.
shardingStatistics.configServerInShardCacheA boolean that indicates whether the config server is a config shard. This value periodically refreshes, so the value of
configServerInShardCachemight be stale for up to approximately one minute in a healthy cluster. If the node can't communicate with the config server,configServerInShardCachemay remain stale for a longer period.
shardingStatistics.reshardingA document with statistics about resharding operations.
Each shard returns its own resharding operation statistics. If a shard is not involved in a resharding operation, then that shard will not contain statistics about the resharding operation.
Only present when run on a shard or config server.
New in version 5.0.
shardingStatistics.resharding.countStartedThe sum of
countSucceeded,countFailed, andcountCanceled. The sum is further incremented by1if a resharding operation has started but has not yet completed. Sum is set to 0 whenmongodis started or restarted.Only present when run on a shard or config server.
New in version 5.0.
shardingStatistics.resharding.countSucceededNumber of successful resharding operations. Number is set to 0 when
mongodis started or restarted.Only present when run on a shard or config server.
New in version 5.0.
shardingStatistics.resharding.countFailedNumber of failed resharding operations. Number is set to 0 when
mongodis started or restarted.Only present when run on a shard or config server.
New in version 5.0.
shardingStatistics.resharding.countCanceledNumber of canceled resharding operations. Number is set to 0 when
mongodis started or restarted.Only present when run on a shard or config server.
New in version 5.0.
shardingStatistics.resharding.active.documentsCopiedNumber of documents copied from donor shards to recipient shards for the current resharding operation. Number is set to 0 when a new resharding operation starts.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
Updated in version 6.1.
shardingStatistics.resharding.active.bytesCopiedNumber of bytes copied from donor shards to recipient shards for the current resharding operation. Number is set to 0 when a new resharding operation starts.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
Updated in version 6.1.
shardingStatistics.resharding.active.countWritesToStashCollectionsDuring resharding, the number of writes to the recipient stash collections.
New in version 6.1.
shardingStatistics.resharding.active.countWritesDuringCriticalSectionNumber of writes performed in the critical section for the current resharding operation. The critical section prevents new incoming writes to the collection currently being resharded. Number is set to 0 when a new resharding operation starts.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
Updated in version 6.1.
shardingStatistics.resharding.active.countReadsDuringCriticalSectionDuring resharding, the number of reads attempted during the donor's critical section.
New in version 6.1.
shardingStatistics.resharding.active.oplogEntriesFetchedNumber of entries fetched from the oplog for the current resharding operation. Number is set to 0 when a new resharding operation starts.
Only present when run on a shard or config server. Returns 0 on a config server.
Updated in version 6.1.
shardingStatistics.resharding.active.oplogEntriesAppliedNumber of entries applied to the oplog for the current resharding operation. Number is set to 0 when a new resharding operation starts.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
Updated in version 6.1.
shardingStatistics.resharding.active.insertsAppliedThe total number of insert operations applied during resharding.
New in version 6.1.
shardingStatistics.resharding.active.updatesAppliedThe total number of update operations applied during resharding.
New in version 6.1.
shardingStatistics.resharding.active.deletesAppliedThe total number of delete operations applied during resharding.
New in version 6.1.
shardingStatistics.resharding.oldestActive.coordinatorAllShardsHighestRemainingOperationTimeEstimatedMillisCalculated across all shards, the highest estimate of the number of milliseconds remaining. If the time estimate cannot be computed, the value is set to -1.
New in version 6.1.
shardingStatistics.resharding.oldestActive.coordinatorAllShardsLowestRemainingOperationTimeEstimatedMillisCalculated across all shards, the lowest estimate of the number of milliseconds remaining. If the time estimate cannot be computed, the value is set to -1.
New in version 6.1.
shardingStatistics.resharding.oldestActive.recipientRemainingOperationTimeEstimatedMillisEstimated remaining time, in milliseconds, for the current resharding operation. Prior to resharding, or when the time cannot be calculated, the value is set to -1.
If a shard is involved in multiple resharding operations, this field contains the remaining time estimate for the oldest resharding operation where this shard is a recipient.
New in version 6.1.
shardingStatistics.resharding.oldestActive.totalOperationTimeElapsedMillisTotal elapsed time, in milliseconds, for the current resharding operation. Time is set to 0 when a new resharding operation starts.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
shardingStatistics.resharding.latencies.collectionCloningTotalRemoteBatchRetrievalTimeMillisTotal time recipients spent retrieving batches of documents from donors, in milliseconds.
New in version 6.1.
shardingStatistics.resharding.latencies.collectionCloningTotalRemoteBatchesRetrievedTotal number of batches of documents recipients retrieved from donors.
New in version 6.1.
shardingStatistics.resharding.latencies.collectionCloningTotalLocalInsertTimeMillisTotal time recipients spent inserting batches of documents from donors, in milliseconds.
New in version 6.1.
shardingStatistics.resharding.latencies.collectionCloningTotalLocalInsertsTotal number of batches of documents from donors that recipients inserted.
New in version 6.1.
shardingStatistics.resharding.latencies.oplogFetchingTotalRemoteBatchRetrievalTimeMillisTotal time recipients spent retrieving batches of oplog entries from donors, in milliseconds.
New in version 6.1.
shardingStatistics.resharding.latencies.oplogFetchingTotalRemoteBatchesRetrievedTotal number of batches of oplog entries recipients retrieved from donors.
New in version 6.1.
shardingStatistics.resharding.latencies.oplogFetchingTotalLocalInsertTimeMillisTotal time recipients spent inserting batches of oplog entries from donors, in milliseconds.
New in version 6.1.
shardingStatistics.resharding.latencies.oplogFetchingTotalLocalInsertsTotal number of batches of oplog entries from donors that recipients inserted.
New in version 6.1.
shardingStatistics.resharding.latencies.oplogApplyingTotalLocalBatchRetrievalTimeMillisTotal time recipients spent retrieving batches of oplog entries that were inserted during fetching, in milliseconds.
New in version 6.1.
shardingStatistics.resharding.latencies.oplogApplyingTotalLocalBatchesRetrievedTotal number of batches of oplog entries that were inserted during fetching that recipients retrieved.
New in version 6.1.
shardingStatistics.resharding.latencies.oplogApplyingTotalLocalBatchApplyTimeMillisTotal time recipients spent applying batches of oplog entries, in milliseconds.
New in version 6.1.
shardingStatistics.resharding.latencies.oplogApplyingTotalLocalBatchesAppliedTotal number of batches of oplog entries that recipients applied.
New in version 6.1.
shardingStatistics.resharding.totalApplyTimeElapsedMillisTotal elapsed time, in milliseconds, for the apply step of the current resharding operation. In the apply step, recipient shards modify their data based on new incoming writes from donor shards. Time is set to 0 when a new resharding operation starts.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
shardingStatistics.resharding.totalCriticalSectionTimeElapsedMillisTotal elapsed time, in milliseconds, for the critical section of the current resharding operation. The critical section prevents new incoming writes to the collection currently being resharded. Time is set to 0 when a new resharding operation starts.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
shardingStatistics.resharding.donorStateState of the donor shard for the current resharding operation. Number is set to 0 when a new resharding operation starts.
0 (unused): The shard is not a donor in the current resharding operation.
1 (preparing-to-donate): The donor shard is preparing to donate data to the recipient shards.
2 (donating-initial-data): The donor shard is donating data to the recipient shards.
3 (donating-oplog-entries): The donor shard is donating oplog entries to the recipient shards.
4 (preparing-to-block-writes): The donor shard is about to prevent new incoming write operations to the collection that is being resharded.
5 (error): An error occurred during the resharding operation.
6 (blocking-writes): The donor shard is preventing new incoming write operations and has notified all recipient shards that new incoming writes are prevented.
7 (done): The donor shard has dropped the old sharded collection and the resharding operation is complete.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
shardingStatistics.resharding.recipientStateState of the recipient shard for the current resharding operation. Number is set to 0 when a new resharding operation starts.
0 (unused): The shard is not a recipient in the current resharding operation.
1 (awaiting-fetch-timestamp): The recipient shard is waiting for the donor shards to be prepared to donate their data.
2 (creating-collection): The recipient shard is creating the new sharded collection.
3 (cloning): The recipient shard is receiving data from the donor shards.
4 (applying): The recipient shard is applying oplog entries to modify its copy of the data based on the new incoming writes from donor shards.
5 (error): An error occurred during the resharding operation.
6 (strict-consistency): The recipient shard has all data changes stored in a temporary collection.
7 (done): The resharding operation is complete.
Only present when run on a shard or config server. Returns 0 on a config server.
New in version 5.0.
shardingStatistics.numHostsTargetedIndicates the number of shards targeted for
CRUD operations and aggregation commands. When a CRUD operation or aggregation command is run, the following metrics are incremented:
allShards: A command targeted all shards.
manyShards: A command targeted more than one shard.
oneShard: A command targeted one shard.
unsharded: A command was run on an unsharded collection.
Note
Running the
serverStatuscommand onmongoswill provide insight into the CRUD and aggregation operations that run on a sharded cluster.Multi-shard operations can either be scatter-gather or shard specific. Multi-shard scatter-gather operations can consume more resources. By using the
shardingStatistics.numHostsTargetedmetrics you can tune the aggregation queries that run on a sharded cluster.
shardingStatistics.resharding.coordinatorStateState of the resharding coordinator for the current resharding operation. The resharding coordinator is a thread that runs on the config server primary. Number is set to 0 when a new resharding operation starts.
0 (unused): The shard is not the coordinator in the current resharding operation.
1 (initializing): The resharding coordinator has inserted the coordinator document into config.reshardingOperations and has added the reshardingFields to the config.collections entry for the original collection.
2 (preparing-to-donate): The resharding coordinator has created a config.collections entry for the temporary resharding collection, has inserted entries into config.chunks for ranges based on the new shard key, and has inserted entries into config.tags for any zones associated with the new shard key. The coordinator informs participant shards to begin the resharding operation, then waits until all donor shards have picked a minFetchTimestamp and are ready to donate.
3 (cloning): The resharding coordinator informs donor shards to donate data to recipient shards. The coordinator waits for all recipients to finish cloning the data from the donor.
4 (applying): The resharding coordinator informs recipient shards to modify their copies of data based on new incoming writes from donor shards. The coordinator waits for all recipients to finish applying oplog entries.
5 (blocking-writes): The resharding coordinator informs donor shards to prevent new incoming write operations to the collection being resharded. The coordinator then waits for all recipients to have all data changes.
6 (aborting): An unrecoverable error occurred during the resharding operation or the abortReshardCollection command (or the sh.abortReshardCollection() method) was run.
7 (committing): The resharding coordinator removes the config.collections entry for the temporary resharding collection. The coordinator then adds the recipientFields to the source collection's entry.
Only present when run on a shard or config server.
New in version 5.0.
shardingStatistics.resharding.opStatusStatus for the current resharding operation.
-1: Resharding operation not in progress.
0: Resharding operation succeeded.
1: Resharding operation failed.
2: Resharding operation canceled.
Only present when run on a shard or config server.
New in version 5.0.
shardingStatistics.resharding.lastOpEndingChunkImbalanceThis field contains the highest numeric difference for (
maxNumChunksInShard - minNumChunksInShard) among all zones for the collection that was processed by the most recent resharding operation.See Range Size.
Only updated on config servers.
New in version 5.0.
shardedIndexConsistency
shardedIndexConsistency : { numShardedCollectionsWithInconsistentIndexes : Long("<num>") },
shardedIndexConsistencyAvailable only on config server instances.
A document that returns results of index consistency checks for sharded collections.
The returned metrics are meaningful only when run on the primary of the config server replica set for a sharded cluster.
Tip
enableShardedIndexConsistencyCheck parameter
shardedIndexConsistencyCheckIntervalMS parameter
shardedIndexConsistency.numShardedCollectionsWithInconsistentIndexesAvailable only on config server instances.
Number of sharded collections whose indexes are inconsistent across the shards. A sharded collection has an inconsistent index if the collection does not have the exact same indexes (including the index options) on each shard that contains chunks for the collection.
To investigate if a sharded collection has inconsistent indexes, see Find Inconsistent Indexes Across Shards.
The returned metrics are meaningful only when run on the primary of the config server replica set for a sharded cluster.
Tip
enableShardedIndexConsistencyCheck parameter
shardedIndexConsistencyCheckIntervalMS parameter
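For example, on the config server primary you can enable the periodic check at runtime and later read the result. A sketch (the interval parameter is typically set at startup):
// Enable the periodic consistency check
db.adminCommand( { setParameter: 1, enableShardedIndexConsistencyCheck: true } )
// Read the current count of inconsistent collections
printjson( db.serverStatus().shardedIndexConsistency )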
spillWiredTiger
spillWiredTiger: { storageSize: <long>, uri: <string>, version: <string>, 'block-manager': { 'blocks read': <num>, 'blocks written': <num>, 'bytes read': <num>, 'bytes written': <num> }, cache: { 'application thread time evicting (usecs)': <num>, 'application threads eviction requested with cache fill ratio < 25%': <num>, 'application threads eviction requested with cache fill ratio >= 75%': <num>, 'application threads page write from cache to disk count': <num>, 'application threads page write from cache to disk time (usecs)': <num>, 'bytes allocated for updates': <num>, 'bytes currently in the cache': <num>, 'bytes read into cache': <num>, 'bytes written from cache': <num>, 'eviction currently operating in aggressive mode': <num>, 'eviction empty score': <num>, 'eviction state': <num>, 'eviction walk target strategy clean pages': <num>, 'eviction walk target strategy dirty pages': <num>, 'eviction walk target strategy pages with updates': <num>, 'forced eviction - pages evicted that were clean count': <num>, 'forced eviction - pages evicted that were dirty count': <num>, 'forced eviction - pages selected count': <num>, 'forced eviction - pages selected unable to be evicted count': <num>, 'hazard pointer blocked page eviction': <num>, 'maximum bytes configured': <num>, 'maximum page size seen at eviction': <num>, 'number of times dirty trigger was reached': <num>, 'number of times eviction trigger was reached': <num>, 'number of times updates trigger was reached': <num>, 'page evict attempts by application threads': <num>, 'page evict failures by application threads': <num>, 'pages queued for eviction': <num>, 'pages queued for urgent eviction': <num>, 'tracked dirty bytes in the cache': <num> } }
spillWiredTigerA document that contains metrics on the WiredTiger spill instance. When MongoDB writes to disk to fulfill certain operations, it uses a separate WiredTiger instance that has its own in-memory cache. This separate cache isolates those operations from the main WiredTiger cache.
The
spillWiredTigerdocument contains a subset of the fields reported in the wiredTiger document. ThespillWiredTigerdocument only appears when using the WiredTiger storage engine. For details on thespillWiredTigermetrics, see the correspondingwiredTigermetric description.
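To see how much data has been spilled, you can read the spill cache counters directly. A sketch, assuming the section is present on your deployment; because the WiredTiger field names contain spaces, use bracket access:
const spill = db.serverStatus().spillWiredTiger;
print( spill.cache["bytes currently in the cache"] )
print( spill.cache["bytes written from cache"] )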
storageEngine
storageEngine : { name : <string>, supportsCommittedReads : <boolean>, persistent : <boolean> },
storageEngine.supportsCommittedReadsA boolean that indicates whether the storage engine supports
"majority"read concern.
storageEngine.persistentA boolean that indicates whether the storage engine does or does not persist data to disk.
tcmalloc
Note
tcmalloc metrics that are only for internal use are omitted from this
page.
tcmalloc : { usingPerCPUCaches : <boolean>, // Added in MongoDB 8.0 maxPerCPUCacheSizeBytes : <integer>, // Added in MongoDB 8.0 generic : { current_allocated_bytes : <integer>, heap_size : <integer>, peak_memory_usage : <integer> // Added in MongoDB 8.0 }, tcmalloc : { central_cache_free : <integer>, cpu_free : <integer>, // Added in MongoDB 8.0 release_rate : <integer>, total_bytes_held : <integer>, // Added in MongoDB 8.0 cpuCache : { 0 : { overflows : <integer>, // Added in MongoDB 8.0 underflows : <integer> // Added in MongoDB 8.0 }, } }, tcmalloc_derived : { total_free_bytes : <integer> // Added in MongoDB 8.0 } }
tcmallocNote
Starting in version 8.0, MongoDB uses an updated version of TCMalloc that improves memory fragmentation and management. See tcmalloc upgrade for more information.
A document that contains information on memory allocation for the server. By default,tcmallocmetrics are included in theserverStatusoutput. To change the verbosity of thetcmallocsection, specify an integer between0and3(inclusive):
If you set verbosity to0,tcmallocmetrics aren't included in theserverStatusoutput.
If you set verbosity to1, theserverStatusoutput includes the defaulttcmallocmetrics.
If you set verbosity to2, theserverStatusoutput includes defaulttcmallocmetrics and thetcmalloc.tcmalloc.cpuCachesection.
If you set verbosity to3, theserverStatusoutput includes alltcmallocmetrics.
If you specify a value higher than3, MongoDB sets the verbosity to3.
For example, to callserverStatuswith verbosity set to2, run the following command:
db.runCommand( { serverStatus: 1, tcmalloc: 2 } )
tcmalloc.usingPerCPUCachesA boolean that indicates whether TCMalloc is running with per-CPU caches. If
tcmalloc.usingPerCPUCachesisfalse, ensure that:
You're using Linux kernel version 4.18 or later.
New in version 8.0.
tcmalloc.generic.peak_memory_usageTotal amount of memory, in bytes, allocated by MongoDB and sampled by TCMalloc.
New in version 8.0.
tcmalloc.generic.current_allocated_bytesTotal number of bytes that are currently allocated to memory and actively used by MongoDB.
tcmalloc.generic.heap_sizeAmount of memory, in bytes, allocated from the operating system. This value includes memory that's currently in use and memory that's been allocated but isn't in use.
tcmalloc.tcmalloc.central_cache_freeAmount of memory, in bytes, held in the central free list. The central free list is a structure that manages free memory for reuse.
tcmalloc.tcmalloc.cpu_freeAmount of free memory, in bytes, available across all CPU caches.
New in version 8.0.
tcmalloc.tcmalloc.total_bytes_heldAmount of memory, in bytes, currently held in caches.
New in version 8.0.
tcmalloc.tcmalloc.release_rateRate, in bytes per second, at which unused memory is released to the operating system. The
tcmallocReleaseRateparameter determines the value oftcmalloc.tcmalloc.release_rate.
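For example, to make TCMalloc return unused memory to the operating system more aggressively, raise the release rate at runtime. The value 10.0 is illustrative:
db.adminCommand( { setParameter: 1, tcmallocReleaseRate: 10.0 } )
// Verify the reported release rate
db.serverStatus().tcmalloc.tcmalloc.release_rate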
tcmalloc.tcmalloc.cpuCacheA document that provides data on each CPU cache.
cpuCachemetrics are excluded at the default verbosity level. To viewcpuCachemetrics, you must set thetcmallocverbosity to at least2.New in version 8.0.
tcmalloc.tcmalloc.cpuCache.N.overflowsNumber of overflows that the CPU cache experienced. Overflows occur when a user deallocates memory and the cache is full.
New in version 8.0.
transactions
When run on a mongod:
transactions : { retriedCommandsCount : Long("<num>"), retriedStatementsCount : Long("<num>"), transactionsCollectionWriteCount : Long("<num>"), currentActive : Long("<num>"), currentInactive : Long("<num>"), currentOpen : Long("<num>"), totalAborted : Long("<num>"), totalCommitted : Long("<num>"), totalStarted : Long("<num>"), totalPrepared : Long("<num>"), totalPreparedThenCommitted : Long("<num>"), totalPreparedThenAborted : Long("<num>"), currentPrepared : Long("<num>"), lastCommittedTransaction : <document> },
When run on a mongos:
transactions : { currentOpen : Long("<num>"), currentActive : Long("<num>"), currentInactive : Long("<num>"), totalStarted : Long("<num>"), totalCommitted : Long("<num>"), totalAborted : Long("<num>"), abortCause : { <String1> : Long("<num>"), <String2> : Long("<num>"), ... }, totalContactedParticipants : Long("<num>"), totalParticipantsAtCommit : Long("<num>"), totalRequestsTargeted : Long("<num>"), commitTypes : { noShards : { initiated : Long("<num>"), successful : Long("<num>"), successfulDurationMicros : Long("<num>") }, singleShard : { initiated : Long("<num>"), successful : Long("<num>"), successfulDurationMicros : Long("<num>") }, singleWriteShard : { initiated : Long("<num>"), successful : Long("<num>"), successfulDurationMicros : Long("<num>") }, readOnly : { initiated : Long("<num>"), successful : Long("<num>"), successfulDurationMicros : Long("<num>") }, twoPhaseCommit : { initiated : Long("<num>"), successful : Long("<num>"), successfulDurationMicros : Long("<num>") }, recoverWithToken : { initiated : Long("<num>"), successful : Long("<num>"), successfulDurationMicros : Long("<num>") } } },
transactionsWhen run on a
mongod, a document with data about the retryable writes and transactions.When run on a
mongos, a document with data about the transactions run on the instance.
transactions.retriedCommandsCountAvailable on mongod only.
The total number of retry attempts that have been received after the corresponding retryable write command has already been committed. That is, a retryable write is attempted even though the write has previously succeeded and has an associated record for the transaction and session in the
config.transactionscollection, such as when the initial write response to the client is lost.Note
MongoDB does not re-execute the committed writes.
The total is across all sessions.
The total does not include any retryable writes that may happen internally as part of a chunk migration.
transactions.retriedStatementsCount
Available on mongod only.
The total number of write statements associated with the retried commands in transactions.retriedCommandsCount.
Note
MongoDB does not re-execute the committed writes.
The total does not include any retryable writes that may happen internally as part of a chunk migration.
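For example, dividing the two counters gives a rough average of how many statements each retried command carried. A sketch:
// Average retried statements per retried command (mongod only).
// Long values expose toNumber() in mongosh; plain numbers pass through.
const t = db.runCommand( { serverStatus: 1 } ).transactions
const asNum = ( v ) => ( typeof v === "number" ? v : v.toNumber() )
const retried = asNum( t.retriedCommandsCount )
if ( retried > 0 ) {
   print( "statements per retried command: " + asNum( t.retriedStatementsCount ) / retried )
}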
transactions.transactionsCollectionWriteCount
Available on mongod only.
The total number of writes to the config.transactions collection, triggered when a new retryable write statement is committed.
For update and delete commands, since only single document operations are retryable, there is one write per statement.
For insert operations, there is one write per batch of documents inserted, except when a failure leads to each document being inserted separately.
The total includes writes to a server's config.transactions collection that occur as part of a migration.
transactions.currentActive
Available on both mongod and mongos.
The total number of open transactions currently executing a command.
transactions.currentInactive
Available on both mongod and mongos.
The total number of open transactions that are not currently executing a command.
transactions.currentOpen
Available on both mongod and mongos.
The total number of open transactions. A transaction is opened when the first command is run as part of that transaction, and stays open until the transaction either commits or aborts.
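Because every open transaction is either executing a command or not, currentOpen should equal currentActive plus currentInactive in any single sample. A sketch of that consistency check:
// Sanity-check the open-transaction gauges from one sample.
const t = db.runCommand( { serverStatus: 1 } ).transactions
const asNum = ( v ) => ( typeof v === "number" ? v : v.toNumber() )
const open = asNum( t.currentOpen )
const active = asNum( t.currentActive )
const inactive = asNum( t.currentInactive )
print( `open=${open} active=${active} inactive=${inactive} consistent=${open === active + inactive}` )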
transactions.totalAborted
For the mongod, the total number of transactions aborted on this instance since its last startup.
For the mongos, the total number of transactions aborted through this instance since its last startup.
transactions.totalCommitted
For the mongod, the total number of transactions committed on the instance since its last startup.
For the mongos, the total number of transactions committed through this instance since its last startup.
transactions.totalStarted
For the mongod, the total number of transactions started on this instance since its last startup.
For the mongos, the total number of transactions started through this instance since its last startup.
transactions.abortCause
Available on mongos only.
Breakdown of transactions.totalAborted by cause. If a client issues an explicit abortTransaction, the cause is listed as abort.
For example:
totalAborted : Long("6"),
abortCause : {
   abort : Long("1"),
   DuplicateKey : Long("1"),
   StaleConfig : Long("3"),
   SnapshotTooOld : Long("1")
},
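Since abortCause breaks totalAborted down by cause, a monitoring script can tally the entries to find the dominant abort reason. A minimal sketch:
// Tally abort causes reported by a mongos.
const t = db.runCommand( { serverStatus: 1 } ).transactions
const asNum = ( v ) => ( typeof v === "number" ? v : v.toNumber() )
let total = 0
for ( const [ cause, count ] of Object.entries( t.abortCause ?? {} ) ) {
   print( cause + ": " + asNum( count ) )
   total += asNum( count )
}
print( "total aborted by cause: " + total )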
transactions.totalContactedParticipants
Available on mongos only.
The total number of shards contacted for all transactions started through this mongos since its last startup.
The number of shards contacted during the transaction processes can include those shards that may not be included as part of the commit.
transactions.totalParticipantsAtCommit
Available on mongos only.
Total number of shards involved in the commit for all transactions started through this mongos since its last startup.
transactions.totalRequestsTargeted
Available on mongos only.
Total number of network requests targeted by the mongos as part of its transactions.
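Dividing totalRequestsTargeted by totalStarted yields a rough average of the network requests issued per transaction, which can help gauge the fan-out of a workload. A sketch:
// Approximate network requests per transaction on a mongos.
const t = db.runCommand( { serverStatus: 1 } ).transactions
const asNum = ( v ) => ( typeof v === "number" ? v : v.toNumber() )
const started = asNum( t.totalStarted )
if ( started > 0 ) {
   print( "requests per transaction: " + asNum( t.totalRequestsTargeted ) / started )
}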
transactions.commitTypes
Available on mongos only.
Breakdown of the commits by type. For example:
noShards : {
   initiated : Long("0"),
   successful : Long("0"),
   successfulDurationMicros : Long("0")
},
singleShard : {
   initiated : Long("5"),
   successful : Long("5"),
   successfulDurationMicros : Long("203118")
},
singleWriteShard : {
   initiated : Long("0"),
   successful : Long("0"),
   successfulDurationMicros : Long("0")
},
readOnly : {
   initiated : Long("0"),
   successful : Long("0"),
   successfulDurationMicros : Long("0")
},
twoPhaseCommit : {
   initiated : Long("1"),
   successful : Long("1"),
   successfulDurationMicros : Long("179616")
},
recoverWithToken : {
   initiated : Long("0"),
   successful : Long("0"),
   successfulDurationMicros : Long("0")
}
The types of commit are:
noShards: Commits of transactions that did not contact any shards.
singleShard: Commits of transactions that affected a single shard.
singleWriteShard: Commits of transactions that contacted multiple shards but whose write operations only affected a single shard.
readOnly: Commits of transactions that only involved read operations.
twoPhaseCommit: Commits of transactions that included writes to multiple shards.
recoverWithToken: Commits that recovered the outcome of transactions from another instance or after this instance was restarted.
For each commit type, the command returns the following metrics:
initiated: Total number of times that commits of this type were initiated.
successful: Total number of times that commits of this type succeeded.
successfulDurationMicros: Total time, in microseconds, taken by successful commits of this type.
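Because successfulDurationMicros is a running total, dividing it by successful gives the average latency of a successful commit of that type. A minimal sketch:
// Average microseconds per successful commit, broken down by commit type.
const types = db.runCommand( { serverStatus: 1 } ).transactions.commitTypes
const asNum = ( v ) => ( typeof v === "number" ? v : v.toNumber() )
for ( const [ type, m ] of Object.entries( types ) ) {
   const ok = asNum( m.successful )
   if ( ok > 0 ) {
      print( type + ": " + asNum( m.successfulDurationMicros ) / ok + " us/commit" )
   }
}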
transactions.totalPrepared
Available on mongod only.
The total number of transactions in prepared state on this server since the mongod process's last startup.
transactions.totalPreparedThenCommitted
Available on mongod only.
The total number of transactions that were prepared and committed on this server since the mongod process's last startup.
transactions.totalPreparedThenAborted
Available on mongod only.
The total number of transactions that were prepared and aborted on this server since the mongod process's last startup.
transactions.currentPrepared
Available on mongod only.
The current number of transactions in prepared state on this server.
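Every transaction that reaches the prepared state eventually commits, aborts, or is still prepared, so totalPrepared should equal totalPreparedThenCommitted plus totalPreparedThenAborted plus currentPrepared in a single sample. A sketch of that cross-check:
// Cross-check the prepared-transaction counters on a mongod.
const t = db.runCommand( { serverStatus: 1 } ).transactions
const asNum = ( v ) => ( typeof v === "number" ? v : v.toNumber() )
const accounted = asNum( t.totalPreparedThenCommitted ) +
                  asNum( t.totalPreparedThenAborted ) +
                  asNum( t.currentPrepared )
print( `totalPrepared=${asNum( t.totalPrepared )} accounted=${accounted}` )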
transactions.lastCommittedTransaction
Available on mongod only.
The details of the last transaction committed when the mongod is primary.
When returned from a secondary, lastCommittedTransaction returns the details of the last transaction committed when that secondary was a primary.
lastCommittedTransaction : {
   operationCount : Long("1"),
   oplogOperationBytes : Long("211"),
   writeConcern : {
      w : "majority",
      wtimeout : 0
   }
}
operationCount: The number of write operations in the transaction.
oplogOperationBytes: The size of the corresponding oplog entry or entries for the transaction. [2]
writeConcern: The write concern used for the transaction.
[2] MongoDB creates as many oplog entries as necessary to encapsulate all write operations in a transaction. See Oplog Size Limit for details.
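For example, to inspect the oplog footprint and write concern of the most recently committed transaction:
// Summarize the last committed transaction on a mongod.
const last = db.runCommand( { serverStatus: 1 } ).transactions.lastCommittedTransaction
if ( last ) {
   print( `ops=${last.operationCount} oplogBytes=${last.oplogOperationBytes} w=${last.writeConcern.w}` )
}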
transportSecurity
transportSecurity : {
   1.0 : Long("<num>"),
   1.1 : Long("<num>"),
   1.2 : Long("<num>"),
   1.3 : Long("<num>"),
   unknown : Long("<num>")
},
transportSecurity
A document that reports the number of incoming connections made to this instance using each TLS protocol version since it last started. Connections whose TLS version could not be determined are counted under unknown.
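Because the version fields contain a dot, they cannot be read with dot notation in mongosh; bracket notation works. A sketch:
// Read per-version TLS connection counts; names like "1.2" need brackets.
const ts = db.runCommand( { serverStatus: 1 } ).transportSecurity
print( "TLS 1.2 connections: " + ts[ "1.2" ] )
print( "TLS 1.3 connections: " + ts[ "1.3" ] )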
watchdog
watchdog : {
   checkGeneration : Long("<num>"),
   monitorGeneration : Long("<num>"),
   monitorPeriod : <num>
}
Note
The watchdog section is only present if the Storage Node Watchdog is enabled.
watchdog
A document reporting the status of the Storage Node Watchdog.
watchdog.checkGeneration
The number of times the directories have been checked since startup. Directories are checked multiple times every monitoringPeriod.
watchdog.monitorGeneration
The number of times the status of all filesystems used by mongod has been examined. This is incremented once every monitoringPeriod.
watchdog.monitorPeriod
The value set by watchdogPeriodSeconds. This is the period between status checks.
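Because the section only appears when the Storage Node Watchdog is enabled, a monitoring script should test for its presence before reading it. A sketch:
// Report watchdog progress only when the Storage Node Watchdog is enabled.
const wd = db.runCommand( { serverStatus: 1 } ).watchdog
if ( wd ) {
   print( `checks=${wd.checkGeneration} monitors=${wd.monitorGeneration} period=${wd.monitorPeriod}s` )
} else {
   print( "Storage Node Watchdog is not enabled" )
}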
wiredTiger
wiredTiger information only appears if the instance uses the WiredTiger storage engine. Some of the statistics are rolled up for the server.
{
   uri : 'statistics:',
   version : <string>,
   async : { current work queue length : <num>, maximum work queue length : <num>, number of allocation state races : <num>, number of flush calls : <num>, number of operation slots viewed for allocation : <num>, number of times operation allocation failed : <num>, number of times worker found no work : <num>, total allocations : <num>, total compact calls : <num>, total insert calls : <num>, total remove calls : <num>, total search calls : <num>, total update calls : <num> },
   block-manager : { blocks pre-loaded : <num>, blocks read : <num>, blocks written : <num>, bytes read : <num>, bytes written : <num>, bytes written for checkpoint : <num>, mapped blocks read : <num>, mapped bytes read : <num> },
   cache : { application threads page read from disk to cache count : <num>, application threads page read from disk to cache time (usecs) : <num>, application threads page write from cache to disk count : <num>, application threads page write from cache to disk time (usecs) : <num>, bytes belonging to page images in the cache : <num>, bytes belonging to the cache overflow table in the cache : <num>, bytes currently in the cache : <num>, bytes dirty in the cache cumulative : <num>, bytes not belonging to page images in the cache : <num>, bytes read into cache : <num>, bytes written from cache : <num>, cache overflow cursor application thread wait time (usecs) : <num>, cache overflow cursor internal thread wait time (usecs) : <num>, cache overflow score : <num>, cache overflow table entries : <num>, cache overflow table insert calls : <num>, cache overflow table max on-disk size : <num>, cache overflow table on-disk size : <num>, cache overflow table remove calls : <num>, checkpoint blocked page eviction : <num>, eviction calls to get a page : <num>, eviction calls to get a page found queue empty : <num>, eviction calls to get a page found queue empty after locking : <num>, eviction currently operating in aggressive mode : <num>, eviction empty score : <num>, eviction passes of a file : <num>, eviction server candidate queue empty when topping up : <num>, eviction server candidate queue not empty when topping up : <num>, eviction server evicting pages : <num>, eviction server slept, because we did not make progress with eviction : <num>, eviction server unable to reach eviction goal : <num>, eviction server waiting for a leaf page : <num>, eviction server waiting for an internal page sleep (usec) : <num>, eviction server waiting for an internal page yields : <num>, eviction state : <num>, eviction walk target pages histogram - 0-9 : <num>, eviction walk target pages histogram - 10-31 : <num>, eviction walk target pages histogram - 128 and higher : <num>, eviction walk target pages histogram - 32-63 : <num>, eviction walk target pages histogram - 64-128 : <num>, eviction walks abandoned : <num>, eviction walks gave up because they restarted their walk twice : <num>, eviction walks gave up because they saw too many pages and found no candidates : <num>, eviction walks gave up because they saw too many pages and found too few candidates : <num>, eviction walks reached end of tree : <num>, eviction walks started from root of tree : <num>, eviction walks started from saved location in tree : <num>, eviction worker thread active : <num>, eviction worker thread created : <num>, eviction worker thread evicting pages : <num>, eviction worker thread removed : <num>, eviction worker thread stable number : <num>, files with active eviction walks : <num>, files with new eviction walks started : <num>, force re-tuning of eviction workers once in a while : <num>, forced eviction - pages evicted that were clean count : <num>, forced eviction - pages evicted that were clean time (usecs) : <num>, forced eviction - pages evicted that were dirty count : <num>, forced eviction - pages evicted that were dirty time (usecs) : <num>, forced eviction - pages selected because of too many deleted items count : <num>, forced eviction - pages selected count : <num>, forced eviction - pages selected unable to be evicted count : <num>, forced eviction - pages selected unable to be evicted time : <num>, hazard pointer blocked page eviction : <num>, hazard pointer check calls : <num>, hazard pointer check entries walked : <num>, hazard pointer maximum array length : <num>, in-memory page passed criteria to be split : <num>, in-memory page splits : <num>, internal pages evicted : <num>, internal pages split during eviction : <num>, leaf pages split during eviction : <num>, maximum bytes configured : <num>, maximum page size at eviction : <num>, modified pages evicted : <num>, modified pages evicted by application threads : <num>, operations timed out waiting for space in cache : <num>, overflow pages read into cache : <num>, page split during eviction deepened the tree : <num>, page written requiring cache overflow records : <num>, pages currently held in the cache : <num>, pages evicted by application threads : <num>, pages queued for eviction : <num>, pages queued for eviction post lru sorting : <num>, pages queued for urgent eviction : <num>, pages queued for urgent eviction during walk : <num>, pages read into cache : <num>, pages read into cache after truncate : <num>, pages read into cache after truncate in prepare state : <num>, pages read into cache requiring cache overflow entries : <num>, pages read into cache requiring cache overflow for checkpoint : <num>, pages read into cache skipping older cache overflow entries : <num>, pages read into cache with skipped cache overflow entries needed later : <num>, pages read into cache with skipped cache overflow entries needed later by checkpoint : <num>, pages requested from the cache : <num>, pages seen by eviction walk : <num>, pages selected for eviction unable to be evicted : <num>, pages walked for eviction : <num>, pages written from cache : <num>, pages written requiring in-memory restoration : <num>, percentage overhead : <num>, tracked bytes belonging to internal pages in the cache : <num>, tracked bytes belonging to leaf pages in the cache : <num>, tracked dirty bytes in the cache : <num>, tracked dirty pages in the cache : <num>, unmodified pages evicted : <num> },
   capacity : { background fsync file handles considered : <num>, background fsync file handles synced : <num>, background fsync time (msecs) : <num>, bytes read : <num>, bytes written for checkpoint : <num>, bytes written for eviction : <num>, bytes written for log : <num>, bytes written total : <num>, threshold to call fsync : <num>, time waiting due to total capacity (usecs) : <num>, time waiting during checkpoint (usecs) : <num>, time waiting during eviction (usecs) : <num>, time waiting during logging (usecs) : <num>, time waiting during read (usecs) : <num> },
   connection : { auto adjusting condition resets : <num>, auto adjusting condition wait calls : <num>, detected system time went backwards : <num>, files currently open : <num>, memory allocations : <num>, memory frees : <num>, memory re-allocations : <num>, pthread mutex condition wait calls : <num>, pthread mutex shared lock read-lock calls : <num>, pthread mutex shared lock write-lock calls : <num>, total fsync I/Os : <num>, total read I/Os : <num>, total write I/Os : <num> },
   cursor : { cached cursor count : <num>, cursor bulk loaded cursor insert calls : <num>, cursor close calls that result in cache : <num>, cursor create calls : <num>, cursor insert calls : <num>, cursor insert key and value bytes : <num>, cursor modify calls : <num>, cursor modify key and value bytes affected : <num>, cursor modify value bytes modified : <num>, cursor next calls : <num>, cursor operation restarted : <num>, cursor prev calls : <num>, cursor remove calls : <num>, cursor remove key bytes removed : <num>, cursor reserve calls : <num>, cursor reset calls : <num>, cursor search calls : <num>, cursor search near calls : <num>, cursor sweep buckets : <num>, cursor sweep cursors closed : <num>, cursor sweep cursors examined : <num>, cursor sweeps : <num>, cursor truncate calls : <num>, cursor update calls : <num>, cursor update key and value bytes : <num>, cursor update value size change : <num>, cursors reused from cache : <num>, open cursor count : <num> },
   data-handle : { connection data handle size : <num>, connection data handles currently active : <num>, connection sweep candidate became referenced : <num>, connection sweep dhandles closed : <num>, connection sweep dhandles removed from hash list : <num>, connection sweep time-of-death sets : <num>, connection sweeps : <num>, session dhandles swept : <num>, session sweep attempts : <num> },
   lock : { checkpoint lock acquisitions : <num>, checkpoint lock application thread wait time (usecs) : <num>, checkpoint lock internal thread wait time (usecs) : <num>, dhandle lock application thread time waiting (usecs) : <num>, dhandle lock internal thread time waiting (usecs) : <num>, dhandle read lock acquisitions : <num>, dhandle write lock acquisitions : <num>, durable timestamp queue lock application thread time waiting (usecs) : <num>, durable timestamp queue lock internal thread time waiting (usecs) : <num>, durable timestamp queue read lock acquisitions : <num>, durable timestamp queue write lock acquisitions : <num>, metadata lock acquisitions : <num>, metadata lock application thread wait time (usecs) : <num>, metadata lock internal thread wait time (usecs) : <num>, read timestamp queue lock application thread time waiting (usecs) : <num>, read timestamp queue lock internal thread time waiting (usecs) : <num>, read timestamp queue read lock acquisitions : <num>, read timestamp queue write lock acquisitions : <num>, schema lock acquisitions : <num>, schema lock application thread wait time (usecs) : <num>, schema lock internal thread wait time (usecs) : <num>, table lock application thread time waiting for the table lock (usecs) : <num>, table lock internal thread time waiting for the table lock (usecs) : <num>, table read lock acquisitions : <num>, table write lock acquisitions : <num>, txn global lock application thread time waiting (usecs) : <num>, txn global lock internal thread time waiting (usecs) : <num>, txn global read lock acquisitions : <num>, txn global write lock acquisitions : <num> },
   log : { busy returns attempting to switch slots : <num>, force archive time sleeping (usecs) : <num>, log bytes of payload data : <num>, log bytes written : <num>, log files manually zero-filled : <num>, log flush operations : <num>, log force write operations : <num>, log force write operations skipped : <num>, log records compressed : <num>, log records not compressed : <num>, log records too small to compress : <num>, log release advances write LSN : <num>, log scan operations : <num>, log scan records requiring two reads : <num>, log server thread advances write LSN : <num>, log server thread write LSN walk skipped : <num>, log sync operations : <num>, log sync time duration (usecs) : <num>, log sync_dir operations : <num>, log sync_dir time duration (usecs) : <num>, log write operations : <num>, logging bytes consolidated : <num>, maximum log file size : <num>, number of pre-allocated log files to create : <num>, pre-allocated log files not ready and missed : <num>, pre-allocated log files prepared : <num>, pre-allocated log files used : <num>, records processed by log scan : <num>, slot close lost race : <num>, slot close unbuffered waits : <num>, slot closures : <num>, slot join atomic update races : <num>, slot join calls atomic updates raced : <num>, slot join calls did not yield : <num>, slot join calls found active slot closed : <num>, slot join calls slept : <num>, slot join calls yielded : <num>, slot join found active slot closed : <num>, slot joins yield time (usecs) : <num>, slot transitions unable to find free slot : <num>, slot unbuffered writes : <num>, total in-memory size of compressed records : <num>, total log buffer size : <num>, total size of compressed records : <num>, written slots coalesced : <num>, yields waiting for previous log file close : <num> },
   perf : { file system read latency histogram (bucket 1) - 10-49ms : <num>, file system read latency histogram (bucket 2) - 50-99ms : <num>, file system read latency histogram (bucket 3) - 100-249ms : <num>, file system read latency histogram (bucket 4) - 250-499ms : <num>, file system read latency histogram (bucket 5) - 500-999ms : <num>, file system read latency histogram (bucket 6) - 1000ms+ : <num>, file system write latency histogram (bucket 1) - 10-49ms : <num>, file system write latency histogram (bucket 2) - 50-99ms : <num>, file system write latency histogram (bucket 3) - 100-249ms : <num>, file system write latency histogram (bucket 4) - 250-499ms : <num>, file system write latency histogram (bucket 5) - 500-999ms : <num>, file system write latency histogram (bucket 6) - 1000ms+ : <num>, operation read latency histogram (bucket 1) - 100-249us : <num>, operation read latency histogram (bucket 2) - 250-499us : <num>, operation read latency histogram (bucket 3) - 500-999us : <num>, operation read latency histogram (bucket 4) - 1000-9999us : <num>, operation read latency histogram (bucket 5) - 10000us+ : <num>, operation write latency histogram (bucket 1) - 100-249us : <num>, operation write latency histogram (bucket 2) - 250-499us : <num>, operation write latency histogram (bucket 3) - 500-999us : <num>, operation write latency histogram (bucket 4) - 1000-9999us : <num>, operation write latency histogram (bucket 5) - 10000us+ : <num> },
   reconciliation : { fast-path pages deleted : <num>, page reconciliation calls : <num>, page reconciliation calls for eviction : <num>, pages deleted : <num>, split bytes currently awaiting free : <num>, split objects currently awaiting free : <num> },
   session : { open session count : <num>, session query timestamp calls : <num>, table alter failed calls : <num>, table alter successful calls : <num>, table alter unchanged and skipped : <num>, table compact failed calls : <num>, table compact successful calls : <num>, table create failed calls : <num>, table create successful calls : <num>, table drop failed calls : <num>, table drop successful calls : <num>, table import failed calls : <num>, table import successful calls : <num>, table rebalance failed calls : <num>, table rebalance successful calls : <num>, table rename failed calls : <num>, table rename successful calls : <num>, table salvage failed calls : <num>, table salvage successful calls : <num>, table truncate failed calls : <num>, table truncate successful calls : <num>, table verify failed calls : <num>, table verify successful calls : <num> },
   thread-state : { active filesystem fsync calls : <num>, active filesystem read calls : <num>, active filesystem write calls : <num> },
   thread-yield : { application thread time evicting (usecs) : <num>, application thread time waiting for cache (usecs) : <num>, connection close blocked waiting for transaction state stabilization : <num>, connection close yielded for lsm manager shutdown : <num>, data handle lock yielded : <num>, get reference for page index and slot time sleeping (usecs) : <num>, log server sync yielded for log write : <num>, page access yielded due to prepare state change : <num>, page acquire busy blocked : <num>, page acquire eviction blocked : <num>, page acquire locked blocked : <num>, page acquire read blocked : <num>, page acquire time sleeping (usecs) : <num>, page delete rollback time sleeping for state change (usecs) : <num>, page reconciliation yielded due to child modification : <num> },
   transaction : { Number of prepared updates : <num>, Number of prepared updates added to cache overflow : <num>, Number of prepared updates resolved : <num>, durable timestamp queue entries walked : <num>, durable timestamp queue insert to empty : <num>, durable timestamp queue inserts to head : <num>, durable timestamp queue inserts total : <num>, durable timestamp queue length : <num>, number of named snapshots created : <num>, number of named snapshots dropped : <num>, prepared transactions : <num>, prepared transactions committed : <num>, prepared transactions currently active : <num>, prepared transactions rolled back : <num>, query timestamp calls : <num>, read timestamp queue entries walked : <num>, read timestamp queue insert to empty : <num>, read timestamp queue inserts to head : <num>, read timestamp queue inserts total : <num>, read timestamp queue length : <num>, rollback to stable calls : <num>, rollback to stable updates aborted : <num>, rollback to stable updates removed from cache overflow : <num>, set timestamp calls : <num>, set timestamp durable calls : <num>, set timestamp durable updates : <num>, set timestamp oldest calls : <num>, set timestamp oldest updates : <num>, set timestamp stable calls : <num>, set timestamp stable updates : <num>, transaction begins : <num>, transaction checkpoint currently running : <num>, transaction checkpoint generation : <num>, transaction checkpoint max time (msecs) : <num>, transaction checkpoint min time (msecs) : <num>, transaction checkpoint most recent time (msecs) : <num>, transaction checkpoint scrub dirty target : <num>, transaction checkpoint scrub time (msecs) : <num>, transaction checkpoint total time (msecs) : <num>, transaction checkpoints : <num>, transaction checkpoints skipped because database was clean : <num>, transaction failures due to cache overflow : <num>, transaction fsync calls for checkpoint after allocating the transaction ID : <num>, transaction fsync duration for checkpoint after allocating the transaction ID (usecs) : <num>, transaction range of IDs currently pinned : <num>, transaction range of IDs currently pinned by a checkpoint : <num>, transaction range of IDs currently pinned by named snapshots : <num>, transaction range of timestamps currently pinned : <num>, transaction range of timestamps pinned by a checkpoint : <num>, transaction range of timestamps pinned by the oldest active read timestamp : <num>, transaction range of timestamps pinned by the oldest timestamp : <num>, transaction read timestamp of the oldest active reader : <num>, transaction sync calls : <num>, transactions committed : <num>, transactions rolled back : <num>, update conflicts : <num> },
   concurrentTransactions : { write : { out : <num>, available : <num>, totalTickets : <num> }, read : { out : <num>, available : <num>, totalTickets : <num> }, monitor : { timesDecreased : <num>, timesIncreased : <num>, totalAmountDecreased : <num>, totalAmountIncreased : <num> } },
   snapshot-window-settings : { total number of SnapshotTooOld errors : <num>, max target available snapshots window size in seconds : <num>, target available snapshots window size in seconds : <num>, current available snapshots window size in seconds : <num>, latest majority snapshot timestamp available : <string>, oldest majority snapshot timestamp available : <string> }
}
Note
The following is not an exhaustive list.
wiredTiger.async
A document that returns statistics related to the asynchronous operations API. This is unused by MongoDB.
wiredTiger.cache
A document that returns statistics on the cache and page evictions from the cache.
The following describes some of the key wiredTiger.cache statistics:
wiredTiger.cache.bytes currently in the cache
Size in bytes of the data currently in cache. This value should not be greater than the maximum bytes configured value.
wiredTiger.cache.tracked dirty bytes in the cache
Size in bytes of the dirty data in the cache. This value should be less than the bytes currently in the cache value.
wiredTiger.cache.pages read into cache
Number of pages read into the cache. Comparing wiredTiger.cache.pages read into cache with wiredTiger.cache.pages written from cache can provide an overview of the I/O activity.
wiredTiger.cache.pages written from cache
Number of pages written from the cache. Comparing wiredTiger.cache.pages written from cache with wiredTiger.cache.pages read into cache can provide an overview of the I/O activity.
To adjust the size of the WiredTiger internal cache, see --wiredTigerCacheSizeGB and storage.wiredTiger.engineConfig.cacheSizeGB. Avoid increasing the WiredTiger internal cache size above its default value. If your use case requires increased internal cache size, see --wiredTigerCacheSizePct and storage.wiredTiger.engineConfig.cacheSizePct.
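For example, the fill and dirty ratios implied by the guidance above can be computed from one serverStatus sample. A minimal sketch; note that the statistic names contain spaces, so bracket notation is required:
// Cache fill ratio and dirty ratio from the wiredTiger.cache statistics.
const cache = db.runCommand( { serverStatus: 1 } ).wiredTiger.cache
const asNum = ( v ) => ( typeof v === "number" ? v : v.toNumber() )
const used = asNum( cache[ "bytes currently in the cache" ] )
const max = asNum( cache[ "maximum bytes configured" ] )
const dirty = asNum( cache[ "tracked dirty bytes in the cache" ] )
print( "cache fill ratio: " + ( used / max ).toFixed( 2 ) )
print( "dirty ratio: " + ( dirty / used ).toFixed( 2 ) )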
wiredTiger.log
A document that returns statistics on WiredTiger's write-ahead log (i.e. the journal).
wiredTiger.session
A document that returns the open cursor count and open session count for the session.