This document lists hard and soft limitations of MongoDB. Unless specified otherwise, limits apply to both MongoDB Atlas and self-hosted deployments.
General MongoDB Limits
Collection and Database Size
MongoDB does not impose a hard limit on collection or database sizes. The maximum size depends on the host file system:
ext4: 16 tebibytes (TiB) maximum file size
XFS: 8 exbibytes (EiB) maximum file size
To scale beyond file system or hardware limits, use sharding for non-Atlas deployments. For on-prem collections nearing these limits or experiencing performance bottlenecks, MongoDB recommends sharding or migrating to MongoDB Atlas, which supports auto-scaling.
When sharding a collection, MongoDB determines initial chunk ranges based on the shard key:
Default ``chunkSize``: 128 MB
Typical average shard key size: 16 bytes
BSON Documents
- BSON Document Size
The maximum BSON document size is 16 mebibytes.
The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or an excessive amount of bandwidth during transmission. To store documents larger than the maximum size, MongoDB provides the GridFS API. For more information about GridFS, see ``mongofiles`` and the documentation for your driver.
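The 16 MiB limit and the GridFS workaround can be illustrated with a small sketch. This is not driver code: ``fits_in_one_document`` and ``gridfs_chunk_count`` are hypothetical helpers, and the 255 KiB figure is the GridFS default chunk size.

```python
import math

BSON_MAX_BYTES = 16 * 1024 * 1024   # 16 MiB BSON document size limit
GRIDFS_CHUNK_BYTES = 255 * 1024     # GridFS default chunk size (255 KiB)

def fits_in_one_document(size_bytes: int) -> bool:
    """Whether a payload fits under the 16 MiB BSON document limit."""
    return size_bytes <= BSON_MAX_BYTES

def gridfs_chunk_count(file_size_bytes: int) -> int:
    """Number of chunk documents GridFS would split a file into."""
    return math.ceil(file_size_bytes / GRIDFS_CHUNK_BYTES)
```

For example, a payload one byte over the limit must go through GridFS, which stores it as multiple chunk documents.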
- Nested Depth for BSON Documents
MongoDB supports no more than 100 levels of nesting for BSON documents. Each object or array adds a level.
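The nesting rule can be sketched with a small depth counter (a hypothetical helper, not part of any MongoDB driver), where each object or array adds one level:

```python
MAX_BSON_DEPTH = 100  # MongoDB's nesting limit for BSON documents

def bson_depth(value, depth=0):
    """Nesting depth of a value: each object (dict) or array (list) adds a level."""
    if isinstance(value, dict):
        return max([bson_depth(v, depth + 1) for v in value.values()] or [depth + 1])
    if isinstance(value, list):
        return max([bson_depth(v, depth + 1) for v in value] or [depth + 1])
    return depth  # scalars add no level

def within_depth_limit(doc) -> bool:
    return bson_depth(doc) <= MAX_BSON_DEPTH
```

A document such as ``{ "a": [ { "b": 1 } ] }`` has depth 3: the top-level document, the array, and the embedded document each count as one level.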
Naming Restrictions
- Use of Case in Database Names
Do not rely on case to distinguish between databases. After you create a database, use consistent capitalization when you refer to it.
For example, you cannot use two databases with names like ``salesData`` and ``SalesData``. If you create the ``salesData`` database, do not refer to it using alternate capitalization such as ``salesdata`` or ``SalesData``.
- Restrictions on Database Names for Windows
On Windows, database names cannot contain any of the following characters:
/\. "$*<>:|?
- Restrictions on Database Names for Unix and Linux Systems
On Unix and Linux systems, database names cannot contain any of the following characters:
/\. "$
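The two forbidden-character lists above can be expressed as a small validation sketch (``valid_database_name`` is a hypothetical helper, not a driver API; it checks only the character restrictions, not other naming rules such as length):

```python
# Characters forbidden in database names, per platform
WINDOWS_FORBIDDEN = set('/\\. "$*<>:|?')
UNIX_FORBIDDEN = set('/\\. "$')

def valid_database_name(name: str, platform: str = "unix") -> bool:
    """Check a database name against the platform's forbidden characters."""
    forbidden = WINDOWS_FORBIDDEN if platform == "windows" else UNIX_FORBIDDEN
    return bool(name) and not (set(name) & forbidden)
```

Note that the forbidden sets include the space character, and that names like ``sales*data`` pass on Unix but fail on Windows.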
- Restriction on Collection Names
Collection names should begin with an underscore or a letter character, and cannot:
contain the ``$`` character
be an empty string (e.g. ``""``)
contain the null character
begin with the ``system.`` prefix (reserved for internal use)
contain ``.system.``
If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the ``db.getCollection()`` method in ``mongosh`` or a similar method for your driver.
The namespace length limit for unsharded collections and views is 255 bytes, and 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (``.``) separator, and the collection/view name (e.g. ``<database>.<collection>``).
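The namespace limit is measured in bytes, not characters, which matters for multi-byte names. A quick sketch (``namespace_within_limit`` is a hypothetical helper, not a driver API):

```python
def namespace_within_limit(database: str, collection: str, sharded: bool = False) -> bool:
    """The namespace <database>.<collection> is limited to 255 bytes
    (235 bytes for sharded collections), measured as UTF-8 bytes."""
    limit = 235 if sharded else 255
    namespace = f"{database}.{collection}"
    return len(namespace.encode("utf-8")) <= limit
```

For example, a collection name of 253 characters in database ``db`` produces a 256-byte namespace, which exceeds the unsharded limit.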
- Restrictions on Field Names
Field names cannot contain the null character.
The server permits storage of field names that contain dots (``.``) and dollar signs (``$``).
MongoDB 5.0 adds improved support for the use of (``$``) and (``.``) in field names, with some restrictions. See Field Name Considerations for more details.
Each field name must be unique within the document. You must not store documents with duplicate field names because MongoDB CRUD operations might behave unexpectedly if a document has duplicate fields.
Naming Warnings
Warning
Use caution. The issues in this section could lead to data loss or corruption.
- MongoDB does not support duplicate field names
The MongoDB Query Language has the following restrictions when creating or updating field names:
MongoDB doesn't support inserting documents with duplicate field names. While some BSON builders may support creating such documents, MongoDB doesn't support them, even if the insert succeeds, or appears to succeed.
Updating documents with duplicate field names isn't supported, even if the update succeeds or appears to succeed.
For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion, or may result in an invalid document being inserted that contains duplicate fields. Querying those documents leads to inconsistent results.
Starting in MongoDB 6.1, to see if a document has duplicate field names, use the ``validate`` command with the ``full`` field set to ``true``. In any MongoDB version, use the ``$objectToArray`` aggregation operator to see if a document has duplicate field names.
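Duplicate keys can also be caught client-side before insertion when documents arrive as JSON text. The sketch below (``duplicate_keys`` is a hypothetical helper) uses the standard-library ``json`` module's ``object_pairs_hook``, which sees every key/value pair before duplicates are collapsed; it is not a substitute for the server-side ``validate`` command.

```python
import json

def duplicate_keys(json_text: str) -> list:
    """Collect object keys that appear more than once at the same level."""
    dupes = []

    def check(pairs):
        seen = set()
        for key, _value in pairs:
            if key in seen:
                dupes.append(key)
            seen.add(key)
        return dict(pairs)  # fall back to normal dict construction

    json.loads(json_text, object_pairs_hook=check)
    return dupes
```

For example, ``duplicate_keys('{"a": 1, "a": 2}')`` reports ``["a"]``, while a plain ``json.loads`` would silently keep only the last value.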
- Avoid Ambiguous Field Names
Do not use a field name that is the same as the dot notation for an embedded field. If you have a document with an embedded field ``{ "a" : { "b": ... } }``, other documents in that collection should not have a top-level field ``"a.b"``.
If you can reference an embedded field and a top-level field in the same way, indexing and sharding operations happen on the embedded field. You cannot index or shard on the top-level field ``"a.b"`` while the collection has an embedded field that you reference in the same way.
- Import and Export Concerns With Dollar Signs (``$``) and Periods (``.``)
Starting in MongoDB 5.0, document field names can be dollar (``$``) prefixed and can contain periods (``.``). However, ``mongoimport`` and ``mongoexport`` may not work as expected in some situations with field names that use these characters.
MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same names as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar (``$``) prefixed keys. The DBRef mechanism is an exception to this general rule.
There are also restrictions on using ``mongoimport`` and ``mongoexport`` with periods (``.``) in field names. Since CSV files use the period (``.``) to represent data hierarchies, a period (``.``) in a field name is misinterpreted as a level of nesting.
- Possible Data Loss With Dollar Signs (``$``) and Periods (``.``)
There is a small chance of data loss when using dollar (``$``) prefixed field names or field names that contain periods (``.``) if these field names are used in conjunction with unacknowledged writes (write concern ``w=0``) on servers that are older than MongoDB 5.0.
When running ``insert``, ``update``, and ``findAndModify`` commands, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar (``$``) prefixed or that contain periods (``.``). These field names generated a client-side error in earlier driver versions.
The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.
Indexes
- Queries cannot use both text and Geospatial Indexes
You cannot combine the ``$text`` query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the ``$text`` query with the ``$near`` operator.
- Fields with 2dsphere Indexes can only hold Geometries
Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a ``2dsphere`` indexed field, or build a ``2dsphere`` index on a collection where the indexed field has non-geometry data, the operation will fail.
Tip
See the unique indexes limit in Sharding Operational Restrictions.
- Limited Number of 2dsphere index keys
To generate keys for a 2dsphere index, ``mongod`` maps GeoJSON shapes to an internal representation. The resulting internal representation may be a large array of values.
When ``mongod`` generates index keys on a field that holds an array, ``mongod`` generates an index key for each array element. For compound indexes, ``mongod`` calculates the cartesian product of the sets of keys that are generated for each field. If both sets are large, then calculating the cartesian product could cause the operation to exceed memory limits.
The ``indexMaxNumGeneratedKeysPerDocument`` parameter limits the maximum number of keys generated for a single document to prevent out-of-memory errors. The default is 100000 index keys per document. It is possible to raise the limit, but if an operation requires more keys than the ``indexMaxNumGeneratedKeysPerDocument`` parameter specifies, the operation fails.
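The cartesian-product growth is easy to quantify. In this sketch (illustrative helpers, not server code), two array fields that each generate a few hundred keys already exceed the default ``indexMaxNumGeneratedKeysPerDocument`` of 100000:

```python
DEFAULT_MAX_KEYS = 100_000  # indexMaxNumGeneratedKeysPerDocument default

def compound_index_keys(per_field_key_counts) -> int:
    """Index keys generated for one document: the cartesian product
    of the key counts generated for each indexed field."""
    total = 1
    for count in per_field_key_counts:
        total *= count
    return total

def exceeds_key_limit(per_field_key_counts, limit=DEFAULT_MAX_KEYS) -> bool:
    return compound_index_keys(per_field_key_counts) > limit
```

For example, fields generating 400 and 300 keys yield 120,000 combined keys, which is over the default limit, so the operation would fail.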
- NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double
If the value of a field returned from a query that is covered by an index is ``NaN``, the type of that ``NaN`` value is always ``double``.
- Multikey Index
Multikey indexes cannot cover queries over array fields.
- Geospatial Index
Geospatial indexes can't cover a query.
- Memory Usage in Index Builds
``createIndexes`` supports building one or more indexes on a collection. ``createIndexes`` uses a combination of memory and temporary files on disk to build indexes. The default memory limit is 200 megabytes per ``createIndexes`` command, shared equally among all indexes built in that command. For example, if you build 10 indexes with one ``createIndexes`` command, MongoDB allocates each index 20 megabytes for the index build process when using the default memory limit of 200. When you reach the memory limit, MongoDB creates temporary files in the ``_tmp`` subdirectory within ``--dbpath`` to complete the build.
Adjust the memory limit with the ``maxIndexBuildMemoryUsageMegabytes`` parameter. Increasing this parameter is only necessary in rare cases, such as when you run many simultaneous index builds with a single ``createIndexes`` command or when you index a data set larger than 500GB.
Each ``createIndexes`` command has a limit of ``maxIndexBuildMemoryUsageMegabytes``. When using the default ``maxNumActiveUserIndexBuilds`` of 3, the total memory usage for all concurrent index builds can reach up to 3 times the value of ``maxIndexBuildMemoryUsageMegabytes``.
The ``maxIndexBuildMemoryUsageMegabytes`` limit applies to all index builds initiated by user commands like ``createIndexes`` or administrative processes like initial sync. An initial sync populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously.
- Collation and Index Types
The following index types only support simple binary comparison and do not support collation:
Tip
To create a ``text`` or ``2d`` index on a collection that has a non-simple collation, you must explicitly specify ``{ collation: { locale: "simple" } }`` when creating the index.
- Hidden Indexes
You cannot hide the ``_id`` index.
You cannot use ``hint()`` on a hidden index.
Sorts
Capped Collections
- Maximum Number of Documents in a Capped Collection
If you specify the maximum number of documents in a capped collection with ``create``'s ``max`` parameter, the value must be less than 2^31 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
Replica Sets
- Number of Voting Members of a Replica Set
Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.
- Maximum Size of Auto-Created Oplog
If you do not explicitly specify an oplog size (i.e. with ``oplogSizeMB`` or ``--oplogSize``), MongoDB will create an oplog that is no larger than 50 gigabytes. [1]
[1] The oplog can grow past its configured size limit to avoid deleting the majority commit point.
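For the WiredTiger storage engine, the auto-created oplog is sized as a percentage of free disk space, clamped between a lower bound and the 50 GB cap. The sketch below assumes the documented WiredTiger defaults of 5% of free disk space with a 990 MB floor; ``default_oplog_size_mb`` is an illustrative helper, not server code.

```python
def default_oplog_size_mb(free_disk_mb: float) -> float:
    """Sketch of WiredTiger default oplog sizing:
    5% of free disk, clamped to [990 MB, 50 GB] (50 GB = 51200 MB)."""
    return min(max(0.05 * free_disk_mb, 990), 51200)
```

For example, a host with 100 GB of free disk gets roughly a 5 GB oplog, while very small or very large disks hit the floor or the cap.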
Sharded Clusters
- Operations Unavailable in Sharded Environments
``$where`` does not permit references to the ``db`` object from the ``$where`` function. This is uncommon in unsharded collections.
- Covered Queries in Sharded Clusters
When run on ``mongos``, indexes can only cover queries on sharded collections if the index contains the shard key.
- Single Document Modification Operations in Sharded Collections
To use ``update`` and ``remove()`` operations for a sharded collection that specify the ``justOne`` or ``multi: false`` option:
If you only target one shard, you can use a partial shard key in the query specification, or
You can provide the shard key or the ``_id`` field in the query specification.
- Unique Indexes in Sharded Collections
MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.
- Maximum Number of Documents Per Range to Migrate
By default, MongoDB cannot move a range if the number of documents in the range is greater than 2 times the result of dividing the configured range size by the average document size. If MongoDB can move a sub-range of a chunk and reduce the size to less than that, the balancer does so by migrating a range.
``db.collection.stats()`` includes the ``avgObjSize`` field, which represents the average document size in the collection.
For chunks that are too large to migrate:
The balancer setting ``attemptToBalanceJumboChunks`` allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Ranges that Exceed Size Limit for details.
When issuing ``moveRange`` and ``moveChunk`` commands, it's possible to specify the ``forceJumbo`` option to allow for the migration of ranges that are too large to move. The ranges may or may not be labeled jumbo.
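The migration threshold described above is simple arithmetic: a range is too large to move when its document count exceeds twice the configured range size divided by the average document size. A sketch (``max_documents_per_migration`` and ``range_is_migratable`` are illustrative helpers, not server code):

```python
def max_documents_per_migration(range_size_bytes: int, avg_obj_size_bytes: int) -> float:
    """A range cannot be moved when its document count exceeds
    2 * (configured range size / average document size)."""
    return 2 * (range_size_bytes / avg_obj_size_bytes)

def range_is_migratable(doc_count: int, range_size_bytes: int, avg_obj_size_bytes: int) -> bool:
    return doc_count <= max_documents_per_migration(range_size_bytes, avg_obj_size_bytes)
```

For example, with the default 128 MiB range size and a 2 KiB ``avgObjSize``, the balancer can move ranges of up to 131,072 documents.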
Shard Keys
- Shard Key Index Type
A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.
A shard key index cannot be:
A descending index on the shard key
Any of the following index types:
- Shard Key Selection
Your options for changing a shard key depend on the version of MongoDB that you are running:
Starting in MongoDB 5.0, you can reshard a collection by changing a collection's shard key.
You can refine a shard key by adding a suffix field or fields to the existing shard key.
- Monotonically Increasing Shard Keys Can Limit Insert Throughput
For clusters with high insert volumes, a shard key with monotonically increasing and decreasing keys can affect insert throughput. If your shard key is the ``_id`` field, be aware that the default values of the ``_id`` fields are ObjectIds, which have generally increasing values.
When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.
If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.
To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.
Hashed shard keys and hashed indexes store hashes of keys with ascending values.
Operations
- Sort Operations
If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform an in-memory sort operation on the data.
For more information on sorts and index use, see Sort and Index Use.
- Aggregation Pipeline Stages
MongoDB limits the number of aggregation pipeline stages allowed in a single pipeline to 1000.
If an aggregation pipeline exceeds the stage limit before or after being parsed, you receive an error.
- Aggregation Pipeline Memory
Starting in MongoDB 6.0, the ``allowDiskUseByDefault`` parameter controls whether pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default.
If ``allowDiskUseByDefault`` is set to ``true``, pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default. You can disable writing temporary files to disk for specific ``find`` or ``aggregate`` commands using the ``{ allowDiskUse: false }`` option.
If ``allowDiskUseByDefault`` is set to ``false``, pipeline stages that require more than 100 megabytes of memory to execute raise an error by default. You can enable writing temporary files to disk for specific ``find`` or ``aggregate`` commands using the ``{ allowDiskUse: true }`` option.
The ``$search`` aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.
Examples of stages that can write temporary files to disk when ``allowDiskUse`` is ``true`` include ``$sort`` when the sort operation is not supported by an index.
Note
Pipeline stages operate on streams of documents with each pipeline stage taking in documents, processing them, and then outputting the resulting documents.
Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.
If the results of one of your ``$sort`` pipeline stages exceed the limit, consider adding a ``$limit`` stage.
The profiler log messages and diagnostic log messages include a ``usedDisk`` indicator if any aggregation stage wrote data to temporary files due to memory restrictions.
- Aggregation and Read Concern
The ``$out`` stage cannot be used in conjunction with read concern ``"linearizable"``. If you specify ``"linearizable"`` read concern for ``db.collection.aggregate()``, you cannot include the ``$out`` stage in the pipeline.
The ``$merge`` stage cannot be used in conjunction with read concern ``"linearizable"``. That is, if you specify ``"linearizable"`` read concern for ``db.collection.aggregate()``, you cannot include the ``$merge`` stage in the pipeline.
- Geospatial Queries
Using a ``2d`` index for queries on spherical data can return incorrect results or an error. For example, ``2d`` indexes don't support spherical queries that wrap around the poles.
- Geospatial Coordinates
Valid longitude values are between ``-180`` and ``180``, both inclusive.
Valid latitude values are between ``-90`` and ``90``, both inclusive.
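These bounds are easy to check before inserting GeoJSON data (``valid_coordinates`` is a hypothetical helper, not a driver API; note that GeoJSON orders coordinates as longitude first, then latitude):

```python
def valid_coordinates(longitude: float, latitude: float) -> bool:
    """Longitude must be in [-180, 180] and latitude in [-90, 90], inclusive."""
    return -180 <= longitude <= 180 and -90 <= latitude <= 90
```

For example, ``valid_coordinates(181, 0)`` is false because the longitude is out of range.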
- Area of GeoJSON Polygons
For ``$geoIntersects`` or ``$geoWithin``, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the ``$geometry`` expression; otherwise, ``$geoIntersects`` or ``$geoWithin`` queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, ``$geoIntersects`` or ``$geoWithin`` queries for the complementary geometry.
- Multi-document Transactions
For multi-document transactions:
You can create collections and indexes in transactions. For details, see Create Collections and Indexes in a Transaction.
The collections used in a transaction can be in different databases.
Note
You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
You cannot write to capped collections.
You cannot use read concern ``"snapshot"`` when reading from a capped collection. (Starting in MongoDB 5.0)
You cannot read/write to collections in the ``config``, ``admin``, or ``local`` databases.
You cannot write to ``system.*`` collections.
You cannot return the supported operation's query plan using ``explain`` or similar commands.
For cursors created outside of a transaction, you cannot call ``getMore`` inside the transaction.
For cursors created in a transaction, you cannot call ``getMore`` outside the transaction.
You cannot specify the ``killCursors`` command as the first operation in a transaction.
Additionally, if you run the ``killCursors`` command within a transaction, the server immediately stops the specified cursors. It does not wait for the transaction to commit.
The following operations are not allowed in transactions:
Creating new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
Explicit creation of collections, e.g. the ``db.createCollection()`` method, and indexes, e.g. the ``db.collection.createIndexes()`` and ``db.collection.createIndex()`` methods, when using a read concern level other than ``"local"``.
The ``listCollections`` and ``listIndexes`` commands and their helper methods.
Other non-CRUD and non-informational operations, such as ``createUser``, ``getParameter``, ``count``, etc. and their helpers.
Parallel operations. To update multiple namespaces concurrently, consider using the ``bulkWrite`` command instead.
Transactions have a lifetime limit as specified by ``transactionLifetimeLimitSeconds``. The default is 60 seconds.
- Write Command Batch Limit Size
There is no limit on the number of write operations that the driver can handle. Drivers group operations into batches according to ``maxWriteBatchSize``, which is 100,000 and cannot be modified. If a batch contains more than 100,000 operations, the driver divides it into smaller groups with counts less than or equal to ``maxWriteBatchSize``. For example, if the operation contains 250,000 operations, the driver creates three batches: two with 100,000 operations and one with 50,000 operations.
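The batching arithmetic can be sketched directly (``batch_sizes`` is an illustrative helper modeling driver behavior, not an actual driver function):

```python
MAX_WRITE_BATCH_SIZE = 100_000  # maxWriteBatchSize; fixed, cannot be modified

def batch_sizes(total_ops: int, batch_size: int = MAX_WRITE_BATCH_SIZE) -> list:
    """Split a bulk write into driver batches of at most batch_size operations."""
    full_batches, remainder = divmod(total_ops, batch_size)
    return [batch_size] * full_batches + ([remainder] if remainder else [])
```

For example, 250,000 operations produce three batches: two of 100,000 and one of 50,000.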
- Views
A view definition ``pipeline`` cannot include the ``$out`` or the ``$merge`` stage. This restriction also applies to embedded pipelines, such as pipelines used in ``$lookup`` or ``$facet`` stages.
Views have the following operation restrictions:
Views are read-only.
You cannot rename views.
``find()`` operations on views do not support the following find command projection operators:
Views do not support ``$text``.
Views do not support map-reduce operations.
- Projection Restrictions
- ``$``-Prefixed Field Path Restriction
The ``find()`` and ``findAndModify()`` projection cannot project a field that starts with ``$``, with the exception of the DBRef fields.
For example, the following operation is invalid:
db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } )
- ``$`` Positional Operator Placement Restriction
The ``$`` projection operator can only appear at the end of the field path, for example ``"field.$"`` or ``"fieldA.fieldB.$"``.
For example, the following operation is invalid:
db.inventory.find( { }, { "instock.$.qty": 1 } )
To resolve, remove the component of the field path that follows the ``$`` projection operator.
- Empty Field Name Projection Restriction
``find()`` and ``findAndModify()`` projection cannot include a projection of an empty field name.
For example, the following operation is invalid:
db.inventory.find( { }, { "": 0 } )
In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.
- Path Collision: Embedded Documents and Its Fields
You cannot project an embedded document with any of the embedded document's fields.
For example, consider a collection ``inventory`` with documents that contain a ``size`` field:
{ ..., size: { h: 10, w: 15.25, uom: "cm" }, ... }
The following operation fails with a ``Path collision`` error because it attempts to project both the ``size`` document and the ``size.uom`` field:
db.inventory.find( {}, { size: 1, "size.uom": 1 } )
In previous versions, the lattermost projection between the embedded document and its fields determines the projection:
If the projection of the embedded document comes after any and all projections of its fields, MongoDB projects the embedded document. For example, the projection document ``{ "size.uom": 1, size: 1 }`` produces the same result as the projection document ``{ size: 1 }``.
If the projection of the embedded document comes before the projection of any of its fields, MongoDB projects the specified field or fields. For example, the projection document ``{ "size.uom": 1, size: 1, "size.h": 1 }`` produces the same result as the projection document ``{ "size.uom": 1, "size.h": 1 }``.
- Path Collision: ``$slice`` of an Array and Embedded Fields
``find()`` and ``findAndModify()`` projection cannot contain both a ``$slice`` of an array and a field embedded in the array.
For example, consider a collection ``inventory`` that contains an array field ``instock``:
{ ..., instock: [ { warehouse: "A", qty: 35 }, { warehouse: "B", qty: 15 }, { warehouse: "C", qty: 35 } ], ... }
The following operation fails with a ``Path collision`` error:
db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } )
In previous versions, the projection applies both projections and returns the first element (``$slice: 1``) in the ``instock`` array but suppresses the ``warehouse`` field in the projected element. Starting in MongoDB 4.4, to achieve the same result, use the ``db.collection.aggregate()`` method with two separate ``$project`` stages.
- ``$`` Positional Operator and ``$slice`` Restriction
``find()`` and ``findAndModify()`` projection cannot include a ``$slice`` projection expression as part of a ``$`` projection expression.
For example, the following operation is invalid:
db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } )
In previous versions, MongoDB returns the first element (``instock.$``) in the ``instock`` array that matches the query condition; i.e. the positional projection ``"instock.$"`` takes precedence and the ``$slice: 1`` is a no-op. The ``"instock.$": { $slice: 1 }`` does not exclude any other document field.
Sessions
- Sessions and $external Username Limit
To use Client Sessions and Causal Consistency Guarantees with ``$external`` authentication users (Kerberos, LDAP, or X.509 users), usernames cannot be greater than 10k bytes.
- Session Idle Timeout
Sessions that receive no read or write operations for 30 minutes, or that are not refreshed using ``refreshSessions``, are marked as expired and can be closed by the MongoDB server. Closing a session kills any in-progress operations and open cursors, including those configured with ``noCursorTimeout()`` or a ``maxTimeMS()`` greater than 30 minutes.
Long-Running Cursor Operations
For operations that return a cursor that may be idle for longer than 30 minutes, issue the operation within an explicit session using ``Mongo.startSession()`` and periodically refresh the session using ``refreshSessions``. For example:

var session = db.getMongo().startSession()
var sessionId = session
sessionId // show the sessionId
var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start
while (cursor.hasNext()) {
  // Check if more than 5 minutes have passed since the last refresh
  if ( (new Date() - refreshTimestamp) / 1000 > 300 ) {
    print("refreshing session")
    db.adminCommand({"refreshSessions" : [sessionId]})
    refreshTimestamp = new Date()
  }
  // process cursor normally
}

This example uses ``refreshSessions`` every 5 minutes to prevent the 30-minute idle timeout. The cursor remains open indefinitely.
For MongoDB drivers, see the driver documentation for creating sessions.
Atlas-Only Limitations
The following limitations apply only to deployments hosted in MongoDB Atlas. If any of these limits present a problem for your organization, contact Atlas support.
Cluster Limits
Component | Limit |
|---|---|
Shards in multi-region clusters | 12 |
Shards in single-region clusters | 70 |
Cross-region network permissions for a multi-region cluster | |
Electable nodes per replica set or shard | 7 |
Cluster tier for the Config server (minimum and maximum) | |
Connection Limits by Cluster Tier
MongoDB Atlas limits concurrent incoming connections based on cluster tier and class.
Connection limits apply per node
For sharded clusters, limits apply per mongos router (number of routers equals the number of replica set nodes across all shards)
Your read preference also affects total connections allocated per query
Connection limits for cluster tiers:
Note
MongoDB Atlas reserves a small number of connections to each cluster for supporting MongoDB Atlas services.
Multi-Cloud Connection Limitation
If you're connecting to a multi-cloud MongoDB Atlas deployment through a private connection, you can access only the nodes in the same cloud provider you're connecting from. This cloud provider might not have the primary node in its region. When this happens, specify the secondary read preference mode in the connection string.
To access all nodes from your current provider through a private connection, configure a VPN or private endpoint to MongoDB Atlas for each remaining cloud provider.
Collection and Index Limits
There is no hard limit on the number of collections in a MongoDB Atlas cluster. However, performance degrades with many collections and indexes. Larger collections have greater impact.
Recommended maximum combined number of collections and indexes by cluster tier:
Cluster Tier | Recommended Maximum |
|---|---|
| 5,000 collections and indexes |
| 10,000 collections and indexes |
| 100,000 collections and indexes |
Organization and Project Limits
MongoDB Atlas deployments have the following organization and project limits:
Component | Limit |
|---|---|
Database users per project | 100 |
Atlas users per project | 500 |
Atlas users per organization | 500 |
API Keys per organization | 500 |
Access list entries per project | 200 |
Users per team | 250 |
Teams per project | 100 |
Teams per organization | 250 |
Teams per user | 100 |
Organizations per user | 250 |
Linked organizations per cross-organization configuration | 250 |
Clusters per project | 25 |
Projects per organization | 250 |
Custom MongoDB roles per project | 100 |
Assigned roles per database user | 100 |
Hourly billing per organization | $50 |
Federated database instances per project | 25 |
Total Network Peering Connections per project | |
Pending network peering connections per project | 25 |
AWS Private Link addressable targets per region | 50 |
Azure PrivateLink addressable targets per region | 150 |
Unique shard keys per MongoDB Atlas-managed Global Cluster project |
|
| 1 |
Number of alert configurations per project | 500 |
Service Account Limits
Component | Limit |
|---|---|
Atlas service accounts per organization | 200 |
Access list entries per service account | 200 |
Secrets per service account | 2 |
Active tokens per service account | 100 |
Label Limits
MongoDB Atlas limits the length and enforces RegEx requirements for the following component labels:
Component | Character Limit | RegEx Pattern |
|---|---|---|
Cluster Name | 64 [2] |
|
Project Name | 64 |
|
Organization Name | 64 |
|
API Key Description | 250 |
| [2] | If you have peering-only mode enabled, the cluster name character limit is 23. |
| [3] | MongoDB Atlas uses the first 23 characters of a cluster's name. These characters must be unique within the cluster's project. Cluster names with fewer than 23 characters can't end with a hyphen (-). Cluster names with more than 23 characters can't have a hyphen as the 23rd character. |
| [4] | (1, 2) Organization and project names can include any Unicode letter or number plus the following punctuation: -_.(),:&@+'. |
Free and Flex Cluster Limitations
Additional limitations apply to MongoDB Atlas free clusters and Flex clusters. To learn more, see:
Command Limitations
Some MongoDB commands are unsupported in MongoDB Atlas. Additionally, some commands are supported only in MongoDB Atlas free clusters. To learn more, see: