Maximum size of database or collection

I’m a new user and I’m studying MongoDB. I have installed MongoDB as a Docker container on a Linux device. This device writes documents containing information about some data into several MongoDB databases. I would like to know the maximum size of a database and of a collection, and what happens to the database once that size is reached. Is the information overwritten?
I hope you can help me. Thank you


Hi Federica,

The maximum size of an individual document in MongoDB is 16 MB, with a maximum nesting depth of 100 levels.
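As a quick illustration, here is a small Node.js sketch of that per-document limit. This is not MongoDB's own validation: `fitsBsonLimit` is a hypothetical helper, and the JSON text length only approximates the real BSON size.

```javascript
// Hard per-document limits: 16 MB of BSON and 100 levels of nesting.
const MAX_BSON_SIZE = 16 * 1024 * 1024; // 16777216 bytes

// Rough client-side pre-flight check (hypothetical helper; JSON length
// only approximates BSON size -- the server enforces the real limit).
function fitsBsonLimit(doc) {
  return Buffer.byteLength(JSON.stringify(doc), "utf8") <= MAX_BSON_SIZE;
}

console.log(fitsBsonLimit({ device: "linux-box-1", payload: "some data" })); // true
```

If a write exceeds the limit, MongoDB rejects that document rather than overwriting anything, which also answers the "is the information overwritten?" part of the question.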

Edit: There is no max size for an individual MongoDB database.

You can learn more on the MongoDB Limits and Thresholds page. :slight_smile:


Hi Ado,
I had seen the table, but there are different values for the max size: it ranges from 1 TB to 32 TB depending on the Chunk Size and the Average Size of Shard Key Values. I haven’t set either of these two parameters. What values should I consider?
Sorry for all the questions, but I’m having some difficulty understanding how MongoDB works.
Thank you

Hey Federica,

Sorry, I made a mistake in my original reply. As far as database size goes, there is technically no limit on how big an individual database can be.

If you’re using MongoDB Atlas, you won’t ever have to worry about database size as it will scale as you grow.


Welcome to the MongoDB Community @FEDERICA_BO!

Practical limits & thresholds to consider are documented in the MongoDB Limits and Thresholds page that @ado shared earlier.

The table you are referring to is specific to Sharding Existing Collection Data Size and per the Important callout for this section, this limitation only applies to initial sharding:

These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.

Collections are generally sharded well before reaching those collection sizes. Rebalancing TBs of data will take a long while even with great server & network resources. It is best to shard well before it becomes urgent to do so, as data migration will add even more load to a deployment that is already stressed.

The estimation of these limits is explained just above the table. When a collection is initially sharded, a calculation is done to determine how to split the existing data into chunk ranges based on the shard key, with each range representing a data size close to the configured Chunk Size. The list of initial split points is currently returned in a single BSON document, which is subject to the 16 MB document size limit.

What that table is trying to estimate is the size of collections that can be sharded based on varying shard key sizes or chunk sizes:

Use the following formulas to calculate the theoretical maximum collection size.

maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>

maxCollectionSize (MB) = maxSplits * (chunkSize / 2)

Chunk Size should be left at the default value (64MB) unless you have specific motivation to change this (for example, if you waited too long to shard and need a larger chunk size for initial sharding :slight_smile: ). There is no configuration for shard key size: the average size of shard key values will depend on the field(s) you choose for your shard key index and the associated values in the collection being sharded.
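To make the table’s numbers concrete, here is a small JavaScript sketch of that formula. The helper name and the sample inputs are my own: 512 bytes is just an assumed average shard key size, paired with the default 64 MB chunk size.

```javascript
// Theoretical max collection size for the *initial* sharding operation,
// per the formula on the Limits and Thresholds page. The constraint comes
// from the list of split points having to fit in one 16 MB BSON document.
function maxShardableCollectionSizeMB(avgShardKeySizeBytes, chunkSizeMB) {
  const maxSplits = 16777216 / avgShardKeySizeBytes;
  // Each split point represents roughly half a chunk of data:
  return maxSplits * (chunkSizeMB / 2);
}

// Assumed 512-byte average shard key values with the default 64 MB chunks:
const sizeMB = maxShardableCollectionSizeMB(512, 64);
console.log(sizeMB, "MB =", sizeMB / (1024 * 1024), "TB"); // 1048576 MB = 1 TB
```

Remember that this only limits the initial sharding operation; once sharding succeeds, the collection can grow to any size.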

For more background on practical vs theoretical limits, please see my response on this earlier discussion: Database and collection limitations - #2 by Stennie_X.

If you are concerned about managing capacity planning and scaling yourself, MongoDB Atlas would be a significant help with features like Cluster Auto-Scaling and the ability to adjust cluster resources based on your current requirements.



Thank you guys for your explanations. You have been very kind. Now the topic is clearer.


This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.