I am writing to ask how Atlas behaves when reducing the storage size of a cluster. We are currently using MongoDB Atlas on the AWS provider, and our cluster shows 184 GB of free storage out of 216 GB total.
In this situation, do we need to run the compact command on each node before reducing the storage size in the cluster configuration, or can Atlas handle the free space on its own without compaction?
Thanks for reading, and I look forward to your response.
All Atlas deployments run on the WiredTiger storage engine. WiredTiger is a no-overwrite data engine: space freed by deleted or rewritten documents is marked as available for reuse inside the data files, and new writes fill those reusable blocks before the files are extended on disk. For normal operation there is no need to forcibly compact your data, as WiredTiger manages block reuse for you.
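If you want to see how much space WiredTiger has marked as reusable in a given collection, you can check the block-manager statistics in mongosh. A minimal sketch, assuming a connection to the cluster; `mycoll` is a placeholder collection name:

```javascript
// mongosh, connected to the cluster; "mycoll" is a placeholder
const stats = db.mycoll.stats();

// Bytes inside the collection's data file that WiredTiger
// can reuse for new writes before growing the file
print(stats.wiredTiger["block-manager"]["file bytes available for reuse"]);
```

Summing this figure across your larger collections gives a rough idea of how much of the 184 GB of free space is reusable file space rather than unallocated disk.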
So whether you “need” to run compact depends on what you’re after. For example, are you planning to reduce the storage configuration for the cluster once you have reclaimed the disk space? Reusable blocks inside the data files are not returned to the operating system on their own, so if your goal is to shrink the provisioned disk, running compact first releases that space back to the OS.
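If you do intend to shrink the configured storage, compact is run per collection on each node. A minimal sketch, assuming you connect directly to one node at a time (secondaries first is the usual practice); `mycoll` is a placeholder collection name:

```javascript
// mongosh, connected directly to a single node (not via the replica set URI);
// "mycoll" is a placeholder collection name
db.runCommand({ compact: "mycoll" });
```

Repeat for each collection you want to compact, then for each remaining node, before lowering the storage size in the Atlas cluster configuration.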