Downsize MongoDB disks

We have 3 sharded MongoDB clusters. We enabled 3T on each node.

We did some DB cleanup and disk usage has dropped to 50%. As part of cost savings, we plan to downsize the disks to 2 TB, and we are looking for the best way to do this during business hours.


Hey :wave: @MouliVeera_N_A,

Thank you for reaching out to the MongoDB Community forums. :sparkles:

Regarding your statement, “We enabled 3T on each node,” could you please clarify what you mean by “3T”?

Additionally, it would be helpful if you could share the current size of your MongoDB deployment.

Furthermore, I’d like to emphasize the importance of performing a backup before initiating the downsizing process. Backing up your data ensures that you have a reliable, recoverable copy in case any issues arise during the downsizing. Please refer to the MongoDB Backup Methods documentation to read more about this.
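For example, a backup could be taken with `mongodump` before touching any disks. This is only a sketch: the hostname, credentials, and paths below are placeholders, and for sharded clusters the Backup Methods page linked above describes more robust snapshot-based options.

```shell
# Placeholder host/credentials -- adjust for your own deployment.
# Pause the balancer so chunks don't migrate mid-dump (sharded clusters).
mongosh "mongodb://mongos.example.internal:27017" --eval 'sh.stopBalancer()'

# Compressed archive dump of the whole cluster taken through mongos.
mongodump \
  --uri="mongodb://backupUser:CHANGE_ME@mongos.example.internal:27017" \
  --gzip \
  --archive="/backups/cluster-$(date +%F).archive.gz"

mongosh "mongodb://mongos.example.internal:27017" --eval 'sh.startBalancer()'
```

Verify the archive exists and restores cleanly (e.g. `mongorestore --dryRun`) before proceeding.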

Feel free to provide any further details related to your deployment so that we can assist you more effectively.

Best regards,

3T: It's 3 TB. Our sharded cluster has 3 nodes, and each node has a 3 TB PVC.

The current size of storage used is 950 GB.

The request is to downsize the PVCs to 2 TB.


One possible approach is to add new nodes with 2 TB disks to the same replica set, then remove the ones with 3 TB. But this can be slow, given you have 100+ GB to replicate.
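Sketched with mongosh, that swap could look like the following; the hostnames are hypothetical, and the commands would be repeated for each shard's replica set:

```shell
# Hypothetical hostnames -- repeat for each shard's replica set.
# Add the 2 TB-backed member with priority 0 / votes 0 so initial sync
# completes without affecting elections.
mongosh "mongodb://rs0-a.example.internal:27017" --eval \
  'rs.add({ host: "rs0-new.example.internal:27017", priority: 0, votes: 0 })'

# Wait until the new member reaches SECONDARY state.
mongosh "mongodb://rs0-a.example.internal:27017" --eval \
  'rs.status().members.map(m => m.name + " " + m.stateStr)'

# Restore its priority/votes with rs.reconfig(), then drop an old 3 TB member.
mongosh "mongodb://rs0-a.example.internal:27017" --eval \
  'rs.remove("rs0-old.example.internal:27017")'
```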

Another option:
Create a snapshot of the 3 TB PV, then create a 2 TB PV from the snapshot and add a new node backed by it to the replica set. Replication will then catch up with the new changes.

That works in a VM environment.

We are using Kubernetes StatefulSets, and we can enlarge the volumes using patches and cascade deletes.
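For reference, the enlarge path can be sketched like this; the names `mongo` and `data-mongo-0` are made up, and it only works when the StorageClass has `allowVolumeExpansion: true`:

```shell
# Illustrative names. Expansion requires a StorageClass with
# allowVolumeExpansion: true; most CSI drivers grow the filesystem online.
kubectl patch pvc data-mongo-0 -n mongo \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"4Ti"}}}}'

# volumeClaimTemplates on a StatefulSet are immutable, so delete the
# StatefulSet while leaving its pods running, then recreate it with the
# new size in the template.
kubectl delete statefulset mongo -n mongo --cascade=orphan
kubectl apply -f mongo-statefulset.yaml   # updated volumeClaimTemplates
```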

Downsizing the disks is challenging.
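The snapshot route from the earlier reply translates to Kubernetes roughly as below (illustrative names; it assumes the CSI snapshot controller and a VolumeSnapshotClass are installed). The catch is that most CSI drivers reject a restore whose requested size is below the snapshot's restoreSize, so asking for 2Ti from a 3Ti source will usually fail validation; that is exactly why downsizing is harder than enlarging.

```shell
# Illustrative names -- requires the CSI snapshot controller and a
# VolumeSnapshotClass. Many drivers will refuse the 2Ti restore request
# because it is smaller than the 3Ti source volume.
kubectl apply -n mongo -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongo-0-snap
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: data-mongo-0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-mongo-0-small
spec:
  storageClassName: standard
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Ti
  dataSource:
    name: mongo-0-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF
```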


@Kushagra_Kesav could you share some input on how to move forward?

Hey @MouliVeera_N_A,

When making significant changes to production MongoDB deployments, such as downsizing disks, it is always advisable to take proper backups ahead of time in case anything goes wrong.

In terms of downsizing the disks from 3 TB to 2 TB, this may be possible as long as the data on the volume where MongoDB resides is not currently consuming more than 2 TB of disk space.
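As a quick sanity check (the ~950 GB figure comes from earlier in this thread; the connection string in the comment is a placeholder):

```shell
# To get MongoDB's own view of on-disk usage, run something like:
#   mongosh "mongodb://mongos.example.internal:27017" --eval \
#     'db.adminCommand({ listDatabases: 1 }).totalSize'

# Then check that usage fits in the 2 TB target with ~30% free headroom.
USED_BYTES=$((950 * 1024 * 1024 * 1024))          # ~950 GiB, from the thread
TARGET_BYTES=$((2 * 1024 * 1024 * 1024 * 1024))   # 2 TiB target disk
LIMIT_BYTES=$((TARGET_BYTES * 70 / 100))          # keep 30% free
if [ "$USED_BYTES" -le "$LIMIT_BYTES" ]; then
  echo "fits with headroom"    # → fits with headroom
else
  echo "too tight"
fi
```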

However, this is not really a MongoDB question but rather a Kubernetes operational question. Unfortunately, we don’t really have the expertise to answer this, and even if we had an answer, it might not reflect current best practices for Kubernetes operations. I would suggest asking on StackOverflow or ServerFault for the specifics of live-resizing Kubernetes persistent volumes.