Backing up a 1.2TB database in reasonable time

I’m running a 1.2TB (and growing) MongoDB cluster of 3 nodes. Until recently I used ZFS, but I had to migrate to XFS and need to update my backup routine.

With ZFS I’d take a snapshot and upload it, but XFS snapshots seem to be nothing like ZFS snapshots: they require the filesystem to be frozen, which I’d like to avoid.

My best option seems to be mongodump. I tried it today, but it runs extremely slowly when I use the replica set connection string. My question is: can I run mongodump against the local node directly, without the replica set connection string? Would that produce a dump I could actually restore from in case of disaster?
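A sketch of the two connection styles (not from the thread; host names, credentials, and paths are placeholders). With a replica set URI the client performs member discovery and routes reads by read preference; with a plain host/port it talks to that one node only:

```shell
# Replica-set connection string: the client discovers all members and
# routes reads according to readPreference.
mongodump \
  --uri "mongodb://backup:secret@node1:27017,node2:27017,node3:27017/?replicaSet=rs0&readPreference=secondary" \
  --gzip --out /backups/$(date +%F)

# Direct connection to a single member, no replica set discovery.
# --oplog also captures operations that happen during the dump, so
# mongorestore --oplogReplay can restore to a consistent point in time;
# it requires the node to be a replica set member with an oplog.
mongodump --host node2 --port 27017 -u backup -p secret \
  --oplog --gzip --out /backups/$(date +%F)
```

A dump taken this way from a single healthy member is restorable like any other mongodump output; the usual caveat is that a directly-targeted secondary may be slightly behind the primary.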

Hi @Georgi_Danov

I would consider keeping/adding a hidden, non-voting member and leaving it on ZFS, since ZFS is so versatile for snapshot-based backups.

XFS on top of LVM2 can give a similar (in my opinion inferior) experience to ZFS. It allows a near-instant snapshot, which you can then mount and copy off.
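A minimal sketch of that LVM2 workflow, assuming the data directory lives on a logical volume vg0/mongodata with free space left in the volume group for the snapshot’s copy-on-write area (names and sizes are examples). LVM briefly freezes the XFS filesystem for you while the snapshot is created, so no manual xfs_freeze is needed:

```shell
# Create the snapshot; --size reserves CoW space for changes made
# to the origin while the snapshot exists.
lvcreate --snapshot --size 20G --name mongosnap /dev/vg0/mongodata

# Mount it read-only. XFS needs -o nouuid because the snapshot
# carries the same filesystem UUID as the still-mounted origin.
mkdir -p /mnt/mongosnap
mount -o ro,nouuid /dev/vg0/mongosnap /mnt/mongosnap

# Copy the frozen-in-time data off the host.
rsync -a /mnt/mongosnap/ backupserver:/backups/$(date +%F)/

# Drop the snapshot promptly: it consumes CoW space (and costs write
# performance on the origin) for as long as it exists.
umount /mnt/mongosnap
lvremove -y /dev/vg0/mongosnap
```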

As for mongodump, even the manual will dissuade you from this method:

“mongodump and mongorestore are simple and efficient tools for backing up and restoring small MongoDB deployments, but are not ideal for capturing backups of larger systems.”

Thank you. Unfortunately ZFS is not an option because of the type of servers we use: I’d have to grow the disk by adding VM volumes to it, and that is impractical.

XFS with cp --reflink supposedly provides a point-in-time copy very similar to what I’m used to with ZFS snapshots, so right now I’m experimenting with that.
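A sketch of what that experiment might look like (paths are placeholders). Reflink copies require an XFS filesystem formatted with reflink=1 (the mkfs.xfs default on recent distributions); the copy shares data blocks with the source and diverges copy-on-write as either side changes. One caveat worth noting: unlike a ZFS snapshot, cp copies file by file, so it is not atomic across the whole directory; flushing and locking writes for the duration of the copy keeps the result consistent:

```shell
# Verify the filesystem supports reflinks (look for reflink=1).
xfs_info /data | grep reflink

# Flush all pending writes and block new ones while the copy runs.
mongosh --eval 'db.fsyncLock()'

# Near-instant, initially space-free copy: data blocks are shared
# with the originals until either copy is modified.
cp -a --reflink=always /data/db /data/snap-$(date +%F)

mongosh --eval 'db.fsyncUnlock()'

# Upload the snapshot directory at leisure, then delete it to
# release any blocks that have since diverged.
```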