We are currently using mongodump to back up our databases. Despite using the `--gzip` option in our scripts, we've noticed that the process is quite slow, and the resulting backups are often larger than the original databases. Also, about Opslog: as part of a backup strategy, is it required to be backed up?
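For context, here is roughly what our nightly script runs (the URI and paths below are placeholders, not our real values):

```sh
# Simplified version of our backup command; --oplog is included because we
# currently capture the oplog as part of the dump.
mongodump --uri="mongodb://backup-host:27017" \
  --gzip \
  --oplog \
  --out=/backups/$(date +%F)
```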
Hi @Sujith_Nadh and welcome to the community!
What kind of checking did you do?
Best Regards
Hi Sujith,
It is possible to get zipped files larger than the original if your data consists mostly of binary files such as images and videos, since those formats are already compressed. BSON encoding then adds further overhead when they are stored in the database.
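You can see this effect with any already-compressed file; for example (the file name is just a placeholder):

```sh
# gzip cannot shrink data that is already compressed (JPEG, MP4, etc.);
# the output can even be slightly larger due to gzip's own header overhead.
gzip -k photo.jpg
ls -l photo.jpg photo.jpg.gz
```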
Slow progress can also come down to the power of your machines: compression speed depends on the type and speed of the CPU, as well as the speed and capacity of RAM and disks.
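If you want a rough feel for how CPU-bound the compression step is on your hardware, you can time plain gzip on a sample dump file (mongodump's `--gzip` does not expose a compression level; this is only to illustrate the speed/ratio trade-off your CPU determines):

```sh
# Fastest vs. best compression on a hypothetical sample file:
time gzip -1 -c sample.bson > sample.fast.gz
time gzip -9 -c sample.bson > sample.best.gz
ls -l sample.bson sample.fast.gz sample.best.gz
```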
I could not understand the last part of your post: what is this Opslog? Did you make a typo, or is it some automation program I am not aware of? Can you please clarify? The backup strategy may depend on it.
Sorry, it was a typo: oplog. The backup size was reduced after removing the oplog from the backup.
Thanks for the clarification. Logs can be large, and the oplog can be comparably large if the database itself is small.
However, be careful: only remove it from the backup if you are absolutely sure the data in the backup is final, or at least that you take frequent backups.
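For reference, this is the difference in practice (paths are placeholders). With `--oplog`, mongodump also writes an oplog.bson capturing writes that happen during the dump, which mongorestore can replay with `--oplogReplay` for a point-in-time restore:

```sh
# Dump including an oplog slice, usable for point-in-time restores:
mongodump --gzip --oplog --out=/backups/with-oplog

# Smaller dump without it; consistent only if nothing writes during the dump:
mongodump --gzip --out=/backups/no-oplog
```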
The oplog is itself a capped collection that keeps a log of the operations that modify your data, and removing it at the wrong time can cause data loss:
MongoDB applies database operations on the primary and then records the operations on the primary’s oplog. The secondary members then copy and apply these operations in an asynchronous process.
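You can check how large the oplog is on your deployment from the shell (the connection string below is a placeholder):

```sh
# Configured maximum oplog size in bytes, and a summary of current usage:
mongosh "mongodb://localhost:27017" --eval \
  'db.getSiblingDB("local").oplog.rs.stats().maxSize'
mongosh "mongodb://localhost:27017" --eval \
  'db.printReplicationInfo()'
```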
Please refer to this page about the oplog:
Replica Set Oplog - MongoDB Manual v8.0