Mongodump with gzip from a remote database


To import a database onto my dev team's computers, I'm putting together a command line that copies a portion of a production database. It looks roughly like this:

mongodump --archive --gzip --db=someDistantDB | mongorestore --archive --gzip  --nsFrom='someLocalDB.*' --nsTo='someLocalDB.*'

To improve the performance of the command and save network bandwidth, it is important that the data transferred from my remote DB is gzipped on the remote server before the transfer to my local computer.

I couldn’t find any information on how the --gzip parameter works with a remote database, so here are my questions:

  • Is the data gzipped before the network transfer?
  • If not, is there a way to do it (without opening an SSH session on the remote server)?
  • If not … could this be a feature request? :smiley:

From the tests I did, I have the answer to the first question => the data is transferred uncompressed and gzipped locally.
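To see how much compressing at the source of a pipe can save, I compared the byte counts with and without gzip (plain shell, no MongoDB involved; the sample text is made up):

```shell
# Count the bytes crossing the pipe: uncompressed vs. gzipped at the source.
plain_bytes=$(yes 'the same document over and over' | head -n 10000 | wc -c)
gzip_bytes=$(yes 'the same document over and over' | head -n 10000 | gzip -c | wc -c)
echo "plain=$plain_bytes gzipped=$gzip_bytes"
```

On repetitive data the gzipped count is a small fraction of the plain one, which is why compressing before the transfer matters for a big dump.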

Hi Johan,

As you need to restore on several different computers, I think it’s best that you run mongodump on one of these computers, or if possible on the MongoDB server and from there transfer the file to a common location on your network. You can then point mongorestore to that file on each computer that needs restoring.
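Something along these lines (host name, file name, and database names are just placeholders, not your actual setup):

```shell
# 1. On one machine (or on the MongoDB server itself), dump to a gzipped
#    archive file:
mongodump --host=prod-host --db=someDistantDB --gzip --archive=dump.agz

# 2. Copy dump.agz to a shared location, then run on each dev machine:
mongorestore --gzip --archive=dump.agz \
  --nsFrom='someDistantDB.*' --nsTo='someLocalDB.*'
```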

Does that make sense?

1 Like

Hi @Leandro_Domingues, thank you for your answer. This is actually what we are doing for now; I was working on a lighter solution (that’s why I tested the pipe approach).

Generating the archive on the server has some drawbacks:

  • Either we need to give users SSH access so they can generate their own dump, or we need to build a web service that generates it. (We went with the second option.)

  • The second option has some issues. Mongodump takes some time on our database; big exports can easily take more than 2 minutes, which is hard to monitor/manage behind a web server. For instance: how do we make sure two dumps aren’t running at the same time? How do we report the progress of a dump? And how do we interrupt an export in progress?

  • The generated files need to be cleaned up afterwards.

With the pipe approach I find it lighter ^^

Hi @Leandro_Domingues, I found a temporary workaround, which requires users to have SSH access =>

ssh sh << 'EOF' | mongorestore --archive --gzip --drop --nsFrom='remoteDb.*' --nsTo='localDb.*' --host=""
  mongodump --gzip --ssl --uri='mongodb://' --archive 2>/dev/null
EOF
With this, the data is gzipped before it travels over the network. However, this looks a bit hacky, doesn’t it? :smiley:
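The same pattern with the mongo parts stripped out, to show that the compression happens on the far side of the pipe (the “remote” end is simulated with a local subshell here; with SSH the left-hand side would be the remote command):

```shell
# Compress on the "remote" end of the pipe, decompress locally.
data='hello from the remote side'
received=$( printf '%s' "$data" | gzip -c | gunzip -c )
echo "$received"
```

Only the gzipped bytes cross the pipe; the local side just decompresses them, exactly like mongorestore --gzip does with the archive stream.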

1 Like

I opened a bug / feature request. You may want to follow it here :slight_smile: