Is the oplog size of a hidden node relevant?

I plan to have a big oplog size for the primary and secondary nodes (around 50GB), and a small oplog size for the hidden node.

The rationale behind it is that from the hidden node I will perform filesystem snapshot backups in Linux. Therefore, I need to minimize the empty storage in the volume holding the database directory, so the backups do not store too much empty space. Additionally, an FS snapshot will take very long on a big logical volume, even if the volume is mostly empty.
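
For context, the backup flow I have in mind looks roughly like this (the LVM snapshot itself happens outside the mongo shell, and the volume details are placeholders):

// On the hidden member: flush pending writes and block new ones so the
// filesystem snapshot is consistent
db.fsyncLock()

// ...take the LVM snapshot of the data volume from another terminal,
// e.g. with lvcreate --snapshot (volume names are placeholders)...

// Unblock writes once the snapshot has been created
db.fsyncUnlock()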

Since a hidden node will never become primary, its small oplog cannot shrink the recovery window of any other node. Right?

Furthermore, if the hidden node goes down, it should recover nicely since:

  • The “visible” (primary and secondary) nodes have a big oplog.
  • Even if the hidden node has a small oplog, the rule for catching up after downtime is that the newest operation in the hidden node’s oplog must still be present in the primary’s oplog. Since the newest operation on the hidden node is, by definition, the last one inserted into its own oplog, it will be there no matter how small that oplog is.

Thus, the hidden node oplog can be very small. Am I missing something?
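
As a quick way to check this in practice, I assume the standard replication helpers in the mongo shell would show each member’s oplog window, e.g.:

// On any member: shows the configured oplog size and the time range
// ("log length start to end") that its oplog currently covers
rs.printReplicationInfo()

// On the primary: shows how far behind each secondary currently is
rs.printSecondaryReplicationInfo()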

Hi @Francisco_Cortes welcome to the community!

As I understand it, you’re using a hidden node to perform backups. Is this correct?

If you’re thinking about restoring the backup to recreate the replica set, then the restored primary would be using the hidden node’s settings, right? Wouldn’t that mean that the newly restored primary mirrors the hidden node’s small oplog (since the backup was made via FS snapshot)? Have you tried a restore procedure using this setup and checked whether this is the case?

There’s an upper limit on the default oplog size and it’s 50GB (see Oplog Size). Unless you have very specific reasons to set up the hidden node differently from the other nodes, I would stick to the defaults and keep every node the same to simplify maintenance.

Best regards
Kevin

Isn’t the node configuration stored somewhere else (not in the data dir of the actual MongoDB database)? As far as I know, the data directory contains these databases: local, admin, and all the custom databases for the application. The setting for the oplog is in /etc/mongod.conf, right? Why would the oplog setting change if I restore the database dir (regardless of the amount of oplog data that is currently stored in the backup source)? Is this not something you have documented?

My understanding is that in this case, the restored primary will still keep its oplog setting of 50GB. However, it will temporarily hold less oplog data (due to the small oplog of the hidden node, the source of the backup), but that small oplog will then start to grow with the new write events happening from there on, and will start rolling over once it reaches 50GB. Is this not what is expected to happen?
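
In other words, after a restore I would expect a check like this on the local database to show the full 50GB cap but only a small amount of data in it (standard stats() fields; just a sketch):

use local
var s = db.oplog.rs.stats()
s.maxSize   // the capped size of the oplog, in bytes
s.size      // the oplog data actually stored right now, in bytes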

For the mongod config, /etc/mongod.conf is one possible place. The oplog size can also be set in a different config file that you reference from the command line, or directly via a command-line parameter.
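
If you want to confirm which of these a running mongod actually picked up, one way (a standard admin command, shown here just as an illustration) is:

// Returns the command-line arguments and the parsed config file options
// that the running mongod was started with
db.adminCommand({ getCmdLineOpts: 1 })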

However, the --oplogSize parameter only takes effect when no oplog exists yet, as mentioned in this paragraph from the oplogSize page:

Once the mongod has created the oplog for the first time, changing the --oplogSize option will not affect the size of the oplog.
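
(If you ever do need to change the size of an existing oplog, MongoDB 3.6+ has a dedicated admin command for it; the value below is only an example and is given in megabytes:)

// Resizes the oplog of the member you are connected to; size is in MB
// (51200 MB = 50 GB, purely as an example)
db.adminCommand({ replSetResizeOplog: 1, size: 51200 })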

Also, since the oplog is a capped collection (see the Replica Set Oplog page), the metadata about it (including its capped size) is stored by the WiredTiger storage engine in the data files themselves, so it comes along with a filesystem snapshot. You can see this information using db.getCollectionInfos() in the mongo shell. For example, in my local OSX deployment:

> db.getCollectionInfos({name:'oplog.rs'})
[
  {
    name: 'oplog.rs',
    type: 'collection',
    options: { capped: true, size: 201326592, autoIndexId: false },
    info: {
      readOnly: false,
      uuid: UUID("ddae36c4-3624-4d3c-91be-9186dcc0d153")
    }
  }
]

Since backup & restore procedures are one of the most vital procedure in your maintenance playbook, I would suggest to thoroughly check for any surprises should you want to deviate from a more straightforward backup/restore procedures of uniformly provisioned nodes.

Best regards
Kevin