
Restore a Sharded Cluster from a Snapshot

When you restore a cluster from a snapshot, Ops Manager provides you with restore files for the selected restore point.

To learn about the restore process, see Restore Overview.

Changed in Ops Manager 3.6: Point-in-Time Restores

Prior to 3.6, the Backup Daemon created the complete point-in-time restore on its host. Starting in 3.6, you download a client-side tool along with your snapshot. This tool downloads and applies the oplog to a snapshot on your client system, which reduces the network and storage needs of your Ops Manager deployment.

Considerations

Review change to BinData BSON sub-type

The BSON specification changed the default subtype for the BSON binary datatype (BinData) from 2 to 0. Some binary data stored in a snapshot may be BinData subtype 2. The Backup Agent automatically detects and converts snapshot data in BinData subtype 2 to BinData subtype 0. If your application code expects BinData subtype 2, you must update your application code to work with BinData subtype 0.
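
For example, you can check which subtype the mongo shell reports for a binary field. This is an illustrative sketch; the database, collection, and field names are hypothetical:

# Illustrative only: "test", "mycoll", and "payload" are hypothetical names.
# The shell prints the subtype as the first argument to BinData: legacy data
# displays as BinData(2, "..."), converted data as BinData(0, "...").
mongo --eval 'printjson(db.getSiblingDB("test").mycoll.findOne({}, { payload: 1 }))'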

See also

The notes on the BSON specification explain the specifics of this change.

Restore using settings given in restoreInfo.txt

The backup restore file includes a metadata file named restoreInfo.txt. This file captures the options the database used when the snapshot was taken; you must run the restored database with the same options. The file contains the following fields (an illustrative sample follows the list):

  • Group name

  • Replica Set name

  • Cluster ID (if applicable)

  • Snapshot timestamp (as a Timestamp, in UTC)

  • Last Oplog applied (as a BSON Timestamp, in UTC)

  • MongoDB version

  • Storage engine type

  • mongod startup options used on the database when the snapshot was taken

  • Encryption (Only appears if encryption is enabled on the snapshot)

  • Master Key UUID (Only appears if encryption is enabled on the snapshot)

    If restoring from an encrypted backup, you must have a certificate provisioned for this Master Key.
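
An illustrative restoreInfo.txt for an unencrypted snapshot might look like the following. All values are invented for this example, and the exact labels may differ between Ops Manager versions:

Group Name: MyProject
Replica Set Name: myShard_0
Cluster Id: 5c3e2d9a8b1f4e0012ab34cd
Snapshot Timestamp: Wed Dec 12 06:00:00 GMT 2018
Last Oplog Applied: Wed Dec 12 05:59:57 GMT 2018 (1544594397, 1)
MongoDB Version: 3.6.9
Storage Engine: wiredTiger
mongod Startup Options: { "net" : { "port" : 27018 }, "replication" : { "replSetName" : "myShard_0" }, "storage" : { "dbPath" : "/data/db" } }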

Snapshots when Agent Cannot Stop Balancer

Ops Manager displays a warning next to cluster snapshots taken while the balancer is enabled. If you restore from such a snapshot, you run the risk of lost or orphaned data. For more information, see Snapshots when Agent Cannot Stop Balancer.

Secure Copy (SCP) Delivery

Important

Restore delivery via SCP was removed in Ops Manager 4.0.

Prerequisites

Restore from Encrypted Backup Requires Same Master Key

To restore from an encrypted backup, you need the same master key used to encrypt the backup and either the same certificate that is on the Backup Daemon host or a new certificate provisioned with that key from the KMIP host.

If the snapshot is encrypted, the restore panel displays the KMIP master key ID and the KMIP server information. You can also find this information when you view the snapshot itself, as well as in the restoreInfo.txt file.

Disable Client Requests to MongoDB during Restore

You must ensure that the MongoDB deployment does not receive client requests during restoration. You must either:

  • Restore to new systems with new hostnames and reconfigure your application code once the new deployment is running, or
  • Ensure that the MongoDB deployment will not receive client requests while you restore data (one approach is sketched after this list).
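
For the second option, one possible approach (a sketch, not a prescribed procedure) is to run the restored mongod bound only to localhost on a temporary port, so application clients cannot reach it until the restore is verified:

# Illustrative only: a localhost-only binding on a temporary port keeps
# application traffic away from the deployment during the restore.
mongod --dbpath <temp-database-path> --port 27117 --bind_ip 127.0.0.1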

Restore a Snapshot

To have Ops Manager automatically restore the snapshot:

1

Click Backup, then the Overview tab.

2

Click the deployment, then click Restore or Download.

3

Select the restore point.

  1. Choose the point from which you want to restore your backup.

    Snapshot
      Allows you to choose one stored snapshot.
      Action: Select an existing snapshot to restore.

    Point In Time
      Allows you to choose a date and time as your restore time objective for your snapshot. By default, the Oplog Store stores 24 hours of data.

      Example

      If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

      Important

      • You must enable cluster checkpoints to perform a PIT restore on a sharded cluster. If no checkpoints that include your date and time are available, Ops Manager asks you to choose another point in time.

      • You cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup.

      Action: Select a Date and Time.
  2. Click Next.

  3. If you chose Point In Time, a list of Checkpoints closest to the time you selected appears. You may choose one of the listed checkpoints to start your point-in-time restore, or click Choose another point in time to remove the list of checkpoints and select another date and time from the menus.

4

Choose to restore the files to another cluster.

  1. Click Choose Cluster to Restore to.

  2. Complete the following fields:

    Project
      Select a project to which you want to restore the snapshot.

    Cluster to Restore to
      Select a cluster to which you want to restore the snapshot. Ops Manager must manage the target sharded cluster.

      Warning

      Automation removes all existing data from the cluster. All backup data and snapshots for the existing cluster are preserved.

  3. Click Restore.

    Ops Manager notes how much storage space the restore requires.

5

Click Restore.

To download the snapshot files and restore the cluster manually:

1

Click Backup, then the Overview tab.

2

Click the deployment, then click Restore or Download.

3

Select the restore point.

  1. Choose the point from which you want to restore your backup.

    Snapshot
      Allows you to choose one stored snapshot.
      Action: Select an existing snapshot to restore.

    Point In Time
      Allows you to choose a date and time as your restore time objective for your snapshot. By default, the Oplog Store stores 24 hours of data.

      Example

      If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

      Important

      • You must enable cluster checkpoints to perform a PIT restore on a sharded cluster. If no checkpoints that include your date and time are available, Ops Manager asks you to choose another point in time.

      • You cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup.

      Action: Select a Date and Time.
  2. Click Next.

  3. If you chose Point In Time, a list of Checkpoints closest to the time you selected appears. You may choose one of the listed checkpoints to start your point-in-time restore, or click Choose another point in time to remove the list of checkpoints and select another date and time from the menus.

  4. Once you have selected a checkpoint, apply the oplog to the snapshot to bring it to the date and time you selected. The oplog is applied for all operations up to but not including the selected time.

4

Click Download to restore the files manually.

5

Configure the snapshot download.

  1. Configure the following download options:

    Pull Restore Usage Limit
      Select how many times the link can be used. If you select No Limit, the link is re-usable until it expires.

    Restore Link Expiration (in hours)
      Select the number of hours until the link expires. The default value is 1. The maximum value is the number of hours until the selected snapshot expires.
  2. Click Finalize Request.

  3. If you use 2FA, Ops Manager prompts you for your 2FA code. Enter your 2FA code, then click Finalize Request.

6

Retrieve the snapshots.

Ops Manager creates links to the snapshot. By default, these links are available for an hour and can be used just once.

To download the snapshots (a command-line example follows this list):

  1. If you closed the restore panel, click Backup, then Restore History.
  2. When the restore job completes, click get link for each shard and for one of the config servers.
  3. Click:
    • The copy button to the right of the link to copy the link to use it later, or
    • Download to download the snapshot immediately.
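
If you copied a link, you can also retrieve the archive from the command line. This is a minimal sketch; <restoreLink> stands for the URL copied from the restore panel:

# Illustrative only: -L follows redirects, -O saves the archive under
# its remote file name.
curl -L -O "<restoreLink>"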

Extra step for point-in-time restores

For point-in-time and oplog timestamp restores, additional instructions are shown. The final step shows the full command you must run using the mongodb-backup-restore-util. It includes all of the necessary options to ensure a full restore.

Select and copy the mongodb-backup-restore-util command provided under Run Binary with PIT Options.

7

Restore the snapshot data files to the destination host.

Extract the snapshot archive for the config server and for each shard to a temporary location.

Example

# Extract the snapshot archive, then move the extracted files
# to a temporary database path.
tar -xvf <backupSnapshot>.tar.gz
mv <backupSnapshot> <temp-database-path>
8

Run the MongoDB Backup Restore Utility (Point-in-Time Restore Only).

  1. Download the MongoDB Backup Restore Utility to your host.

    Note

    If you closed the restore panel, click Backup, then More and then Download MongoDB Backup Restore Utility.

  2. Start a mongod instance using the extracted snapshot directory as the data directory.

    Example

    # Run a temporary mongod on the extracted snapshot. Disabling the
    # TTL monitor prevents expired documents from being removed while
    # the oplog is replayed.
    mongod --port <port number> \
      --dbpath <temp-database-path> \
      --setParameter ttlMonitorEnabled=false
    
  3. Run the MongoDB Backup Restore Utility on your destination host. Run it once for the config server and each shard.

    Pre-configured mongodb-backup-restore-util command

    Ops Manager provides the mongodb-backup-restore-util with the appropriate options for your restore on the restore panel under Run Binary with PIT Options.

    You should copy the mongodb-backup-restore-util command provided in the Ops Manager Application.

    ./mongodb-backup-restore-util --https --host <targetHost> \
      --port <targetPort> \
      --opStart <opLogStartTimeStamp> \
      --opEnd <opLogEndTimeStamp> \
      --logFile <logPath> \
      --oplogSourceAddr <oplogSourceAddr> \
      --apiKey <apiKey> \
      --groupId <groupId> \
      --rsId <rsId> \
      --whitelist <database1.collection1, database2, etc.> \
      --blacklist <database1.collection1, database2, etc.> \
      --seedReplSetMember \
      --oplogSizeMB <size> \
      --seedTargetPort <port> \
      --ssl \
      --sslCAFile <path> \
      --sslPEMKeyFile <path>
    

    The mongodb-backup-restore-util command uses the following options:

    Option Required Description
    --https Optional Use if you need TLS/SSL to connect to the --oplogSourceAddr.
    --host Required Provide the hostname or IP address for the host that serves the mongod to which the oplog should be applied. If you copied the mongodb-backup-restore-util command provided in the Ops Manager Application, this field is pre-configured.
    --port Required Provide the port for the host that serves the mongod to which the oplog should be applied. If you copied the mongodb-backup-restore-util command provided in the Ops Manager Application, this field is pre-configured.
    --opStart Required Provide the BSON timestamp for the first oplog entry you want to include in the restore. If you copied the mongodb-backup-restore-util command provided in the Ops Manager Application, this field is pre-configured.
    --opEnd Required Provide the BSON timestamp for the last oplog entry you want to include in the restore. If you copied the mongodb-backup-restore-util command provided in the Ops Manager Application, this field is pre-configured.
    --logFile Optional Provide a path, including the file name, where the MongoDB Backup Restore Utility log is written.
    --oplogSourceAddr Required Provide the URL for the Ops Manager resource endpoint. If you copied the mongodb-backup-restore-util command provided in the Ops Manager Application, this field is pre-configured.
    --apiKey Required Provide your Ops Manager Agent API Key. If you copied the mongodb-backup-restore-util command provided in the Ops Manager Application, this field is pre-configured.
    --groupId Required Provide the group ID. If you copied the mongodb-backup-restore-util command provided in the Ops Manager Application, this field is pre-configured.
    --rsId Required Provide the replica set ID. If you copied the mongodb-backup-restore-util command provided in the Ops Manager Application, this field is pre-configured.
    --whitelist Optional Provide a list of databases and/or collections to which you want to limit the restore.
    --blacklist Optional Provide a list of databases and/or collections that you want to exclude from the restore.
    --seedReplSetMember Optional

    Use if you need a replica set member to re-create the oplog collection and seed it with the correct timestamp.

    Requires --oplogSizeMB and --seedTargetPort.

    --oplogSizeMB Conditional

    Provide the oplog size in MB.

    Required if --seedReplSetMember is set.

    --seedTargetPort Conditional

    Provide the port for the replica set’s primary. This may be different from the ephemeral port used.

    Required if --seedReplSetMember is set.

    --ssl Optional Use if you need TLS/SSL to apply oplogs to the mongod. Requires --sslCAFile and --sslPEMKeyFile.
    --sslCAFile Conditional

    Provide the path to the CA file.

    Required if --ssl is set.

    --sslPEMKeyFile Conditional

    Provide the path to the PEM certificate file.

    Required if --ssl is set.

    --sslPEMKeyFilePwd Conditional

    Provide the password for the PEM certificate file specified in --sslPEMKeyFile.

    Required if --ssl is set.
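
    As a sketch, a filled-in invocation for one shard might look like the following. All hostnames, timestamps, and IDs are invented for this example; copy the real command from the restore panel, which pre-fills these options:

    # Illustrative values only.
    ./mongodb-backup-restore-util --https --host shard0.example.net \
      --port 27017 \
      --opStart 1544592000:1 \
      --opEnd 1544594397:1 \
      --oplogSourceAddr https://opsmanager.example.net:8443 \
      --apiKey <agentApiKey> \
      --groupId 5c3e2d9a8b1f4e0012ab34cd \
      --rsId myShard_0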

9

Copy the completed snapshots to restore to other hosts.

  • For the config server, copy the restored config server database to the working database path of each replica set member.
  • For each shard, copy the restored shard database to the working database path of each replica set member (an example command follows this list).
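
For example, you might copy the restored files over SSH. The hosts and paths below are illustrative:

# Illustrative only: repeat for each member of the config server
# replica set and of each shard replica set.
scp -r <temp-database-path>/ mongodb@shard0-member1.example.net:/data/db/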
10

Unmanage the Sharded Cluster.

Before attempting to restore the data manually, remove the sharded cluster from Automation.

11

Restore the Sharded Cluster Manually.

Follow the tutorial from the MongoDB Manual to restore the sharded cluster.

12

Reimport the Sharded Cluster.

To manage the sharded cluster with automation again, import the sharded cluster back into Ops Manager.

13

Start the Sharded Cluster Balancer.

Once a restore completes, the sharded cluster balancer is turned off. To start the balancer from the Ops Manager UI (a shell alternative follows these steps):

  1. Click Deployment.
  2. Click the ellipsis icon on the card for your desired sharded cluster.
  3. Click Manage Balancer.
  4. Click the pencil icon to the right of Set the Balancer State.
  5. Toggle to Yes.
  6. Click Save.
  7. Click Review & Deploy to save the changes.
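
If you manage the balancer outside the Ops Manager UI, you can also start it from a mongo shell connected to a mongos. Treat this as an illustrative alternative; for a cluster managed by Automation, prefer the UI steps above:

# Illustrative only: starts the balancer from a mongos.
mongo --host <mongosHost> --port 27017 --eval 'sh.startBalancer()'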