
Restore a Sharded Cluster from a Backup

When you restore a cluster from a backup, Ops Manager provides you with restore files for the selected restore point. For an overview of the restore process, see Restore Overview.

Changed in Ops Manager 3.6: Point-in-Time Restores

Prior to 3.6, the Backup Daemon created the complete point-in-time restore on its host. Starting in 3.6, you download a client-side tool along with your snapshot. This tool downloads and applies the oplog to a snapshot on your client system. This reduces network and storage needs for your Ops Manager deployment.

Considerations

BinData

The BSON specification changed the default subtype for the BSON binary datatype (BinData) from 2 to 0. Some binary data stored in a snapshot may be BinData subtype 2. The Backup Agent automatically detects and converts snapshot data in BinData subtype 2 to BinData subtype 0. If your application code expects BinData subtype 2, you must update your application code to work with BinData subtype 0.
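If you are not sure which subtype your stored binary data uses, you can inspect a sample document from the mongo shell. This is a minimal sketch; the database, collection, and field names are hypothetical, and printing a BinData value shows its subtype, for example BinData(2,"...") or BinData(0,"...").

# Hypothetical database, collection, and field names; substitute your own.
mongo mydb --quiet --eval 'printjson(db.users.findOne({ avatar: { $exists: true } }).avatar)'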

See also

The notes on the BSON specification explain the specifics of this change.

The backup restore file includes a metadata file, restoreInfo.txt. This file captures the options the database used when the snapshot was taken. The database must be run with the listed options after it has been restored; an example follows the list below.

restoreInfo.txt

This file contains:

  • Group name

  • Replica Set name

  • Cluster Id (if applicable)

  • Snapshot timestamp (as Timestamp at UTC)

  • Last Oplog applied (as a BSON Timestamp at UTC)

  • MongoDB version

  • Storage engine type

  • mongod startup options used on the database when the snapshot was taken

  • Encryption (Only appears if encryption is enabled on the snapshot)

  • Master Key UUID (Only appears if encryption is enabled on the snapshot)

    If restoring from an encrypted backup, you must have a certificate provisioned for this Master Key.
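For example, if restoreInfo.txt records the wiredTiger storage engine and specific mongod startup options for a shard member, you would start the restored member with those same options. The following is only a sketch with hypothetical paths, port, and replica set name; use the values recorded in your own restoreInfo.txt.

# Hypothetical values; substitute the options recorded in restoreInfo.txt.
mongod --dbpath /data/shard0 \
  --storageEngine wiredTiger \
  --port 27018 \
  --shardsvr \
  --replSet shard0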

Prerequisites

Restoring from Encrypted Backup

To restore from an encrypted snapshot, the snapshot and target head database must have been encrypted using the same KMIP master key. The Backup Daemon host that serves the target head database must either use an existing certificate or provision a new certificate that was generated using this master key.

If the snapshot is encrypted, the restore panel displays the KMIP master key ID and the KMIP server information. You can also find this information when you view the snapshot itself, as well as in the restoreInfo.txt file.

Client Requests During Restoration

You must ensure that the MongoDB deployment does not receive client requests during restoration. You must either:

  • Restore to new systems with new hostnames and reconfigure your application code once the new deployment is running, or
  • Ensure that the MongoDB deployment will not receive client requests while you restore data.

Snapshots when Agent Cannot Stop Balancer

Ops Manager displays a warning next to cluster snapshots taken while the balancer is enabled. If you restore from such a snapshot, you run the risk of lost or orphaned data. For more information, see Snapshots when Agent Cannot Stop Balancer.

Secure Copy (SCP) Delivery

SCP Restore Deprecated

Restore delivery via SCP is a deprecated feature. This feature will be removed in Ops Manager 4.0.

Important

You need to generate a key pair before using SCP. SCP transfers files faster than HTTPS.
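One generic way to create a key pair and authorize its public key on the destination host is sketched below; the key file name and user are hypothetical, and your environment may handle key generation differently.

# Hypothetical key file name and user; adjust for your environment.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/ops_manager_restore
ssh-copy-id -i ~/.ssh/ops_manager_restore.pub mongodb@destination.example.net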

Note

Microsoft Windows does not include SCP. Installing SCP is outside the scope of this manual.

When copying files individually, the MaxStartups value in sshd_config should be increased to:

(4 × (number of shards + number of config servers)) + 10

SCP is performed in parallel and, by default, Secure Shell Daemon (sshd) installations use a small number of concurrent connections. Changing this setting in sshd_config allows SCP to support sufficient connections to complete the restore.

Example

For a sharded cluster with 7 shards and 3 config servers, change MaxStartups to 50:

MaxStartups  50
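One way to apply this change on a typical Linux host is sketched below; the configuration path and service name vary by distribution, so treat these as assumptions to adapt.

# Assumes sshd_config is at /etc/ssh/sshd_config and sshd runs under systemd.
# If MaxStartups is not present in the file, append the line instead.
sudo sed -i 's/^#\?MaxStartups.*/MaxStartups 50/' /etc/ssh/sshd_config
sudo systemctl reload sshd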

Automatic Restore

To have Ops Manager restore the backup automatically, complete only the Select and Prepare the Snapshot procedure.

Manual Restore

To restore the backup manually, perform the following:

  1. Select and Prepare the Snapshot.
  2. Retrieve the Snapshot using HTTPS or Send the Snapshot using SCP.
  3. Prepare and Distribute Snapshots.
  4. Unmanage the Sharded Cluster.
  5. Restore the Sharded Cluster Manually.
  6. Reimport the Sharded Cluster.

Select and Prepare the Snapshot

1

Click Backup, then the Overview tab.

2

Click the deployment, then click Restore or Download.

3
  1. Choose the point from which you want to restore your backup.

    Restore Type Description Action
    Snapshot Allows you to choose one stored snapshot. Select an existing snapshot to restore.
    Point In Time

    Creates a custom snapshot that includes all operations up to but not including the selected time. By default, the oplog stores 24 hours of data.

    Example

    If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

    Important

    You cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup.

    Select a Date and Time.
  2. Click Next.

  3. If you chose Point In Time and checkpoints are enabled, a list of Checkpoints appears. You may choose a checkpoint to be your point in time, or click Choose another point in time to remove the list of checkpoints and select the date and time from the menus.

4

Choose how to restore the files.

Choose to restore the snapshot to an existing MongoDB instance or download a copy of the data.

To restore to an existing instance, click Choose Cluster to Restore to.
  1. Complete the following fields:

    Field Action
    Project Select a project to which you want to restore the snapshot.
    Cluster to Restore to

    Select a cluster to which you want to restore the snapshot.

    Ops Manager must manage the target sharded cluster.

    Warning

    Automation removes all existing data from the cluster. All backup data and snapshots for the existing cluster are preserved.

  2. Click Restore.

    Ops Manager notes how much storage space the restore requires.

Important

You can skip the remainder of this page.

To download the data, click Download.

You can choose to download a copy using HTTPS or have Ops Manager send you a copy using SCP.

Retrieve the Snapshot using HTTPS

1

Configure the snapshot download.

  1. Configure the following download options:

    Pull Restore Usage Limit Select how many times the link can be used. If you select No Limit, the link is re-usable until it expires.
    Restore Link Expiration (in hours) Select the number of hours until the link expires. The default value is 1.
  2. Click Finalize Request.

  3. If you use 2FA, Ops Manager prompts you for your 2FA code. Enter your 2FA code, then click Finalize Request.

2

Retrieve the snapshots.

Ops Manager creates links to the snapshot. By default, these links are available for an hour and can be used just once. To download the snapshots:

  1. If you closed the restore panel, click Backup, then Restore History.

  2. When the restore job completes, click (get link) for each shard and for one of the config servers.

  3. Click:

    • The copy button to the right of the link to copy it for later use (a command-line download example follows this list), or
    • Download to download the snapshot immediately.
  4. Select and copy the mongodb-backup-restore-util command provided under Run Binary with PIT Options.

    Important

    For point-in-time and oplog timestamp restores, additional instructions are shown. The final step shows the full command you must run using the MBRU. It includes all of the necessary options to ensure a full restore.
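If you copied a link to use later, you can retrieve the corresponding snapshot archive from the command line. The link and output file name below are placeholders for the values Ops Manager generated.

# Placeholder link and output name; repeat for each shard and for the config server.
curl -L -o shard0.tar.gz "<link copied from Ops Manager>"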

Send Snapshot using SCP

Direct Ops Manager to copy the restore files to your server via SCP.

1

Configure how to secure copy the data.

  1. Select from the following options:

    Format

    Select the format in which you want to receive the restore files:

    Individual DB Files
    Transfers individual MongoDB data files that Ops Manager produces directly to the target directory.
    Archive

    Transfers MongoDB data files in a single archive (tar or tar.gz) that you must extract before restoring the data files to a working directory.

    This option displays only if the archive size can be calculated.

    With Archive delivery, you need sufficient storage space on the destination host for both the archive and the extracted files.

    SCP Host Type the hostname of the host to receive the files.
    SCP Port Type the port of the host to receive the files.
    SCP User Type the username used to access the host.
    Auth Method Select whether to use a username and password or an SSH certificate to authenticate to the host.
    Password Type the user password used to access the host.
    Passphrase Type the SSH passphrase used to access the host.
    Target Directory Type the absolute path to the directory on the host to which to copy the restore files.
  2. Click Finalize Request.

  3. If you use 2FA, Ops Manager prompts you for your 2FA code. Enter your 2FA code, then click Finalize Request.

2

Retrieve the snapshot.

The restore panel shows the restore point type and time, the delivery method, and the download link once Ops Manager has generated it.

The files are copied to the host directory you specified. To verify that the files are complete, see how to validate a secure copy restore.

Extra step for point-in-time restores

For point-in-time and oplog timestamp restores, additional instructions are shown. The final step shows the full command you must run using the MBRU. It includes all of the necessary options to ensure a full restore.

Select and copy the mongodb-backup-restore-util command provided under Run Binary with PIT Options.

Prepare and Distribute Snapshots

1

Restore the snapshot data files to the destination host.

Extract the snapshot archive for the config server and for each shard to a temporary location.

Example

tar -xvf {backupRestoreName}.tar.gz
mv {backupRestoreName} {temp-database-path}

2

Run the MongoDB Backup Restore Utility (Point-in-Time Restore Only).

  1. Download the MongoDB Backup Restore Utility to your host.

    Note

    If you closed the restore panel, click Backup, then More and then Download MongoDB Backup Restore Utility.

  2. Run the MongoDB Backup Restore Utility on your destination host. Run it once for the config server and each shard.

    Pre-configured MBRU command

    Ops Manager provides the mongodb-backup-restore-util with the appropriate options for your restore on the restore panel under Run Binary with PIT Options.

    Select and copy the mongodb-backup-restore-util command provided.

    ./mongodb-backup-restore-util --https --host <targetHost> \
      --port <targetPort> \
      --opStart <opLogStartTimeStamp> \
      --opEnd <opLogEndTimeStamp> \
      --logFile <logPath> \
      --oplogSourceAddr <oplogSourceAddr> \
      --apiKey <apiKey> \
      --groupId <groupId> \
      --rsId <rsId> \
      --whitelist <database1.collection1, database2, etc.> \
      --blacklist <database1.collection1, database2, etc.> \
      --seedReplSetMember \
      --oplogSizeMB <size> \
      --seedTargetPort <port> \
      --ssl \
      --sslCAFile <path> \
      --sslPEMKeyFile <path>
    

    The mongodb-backup-restore-util command uses the following options:

    Option Required Description
    --https Optional Use if you need TLS/SSL to connect to the --oplogSourceAddr.
    --host Required Provide the hostname or IP address for the host that serves the mongod to which the oplog should be applied.
    --port Required Provide the port for the host that serves the mongod to which the oplog should be applied.
    --opStart Required Provide the BSON timestamp for the first oplog entry you want to include in the restore.
    --opEnd Required Provide the BSON timestamp for the last oplog entry you want to include in the restore.
    --logFile Optional Provide a path, including file name, where the MBRU log is written.
    --oplogSourceAddr Required Provide the URL for the oplog.
    --apiKey Required Provide your Ops Manager Agent API Key.
    --groupId Required Provide the group ID.
    --rsId Required Provide the replica set ID.
    --whitelist Optional Provide a list of databases and/or collections to which you want to limit the restore.
    --blacklist Optional Provide a list of databases and/or collections that you want to exclude from the restore.
    --seedReplSetMember Optional

    Use if you need a replica set member to re-create the oplog collection and seed it with the correct timestamp.

    Requires --oplogSizeMB and --seedTargetPort.

    --oplogSizeMB Conditional

    Provide the oplog size in MB.

    Required if --seedReplSetMember is set.

    --seedTargetPort Conditional

    Provide the port for the replica set’s primary. This may be different from the ephemeral port used.

    Required if --seedReplSetMember is set.

    --ssl Optional Use if you need TLS/SSL to apply oplogs to the mongod. Requires --sslCAFile and --sslPEMKeyFile.
    --sslCAFile Conditional

    Provide the path to the CA file.

    Required if --ssl is set.

    --sslPEMKeyFile Conditional

    Provide the path to the PEM certificate file.

    Required if --ssl is set.

3

Copy the completed snapshots to restore.

  • For the config server, copy the restored config server database to the working database path of each replica set member.
  • For each shard, copy the restored shard database to the working database path of each replica set member.
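A minimal sketch of this distribution step is shown below; the temporary paths, hostnames, and working database paths are hypothetical, and rsync over SSH is only one way to copy the files.

# Hypothetical hosts and paths; repeat for every member of the config server
# replica set and of each shard replica set.
rsync -a /tmp/restore/configRS/ cfg1.example.net:/data/configdb/
rsync -a /tmp/restore/shard0/ shard0-a.example.net:/data/shard0/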

Unmanage the Sharded Cluster

Before attempting to restore the data manually, remove the sharded cluster from Automation.

Restore the Sharded Cluster Manually

Follow the tutorial from the MongoDB Manual to restore the sharded cluster.

Reimport the Sharded Cluster

To manage the sharded cluster with automation again, import the sharded cluster back into Ops Manager.