
Restore a Sharded Cluster from a Snapshot

When you restore a cluster from a snapshot, Ops Manager provides you with restore files for the selected restore point.

To learn about the restore process, see Restore Overview.

Changed in Ops Manager 3.6: Point-in-Time Restores

Prior to Ops Manager 3.6, the Backup Daemon created the complete point-in-time restore on its host. Starting in 3.6, you download a client-side tool along with your snapshot. This tool downloads and applies the oplog to a snapshot on your client system, which reduces the network and storage requirements of your Ops Manager deployment.

Considerations

Review change to BinData BSON sub-type

The BSON specification changed the default subtype for the BSON binary data type (BinData) from 2 to 0. Some binary data stored in a snapshot may be BinData subtype 2. The backup process automatically detects and converts snapshot data in BinData subtype 2 to BinData subtype 0. If your application code expects BinData subtype 2, you must update it to work with BinData subtype 0.

See also

The notes on the BSON specification explain the specifics of this change.
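
To check which subtype a stored binary value uses, you can inspect it from mongosh. A minimal sketch; the collection name (invoices) and field name (payload) are illustrative assumptions:

    // Find one document with a binary field and print its BSON subtype.
    const doc = db.invoices.findOne({ payload: { $type: "binData" } });
    print(doc.payload.sub_type); // 0 after conversion; 2 indicates the old default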

Restore using settings given in restoreInfo.txt

The backup restore file includes a metadata file named restoreInfo.txt. This file captures the options the database used when the snapshot was taken. The database must be run with the listed options after it has been restored. This file contains:

  • Group name

  • Replica Set name

  • Cluster ID (if applicable)

  • Snapshot timestamp (as a BSON Timestamp at UTC)

  • Restore timestamp (as a BSON Timestamp at UTC)

  • Last Oplog applied (as a BSON Timestamp at UTC)

  • MongoDB version

  • Storage engine type

  • mongod startup options used on the database when the snapshot was taken

  • Encryption (Only appears if encryption is enabled on the snapshot)

  • Master Key UUID (Only appears if encryption is enabled on the snapshot)

    If restoring from an encrypted backup, you must have a certificate provisioned for this Master Key.
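
For illustration only, a hypothetical restoreInfo.txt might resemble the following; the exact field labels, layout, and values vary by deployment and Ops Manager version:

    Restore Information
    Group Name: my-project
    Replica Set Name: myCluster_shard_0
    Cluster Id: 5f1e3a9c2e7b4a1d8c0f6e21
    Snapshot Timestamp: Timestamp(1672531200, 1)
    Restore Timestamp: Timestamp(1672531500, 1)
    Last Oplog Applied: Timestamp(1672531499, 4)
    MongoDB Version: 4.2.8
    Storage Engine: wiredTiger
    mongod Startup Options: { "net": { "port": 27017 } }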

Snapshots when Agent Cannot Stop Balancer

Ops Manager displays a warning next to cluster snapshots taken while the balancer is enabled. If you restore from such a snapshot, you run the risk of lost or orphaned data. For more information, see Snapshots when Agent Can’t Stop Balancer.
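
To check the balancer state yourself from mongosh connected to a mongos (a minimal check; the Backup Agent normally manages the balancer during snapshots):

    sh.getBalancerState()   // true if the balancer is enabled
    sh.isBalancerRunning()  // true if a balancing round is currently in progress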

Backup Considerations

Databases must fulfill the backup considerations appropriate to their featureCompatibilityVersion (FCV).
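
To confirm which FCV a deployment runs, you can use the standard getParameter command from mongosh:

    // Report the current featureCompatibilityVersion.
    db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })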

Encryption Considerations

To restore from an encrypted backup, you need the same master key used to encrypt the backup and either the same certificate that is on the Backup Daemon host or a new certificate provisioned with that key from the KMIP host.

If the snapshot is encrypted, the restore panel displays the KMIP master key ID and the KMIP server information. You can also find this information in the snapshot's details and in the restoreInfo.txt file.

Disable Client Requests to MongoDB during Restore

You must ensure that the MongoDB deployment does not receive client requests during restoration. You must either:

  • Restore to new systems with new hostnames and reconfigure your application code once the new deployment is running, or
  • Ensure that the MongoDB deployment will not receive client requests while you restore data.
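
One way to confirm that a deployment has stopped receiving client traffic before you restore (a sketch, not the only method):

    db.serverStatus().connections   // counts current incoming connections
    db.currentOp({ active: true })  // lists operations currently in progress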

Restore a Snapshot

To have Ops Manager automatically restore the snapshot:

1. Click Continuous Backup, then the Overview tab.

2. Click the deployment, then click Restore or Download.

3. Select the restore point.

  1. Choose the point from which you want to restore your backup:

    • Snapshot: Select an existing snapshot to restore.

    • Point In Time: Creates a custom snapshot that includes all operations up to but not including the selected time; select a Date and Time. By default, the Oplog Store stores 24 hours of data.

      Example

      If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

      Important

      In FCV 4.0, you cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup. This note does not apply to FCV 4.2 or later.

    • Oplog Timestamp: Creates a custom snapshot that includes all operations up to and including the entered Oplog timestamp; type an Oplog Timestamp and Increment. The Oplog Timestamp contains two fields:

      • Timestamp: the number of seconds that have elapsed since the UNIX epoch.
      • Increment: the order of the operation applied within that second, as a 32-bit ordinal.

      To find the desired timestamp, run a query against local.oplog.rs on your replica set (see the example query after these steps).

  2. Click Next.
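
For example, to find the Timestamp and Increment of the last oplog entry at or before a target time, you can query the oplog from mongosh connected to a replica set member. A minimal sketch; the target value is hypothetical:

    // Find the most recent oplog entry at or before the target time.
    const target = Timestamp({ t: 1672531200, i: 0 }); // hypothetical: 2023-01-01 00:00:00 UTC
    const last = db.getSiblingDB("local").oplog.rs
        .find({ ts: { $lte: target } })
        .sort({ ts: -1 })
        .limit(1)
        .next();
    printjson(last.ts); // t is the Timestamp (seconds), i is the Increment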

4. Choose to restore the files to another cluster.

  1. Click Choose Cluster to Restore to.

  2. Complete the following fields:

    • Project: Select a project to which you want to restore the snapshot.

    • Cluster to Restore to: Select a cluster to which you want to restore the snapshot. Ops Manager must manage the target sharded cluster.

      Warning

      Automation removes all existing data from the cluster. All backup data and snapshots for the existing cluster are preserved.

  3. Click Restore.

    Ops Manager notes in its UI how much storage space the restore requires.

5. Click Restore.


If your sharded cluster runs an FCV of 4.0 or earlier, point-in-time restores rely on cluster checkpoints. To have Ops Manager automatically restore the snapshot in that case:

1. Click Continuous Backup, then the Overview tab.

2. Click the deployment, then click Restore or Download.

3. Select the restore point.

  1. Choose the point from which you want to restore your backup:

    • Snapshot: Select an existing snapshot to restore.

    • Point In Time: Allows you to choose a date and time as the restore time objective for your snapshot; select a Date and Time. By default, the Oplog Store stores 24 hours of data.

      Example

      If you select 12:00, the last operation in the restore is 11:59:59 or earlier.

      Important

      • If you are restoring a sharded cluster that runs an FCV of 4.0 or earlier, you must enable cluster checkpoints to perform a PIT restore. If no checkpoints that include your date and time are available, Ops Manager asks you to choose another point in time.

      • You cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup.

  2. Click Next.

  3. If you are restoring a sharded cluster that runs an FCV of 4.0 or earlier and you chose Point In Time:

    1. A list of checkpoints closest to the time you selected appears.
    2. To start your point-in-time restore, either:
      • Choose one of the listed checkpoints, or
      • Click Choose another point in time to remove the list of checkpoints and select another date and time from the menus.
4. Choose to restore the files to another cluster.

  1. Click Choose Cluster to Restore to.

  2. Complete the following fields:

    • Project: Select a project to which you want to restore the snapshot.

    • Cluster to Restore to: Select a cluster to which you want to restore the snapshot. Ops Manager must manage the target sharded cluster.

      Warning

      Automation removes all existing data from the cluster. All backup data and snapshots for the existing cluster are preserved.

  3. Click Restore.

    Ops Manager notes in its console how much storage space the restore requires.

5. Click Restore.

Important

Rotate Master Key after Restoring Snapshots Encrypted with AES256-GCM

If you restore an encrypted snapshot that Ops Manager encrypted with AES256-GCM, rotate your master key after completing the restore.

Restore a Snapshot Manually

The manual restore process assumes that:

  • The target host has no data in place.
  • You have not used an encrypted snapshot.
  • You have not enabled two-factor authentication.

Warning

Restore the snapshot manually only if you can’t run an automatic restore. If you determine that you must use a manual restore, contact MongoDB Support for help. This section provides a high-level overview of the stages in the manual restore procedure.

The manual restore process has the following high-level stages that you perform with help from MongoDB Support:

  1. Connect to each replica set and the Config Server Replica Set (CSRS) with either the legacy mongo shell or mongosh.
  2. (Optional). Review the configuration file of each replica set and CSRS. After you complete the restore process, you can reconstruct the configuration on the restored replica sets using the saved configuration files.
  3. Prepare the target hosts.
    • Stop all mongod processes running on the target hosts.
    • Provision enough storage space to hold the restored data.
    • Prepare directories for data and logs.
    • Add a configuration file to your MongoDB Server directory with the target host’s storage and log paths, and configuration for replicas and sharding roles.
  4. Restore the CSRS.
  5. Restore each shard’s replica set.
  6. Restart each mongos process in the target cluster.
  7. Verify that you can connect to the cluster.
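
For the final stage, a minimal connectivity check from mongosh might look like the following; the mongos hostname is a hypothetical placeholder:

    // From a terminal: mongosh "mongodb://mongos0.example.net:27017"
    // Then confirm that the restored cluster's metadata is reachable:
    sh.status()                         // prints shard membership and balancer state
    db.adminCommand({ listShards: 1 })  // lists each restored shard's replica set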

The full manual restore procedure can be found in the MongoDB Server documentation. Refer to the version of the manual that corresponds to your deployment's MongoDB version.