Live Migrate (Pull) a MongoDB 6.0.8 or Later Cluster into Atlas
Important
Feature unavailable in Serverless Instances
Serverless instances don't support this feature at this time. To learn more, see Serverless Instance Limitations.
If both the source and destination clusters are running MongoDB 6.0.8 or later, Atlas can pull a source cluster to an Atlas cluster using the procedure described in this section.
This process uses mongosync as the underlying data migration tool, enabling faster live migrations with less downtime:
Atlas syncs data from the source to the destination cluster until you cut your applications over to the destination Atlas cluster.
Once you reach the cutover step in the following procedure:
Stop writes to the source cluster.
Stop your application instances, point them to the Atlas cluster, and restart them.
Restrictions
This live migration has the following limitations:
The feature compatibility version (FCV) must be at least 6.0 and must be the same on the source and destination clusters.
You can't run this live migration procedure for source or destination clusters with MongoDB versions earlier than MongoDB 6.0.8. To learn more, see Server Release Notes.
This live migration procedure doesn't support MongoDB rapid releases, such as 6.1 or 6.2. Only major MongoDB releases, such as 6.0.x (6.0.8 and later), are supported. To learn more, see MongoDB versioning.
You can't use Serverless instances as destination clusters.
You can't select an M0 (Free Tier) or M2/M5 shared cluster as the source or destination for live migration. To migrate data from an M0 (Free Tier) or M2/M5 shared cluster to a paid cluster, change the cluster tier and type.
You can't live migrate using this migration procedure to an Atlas destination cluster that has BI Connector for Atlas enabled.
During live migration, Atlas disables host alerts.
Time series collections are not supported. The migration process skips any time series collections on the source cluster.
Clustered collections with expireAfterSeconds set aren't supported. (To spot both of these cases on your source cluster ahead of time, see the check sketch after this list.)
convertToCapped and cloneCollectionAsCapped commands aren't supported.
applyOps operations used on the source cluster aren't supported on the destination cluster.
Documents that have dollar ($) prefixed field names aren't supported. See Field Names with Periods and Dollar Signs.
Queryable Encryption is not supported.
You can't sync a collection that has a unique index and a non-unique index defined on the same fields.
Within a collection, the _id field must be unique across all of the shards in the cluster. To learn more, see Sharded Clusters and Unique Indexes.
You can't use the movePrimary command to reassign the primary shard while running this live migration process.
You can't add or remove shards while running this live migration process.
This live migration process only migrates indexes that exist on all shards and that have consistent specs on all shards.
You can't refine a shard key while running this live migration process.
You can't modify the shard key using reshardCollection during this live migration process.
The maximum number of shard key indexes is one lower than normal, 63 instead of 64.
You can't use this live migration process to sync one source cluster to many destination clusters.
Network compression isn't supported.
This live migration process replicates data; it doesn't replicate zone configuration.
System collections aren't replicated with this live migration process.
If you issue a dropDatabase command on the source cluster, this change isn't directly applied on the destination cluster. Instead, the live migration process drops user collections and views in the database on the destination cluster, but it doesn't drop system collections on that database. For example, on the destination cluster, the drop operation doesn't affect a user-created system.js collection. If you enable profiling, the system.profile collection remains. If you create views on the source cluster and then drop the database, replicating the drop with this live migration process removes the views but leaves an empty system.views collection. In these cases, live migrating the dropDatabase operation removes all user-created collections from the database but leaves its system collections on the destination cluster.
Live migration (pull) doesn't support VPC peering or private endpoints for either the source or destination cluster.
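Before migrating, you can scan collection metadata on the source cluster for some of these unsupported cases. This is a minimal mongosh sketch with a placeholder database name (mydb); adapt it to loop over your own databases:

```
const srcDb = db.getSiblingDB("mydb");   // placeholder database name

// Time series collections are skipped by the migration process:
srcDb.getCollectionInfos({ type: "timeseries" })
     .forEach(c => print("time series (skipped):", c.name));

// Clustered collections with expireAfterSeconds aren't supported:
srcDb.getCollectionInfos().forEach(c => {
  const opts = c.options || {};
  if (opts.clusteredIndex && opts.expireAfterSeconds !== undefined) {
    print("clustered TTL (unsupported):", c.name);
  }
});
```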
Migration Path
Atlas live migration described in this section supports the following migration paths:
| Source Cluster MongoDB Version | Destination Atlas Cluster MongoDB Version |
| --- | --- |
| 6.0.8 | 6.0.8 |
| 7.0+ | 7.0+ |
Required Access
To live migrate your data, you must have Project Owner access to Atlas. Users with Organization Owner access must add themselves to the project as a Project Owner.
Prerequisites
If the cluster runs with authentication:

For replica sets, grant the backup and readAnyDatabase roles on the admin database to the user that will run the migration process.
For sharded clusters, grant the backup, readAnyDatabase, and clusterMonitor roles on the admin database to the user that will run the migration process.

Ensure that this user is authenticated using both SCRAM-SHA-1 and SCRAM-SHA-256, as in the sketch below. To learn more, see Source Cluster Security.
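As a sketch of creating such a user in mongosh on the source cluster, with a hypothetical migrationUser name (substitute your own credentials, and omit clusterMonitor for replica sets):

```
db.getSiblingDB("admin").createUser({
  user: "migrationUser",                      // hypothetical name
  pwd: passwordPrompt(),                      // prompt instead of hard-coding
  roles: [
    { role: "backup", db: "admin" },
    { role: "readAnyDatabase", db: "admin" },
    { role: "clusterMonitor", db: "admin" }   // sharded clusters only
  ],
  // Ensure both SCRAM mechanisms, as required above:
  mechanisms: ["SCRAM-SHA-1", "SCRAM-SHA-256"]
})
```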
Important
Source Cluster Readiness
To help ensure a smooth data migration, your source cluster should meet all production cluster recommendations. Check the Operations Checklist and Production Notes before beginning the Live Migration process.
Network Access
Configure network permissions for the following components:
Source Cluster Firewall Allows Traffic from Live Migration Server
Any firewalls for the source cluster must grant the MongoDB live migration server access to the source cluster.
The Atlas live migration process streams data through a MongoDB-controlled live migration server. Atlas provides the IP ranges of the MongoDB live migration servers during the live migration process. Grant these IP ranges access to your source cluster so that the MongoDB live migration server can connect to it.
Note
If your organization has strict network requirements and you cannot enable the required network access to MongoDB live migration servers, see Live Migrate a Community Deployment to Atlas.
Atlas Cluster Allows Traffic from Your Application Servers
Atlas allows connections to a cluster from hosts added to the project IP access list. Add the IP addresses or CIDR blocks of your application hosts to the project IP access list. Do this before beginning the migration procedure.
Atlas temporarily adds the IP addresses of the MongoDB migration servers to the project IP access list. During the migration procedure, you can't edit or delete this entry. Atlas removes this entry once the procedure completes.
To learn how to add entries to the Atlas IP access list, see Configure IP Access List Entries.
Pre-Migration Validation
Before starting the following live migration procedure, Atlas runs validation checks on the source and destination clusters and verifies that:
The source and destination clusters run matching MongoDB versions with an FCV of at least 6.0, as described in Restrictions.
The source cluster's database user has the correct permissions as described in Source Cluster Security.
The source and destination clusters are either both replica sets, or they are both sharded clusters with the same number of shards.
If the source cluster is a standalone, convert the standalone to a replica set before using this migration process (see the sketch after this list).
If migrating a sharded cluster to another sharded cluster, the source sharded cluster must use CSRS (Config Server Replica Sets). See Replica Set Config Servers.
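A minimal sketch of the standalone-to-replica-set conversion, assuming you restart the standalone mongod with a replica set name first (for example, mongod --replSet rs0 with your existing options), then run in mongosh:

```
// Initiate a single-member replica set on the converted standalone:
rs.initiate()

// Confirm the member reaches PRIMARY before proceeding:
rs.status().members.forEach(m => print(m.name, m.stateStr))
```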
Source Cluster Security
Various built-in roles provide sufficient privileges. For example:
For source replica sets running MongoDB 6.0.8 or later, a MongoDB user must have the readAnyDatabase and backup roles.
For source sharded clusters running MongoDB 6.0.8 or later, a MongoDB user must have the readAnyDatabase, backup, and clusterMonitor roles.
To verify that the database user who will run the live migration process has these roles, run the db.getUser() command on the admin database. For example, for a replica set, run:

```
use admin
db.getUser("admin")
{
  "_id" : "admin.admin",
  "user" : "admin",
  "db" : "admin",
  "roles" : [
    { "role" : "backup", "db" : "admin" },
    { "role" : "readAnyDatabase", "db" : "admin" }
  ]
}
...
```
Specify the username and password to Atlas when prompted by the walk-through screen of the live migration procedure.
Atlas only supports SCRAM for connecting to source clusters that enforce authentication.
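To double-check which SCRAM mechanisms the migration user's credentials support, one option is to include credential details in the getUser call (a sketch reusing the admin user from the example above):

```
// The credentials section of the output lists this user's SCRAM
// mechanisms; both SCRAM-SHA-1 and SCRAM-SHA-256 should appear:
db.getSiblingDB("admin").getUser("admin", { showCredentials: true })
```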
How MongoDB Secures its Live Migration Servers
In any pull-type live migration to Atlas, Atlas manages the server that runs the live migration and sends data from the source to the destination cluster.
MongoDB takes the following measures to protect the integrity and confidentiality of your data in transit to Atlas:
MongoDB encrypts data in transit between the Atlas-managed live migration server and the destination cluster. If you require encryption for data in transit between the source cluster and the Atlas-managed migration server, configure TLS on your source cluster.
MongoDB protects access to the Atlas-managed migration server instances as it protects access to any other parts of Atlas.
In rare cases where intervention is required to investigate and restore critical services, MongoDB adheres to the principle of least privilege and authorizes only a small group of privileged users to access your Atlas clusters for a minimum limited time necessary to repair the critical issue. MongoDB requires MFA for these users to log in to Atlas clusters and to establish an SSH connection via the bastion host. Granting this type of privileged user access requires approval by MongoDB senior management. MongoDB doesn't allow access by any other MongoDB personnel to your MongoDB Atlas clusters.
MongoDB allows use of privileged user accounts for privileged activities only. To perform non-privileged activities, privileged users must use a separate account. Privileged user accounts can't use shared credentials. Privileged user accounts must follow the password requirements described in Section 4.3.3 of the Atlas Security whitepaper.
You can restrict access to your clusters by all MongoDB personnel, including privileged users, in Atlas. If you choose to restrict such access and MongoDB determines that access is necessary to resolve a support issue, MongoDB must first request your permission and you may then decide whether to temporarily restore privileged user access for up to 24 hours. You can revoke the temporary 24-hour access grant at any time. Enabling this restriction may result in increased time for the response and resolution of support issues and, as a result, may negatively impact the availability of your Atlas clusters.
MongoDB reviews privileged user access authorization on a quarterly basis. Additionally, MongoDB revokes a privileged user's access when it is no longer needed, including within 24 hours of that privileged user changing roles or leaving the company. We also log any access by MongoDB personnel to your Atlas clusters, retain audit logs for at least six years, and include a timestamp, actor, action, and output. MongoDB uses a combination of automated and manual reviews to scan those audit logs.
To learn more about Atlas security, see the Atlas Security whitepaper. In particular, review the section "MongoDB Personnel Access to MongoDB Atlas Clusters".
Considerations
Network Encryption
During pull live migrations, if the source cluster does not use TLS encryption for its data, the traffic from the source cluster to Atlas is not encrypted. Determine if this is acceptable before you start a pull live migration procedure.
Database Users and Roles
Atlas doesn't migrate any user or role data to the destination cluster.
If the source cluster doesn't use authentication, you must create a user in Atlas because Atlas doesn't support running without authentication.
If the source cluster enforces authentication, you must recreate the credentials that your applications use on the destination Atlas cluster. Atlas uses SCRAM for user authentication. To learn more, see Configure Database Users.
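To see which credentials you need to recreate, a quick sketch is to list the users defined on the source cluster:

```
// Each user your applications rely on must be recreated manually
// in Atlas (see Configure Database Users):
db.getSiblingDB("admin").getUsers()
```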
Destination Cluster Configuration
When you configure the destination cluster, consider the following:
The live migration process streams data through a MongoDB-managed live migration server. Each server runs on infrastructure hosted in the nearest region to the source cluster. The following regions are available:
- Europe
  - Frankfurt
  - Ireland
  - London
- Americas
  - Eastern US
  - Western US
- APAC
  - Mumbai
  - Singapore
  - Sydney
  - Tokyo
Use the cloud region for the destination cluster in Atlas that has the lowest network latency relative to the application servers or to your deployment hosted on the source cluster. Ideally, your application's servers should be running in the cloud in the same region as the destination Atlas cluster's primary region. To learn more, see Cloud Providers.
Due to network latency, the live migration process may not be able to keep up with a source cluster that has an extremely heavy write load. In this situation, you can still migrate directly from the source cluster by pointing the mongomirror tool to the destination Atlas cluster.
The destination cluster in Atlas must match or exceed the source deployment in terms of RAM, CPU, and storage. Provision a destination cluster large enough to accommodate both the migration process and the expected workload, or scale the destination cluster up to a tier with more processing power, bandwidth, or disk IO. For a rough way to gauge the source data size, see the sizing sketch after this list.
To maximize migration performance, use at least an M40 cluster for the destination cluster. When migrating large data sets, use an M80 cluster with 6000 IOPS disks or higher.
You can also choose to temporarily increase the destination Atlas cluster's size for the duration of the migration process. Once you migrate your application's workload to a cluster in Atlas, contact support for assistance with further performance tuning and sizing of your destination cluster to minimize costs.
To avoid unexpected sizing changes, disable auto-scaling on the destination cluster. To learn more, see Manage Clusters.
To prevent unbounded growth of the oplog collection, set a fixed oplog size for the duration of the live migration process. To learn more, see Required Access and Atlas Configuration Options. If you are observing performance issues even after you've followed these recommendations, contact support.
The source and destination clusters are either both replica sets, or they are both sharded clusters with the same number of shards.
You can't select an M0 (Free Tier) or M2/M5 shared-tier cluster as the destination cluster for live migration.
Don't change the featureCompatibilityVersion flag while Atlas live migration is running.
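As a rough sizing sketch, you can total the on-disk size of the source cluster's databases in mongosh (assumes a user allowed to run listDatabases; treat the result as an approximation):

```
// Sum on-disk size across all databases on the source cluster:
const dbs = db.adminCommand({ listDatabases: 1 }).databases;
dbs.forEach(d => print(d.name, (d.sizeOnDisk / 1024 ** 3).toFixed(2), "GB"));
print("total:",
  (dbs.reduce((s, d) => s + d.sizeOnDisk, 0) / 1024 ** 3).toFixed(2), "GB");
```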
Avoid Workloads on the Destination Cluster
Avoid running any workloads on the destination cluster, including workloads on namespaces that don't overlap with the live migration process. This prevents potential locking conflicts and performance degradation during the live migration process.
Don't run multiple migrations to the same destination cluster at the same time.
Don't start the cutover process for your applications to the destination cluster while the live migration process is syncing.
Avoid Cloud Backups
Atlas stops taking on-demand cloud backup snapshots of the destination cluster during live migration. Once you complete the cutover step in the live migration procedure on this page, Atlas resumes taking cloud backup snapshots based on your backup policy.
Avoid Elections
The live migration process makes a best attempt to continue a migration during temporary network interruptions and elections on the source or destination clusters. However, these events might cause the live migration process to fail. If the live migration process can't recover automatically, restart it from the beginning.
Migrate Your Cluster
Note
Staging and Production Migrations
Consider running this procedure twice. Run a partial migration that stops at the Perform the Cutover step first. This creates an up-to-date Atlas-backed staging cluster to test application behavior and performance using the latest driver version that supports the MongoDB version of the Atlas cluster.
After you test your application, run the full migration procedure using a separate Atlas cluster to create your Atlas-backed production environment.
Pre-Migration Checklist
Before starting the import process:
If you don't already have a destination cluster, create a new Atlas deployment and configure it as needed. For complete documentation on creating an Atlas cluster, see Create a Cluster.
After your Atlas cluster is deployed, ensure that you can connect to it from all client hardware where your applications run. Testing your connection string helps ensure that your data migration process can complete with minimal downtime.
Download and install mongosh on a representative client machine, if you don't already have it.
Connect to your destination cluster using the connection string from the Atlas UI, as shown in the sketch below. For more information, see Connect via mongosh.
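As a connectivity sanity check, this is a minimal sketch; the SRV hostname and username are hypothetical placeholders, so copy your real connection string from the Atlas UI:

```
// From a representative client machine, connect first, for example:
//   mongosh "mongodb+srv://cluster0.example.mongodb.net/" --username appUser
// Then, at the mongosh prompt, confirm the destination responds:
db.runCommand({ ping: 1 })   // expect { ok: 1 }
```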
Once you have verified your connectivity to your destination cluster, start the live migration procedure.
Procedure
Start the migration process.
Start the migration process one of the following ways:
In the left-side panel of your organization's page, click Live Migration and choose Select Cluster for General Live Migration, or
Navigate to the destination Atlas cluster and click the ellipsis ... button. On the cluster list, the ellipsis ... button appears beneath the cluster name. When you view cluster details, the ellipsis ... appears on the right-hand side of the screen, next to the Connect and Configuration buttons.
Click Migrate Data to this Cluster.
Atlas displays a walk-through screen with instructions on how to proceed with the live migration. The process syncs the data from your source cluster to the new destination cluster. After you complete the walk-through, you can point your application to the new cluster.
You will need the following details for your source cluster to facilitate the migration:
For replica sets, the hostname and port of the source cluster primary. For example, mongoPrimary.example.net:27017.
For sharded clusters, the hostname and port of each mongos. For example, mongos.example.net:27017. (See the lookup sketch after this list.)
The database authentication username and password used to connect to the source cluster. For replica sets, the database user must have the readAnyDatabase and backup roles. For sharded clusters, the database user must have the readAnyDatabase, backup, and clusterMonitor roles.
If the source cluster uses TLS/SSL and isn't using a public Certificate Authority (CA), you will need the source cluster CA file.
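If you need to look up these host details, a mongosh sketch (assuming you can connect to the source deployment):

```
// Replica set: identify the current primary's host:port.
db.hello().primary   // e.g. "mongoPrimary.example.net:27017"

// Sharded cluster: connect to a mongos and list the mongos instances
// that have recently pinged the cluster:
db.getSiblingDB("config").mongos.find({}, { _id: 1 })
```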
Prepare the information as stated in the walk-through screen, then click I'm Ready To Migrate.
Atlas displays a walk-through screen that collects information required to connect to the source cluster.
Atlas displays the IP address of the MongoDB live migration server responsible for your live migration at the top of the walk-through screen. Configure your source cluster firewall to grant access to the displayed IP address.
For replica sets, enter the hostname and port of the primary member of the source cluster into the provided text box. For sharded clusters, enter the hostname and port of each mongos.
If the source cluster enforces authentication, enter a username and password into the provided text boxes. See Source Cluster Security for guidance on the user permissions required by Atlas live migration.
If the source cluster uses TLS/SSL and isn't using a public Certificate Authority (CA), toggle the switch Is encryption in transit enabled? and copy the contents of the source cluster CA file into the provided text box.
If you wish to drop all collections on the destination cluster before starting the migration process, toggle the switch Delete existing data on your destination cluster?
Click Validate to confirm that Atlas can connect to the source cluster.
If validation fails, check that:
You have added Atlas to the IP access list on your source cluster.
The provided user credentials, if any, exist on the source cluster and have the required permissions.
The Is encryption in transit enabled? toggle is enabled only if the source cluster requires it.
The CA file provided, if any, is valid and correct.
Click Start Migration to start the migration process.
Once the migration process begins, the Atlas UI displays the Migrating Data walk-through screen for the destination Atlas cluster. The walk-through screen updates as the destination cluster proceeds through the migration process. The migration process includes:
Copying data from the source cluster to the destination cluster.
Applying new writes made on the source cluster to the destination cluster.
Finalizing the migration on the destination cluster.
During the final phase of the migration process, the walk-through screen displays a lag time value that represents the current lag between the source and destination clusters.
You receive an email notification when your cutover window is nearly up.
When the lag timer and the Prepare to Cutover button turn green, proceed to the next step.
Perform the cutover.
When Atlas detects that the source and destination clusters are nearly in sync, it starts an extendable 120-hour (5-day) timer for the cutover stage of the live migration procedure. If the 120-hour period passes, Atlas stops synchronizing with the source cluster. You can extend the time remaining by 24 hours by clicking Extend time below the <time> left to cut over timer.
Click Prepare to Cutover. Atlas displays a walk-through screen that states: Your migration is almost complete! The walk-through screen displays the following instructions on how to proceed with the cutover process:
Stop your application. This ensures that no more writes occur on the source cluster.
Wait for the optime gap to reach zero. When the counter reaches zero, the source and destination clusters are in sync.
Check the box that states: I confirm that I am ready to cut over the application to the destination cluster. By proceeding, Atlas will finalize the migration. This process will take a few seconds. Once it is complete you can point your application at the destination cluster and begin writing to it.
Click Cutover. Atlas completes the migration and displays the Connect page.
Decide when to resume writes on the destination cluster. You can do one of the following:
Wait for the banner on your cluster card to state: Your cluster migration is complete and then resume writes on the destination cluster. If you choose to wait for the migration to complete, your application experiences a temporary pause in writes during the time period needed to finalize the migration.
or
Begin your application's writes to the destination cluster without waiting for the migration to complete, while your cluster card banner states: Your destination cluster in Atlas is ready to accept writes, but we are still finalizing the migration. If you choose this option and live migration fails in its final stages with an error, you must redirect writes back to your source cluster and restart the live migration process.
When you are ready to redirect writes to the destination cluster in Atlas:
Use the destination cluster's connection string to connect to your application.
Confirm that your application is working with the destination Atlas cluster.
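A post-cutover smoke test can be a simple round-trip write and read in mongosh. This sketch uses a hypothetical smoke.test namespace:

```
// Verify the destination cluster accepts writes and serves reads:
const col = db.getSiblingDB("smoke").test;   // hypothetical namespace
col.insertOne({ checkedAt: new Date() });
printjson(col.findOne());
col.drop();                                  // clean up the test collection
```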
Atlas performs these actions to complete the process:
Removes the MongoDB live migration server subnets from the IP access list on the destination cluster.
Removes the database user that live migration used to import data to the destination cluster.
Marks the migration process as complete.
Migration Support
If you have any questions regarding migration support beyond what is covered in this documentation, or if you encounter an error during migration, please request support through the Atlas UI.
To file a support ticket:
Click Support in the left-hand navigation.
Click Request Support.
For Issue Category, select Help with live migration.
For Priority, select the appropriate priority. For questions, select Medium Priority. If there was a failure in migration, select High Priority.
For Request Summary, include Live Migration in your summary.
For More details, include any other relevant details about your question or migration error.
Click the Request Support button to submit the form.