To transition from an embedded config server to a dedicated config server, you must ensure the config shard's data is migrated to the remaining shards in the cluster. This procedure describes how to safely migrate data and complete the transition.
About this Task
Creating, sharding, or moving collections while performing this procedure may cause interruptions and lead to unexpected results.
Do not use this procedure to migrate an entire cluster to new hardware. To migrate, see Migrate a Self-Managed Sharded Cluster to Different Hardware.
When you remove a shard in a cluster with an uneven chunk distribution, the balancer first removes the chunks from the draining shard and then balances the remaining uneven chunk distribution.
Removing a shard may cause an open change stream cursor to close, and the closed change stream cursor may not be fully resumable.
You can safely restart a cluster during a transition process. If you restart a cluster during an ongoing draining process, draining continues automatically after the cluster components restart. MongoDB records the transition status in the config.shards collection.
Before you Begin
This procedure uses the sh.moveCollection() method to move collections off the config shard. Before you begin this procedure, review the moveCollection considerations and requirements to understand the command behavior.

To transition to a dedicated config server, first connect to one of the cluster's mongos instances using mongosh.
Steps
Ensure the balancer is enabled.
To migrate data from the config shard, the balancer process
must be enabled. To check the balancer state, use the
sh.getBalancerState() method:
sh.getBalancerState()
If the operation returns true, the balancer is enabled.
If the operation returns false, see
Enable the Balancer.
Verify the config server is acting as a shard.
Run listShards to confirm that the config shard
appears in the shard list:
db.adminCommand( { listShards: 1 } )
In the output, the shards._id field contains the shard names.
The config shard typically has an _id of "config":
{ shards: [ { _id: 'config', ... }, ... ], ok: 1 ... }
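If you script this check, the logic amounts to scanning the shards array for the "config" entry. The following Node.js sketch shows that logic against a listShards-style result document; the findConfigShard helper name is illustrative, not part of any MongoDB API.

```javascript
// Sketch: locate the config shard entry in a listShards result document.
// The helper name is illustrative; the document shape mirrors the output above.
function findConfigShard(listShardsResult) {
  return listShardsResult.shards.find((s) => s._id === "config") ?? null;
}

// Example against a minimal listShards-style result:
const result = {
  shards: [
    { _id: "config", host: "configRepl/cfg1:27019" },
    { _id: "shard01", host: "shard01Repl/sh1:27018" },
  ],
  ok: 1,
};
const configShard = findConfigShard(result);
```

If findConfigShard returns null, the cluster already uses a dedicated config server and this procedure does not apply.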
Start migrating sharded data off the config shard.
From the admin database, run the
transitionToDedicatedConfigServer command:
use admin
db.adminCommand( { transitionToDedicatedConfigServer: 1 } )
The config shard enters the draining state and the balancer begins migrating chunks from the config shard to the remaining shards in the cluster. Depending on your network capacity and the amount of data, this operation can take anywhere from minutes to days to complete.
Move unsharded collections off the config shard.
Use the $listClusterCatalog aggregation stage to
identify unsharded collections that still reside on the config
shard:
use admin
db.aggregate( [
   { $listClusterCatalog: { shards: true } },
   { $match: {
        sharded: false,
        shards: "config",
        type: { $nin: [ "timeseries", "view" ] },
        ns: { $not: { $regex: "^enxcol_\\..*(\\.esc|\\.ecc|\\.ecoc|\\.ecoc\\.compact)$" } },
        $or: [
           { ns: { $not: { $regex: "\\.system\\." } } },
           { ns: { $regex: "\\.system\\.buckets\\." } }
        ],
        db: { $nin: [ "config", "admin" ] }
   } },
   { $project: { _id: 0, ns: 1 } }
] )
For each namespace in the output, use sh.moveCollection()
to move the unsharded collection from the config shard to a
recipient shard:
sh.moveCollection( "<database>.<collection>", "<ID of recipient shard>" )
Repeat this step until no unsharded collections remain on the config shard.
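To make the selection criteria in the aggregation above concrete, the sketch below re-expresses the $match stage as a plain JavaScript predicate over $listClusterCatalog-style documents: keep unsharded collections on the config shard, excluding the config and admin databases, time series views and views, encrypted-collection metadata (enxcol_ namespaces), and system collections other than system.buckets. The helper name is illustrative.

```javascript
// Sketch: the same selection logic as the $match stage above, expressed as a
// predicate over $listClusterCatalog-style entries ({ ns, db, type, sharded, shards }).
// Assumes `shards` is an array of shard IDs, as $listClusterCatalog reports it.
function needsMoveOffConfigShard(entry) {
  const encryptedMetadata = /^enxcol_\..*(\.esc|\.ecc|\.ecoc|\.ecoc\.compact)$/;
  const isSystem = /\.system\./.test(entry.ns);
  const isBuckets = /\.system\.buckets\./.test(entry.ns);
  return (
    entry.sharded === false &&
    entry.shards.includes("config") &&
    !["timeseries", "view"].includes(entry.type) &&
    !encryptedMetadata.test(entry.ns) &&
    (!isSystem || isBuckets) &&        // system collections stay, except system.buckets
    !["config", "admin"].includes(entry.db)
  );
}
```

Each entry for which the predicate holds corresponds to one sh.moveCollection() call in this step.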
Change the primary shard for databases that use the config shard.
From the admin database, run
db.printShardingStatus():
use admin
db.printShardingStatus()
In the databases section of the output, check each database's
primary field. For any application databases (databases other
than config and admin) whose primary is the config shard,
change the primary shard to another shard.
To change a database's primary shard, run movePrimary:
db.adminCommand({ movePrimary: "<dbName>", to: "<recipientShard>" })
Any collections that were not moved in the previous step are
unavailable while movePrimary runs.
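The same check can be made programmatically against documents shaped like entries in the config.databases collection, where each document records a database's primary shard. The sketch below (illustrative helper name) lists the application databases that still need a movePrimary.

```javascript
// Sketch: given documents shaped like config.databases entries
// ({ _id: "<dbName>", primary: "<shardId>" }), list the application
// databases whose primary shard is still the config shard.
function databasesToMove(databases) {
  return databases
    .filter((d) => d.primary === "config")
    .filter((d) => !["config", "admin"].includes(d._id))
    .map((d) => d._id);
}

// Example: only "app" still has the config shard as its primary.
const pending = databasesToMove([
  { _id: "app", primary: "config" },
  { _id: "sales", primary: "shard01" },
]);
```

Each database name returned corresponds to one movePrimary call in this step.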
Check transition status.
To check the progress of the transition, re-run
transitionToDedicatedConfigServer from the admin
database:
use admin
db.adminCommand( { transitionToDedicatedConfigServer: 1 } )
Continue checking the status until the transition completes successfully, and the output resembles the following example:
{ state: 'completed', msg: 'removeshard completed successfully', shard: 'config', ok: 1 }
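Status polling is easy to automate. The sketch below maps a transition status response onto a next action; it assumes the response's state field takes removeShard-style values ('started', 'ongoing', 'completed'), and the helper name is illustrative.

```javascript
// Sketch: decide the next step from a transitionToDedicatedConfigServer
// status response. Assumes removeShard-style state values.
function nextAction(status) {
  switch (status.state) {
    case "started":
    case "ongoing":
      return "wait";    // draining still in progress; poll again later
    case "completed":
      return "commit";  // safe to run commitTransitionToDedicatedConfigServer
    default:
      return "inspect"; // unexpected state; investigate before retrying
  }
}
```

A wrapper script would re-run the command on a timer until nextAction returns "commit".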
Commit the transition to a dedicated config server.
After the config shard reports a state of 'completed',
commit the transition from an embedded config server to a
dedicated config server:
use admin
db.adminCommand( { commitTransitionToDedicatedConfigServer: 1 } )
A successful commit returns the following output:
{ ok: 1, '$clusterTime': { ... }, operationTime: ... }
When successful, MongoDB removes the config shard from the cluster metadata and finalizes the transition to a dedicated config server. If the config shard is not completely drained, the command fails; continue checking the transition status until it reports { state: 'completed' }, then retry the commit.
After the transition is committed, listShards no
longer includes the config shard in the shard list. The cluster
now uses a dedicated config server and the config server no
longer stores application data as a shard.