Before you attempt any downgrade, familiarize yourself with the content of this document.
Downgrade Path
Once upgraded to 3.6, if you need to downgrade, we recommend downgrading to the latest patch release of 3.4.
Create Backup
Optional but Recommended. Create a backup of your database.
Considerations
While the downgrade is in progress, you cannot make changes to the collection metadata. For example, during the downgrade, do not do any of the following:
- any operation that creates a database 
- any other operation that modifies the cluster metadata in any way. See Sharding Reference for a complete list of sharding commands. Note, however, that not all commands on the Sharding Reference page modify the cluster metadata. 
Prerequisites
Before downgrading the binaries, you must downgrade the feature
compatibility version and remove any 3.6 features incompatible with 3.4 or earlier versions as outlined
below. These steps are necessary only if
featureCompatibilityVersion has ever been set to "3.6".
1. Downgrade Feature Compatibility Version
Downgrade the featureCompatibilityVersion to "3.4".

db.adminCommand({setFeatureCompatibilityVersion: "3.4"})

The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.
To ensure that all members of the sharded cluster reflect the updated
featureCompatibilityVersion, connect to each shard replica set
member and each config server replica set member and check the
featureCompatibilityVersion:
Tip
For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ) 
All members should return a result that includes:
"featureCompatibilityVersion" : { "version" : "3.4" } 
If any member returns a featureCompatibilityVersion that includes
either a version value of "3.6" or a targetVersion field,
wait for the member to reflect version "3.4" before proceeding.
For more information on the returned featureCompatibilityVersion
value, see View FeatureCompatibilityVersion.
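If you prefer not to check each member by hand, a small mongo shell sketch such as the following loops over a list of member addresses and prints each member's featureCompatibilityVersion. The host names are hypothetical placeholders; substitute your own members, and connect as a shard local user first if access control is enabled.
var members = [                                   // hypothetical host:port values; replace with your own members
    "shard0-a.example.net:27018",
    "shard0-b.example.net:27018",
    "cfg-a.example.net:27019"
];
members.forEach(function (host) {
    var conn = new Mongo(host);                   // direct connection to the member
    var res = conn.getDB("admin").runCommand(
        { getParameter: 1, featureCompatibilityVersion: 1 }
    );
    print(host + ": " + tojson(res.featureCompatibilityVersion));
});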
2. Remove Backwards Incompatible Persisted Features
Remove all persisted features that are incompatible with 3.4. For example, if you have defined any view definitions, document validators, or partial index filters that use 3.6 query features such as $jsonSchema or $expr, you must remove them.
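To help locate such features, a sketch like the following lists view definitions, collections with validators, and partial index filters in one database so you can review them for 3.6-only operators. The database name mydb is a hypothetical placeholder, and the output still has to be inspected manually.
var dbToCheck = db.getSiblingDB("mydb");          // hypothetical database name; repeat per database
// Views and collections that define a validator
dbToCheck.getCollectionInfos().forEach(function (info) {
    if (info.type === "view" || (info.options && info.options.validator)) {
        printjson(info);
    }
});
// Indexes that define a partial filter expression
dbToCheck.getCollectionNames().forEach(function (name) {
    dbToCheck.getCollection(name).getIndexes().forEach(function (idx) {
        if (idx.partialFilterExpression) {
            printjson(idx);
        }
    });
});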
Procedure
Downgrade a Sharded Cluster
Warning
Before proceeding with the downgrade procedure, ensure that all
members, including delayed replica set members in the sharded
cluster, reflect the prerequisite changes.  That is, check the
featureCompatibilityVersion and the removal of incompatible
features for each node before downgrading.
Download the latest 3.4 binaries.
Using either a package manager or a manual download, get the latest release in the 3.4 series. If using a package manager, add a new repository for the 3.4 binaries, then perform the actual downgrade process.
Disable the Balancer.
Turn off the balancer as described in Disable the Balancer.
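For example, one way to do this from a mongo shell connected to a mongos, as a minimal sketch:
sh.stopBalancer()         // disable the balancer
sh.getBalancerState()     // should return false
sh.isBalancerRunning()    // wait for this to return false before proceeding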
Downgrade each shard, one at a time.
Downgrade the shards one at a time. If the shards are replica sets, for each shard:
- Downgrade the secondary members of the replica set one at a time:
  - Perform a clean shut down of the mongod process (a minimal shutdown sketch follows this procedure).
    Note
    If you do not perform a clean shut down, errors may result that prevent the mongod process from starting. Forcibly terminating the mongod process may cause inaccurate results for db.collection.count() and db.stats() as well as lengthen startup time the next time that the mongod process is restarted.
    This applies whether you attempt to terminate the mongod process from the command line via kill or similar, or whether you use your platform's initialization system to issue a stop command, like sudo systemctl stop mongod or sudo service mongod stop.
  - Replace the 3.6 binary with the 3.4 binary.
  - Start the 3.4 binary with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

    mongod --shardsvr --port <port> --dbpath <path> \
      --bind_ip localhost,<hostname(s)|ip address(es)>

    Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start:

    sharding:
      clusterRole: shardsvr
    net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
    storage:
      dbPath: <path>
  - Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, you can issue rs.status() in the mongo shell.
    Repeat for each secondary member.
 
- Step down the replica set primary.
  Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

  rs.stepDown()
- When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:
  - Shut down the stepped-down primary and replace the mongod binary with the 3.4 binary.
  - Start the 3.4 binary with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

    mongod --shardsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

    Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:

    sharding:
      clusterRole: shardsvr
    net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
    storage:
      dbPath: <path>
 
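One way to perform the clean shut down referenced in the steps above is from a mongo shell connected directly to the member, as a minimal sketch (this requires appropriate privileges when access control is enabled):
db.getSiblingDB("admin").shutdownServer()   // cleanly shuts down the mongod this shell is connected to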
Downgrade the config servers.
If the config servers are replica sets:
- Downgrade the secondary members of the replica set one at a time:
  - Shut down the secondary mongod instance and replace the 3.6 binary with the 3.4 binary.
  - Start the 3.4 binary with both the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

    mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

    If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:

    sharding:
      clusterRole: configsvr
    net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
    storage:
      dbPath: <path>
  - Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell (see the state-check sketch after this procedure).
    Repeat for each secondary member.
 
- Step down the replica set primary.
  Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

  rs.stepDown()
- When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 3.4 binary.
- Start the 3.4 binary with both the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

  mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

  If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:

  sharding:
    clusterRole: configsvr
  net:
    port: <port>
    bindIp: localhost,<hostname(s)|ip address(es)>
  storage:
    dbPath: <path>
 
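The wait steps above rely on rs.status() to confirm member state. A minimal sketch that prints each member's name and state can help confirm that a restarted member has returned to SECONDARY, or that a new PRIMARY has been elected after a step down:
rs.status().members.forEach(function (m) {
    print(m.name + " : " + m.stateStr);       // expect PRIMARY, SECONDARY, etc.
});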
Re-enable the balancer.
Once the downgrade of sharded cluster components is complete, re-enable the balancer.
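For example, a minimal sketch from a mongo shell connected to a mongos:
sh.startBalancer()        // re-enable the balancer
sh.getBalancerState()     // should now return true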