3.4 MongoDB cluster upgrade to 3.6 and 4.0

Hi,

I have a MongoDB 3.4 cluster with three nodes (CentOS 7):

  • node01: Primary
  • node02: Secondary
  • node03: Arbiter

The configuration is as follows (10.0.0.1 for node01, 10.0.0.2 for node02, 10.0.0.3 for node03):

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

storage:
  dbPath: /data/mongodb
  journal:
    enabled: true

processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile

net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.1

replication:
  replSetName: rs01
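
This is node01's file; node02 and node03 use the same configuration with bindIp adjusted to their own address (only bindIp differs per node). For example, the net section on node02 looks like:

net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.2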

The replica set status is:

rs01:PRIMARY> rs.status()
{
        "set" : "rs01",
        "date" : ISODate("2020-05-18T07:22:14.091Z"),
        "myState" : 1,
        "term" : NumberLong(206),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1589786533, 3),
                        "t" : NumberLong(206)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1589786533, 3),
                        "t" : NumberLong(206)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1589786533, 3),
                        "t" : NumberLong(206)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "10.0.0.1:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 6017906,
                        "optime" : {
                                "ts" : Timestamp(1589786533, 3),
                                "t" : NumberLong(206)
                        },
                        "optimeDate" : ISODate("2020-05-18T07:22:13Z"),
                        "electionTime" : Timestamp(1583769604, 1),
                        "electionDate" : ISODate("2020-03-09T16:00:04Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "10.0.0.2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 6017890,
                        "optime" : {
                                "ts" : Timestamp(1589786532, 1),
                                "t" : NumberLong(206)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1589786532, 1),
                                "t" : NumberLong(206)
                        },
                        "optimeDate" : ISODate("2020-05-18T07:22:12Z"),
                        "optimeDurableDate" : ISODate("2020-05-18T07:22:12Z"),
                        "lastHeartbeat" : ISODate("2020-05-18T07:22:12.813Z"),
                        "lastHeartbeatRecv" : ISODate("2020-05-18T07:22:12.714Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "10.0.0.1:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "10.0.0.3:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 6017890,
                        "lastHeartbeat" : ISODate("2020-05-18T07:22:12.821Z"),
                        "lastHeartbeatRecv" : ISODate("2020-05-18T07:22:12.849Z"),
                        "pingMs" : NumberLong(2),
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}

As a first step, I am using this documentation to upgrade to version 3.6: https://docs.mongodb.com/manual/release-notes/3.6-upgrade-sharded-cluster/#upgrade-recommendations-and-checklists
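
Note that this page is the sharded-cluster checklist; for a replica set like mine the matching page is "Upgrade a Replica Set to 3.6". The rolling order is roughly the following (a sketch, with the node roles listed above):

// 1. Upgrade node02 (secondary): stop mongod, install the 3.6
//    binaries, restart, and wait for it to reach SECONDARY state.
// 2. Upgrade node03 (arbiter) the same way.
// 3. In a mongo shell on node01 (primary), step down so an upgraded
//    member takes over:
rs.stepDown()
// 4. Once node01 reports SECONDARY, upgrade its binaries and restart.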

I did a backup with mongodump on the primary node.
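
For reference, the dump command was along these lines (the output directory is just an example):

mongodump --host 10.0.0.1 --port 27017 --out /backup/mongodb-$(date +%F)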

Now, I check the feature compatibility version:

Node01:

rs01:PRIMARY> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : "3.4", "ok" : 1 }

Node02:

rs01:SECONDARY> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : "3.4", "ok" : 1 }

Node03:

rs01:ARBITER> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : "3.2", "ok" : 1 }

I see that the feature compatibility version on the ARBITER is wrong:

rs01:ARBITER> db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
{
        "ok" : 0,
        "errmsg" : "not master",
        "code" : 10107,
        "codeName" : "NotMaster"
}

I go to the primary to run this command:

rs01:PRIMARY> db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
{ "ok" : 1 }

I return to the ARBITER to check it:

rs01:ARBITER> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : "3.2", "ok" : 1 }

How can I change it?

Thanks

Welcome to the MongoDB Community @Celine_celine!

Since arbiters do not store any data, their Feature Compatibility Version (FCV) cannot be changed. Arbiters always report the downgrade value of FCV (so 3.2 for a 3.4 mongod is expected).
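
If you want to confirm the FCV across the data-bearing members from a single mongo shell session, here is a quick sketch (member addresses taken from your rs.status() output; assumes no authentication, per your posted config):

// Check FCV on the data-bearing members only; the arbiter can be skipped.
var hosts = ["10.0.0.1:27017", "10.0.0.2:27017"];
hosts.forEach(function (h) {
    var conn = new Mongo(h);  // direct connection to one member
    printjson(conn.getDB("admin").runCommand(
        { getParameter: 1, featureCompatibilityVersion: 1 }
    ));
});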

There’s an outstanding issue to add this to the documentation (DOCS-13029) that seems to have been overlooked. This should likely be mentioned in a few places; I’ll follow up with our docs team.

Regards,
Stennie


Thanks @Stennie_X

I understand, thanks for the information. I'll continue my upgrade 🙂

Re @Stennie_X

I've upgraded my replica set to version 4.0.

The arbiter reported the previous FCV only up to version 3.6.
On 4.0, my arbiter reports the current FCV:

rs01:ARBITER> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : { "version" : "4.0" }, "ok" : 1 } 

Besides rs.status() and the service status, are there any other ways to monitor the health of our cluster / replica set?
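
For context, what I have today boils down to these two checks (a sketch; the mongod systemd unit name is assumed from a standard package install):

# OS-level service check:
systemctl status mongod

# rs.status() is a wrapper around the replSetGetStatus command,
# so the same data can be polled from scripts:
mongo --quiet --eval 'printjson(db.adminCommand({ replSetGetStatus: 1 }))'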

Thanks
