Hi,
I have a MongoDB 3.4 replica set with three nodes (CentOS 7):
- node01: Primary
- node02: Secondary
- node03: Arbiter
The configuration is (10.0.0.1 for node01, 10.0.0.2 for node02, 10.0.0.3 for node03):
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
storage:
  dbPath: /data/mongodb
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.1
replication:
  replSetName: rs01
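All three nodes run the same configuration; only bindIp changes per host (an assumption based on the addresses listed above). For example, node02's net section would look like:

```yaml
net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.2   # node02 binds its own address
```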
The replica set status is:
rs01:PRIMARY> rs.status()
{
  "set" : "rs01",
  "date" : ISODate("2020-05-18T07:22:14.091Z"),
  "myState" : 1,
  "term" : NumberLong(206),
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1589786533, 3),
      "t" : NumberLong(206)
    },
    "appliedOpTime" : {
      "ts" : Timestamp(1589786533, 3),
      "t" : NumberLong(206)
    },
    "durableOpTime" : {
      "ts" : Timestamp(1589786533, 3),
      "t" : NumberLong(206)
    }
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "10.0.0.1:27017",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 6017906,
      "optime" : {
        "ts" : Timestamp(1589786533, 3),
        "t" : NumberLong(206)
      },
      "optimeDate" : ISODate("2020-05-18T07:22:13Z"),
      "electionTime" : Timestamp(1583769604, 1),
      "electionDate" : ISODate("2020-03-09T16:00:04Z"),
      "configVersion" : 3,
      "self" : true
    },
    {
      "_id" : 1,
      "name" : "10.0.0.2:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 6017890,
      "optime" : {
        "ts" : Timestamp(1589786532, 1),
        "t" : NumberLong(206)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1589786532, 1),
        "t" : NumberLong(206)
      },
      "optimeDate" : ISODate("2020-05-18T07:22:12Z"),
      "optimeDurableDate" : ISODate("2020-05-18T07:22:12Z"),
      "lastHeartbeat" : ISODate("2020-05-18T07:22:12.813Z"),
      "lastHeartbeatRecv" : ISODate("2020-05-18T07:22:12.714Z"),
      "pingMs" : NumberLong(1),
      "syncingTo" : "10.0.0.1:27017",
      "configVersion" : 3
    },
    {
      "_id" : 2,
      "name" : "10.0.0.3:27017",
      "health" : 1,
      "state" : 7,
      "stateStr" : "ARBITER",
      "uptime" : 6017890,
      "lastHeartbeat" : ISODate("2020-05-18T07:22:12.821Z"),
      "lastHeartbeatRecv" : ISODate("2020-05-18T07:22:12.849Z"),
      "pingMs" : NumberLong(2),
      "configVersion" : 3
    }
  ],
  "ok" : 1
}
As a first step, I am following this documentation to upgrade to version 3.6: https://docs.mongodb.com/manual/release-notes/3.6-upgrade-sharded-cluster/#upgrade-recommendations-and-checklists
I did a backup with mongodump on the primary node.
Now I check the feature compatibility version on each node:
Node01:
rs01:PRIMARY> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : "3.4", "ok" : 1 }
Node02:
rs01:SECONDARY> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : "3.4", "ok" : 1 }
Node03:
rs01:ARBITER> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : "3.2", "ok" : 1 }
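For easier comparison, the same check can be run against every member from a single host with the mongo shell (a sketch; the addresses are taken from the configuration above):

```sh
for h in 10.0.0.1 10.0.0.2 10.0.0.3; do
  echo "== $h =="
  mongo --host "$h" --port 27017 --quiet \
    --eval 'printjson(db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 }))'
done
```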
I see that the featureCompatibilityVersion on the ARBITER is wrong, so I try to set it there:
rs01:ARBITER> db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
{
"ok" : 0,
"errmsg" : "not master",
"code" : 10107,
"codeName" : "NotMaster"
}
So I go to the primary to run the command:
rs01:PRIMARY> db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
{ "ok" : 1 }
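For what it's worth, the featureCompatibilityVersion is persisted in the replicated admin.system.version collection, so after running setFeatureCompatibilityVersion on the primary it can be inspected directly on the data-bearing members (arbiters carry no data, so there is nothing to read there):

```
rs01:PRIMARY> db.getSiblingDB("admin").system.version.find({ _id: "featureCompatibilityVersion" })
```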
Then I go back to the ARBITER to check it:
rs01:ARBITER> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : "3.2", "ok" : 1 }
How can I change the featureCompatibilityVersion on the arbiter?
Thanks