Cluster upgrade where config server and replica set are on the same machine

Hi everyone!

I have a three-server cluster on 3.4. Each server runs one mongos, one mongod for our single data replica set, and one mongod for the config server replica set. This setup was designed a while ago, when we thought we were going to need sharding, but we never actually enabled sharding on any collections.

I’d like to upgrade that MongoDB cluster to 3.6 (and beyond, eventually up to the latest version). The documentation clearly states that the config server replica set should be upgraded first, then the data replica set, and the mongos instances last. Except that since those all run on the same machines for me, the config server mongod and the data mongod use the same binary. And mongos, while a different binary, comes from the same package. So I can’t really upgrade one and then the other (unless I keep a data mongod running from a now-removed binary, which doesn’t sound ideal).

I can of course move the config server replica set to 3 different servers before the upgrade, but I was wondering if there is a simpler way? The standard Ubuntu package installs the binaries in a common path without the version in it, so I’m not sure I can have a co-existing setup of both MongoDB 3.4 and 3.6.

What do you think? :slight_smile:

Thanks!

Hi @Wenceslas_des_Deserts,

Welcome to the MongoDB Community Forums.

I see that you’re mainly concerned about the same binary being used for both the shard server and the config server.

Here is what you can do to manage this within your existing infrastructure with minimal effort. The good news is that you can have a co-existing setup using the approach described below (a shell sketch of the whole procedure follows the list).

Before that, please note: for a production setup, it is advisable to run each mongod on a separate physical machine, whether it belongs to the shard replica set or the config server replica set. The reason is that if a hardware failure hits the instance running the primary nodes of both the config server and the shard replica set, there is a chance you could briefly end up with no active node accepting client read and write requests.

  1. Create a new folder to keep the new versions of the binaries.
  2. Download the 3.6 binaries from the MongoDB download page and place them inside that folder.
  3. Disable/stop the balancer: sh.stopBalancer()
  4. Shut down a config server secondary and start it with the new binary you downloaded, for example: /path/to/your/mongod -f config_server_config_file.conf (do this for both secondaries first, and for the primary last).
  5. Apply the same method to the shard replica set, starting with the secondary instances and finishing with the primary.
  6. Apply the same method to each mongos.
  7. Enable/start the balancer: sh.startBalancer()
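To make that concrete, here is a rough shell sketch of those steps under some assumptions: hostnames, ports, paths, and config-file names below are placeholders for your own, and 3.6.23 is just one example 3.6 release.

```bash
# 1-2. Create a folder for the new binaries and unpack the 3.6 tarball into it
mkdir -p /opt/mongodb-3.6
curl -o /tmp/mongodb-3.6.tgz \
  https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.6.23.tgz
tar -xzf /tmp/mongodb-3.6.tgz --strip-components=1 -C /opt/mongodb-3.6

# 3. Stop the balancer (run against any mongos)
mongo --host mongos1.example.com:27017 --eval 'sh.stopBalancer()'

# 4. Cleanly shut down one config server secondary, then restart it with
#    the new binary (repeat for the other secondary, then the primary)
mongo --host cfg1.example.com:27019 admin --eval 'db.shutdownServer()'
/opt/mongodb-3.6/bin/mongod -f /etc/mongod-configsvr.conf

# 5-6. Repeat the shutdown/restart cycle for each shard mongod
#      (secondaries first, primary last), then for each mongos,
#      always starting from the new /opt/mongodb-3.6/bin path

# 7. Re-enable the balancer
mongo --host mongos1.example.com:27017 --eval 'sh.startBalancer()'
```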

Feel free to shoot over any questions you have about the above setup, or let us know if you run into any issues.

All the best!


Hi,

Thanks for the quick answer!

I have two follow-up questions:

  • Does it mean the MongoDB binary is self-contained? No libraries or other shared artifacts that need to be deployed by the package? I didn’t realize that; it makes things much simpler! (I actually looked into the .deb, and it seems to contain only the binaries, some documentation, and the service and config files; see the inspection sketch after this list.)
  • About your warning: I know that setup isn’t ideal, but I’m not sure I understand the specific concern you’re raising here. If the instance running both primaries dies, shouldn’t the other instances elect a new primary for both the config server rs & the data rs?
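For reference, here is one quick way to inspect what the package ships, assuming the official mongodb-org packages from the MongoDB apt repository (the package name is an assumption about the poster’s setup):

```bash
# List every file installed by the server package; on a typical install
# this shows /usr/bin/mongod, /etc/mongod.conf, a service unit, and docs
dpkg -L mongodb-org-server
```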

Yes, the binaries are self-contained, as long as you have the supported package dependencies installed (you will get an error if one is missing; for example, libcrypt.so or libssl.so will usually already be on your disk, but if such a file is not present in /usr/lib/, you just need to copy it there from wherever it is located).
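A quick way to check that all shared-library dependencies of the downloaded binary resolve is ldd (the binary path below is a placeholder for wherever you unpacked the 3.6 tarball):

```bash
# Print the shared libraries mongod links against; any line reading
# "not found" marks a dependency that must be installed or copied in
ldd /opt/mongodb-3.6/bin/mongod
```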

About the new-primary election case: a new primary will be elected just fine, but think of the case where a mongo client is issuing a query while the election is happening. It’s a rare window of downtime, but it is possible, and you might want to avoid it in production.
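If you want to see that window for yourself, one way is to watch the member states during a failover (a minimal sketch; the hostname and port are placeholders, and it can be run against any member of the replica set):

```bash
# Print each member's state; during an election you may briefly see
# no member reporting PRIMARY
mongo --host db1.example.com:27017 --eval '
  rs.status().members.forEach(function (m) {
    print(m.name + " -> " + m.stateStr);
  })'
```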

I hope that answers your question, @Wenceslas_des_Deserts. If not, feel free to let us know if you have any other questions or run into any issues. Happy to help!

Yes, thank you very much! I really appreciate the quality and quickness of your answers!


My pleasure, @Wenceslas_des_Deserts!
