RS primary still connects to old replica set members

I have the following MongoDB deployment running in AWS:
First replica set: mongors1x, mongors1y, mongors1z.
Second replica set: mongors2x, mongors2y, mongors2z.
There is also a config replica set and a mongos, but they aren't important here.

My current mission is to take a snapshot of mongors1x, launch a host from this snapshot, and reconfigure the mongodb service on that host so it runs as a single node in its own replica set, separate from my regular deployment.

What I did:

  1. Launched a host from a snapshot of mongors1x. Let's call it mongorestorers1.
  2. Started the server on it.
  3. Logged in and executed the following:
    cfg = rs.conf()
    cfg.memebers = [ { _id: 1, host: "mongorestorers1:27017"}]
    rs.reconfig(cfg, {force: true})

After the reconfig it became PRIMARY. However, when I look at the log, I can see that it still connects to mongors1x, mongors1y, and mongors1z.

Is that normal?

Here are the log entries that show my newly launched host talking to my main deployment, which should be separate:
2021-11-05T06:04:29.738+0000 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongors2y:27017
2021-11-05T06:04:29.745+0000 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongors2y:27017, took 7ms (1 connections now open to mongors2y:27017)
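Worth noting: those lines come from the ShardRegistry, which belongs to the sharding machinery rather than to replica-set replication, so `rs.reconfig()` alone does not affect them. If the snapshot was taken from a node started with `--shardsvr`, it still carries the cluster's sharding metadata, including the shardIdentity document that points at the old config servers. Assuming you can connect with the mongo shell, you can check for it like this (this is the standard location for that document on recent versions, but verify on yours):

    // Look for a shard identity that still ties the restored
    // node to the old cluster's config servers:
    db.getSiblingDB("admin").system.version.find({ _id: "shardIdentity" })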

Is this a typo, or is that what was actually used during the reconfigure?

This is a typo. The rs.reconfig() call returned "ok" as 1.
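For anyone following along, the corrected snippet would look like this (same host name as in the original post; note `members` spelled correctly, and straight quotes, since curly quotes are a syntax error in the shell):

    cfg = rs.conf()
    cfg.members = [ { _id: 1, host: "mongorestorers1:27017" } ]
    rs.reconfig(cfg, {force: true})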
