I have a 3-node replica set (1 of the 3 is a hidden delayed node) running on my local macOS system, and a dockerized Node.js app that connects to it. The problem is connecting to the replica set from inside Docker.
The non-dockerized Node.js app is able to connect to the replica set using mongoose.connect("mongodb://localhost:27017,localhost:27018/dev"), and triggering an election doesn’t affect availability. Since I can’t use the same connection string from inside the Docker container (I may be a noob, it could be possible to do so), I tried mongoose.connect("mongodb://host.docker.internal:27017,host.docker.internal:27018/dev"), but it doesn’t work (the reason for using host.docker.internal: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds). In contrast, mongoose.connect("mongodb://host.docker.internal:27017/dev") works like a charm, but it doesn’t give me maximum availability if I do an rs.stepDown() on the primary, which defeats the purpose.
It looks like you are missing the replicaSet option from your connection string. This option enables monitoring of changes in replica set configuration and availability (as opposed to a direct connection to a specific replica set member).
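For example (assuming your replica set is named rs0), the connection string would look like:

```
mongodb://localhost:27017,localhost:27018/dev?replicaSet=rs0
```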
I assume this is just for development/learning purposes, but one caution on this 3-node configuration is that w:majority or w:2 writes will need to be acknowledged by your delayed secondary in the event another data-bearing member of the replica set is unavailable. For example, if your delayed secondary applies writes with a 3600 second delay and a normal secondary is down, majority writes will take at least 3600s to be successfully acknowledged.
Hi @Stennie, I see that with the replicaSet option I don’t have to list all my hosts and their ports. However, I was only able to use it with the non-dockerized Node.js app, changing my connection string from mongoose.connect("mongodb://localhost:27017,localhost:27018/dev") to mongoose.connect("mongodb://localhost:27017/dev?replicaSet=rs0"). It didn’t help with the original problem: the dockerized Node.js app connecting with the replicaSet option, like mongoose.connect("mongodb://host.docker.internal:27017/dev?replicaSet=rs0"), still did not work.
Would this topology be an issue for production purposes if my write concern is left at the default of w:1? Also, shouldn’t a write being acknowledged by the delayed secondary be a bad thing, as delayed nodes should be responsible only for delayed replication and voting (if chosen to vote)? Sorry if I’m talking gibberish, I don’t know much about write concerns.
The important aspect of a replica set connection is that clients use the hostnames in the replica set config. If you have port forwarding or aliases that allow connection via a different hostname (e.g. host.docker.internal), clients will establish an initial connection and then try to connect to the hosts in the replica set config if you specified the replicaSet connection option.
You can manually work around this by adding the expected hostnames and IP mappings in /etc/hosts, but DNS hostnames would be more reliable to maintain if you plan on adding new replica set members in the future.
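As a sketch (the hostnames and IP address below are purely illustrative; substitute whatever hostnames actually appear in your rs.conf() and the address your container can reach the Docker host on), the container’s /etc/hosts entries might look like:

```
# /etc/hosts inside the container -- hostnames and IP are placeholders
192.168.65.2   mongo1.example.net
192.168.65.2   mongo2.example.net
```

With Docker you can inject the same mappings at run time rather than editing the file, e.g. docker run --add-host mongo1.example.net:192.168.65.2 ....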
I wouldn’t recommend this topology for a production deployment unless you are comfortable with the consequences of replication to a delayed secondary. I would start with a 3-member topology for data redundancy and failover, and add special nodes (like a delayed secondary or a hidden member for reporting) as additional hidden and non-voting members.
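As a sketch of that recommendation (the hostname and delay are placeholders, and note that slaveDelay was renamed secondaryDelaySecs in MongoDB 5.0), a delayed member added to an existing 3-member replica set as hidden and non-voting could look like:

```
// mongo shell: add a hidden, non-voting delayed secondary
// (hostname is a placeholder for illustration)
rs.add({
  host: "mongo4.example.net:27017",
  hidden: true,      // invisible to client applications
  priority: 0,       // can never be elected primary
  votes: 0,          // excluded from election majority and the majority commit point
  slaveDelay: 3600   // applies oplog entries 1 hour behind the primary
})
```

Because this member has votes: 0, it does not affect the majority calculations discussed below.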
In a degraded scenario with only the primary and delayed members available, data won’t be replicated to the delayed secondary until after the configured delay. If you have a significant delay, you are exposed to the risk of data loss in the unfortunate event the primary has an unrecoverable issue before you get your second data-bearing member back online. You can avoid this scenario by adding another non-delayed voting secondary.
If you are using w:1 write concern you must already be comfortable with the potential for data to be rolled back (I would recommend w:majority unless this isn’t a concern). However, even if you aren’t writing data with majority write concern you may be reading data with majority read concern (for example, using change streams).
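For example, majority write concern can be set directly in the connection string (replica set name assumed to be rs0 here):

```
mongodb://localhost:27017,localhost:27018/dev?replicaSet=rs0&w=majority&wtimeoutMS=5000
```

The wtimeoutMS option bounds how long a write waits for acknowledgement instead of blocking indefinitely; an error after the timeout does not mean the write failed, only that it wasn’t acknowledged in time.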
The majority commit point for your replica set won’t advance until data is committed to a majority of data-bearing voting members, so you will also have cache pressure similar to performance issues with a PSA replica set.
With a 3-member replica set the majority required to sustain a primary is 2 members, so I assumed your delayed secondary is voting (otherwise you would have no primary if either of the other members were unavailable).
This is expected behaviour as called out in the write concern documentation: