Same connection string to connect to any node of a replica set


I am doing a personal project to learn about different technologies, and I have reached a point with MongoDB where I don't know whether what I want to do is possible. Let me explain:

I have a replica set with a primary and two secondaries, and the project I am developing is in Python. When I want to connect, I connect against localhost:27017. The problem is when 27017 is down: even though the replica set members localhost:27018 and localhost:27019 are up, it doesn't connect.
So I wanted to know if there is a way for me to always connect to localhost:27017 (either from Python or Compass), even though behind the scenes it is actually connecting to localhost:27018.

I've read a lot about it, but I can't figure it out. Could it perhaps be done with a sharded cluster and a router? I'm a bit lost on this.

Thank you very much in advance, any help that allows me to continue researching and learning is appreciated. If you need more details, let me know and I will add them to the OP.

Hi @Jaime_Martin and welcome in the MongoDB Community :muscle: !

First of all, if you want to learn more about MongoDB, you should check out the MongoDB University. It’s free and full of courses that will help you get up to speed with MongoDB. Given the context of your question, the M103 one should be just right for you.

Now to answer your question. MongoDB works with a replica set (RS) of, usually, 3 nodes. If you want to connect to the full replica set rather than to a single node, you have to use the full connection string. In your case it's something like:
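A minimal sketch of that connection string, assuming the replica set is named `rs0` (substitute the actual name from `rs.conf()`):

```
mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0
```

With all three hosts listed, the driver can still reach the RS even when 27017 is down.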


More about connection strings in the doc:

If you connect with the replicaSet option, the driver will retrieve the RS config and identify the Primary in the list of servers. We say that the drivers are "replica set aware": the driver knows the entire topology, so it can adapt in case another node becomes Primary.

Sharded clusters are an entirely different story. You use a sharded cluster when you want to split the workload across multiple RSs working together (= shards). All the shards are then reached through mongos nodes (= routers), which are usually hosted near the drivers. You usually only need this when you have more than 2 TB of data.
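For illustration only, a connection string to a sharded cluster simply lists the mongos routers (the ports here are hypothetical) and uses no replicaSet option, since the routers hide the underlying shards:

```
mongodb://localhost:27016,localhost:27026
```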



Hello Maxime,

Thank you very much for your reply.

Yes, so far that's how I had been working, with that connection string, but I wanted to know if what I described was possible, and I understand from your answer that it isn't.

Thank you very much for the links, I will take a look at them.

Best regards,



Try killing the primary during write operations: the writes will automatically move to another node once the election of the new primary is done.
If you are using a recent version of MongoDB and of the driver (v4.2 and up), retryable writes are enabled by default. If you also write with the write concern w=majority, you shouldn't lose any write operation in the end.


I understand.

Let's see if you can clear up the last doubt I have. I think I know the answer, but just to make sure: if I have a replica set with 3 nodes (without an arbiter) and one goes down, how is the new primary automatically chosen? Couldn't there be a 1-1 tie in the votes?

I have read this on Stack Exchange, and this is how I think it works. Is this correct?

“If you have 3 voting nodes in the replica set configuration and any single node is unavailable, the remaining 2 nodes still represent a strict majority of the replica set configuration (i.e. 2/3 nodes) and can elect a primary. The primary election requires a strict majority of voting nodes, so either 2 or 3 votes will elect a primary. With an even number of voting nodes (for example, 4) a strict majority will require n/2+1 votes (so 3 votes). With all members healthy, a 4 node replica set with an even number of votes could result in a 2/2 split and take longer to reach consensus.”
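The quoted rule can be written down as a one-liner:

```python
# A primary election needs a strict majority of the voting nodes.
def majority(voting_nodes: int) -> int:
    """Smallest strict majority: floor(n/2) + 1."""
    return voting_nodes // 2 + 1

# 3 nodes -> 2 votes needed, so losing 1 node still allows an election.
# 4 nodes -> 3 votes needed: an even count raises the bar without
#            letting you lose any more nodes than with 3.
```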

Thank you very much for the time you dedicate to clarifying doubts and helping.

Hi @Jaime_Martin,

Sorry for the delay, I had a baby since last time so the amount of mess in my life is definitely increasing.

In my comment RS = Replica Set, P = Primary, S = Secondary, PSS = state of the RS with P, S & S in this case.

If you have a 3 node RS:

  • If you have 3 nodes alive, the most up-to-date one will ask if it can become primary. The 2 others will vote for it => you get a primary => PSS :green_circle:.
  • If one node dies, there are 2 options.
    • The P died: the most up-to-date of the 2 secondaries will call an election for itself. The other node will vote for it. No ties, because one node calls the vote before the other and prevents the other one from calling a vote too.
    • One of the Ss dies: nothing happens => PS :green_circle:.
  • 2 nodes die. At that point you can't reach a majority of the voting members of the RS anymore, as you are left with a single node. If it's the P that survived, it will perform a stepDown operation and become an S. That's because it doesn't know what happened to the 2 other nodes: maybe they are still alive but out of its reach (network partition), and maybe they are performing an election in the meantime to re-elect a P. The stepDown operation prevents a split-brain issue where you would end up with 2 primaries at the same time. => You end up with a lone S :red_circle: => Read-only at best: with the readPreference "primary" you can't even read; any other read preference will read from this secondary.
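The three scenarios above boil down to a single majority check (plain Python, no driver needed):

```python
# A node can be (or stay) primary only if it can see a strict
# majority of the voting members of the replica set.
def has_majority(alive: int, total_voters: int) -> bool:
    return alive >= total_voters // 2 + 1

TOTAL = 3
print(has_majority(3, TOTAL))  # PSS: election possible
print(has_majority(2, TOTAL))  # one node down: 2/3 is still a majority
print(has_majority(1, TOTAL))  # two nodes down: survivor steps down, read-only
```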

Let's take note of one thing: with 3 nodes, I have a majority at 2 nodes. Each day a server has a certain probability of failing, and I start to have problems if 2 nodes fail.
The more nodes I have, the greater the chance of losing 2 of them and having a problem. I'll come back to this logic in a second.

Now let's try an RS with 4 nodes. The majority is at 3 now.
This means I can still only afford to lose one node.
It would be more interesting to have 5 nodes, because the majority is still 3 and now I can afford to lose 2 nodes instead of one.

With a 3 or 4 node RS, I can only afford to lose 1 node. But with 4 nodes instead of 3, I now have a greater probability of losing 2 nodes than when I had only 3 nodes (cf. my comment above). So basically, with 4 nodes, I made my RS less resilient and less highly available (HA) than with 3 nodes.
That's why we don't recommend 4 or 6 node RSs and prefer 3, 5 or 7 node RSs, which provide better high availability.
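The same arithmetic, as a quick script:

```python
# Fault tolerance: how many nodes you can lose and still elect a primary.
def fault_tolerance(nodes: int) -> int:
    majority = nodes // 2 + 1
    return nodes - majority

for n in range(3, 8):
    print(n, "nodes -> can lose", fault_tolerance(n))
# 3 -> 1, 4 -> 1, 5 -> 2, 6 -> 2, 7 -> 3:
# an even node count buys nothing, hence the 3/5/7 recommendation.
```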

Let's finish with a final example. Let's say you only have 2 data centers (DCs) available to deploy your MongoDB RS. It's not optimal, but that's what you have.

You follow the recommendations and go with a 3 node RS => DC1 takes 2 nodes and DC2 takes 1 node.
That's the best possible option here. If DC1 goes down entirely, you lose 2 nodes and DC2 is in read-only :red_circle:. If DC2 goes down, you only lose 1 node; the 2 other nodes can perform an election if necessary :green_circle:.

If you decided that you wanted some symmetry in there and went with a 4 node RS instead => DC1 takes 2 nodes, DC2 takes 2 nodes. The majority is still at 3 nodes… I guess you understand the rest now. If DC1 or DC2 goes down, you lose 2 nodes at once => no more majority => :red_circle: and :red_circle:. You are less resilient to a DC-level failure than with 3 nodes.

3 nodes in DC1 and 2 nodes in DC2 would provide the same level of resilience to a DC-level failure, but better resilience to server-level failures.

The optimal solution here would be to bring a 3rd DC into the game and move one node from DC1 to DC3. Then you could afford to lose any DC entirely and still have a majority (at least 3 of the 5 nodes) across the 2 remaining DCs.
The same logic applies if you have 3 DCs and a 3 node RS (1 node in each DC).
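To tie the DC examples together, here is a small helper that checks whether a given node placement survives the loss of any single DC (DC names are just illustrative):

```python
# For each DC, simulate losing all its nodes and check whether the
# survivors still form a strict majority of the voting members.
def survives_any_dc_loss(placement: dict) -> bool:
    total = sum(placement.values())
    majority = total // 2 + 1
    return all(total - lost >= majority for lost in placement.values())

print(survives_any_dc_loss({"DC1": 2, "DC2": 1}))            # losing DC1 kills it
print(survives_any_dc_loss({"DC1": 2, "DC2": 2}))            # 4 nodes: no better
print(survives_any_dc_loss({"DC1": 2, "DC2": 2, "DC3": 1}))  # 5 nodes over 3 DCs
print(survives_any_dc_loss({"DC1": 1, "DC2": 1, "DC3": 1}))  # 3 nodes over 3 DCs
```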

I hope it's clearer now. Ties aren't the problem. If a tie ever occurs (I'm not even sure it's actually possible), it'll be resolved within the next second. The real problem is the majority and the probability of losing nodes.