Hi @Jaime_Martin,
Sorry for the delay, I had a baby since last time so the amount of mess in my life is definitely increasing.
In my comment RS = Replica Set, P = Primary, S = Secondary, PSS = state of the RS with P, S & S in this case.
If you have a 3-node RS:
- If you have 3 nodes alive, the most up-to-date one will ask if it can become primary. The 2 others will vote for it => you get a primary => PSS.
- If one node dies, 2 options:
- the P died: the most up-to-date of the 2 secondaries will call an election for itself. The other node will vote for it. No tie, because one node will call the vote before the other and prevent the other from calling a vote too.
- One of the S dies: nothing happens. => PS.
- 2 nodes die. At that point you can't reach a majority of the voting members of the RS anymore, as you are left with a single node. If it's the P that survived, it will perform a stepDown operation and become an S. That's because it doesn't know what happened to the 2 other nodes: maybe they are still alive but out of its reach (network partition) and are holding an election in the meantime to elect a new P. The stepDown operation prevents a split-brain situation where you would end up with 2 primaries at the same time. => You end up with S => read only. You can still read from this secondary with any readPreference except "primary".
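The counting behind these 3 scenarios can be sketched in a few lines of Python (a helper of my own, not MongoDB internals): a primary needs votes from a strict majority of ALL voting members, not just of the nodes it can currently reach.

```python
# Minimal sketch (my own helper, not MongoDB code): can a group of
# reachable nodes elect or keep a primary? The majority is counted
# over ALL voting members of the RS, not over the reachable ones.

def can_have_primary(voting_members: int, reachable: int) -> bool:
    majority = voting_members // 2 + 1
    return reachable >= majority

print(can_have_primary(3, 3))  # True  -> PSS
print(can_have_primary(3, 2))  # True  -> PS, an election can still succeed
print(can_have_primary(3, 1))  # False -> the lone survivor steps down, read only
```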
Let's note one thing: with 3 nodes, the majority is 2 nodes. Each day a server has a certain probability of failing, and I start having problems if 2 nodes fail.
The more nodes I have, the greater my chances of losing 2 of them and having a problem. I'll come back to this logic in a second.
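To put a number on that, here is a quick binomial calculation (the 1% daily failure probability is a made-up figure, purely for illustration): assuming independent failures, adding a 4th node roughly doubles the chance of seeing 2 or more failures compared to 3 nodes.

```python
from math import comb

def prob_at_least_k_failures(n: int, k: int, p: float) -> float:
    """P(at least k of n independent servers fail), each failing with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.01  # hypothetical per-server daily failure probability
print(prob_at_least_k_failures(3, 2, p))  # ~0.000298
print(prob_at_least_k_failures(4, 2, p))  # ~0.000592 -> roughly doubled
```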
Now let’s try to have a RS with 4 nodes. Majority is at 3 now.
This means that I can only afford to lose one node.
It would be more interesting to have 5 nodes. Because the majority is still 3 and now I can afford to lose 2 nodes instead of one.
With a 3 or 4 node RS, I can only afford to lose 1 node. But with 4 nodes instead of 3, I now have a greater probability of losing 2 nodes than when I had only 3 (cf. my comment above). So basically, with 4 nodes, I made my RS less resilient and less highly available (HA) than with 3 nodes.
That's why we don't recommend 4 or 6 node RSs and prefer 3, 5 or 7 node RSs, which provide better high availability.
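The arithmetic behind this recommendation, as a small sketch: an even node count raises the majority requirement without buying any extra fault tolerance.

```python
def majority(n: int) -> int:
    """Strict majority of n voting members."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """How many nodes can fail while a majority remains reachable."""
    return n - majority(n)

for n in range(3, 8):
    print(f"{n} nodes -> majority {majority(n)}, can lose {fault_tolerance(n)}")
# 3 nodes -> majority 2, can lose 1
# 4 nodes -> majority 3, can lose 1   (no better than 3 nodes)
# 5 nodes -> majority 3, can lose 2
# 6 nodes -> majority 4, can lose 2   (no better than 5 nodes)
# 7 nodes -> majority 4, can lose 3
```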
Let's finish with a final example. Let's say you only have 2 Data Centers (DC) available to deploy your MongoDB RS. It's not optimal, but that's what you have.
You follow the recommendations and go with a 3-node RS => DC1 takes 2 nodes and DC2 takes 1 node.
That's the best possible option here. If DC1 goes down entirely, you lose 2 nodes and DC2 is in read only. If DC2 goes down, you only lose 1 node and the 2 other nodes can perform an election if necessary.
If you decided that you wanted some symmetry in there and that a 4-node RS was a better idea => DC1 takes 2 nodes, DC2 takes 2 nodes. Majority is still at 3 nodes… I guess you understand the rest now. If DC1 or DC2 goes down, you lose 2 nodes at once => no more majority => read only. You are less resilient to a DC-level failure than with 3 nodes.
3 nodes in DC1 and 2 nodes in DC2 would give the same resilience to a DC-level failure, but better resilience to server-level failures.
The optimal solution here would be to bring a 3rd DC into the game and move one node from DC1 to DC3. Then you could afford to lose any DC entirely and still have 3 nodes in total across the 2 others.
The same logic applies if you have 3 DCs and a 3-node RS (1 node in each DC).
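All the DC layouts above can be checked with the same counting trick (again just a sketch, the function name is mine):

```python
def survives_any_dc_loss(dc_sizes: list[int]) -> bool:
    """True if losing any single DC entirely still leaves a voting majority."""
    total = sum(dc_sizes)
    majority = total // 2 + 1
    return all(total - lost >= majority for lost in dc_sizes)

print(survives_any_dc_loss([2, 1]))     # False -> losing DC1 means read only
print(survives_any_dc_loss([2, 2]))     # False -> losing either DC means read only
print(survives_any_dc_loss([3, 2]))     # False -> still can't survive losing DC1
print(survives_any_dc_loss([2, 2, 1]))  # True  -> any single DC can go down
print(survives_any_dc_loss([1, 1, 1]))  # True  -> same with 3 nodes over 3 DCs
```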
I hope it's clearer now. Ties aren't the problem: if a tie ever occurs (I'm not even sure it's actually possible), it will be resolved within the next second. The real problem is the majority and the probability of losing nodes.
Cheers,
Maxime.