Three-member replica set across data centers

Hi,

I was going through this official document about how to set up a 3-member replica set across 2 data centers.
For example:

  1. 2 members in Datacenter-A with one arbiter node, and
  2. 1 member in Datacenter-B

For the above configuration, let’s assume that Datacenter-A goes down. My understanding is that since Datacenter-B still has one member, it will become primary. But the official document says the replica set will become read-only.

Could you please let me know why the member in Datacenter-B does not become primary? Is it because the arbiter node is present only in Datacenter-A? What if I don’t have an arbiter node in either Datacenter-A or B, or have an arbiter node in Datacenter-B as well?

First, it is recommended (perhaps even mandatory) to start with an odd number of members. An arbiter still counts as a voting member even though it cannot become primary, since it holds no data. In your scenario you actually start with 4 members. To elect a primary, a majority of the voting members must agree on one of them. The majority for a 4-member replica set is 3. If you lose DC-A, you lose 3 members and only 1 is left, so a majority is impossible. Losing DC-B, on the other hand, still leaves a majority in DC-A.
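To make the arithmetic concrete, here is a small sketch of the majority rule applied to voting members (the `majority` helper is just for illustration, not a MongoDB or driver API):

```js
// Majority of voting members = floor(n / 2) + 1
const majority = (n) => Math.floor(n / 2) + 1;

majority(3); // 2 -> a 3-member set tolerates losing 1 member
majority(4); // 3 -> adding a 4th member (e.g. an arbiter) still only tolerates losing 1
majority(5); // 3 -> a 5-member set tolerates losing 2 members
```

In other words, adding the arbiter in DC-A raises the majority threshold without adding any failure tolerance on the DC-B side.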

I think the minimum configuration that supports the loss of one DC is 3 DCs with 1 data-bearing node per DC.
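As a rough sketch of that 3-DC layout (the hostnames below are made up), the replica set could be initiated with one data-bearing, voting member per data center:

```js
// One data-bearing, voting member per data center (hypothetical hostnames)
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "dc-a.example.net:27017" },
    { _id: 1, host: "dc-b.example.net:27017" },
    { _id: 2, host: "dc-c.example.net:27017" }
  ]
})
```

With this layout, losing any single DC still leaves 2 of the 3 voting members reachable, so a primary can be elected.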


Hi @Allwyn_Jesu,

There have been a number of topics about arbiters in this forum already. I’ll just mention this one below, but please have a read through a few of them.

Arbiters are not recommended in a production environment; it’s as simple as that. If you can, avoid them.

A MongoDB Replica Set (RS) can only elect a primary if the majority of the voting members in the RS can be reached. In your setup with 2 nodes in DC1 and 1 node in DC2, if DC1 goes down / offline, you are left with a single node that cannot reach the majority (2), so it cannot become primary. This is to prevent the possibility of ending up with 2 primaries: an isolated (but still running) DC1 would be in a position to elect a primary on its side (== split brain).
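If you want to see this on a running deployment, one way (assuming you can still connect to the surviving node with `mongosh`) is to list the voting members and the current member states:

```js
// Which members carry a vote, and which one is an arbiter?
rs.conf().members.map(m => ({ host: m.host, votes: m.votes, arbiterOnly: m.arbiterOnly }))

// Current view of the set: with DC1 down, the surviving member reports
// itself as SECONDARY because it cannot reach a majority to elect a primary
rs.status().members.map(m => ({ host: m.name, state: m.stateStr }))
```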

The only way to get something “perfect” is to have 3 DCs, each with one node. As that is complicated / costly to set up, in MongoDB Atlas we use the AWS “availability zones” and make sure each node is in a different availability zone.

I also explained here a few hours ago why even one arbiter is already a bad idea - even in a 5-node RS: readConcern with more than 1 secondary and an arbiter - #2 by MaBeuLux88.

I hope this helps :sweat_smile:.

Cheers,
Maxime.
