Replica Set of Only 2 Nodes

Hi, we have a customer whose internet connection might drop at any time. We’ve decided to do a local installation, although we would like to keep a backup of their database on our own server, so if their local server goes down, they can keep working against the replica on our server. But we would prioritize working on their local one.

Expected behaviour:

  • If local mongo and remote are available → local
  • If local mongo loses connection with remote mongo, but local is available from the local web server → local
  • If local mongo is not accessible from local web server → remote

What options do we have to achieve this behaviour?

Welcome to the MongoDB Community @SuredaKuara!

Replica sets require a strict majority of voting members (n/2+1) available in order to elect or sustain a primary, so you will need a minimum of 3 replica set members for a fault tolerant deployment.

If two of those are local and have a higher priority than the remote member, the primary will always be local.

The suggested configuration would look like the following (a rough mongosh sketch follows the list):

  • member1 (local): priority 2
  • member2 (local): priority 2
  • member3 (remote): priority 1
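
As a sketch of how such a set might be initiated in mongosh (the hostnames local1, local2 and remote1 are placeholders, not from this thread):

    // Run once, from mongosh connected to the member that seeds the set.
    rs.initiate({
      _id: "myRS",
      members: [
        { _id: 0, host: "local1:27017", priority: 2 },   // local, preferred primary
        { _id: 1, host: "local2:27017", priority: 2 },   // local, preferred primary
        { _id: 2, host: "remote1:27017", priority: 1 }   // remote, offsite copy
      ]
    })

With these settings the remote member effectively never holds the primary role: an election needs 2 of 3 votes, and whenever a local member is healthy its higher priority means it will take (or retake) primary.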

This differs from your expected behaviour in one regard: there is a second local member so failover will not be remote. There is still a remote replica set member for offsite data redundancy.

Regards,
Stennie


Hi @Stennie_X, thanks for your response.

But if I apply this configuration, would member3 (remote) fail if member1 and member2 went down? Maybe that is what you explained in your last paragraph, but I cannot quite understand it.

Regards,
Andreu

I think your systems and your customer’s are in two different geographic locations.

Have you checked this page?
Replica Sets Distributed Across Two or More Data Centers — MongoDB Manual

You have to use at least 3 members, as @Stennie_X briefly explained. But there is a problem with this 2+1 setup: if the 2 local members go down, your remote will be read-only.

You can have a 3-center setup at the cost of some extra hardware. Please check that link and see how it maps onto your case.

Hi @Yilmaz_Durmaz, that is what I meant.

Thanks for your responses; we will discuss whether the 2+1 approach is a good option. Just one more question: this configuration would also work with an Arbiter node, wouldn’t it?

Arbiters do not hold data; they are there only to provide a majority in voting.

In any case, you will need a 3rd member with its own IP address/port that both the other members can access, whether on a real or virtual machine in your location or the customer’s. That is why the connection string has the addresses of all members: mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=myRS
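
As an illustrative sketch (hostnames as in the connection string above), the 3rd member is registered from the current primary, and clients then connect with the full member list:

    // From mongosh connected to the current primary:
    // add the 3rd data-bearing member (hypothetical address).
    rs.add({ host: "host3:27017", priority: 1 })

    // Clients connect with all members listed, so the driver can
    // discover whichever one is currently primary, e.g.:
    // mongosh "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=myRS"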

An arbiter on the customer’s side is what the manual recommends. I haven’t tried it myself, but I think if the arbiter survives it will vote for the remote member to become primary. But if both local members are lost, the remote will become read-only.

Also, because the connection between centers is possibly slow, writes that have only reached a single local machine might be rolled back and lost forever. The extra cost of another member might be minor compared to data loss, so keep this in mind in your next meeting.
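
One hedge against losing such writes, assuming the application can tolerate the extra latency, is a majority write concern, so a write is only acknowledged once it has replicated beyond a single machine. A minimal mongosh sketch (the collection name is hypothetical):

    // The insert is only acknowledged after a majority of data-bearing
    // members (2 of 3 here) have the write, so an acknowledged write
    // survives the loss of any single member.
    db.orders.insertOne(
      { item: "example" },
      { writeConcern: { w: "majority", wtimeout: 5000 } }
    )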

Check this one (if you haven’t) on how to create an arbiter member and add it to the replica set:
Add an Arbiter to Replica Set — MongoDB Manual
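
For illustration only (the hostname and dbpath are placeholders), the procedure on that page boils down to starting a mongod for the arbiter and registering it from the primary:

    // 1. Start a mongod to act as the arbiter, e.g.:
    //    mongod --port 27017 --dbpath /var/lib/mongodb-arb --replSet myRS
    // 2. Then, from mongosh connected to the primary:
    rs.addArb("arbiter1:27017")

Note that on recent MongoDB versions rs.addArb() may require a cluster-wide default write concern to be set first; see the linked manual page for details.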


@Yilmaz_Durmaz many thanks for your time, we will take a look.

Hi @SuredaKuara,

I would avoid using arbiters if possible. For more elaboration on the downsides, see Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie_X.

If there are three configured voting members in your replica set, you need at least two healthy voting members to elect or maintain a primary. If any two members of your replica set are down, the healthy third member will be a read-only secondary.

This avoids data inconsistency scenarios where a network partition could otherwise result in more than one primary. For example: member1 is down, the connection to the remote network is down, but member2 and member3 are healthy. If member2 and member3 could both decide to be primaries and accept writes, the data in these replica set members would diverge. The strict majority requirement ensures that a primary will only be present in a partition with a majority of voting members.
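
As a quick way to observe this from mongosh (connected to any reachable member):

    // Print each member's state as seen from this member: during a
    // partition, a member that cannot see a majority stays SECONDARY
    // (read-only) rather than electing itself primary.
    rs.status().members.forEach(m => print(m.name + ": " + m.stateStr))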

Since your original description had all requests originating via a local web server with unreliable internet, I assumed you would prefer local availability, but you can plan your deployment to suit your failover and fault tolerance requirements.

Regards,
Stennie

