Can you run MongoDB in 3 AZs?

The issue is as follows: we have a requirement that our services remain available in the event of a simultaneous 1 AZ + 1 host failure.

Is it true that with MongoDB you cannot build a setup that meets this requirement using only 3 availability zones? We’re mostly constrained by the limit on the number of voting members (https://www.mongodb.com/docs/manual/reference/limits/#mongodb-limit-Number-of-Voting-Members-of-a-Replica-Set), and there seems to be no workaround other than allocating another availability zone with a single arbiter.

Is there any specific reason why there is no option to increase the number of voting members? Are there any planned changes?

Welcome to the MongoDB Community @Nikita_Mikhaylov !

At least 3 AZs (with 1 data-bearing member in each) are recommended for a replica set deployment if you want to allow automatic failover between multiple DCs. This is the standard deployment setup used by MongoDB Atlas for high availability:

  • Highly available: A minimum of three data nodes per replica set are automatically deployed across availability zones (AWS), fault domains (Azure), or zones (GCP) for continuous application uptime in the event of outages and routine maintenance.
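For concreteness, a minimal sketch of such a deployment in mongosh, with one data-bearing voting member per AZ (hostnames are hypothetical placeholders):

```javascript
// Minimal 3-AZ replica set: one data-bearing, voting member per AZ.
// Hostnames are placeholders for illustration.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-az1.example.net:27017" },
    { _id: 1, host: "mongo-az2.example.net:27017" },
    { _id: 2, host: "mongo-az3.example.net:27017" }
  ]
})
```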

For more information on deployment considerations, please see Replica Set Deployment Architectures in the MongoDB server documentation.

How does the maximum number of voting members (7 in a replica set) limit your planned deployment?

An arbiter is only needed if you have an insufficient number of voting members to ensure a quorum to sustain or elect a primary, but I strongly recommend avoiding arbiters if possible. For more context, see Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie_X.
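To make the quorum arithmetic concrete (plain JavaScript, for illustration only): a primary needs votes from a strict majority of the voting members, and an arbiter simply contributes a vote without holding data.

```javascript
// A primary requires votes from a strict majority of voting members.
const majorityOf = (voters) => Math.floor(voters / 2) + 1;
majorityOf(4); // 3 -- an even 4-voter set still tolerates only 1 voter down
majorityOf(5); // 3 -- adding an arbiter as a 5th voter tolerates 2 voters down
```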

Regards,
Stennie

How does the maximum number of voting members (7 in a replica set) limit your planned deployment?

An arbiter is only needed if you have an insufficient number of voting members to ensure a quorum to sustain or elect a primary

Our prior setup was as follows: we have 3 main availability zones, which are essentially dedicated hardware in different data centers, where we host all our services. We’re trying to avoid using any additional availability zones (for reasons unrelated to this discussion).

We had a setup with 3 voting members in AZ1, 2 in AZ2, and 2 in AZ3, 7 in total.
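In mongosh terms, the layout looked roughly like this (hostnames are placeholders for illustration; votes and priority default to 1, so all 7 members vote):

```javascript
// Former 7-voter layout: 3 members in AZ1, 2 in AZ2, 2 in AZ3.
// Hostnames are hypothetical.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "az1-a.example.net:27017" },
    { _id: 1, host: "az1-b.example.net:27017" },
    { _id: 2, host: "az1-c.example.net:27017" },
    { _id: 3, host: "az2-a.example.net:27017" },
    { _id: 4, host: "az2-b.example.net:27017" },
    { _id: 5, host: "az3-a.example.net:27017" },
    { _id: 6, host: "az3-b.example.net:27017" }
  ]
})
```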

This setup worked fine for several years, until an outage hit AZ1 while one node in AZ2 was under maintenance. With 7 voting members a majority of 4 is required, but only 3 voters remained reachable, so the election failed and all writes failed because no primary could be elected.

As of now, the only way we see to fix this is the following: remove the vote from one of the members in AZ1 and allocate an arbiter in AZ4, as sketched below.
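A sketch of that reconfiguration in mongosh (the member index 2 and the arbiter hostname are illustrative assumptions, not our actual config):

```javascript
// Demote one AZ1 member to non-voting (non-voting members need priority 0),
// then add an arbiter in AZ4 to keep an odd number of voters (7 total).
cfg = rs.conf()
cfg.members[2].votes = 0
cfg.members[2].priority = 0
rs.reconfig(cfg)
rs.addArb("mongo-az4.example.net:27017")
```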

But from the user’s perspective, this 7-voting-member limit seems somewhat arbitrary, and we would really like to avoid using AZ4.

To give an example, we’d rather have 9 nodes split evenly across 3 AZs; this would allow us to perform maintenance on one host even during an AZ outage.
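Spelling out the arithmetic behind that preference (hypothetical, since MongoDB currently caps a replica set at 7 voting members):

```javascript
// Hypothetical 9-voter, 3-AZ layout: 3 voters per AZ.
const voters = 9;
const majority = Math.floor(voters / 2) + 1;                        // 5
const reachable = voters - 3 /* AZ outage */ - 1 /* maintenance */; // 5
// reachable >= majority, so a primary could still be elected.
```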