In some circumstances (such as when you have a primary and a secondary, but cost constraints prohibit adding another secondary), you may choose to add an arbiter to your replica set. An arbiter participates in elections for primary but an arbiter does not have a copy of the data set and cannot become a primary.
An arbiter has exactly 1 election vote. By default, an arbiter has priority 0.

Changed in version 3.6: Starting in MongoDB 3.6, arbiters have priority 0. When you upgrade a replica set to MongoDB 3.6, if the existing configuration has an arbiter with priority 1, MongoDB 3.6 reconfigures the arbiter to have priority 0.
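For reference, an arbiter appears in the members array of rs.conf() with arbiterOnly set to true. A minimal sketch of such an entry (the hostname is a placeholder):

   {
      _id: 2,
      host: "arbiter.example.net:27017",  // placeholder hostname
      arbiterOnly: true,
      priority: 0,  // arbiters have priority 0 starting in MongoDB 3.6
      votes: 1      // an arbiter has exactly 1 election vote
   }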
Do not run an arbiter on systems that also host the primary or the secondary members of the replica set.
To add an arbiter, see Add an Arbiter to Replica Set.
For example, in the following replica set with 2 data-bearing members (the primary and a secondary), an arbiter allows the set to have an odd number of votes to break a tie.
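As a sketch of that scenario, you could add the arbiter from mongosh while connected to the primary; the hostname below is a placeholder:

   // Run against the primary. "arbiter.example.net" is a placeholder host
   // that should already be running mongod with the --replSet option.
   rs.addArb("arbiter.example.net:27017")

   // Verify the new member appears with the ARBITER state.
   rs.status().members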
For 3-Member Primary-Secondary-Arbiter Architecture
If you have a three-member replica set with a primary-secondary-arbiter (PSA) architecture or a sharded cluster with three-member PSA shards, the cache pressure will increase if any data-bearing node is down and support for "majority" read concern is enabled.
To prevent the storage cache pressure from immobilizing a deployment with a three-member primary-secondary-arbiter (PSA) architecture, you can disable read concern "majority" starting in MongoDB 4.0.3 (and 3.6.1+). For more information, see Disable Read Concern Majority.
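As an illustration, read concern "majority" can be disabled when starting each data-bearing member; a sketch using the server's startup option (the replica set name is a placeholder):

   # Available in MongoDB 3.6.1+ and 4.0.3+; "rs0" is a placeholder name.
   mongod --replSet rs0 --enableMajorityReadConcern false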
For the following MongoDB versions, pv1 increases the likelihood of w:1 rollbacks compared to pv0 (no longer supported in MongoDB 4.0+) for replica sets with arbiters:

MongoDB 3.2.11 or earlier

For more information, see Replica Set Protocol Versions.
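To see which protocol version a replica set is running, you can read it from the replica set configuration in mongosh; a minimal sketch:

   // Returns 1 for pv1, the only protocol version supported in MongoDB 4.0+.
   rs.conf().protocolVersion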
Arbiters do not replicate the admin.system.version collection. Because of this, arbiters always have a feature compatibility version (FCV) equal to the downgrade version of the binary, regardless of the FCV value of the replica set.
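You can confirm an arbiter's FCV by querying the featureCompatibilityVersion server parameter; for example, from mongosh connected to the arbiter:

   // Works on any member, including arbiters.
   db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )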
Use a single arbiter to avoid problems with data consistency. Multiple arbiters prevent the reliable use of the majority write concern.
To ensure that a write will persist after the failure of a primary node, the majority write concern requires a majority of nodes to acknowledge a write operation. Arbiters do not store any data, but they do contribute to the number of nodes in a replica set. When a replica set has multiple arbiters, it is less likely that a majority of data-bearing nodes will be available after a node failure.
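For example, a write that requests majority acknowledgment from mongosh (the collection name is illustrative):

   // "orders" is an illustrative collection name.
   db.orders.insertOne(
      { item: "abc", qty: 1 },
      { writeConcern: { w: "majority", wtimeout: 5000 } }
   )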
If a secondary node falls behind the primary, and the cluster is reconfigured, votes from multiple arbiters can elect the node that had fallen behind. The new primary will not have the unreplicated writes even though the writes could have been majority committed by the old configuration. The result is data loss.

To avoid this scenario, use at most a single arbiter.
When running with authorization, arbiters exchange credentials with other members of the set to authenticate. MongoDB encrypts the authentication process, and the MongoDB authentication exchange is cryptographically secure.
Because arbiters do not store data, they do not possess the internal table of user and role mappings used for authentication. Thus, the only way to log on to an arbiter with authorization active is to use the localhost exception.
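In practice, that means connecting to the arbiter from its own host; a sketch (the port is a placeholder):

   # Run from a shell on the arbiter host itself; 27017 is a placeholder port.
   mongosh --port 27017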
The only communication between arbiters and other set members consists of votes during elections, heartbeats, and configuration data. These exchanges are not encrypted.
However, if your MongoDB deployment uses TLS/SSL, MongoDB will encrypt all communication between replica set members. See Configure mongod and mongos for TLS/SSL for more information.
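As a sketch, TLS can be required when starting each member; the file paths and replica set name below are placeholders, and MongoDB 4.2+ uses the --tls* options shown here (earlier versions use the --ssl* equivalents):

   # Paths and the replica set name are placeholders.
   mongod --replSet rs0 \
          --tlsMode requireTLS \
          --tlsCertificateKeyFile /etc/ssl/member.pem \
          --tlsCAFile /etc/ssl/ca.pem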
As with all MongoDB components, run arbiters in trusted network environments.