The scenario I never saw mentioned in that discussion was: PSSAA
The reason I ask is that it seems like having the extra two arbiters would allow a PS / SAA split in which the SAA side stays operational as primary — useful where P, S, S, A, A are instances in a larger pool of machines and you might potentially lose more than one of them at a time.
Is that 100% not supported, not recommended, and “doesn’t operate that way”?
The voting majority situation you are setting up with PS/SAA would be better implemented as P/SS (or ideally P/S/S with members in three data centres).
In your PSSAA scenario, unavailability of any data-bearing node also means you lose the ability to acknowledge majority writes despite maintaining a voting majority. This generally has undesirable operational consequences, particularly in modern versions of MongoDB with more features and use cases relying on majority read and write concerns.
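To make that concrete, here is a small sketch (plain JavaScript, with a hypothetical `majority` helper — not a MongoDB API) of why a PSSAA set stalls majority writes as soon as one data-bearing node is unavailable:

```javascript
// Hypothetical helper: the majority threshold for a voting set.
function majority(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

const votingMembers = 5; // P + S + S + A + A all vote
const dataBearing = 3;   // only P + S + S hold data; arbiters cannot ack writes
const needed = majority(votingMembers); // acks required for w:"majority" → 3

// All nodes healthy: the 3 data-bearing nodes can supply the 3 required acks.
console.log(dataBearing >= needed);       // true

// One data-bearing node down: 4 voters remain (still a voting majority,
// so a primary can be elected), but only 2 nodes can acknowledge writes.
console.log((dataBearing - 1) >= needed); // false → majority writes stall
```

This is the asymmetry described above: arbiters count toward elections but not toward write acknowledgement, so the set can keep a primary while w:"majority" writes hang.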
@Nathan_Neulinger, adding to what @Stennie said: also note that if you put PS in data center 1 and SAA in data center 2, there is a higher chance that during any network interruption the SAA side will elect a new primary, and when the PS side rejoins it may have to roll back data…
If you have only two data centers and you want data center 2 to become read-write when data center 1 goes down (keeping the oplog so data center 1 can sync when it comes back within the oplog window), then you can choose:

PS / S(priority=0)AA — make sure the data center 2 secondary has priority=0 to avoid it becoming primary during a network interruption. You can then manually promote it to primary in the case of a true disaster.
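As a sketch, the replica set configuration for that PS / S(priority=0)AA layout might look like the object below (hostnames and the set name are hypothetical; you would pass something like this to `rs.initiate()` or adjust an existing set with `rs.reconfig()` in mongosh):

```javascript
// Hypothetical hosts for a two-data-center PS / S(priority=0)AA layout.
const cfg = {
  _id: "rs0",
  members: [
    // Data center 1
    { _id: 0, host: "dc1-node1:27017", priority: 2 }, // preferred primary
    { _id: 1, host: "dc1-node2:27017", priority: 1 }, // secondary
    // Data center 2
    { _id: 2, host: "dc2-node1:27017", priority: 0 }, // secondary, never auto-elected
    { _id: 3, host: "dc2-arb1:27017", arbiterOnly: true },
    { _id: 4, host: "dc2-arb2:27017", arbiterOnly: true },
  ],
};
```

With this layout, a network interruption leaves data center 2 holding a voting majority (3 of 5 votes), but its only data-bearing member has priority 0, so no primary is elected automatically; an operator decides whether to reconfigure and promote it in a genuine disaster.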