We run a replica set of 2 data nodes + 1 arbiter (PSA) on our company's on-prem Kubernetes. The arbiter is necessary because we only have 2 zones and the storage is not replicated.
The cluster layout is: 1 data node in each zone, each attached to its own separate PV storage (replication is done on the MongoDB side), plus 1 arbiter that can reschedule automatically from one zone to the other.
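For reference, a replica-set config matching this layout would look roughly like the following (set name and hostnames are placeholders, not our actual values):

```js
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-zone-a:27017" },  // data node, zone A, own PV
    { _id: 1, host: "mongo-zone-b:27017" },  // data node, zone B, own PV
    { _id: 2, host: "mongo-arbiter:27017", arbiterOnly: true }  // arbiter, can move between zones
  ]
})
```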
Now I'm testing the durability of the cluster.
If I shut down the arbiter and one of the data nodes together, everything recovers by itself: when the arbiter starts up, it and the surviving data node (which had stepped down to secondary) elect a primary, and the second data node then resyncs from the primary. So far so good.
But when I shut down both data nodes (so only the arbiter stays alive), after restart both data nodes get stuck in the STARTUP2 member state, try to do an initial sync, and report the error "could not find member to sync from". Is there any way to persuade one of the data nodes to sync from itself? The data on the PVs is still there; how can I use it to bring one replica up and then sync it to the other data node?
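This is roughly how I inspect the stuck state from mongosh (a diagnostic sketch; hostnames as in my setup, output summarized rather than pasted verbatim):

```js
// On one of the restarted data nodes: print each member's state and sync source
rs.status().members.forEach(m =>
  print(m.name, m.stateStr, "syncSourceHost:", m.syncSourceHost)
)
// Both data members stay in STARTUP2 with an empty syncSourceHost,
// while the arbiter member shows ARBITER.
```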
MongoDB version: 5.0.6