Failover issue in unidirectional network

I have several 3-node MongoDB replica sets on Kubernetes. Recently a network incident in one of our data centers affected our MongoDB clusters: the clusters could not complete the failover process, and each node kept changing its role (primary to secondary and vice versa) in an infinite loop. To reproduce the issue after the incident, I applied a NetworkPolicy that blocks egress traffic from the secondary nodes to the primary node. Right after applying the NetworkPolicy the cluster had no issues (perhaps because the policy does not drop already-established connections), but after I restarted one of the MongoDB nodes, the cluster tried to fail over and replace the primary (the unreachable node) with one of the secondaries. After a while, it started switching nodes between primary and secondary continuously again.

Status format
# Node : <node0-status> <node1-status> <node2-status>
# p: Primary
# s: Secondary
# u: Unreachable
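For reference, the compact codes above can be derived from the `members` array of `rs.status()`. A minimal Python sketch (the `health`/`stateStr` fields come from `rs.status()`; the mapping of states other than PRIMARY/SECONDARY is my assumption):

```python
# Map rs.status() member entries to the compact p/s/u codes used below.
# health == 0 means this node cannot reach that member -> "u".
def compact_status(members):
    codes = []
    for m in members:
        if not m.get("health", 0):
            codes.append("u")  # unreachable from this node's point of view
        elif m["stateStr"] == "PRIMARY":
            codes.append("p")
        elif m["stateStr"] == "SECONDARY":
            codes.append("s")
        else:
            codes.append("?")  # other states (STARTUP, ROLLBACK, ...), assumption
    return " ".join(codes)

# Example: the view one node reports after the restart in step 3.
members = [
    {"health": 0, "stateStr": "(not reachable/healthy)"},
    {"health": 1, "stateStr": "PRIMARY"},
    {"health": 1, "stateStr": "SECONDARY"},
]
print(compact_status(members))  # -> u p s
```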

Steps to reproduce this issue

  1. Deploy a 3-node replica set that has this rs.status() output.
// Replica set status as seen from all three nodes at this step.
Node-0: p s s
Node-1: p s s
Node-2: p s s
  2. Apply this NetworkPolicy (or otherwise block egress traffic from the secondary pods to the primary pod).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels: {app: x-mongo} # label selecting the mongo pods
  policyTypes: [Egress]
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0 # allow all egress...
        except:
        - X.X.X.X/32    # ...except to the primary pod IP address
// Status after applying the above network policy
Node-0: p s s
Node-1: p s s
Node-2: p s s
  3. Restart one of the secondary pods.
// Status after restarting one of the secondary pods.
Node-0: p s s <-> s p s
Node-1: u p s <-> u s s
Node-2: u s p <-> u p s <-> u s s
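The divergent per-node views in step 3 follow from the asymmetric partition the policy creates. A minimal Python sketch of that reachability (node numbering and views are illustrative, not taken from the actual cluster; I assume node 0 is the old primary whose egress is still allowed):

```python
# reach[a][b] == True means node a can open a connection to node b.
# Secondaries (1, 2) cannot reach the old primary (0); node 0 reaches everyone.
reach = {
    0: {0: True, 1: True, 2: True},
    1: {0: False, 1: True, 2: True},
    2: {0: False, 1: True, 2: True},
}

def view(node, primary):
    """The cluster status as seen from `node`, in the p/s/u format above."""
    codes = []
    for other in sorted(reach):
        if not reach[node][other]:
            codes.append("u")  # cannot heartbeat it, so it looks down
        elif other == primary:
            codes.append("p")
        else:
            codes.append("s")
    return codes

# With node 1 currently primary, node 0 still sees all members as up,
# while nodes 1 and 2 both report node 0 as unreachable:
print(0, view(0, primary=1))  # ['s', 'p', 's']
print(1, view(1, primary=1))  # ['u', 'p', 's']
print(2, view(2, primary=1))  # ['u', 'p', 's']
```

Each node answers `rs.status()` from its own reachability, which is why the three nodes report three different pictures of the same replica set.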

Each node reports a different status for the other nodes, and keeps switching between these states.

I know that MongoDB needs full bidirectional network connectivity between its nodes, but if that connectivity is lost, should the replica set really end up in this broken state? Or is this a misconfiguration on my side?