Adding External Secondary Nodes to a MongoDB Replica Set (Kubernetes + Servers Outside the Cluster) – Stuck in STARTUP

Hello community, I’m new to MongoDB and Kubernetes, and I’ve run into an issue with my replica set.

Problem:
I have a MongoDB replica set with a primary node in a Kubernetes cluster. I’m attempting to add two secondary nodes located on a separate remote server (outside the Kubernetes cluster). After adding the secondaries using rs.add(), they remain in the STARTUP state (stateStr: 'STARTUP').
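For reference, this is how I summarize the member states. In mongosh, `status` would simply be `rs.status()`; the object literal below is only an illustrative sample of that document's shape (placeholder hosts, not real output):

```javascript
// Summarize which members are stuck in STARTUP.
// In mongosh: const status = rs.status();
// The literal below is an illustrative sample, not real output.
const status = {
  members: [
    { name: "<PRIMARY_IP>:27017", stateStr: "PRIMARY" },
    { name: "<SECONDARY_1_IP>:27017", stateStr: "STARTUP" },
    { name: "<SECONDARY_2_IP>:27017", stateStr: "STARTUP" },
  ],
};

const stuck = status.members
  .filter((m) => m.stateStr === "STARTUP")
  .map((m) => m.name);

console.log(stuck); // the external secondaries show up here
```

Both external members stay in that `stuck` list indefinitely.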

Network Checks:

  • Successfully tested bidirectional connectivity between nodes using nc -zv on the exposed ports.
  • From the primary node: nc -zv <SECONDARY_IP> <EXPOSED_PORT> succeeded.
  • From the secondary nodes: nc -zv <PRIMARY_IP> <EXPOSED_PORT> succeeded.

Replica Set Configuration:

  • Initialized the replica set on the primary with:

    rs.initiate({
      _id: 'dbrs',
      members: [{ _id: 0, host: '<PRIMARY_IP>:<EXPOSED_NODE_PORT>', priority: 4 }]
    });

  • Added secondaries via:

    rs.add({ host: '<SECONDARY_1_IP>:<EXPOSED_PORT>', priority: 2 });
    rs.add({ host: '<SECONDARY_2_IP>:<EXPOSED_PORT>', priority: 1 });
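After adding them, I listed the host strings stored in the config, since (as I understand it) every member must be able to resolve and dial every host string in there. In mongosh, `conf` would be `rs.conf()`; the literal below just mirrors my configuration with placeholder values:

```javascript
// List the host:port strings every member will try to dial.
// In mongosh: const conf = rs.conf();
// The literal below mirrors my config with placeholder values.
const conf = {
  _id: "dbrs",
  members: [
    { _id: 0, host: "<PRIMARY_IP>:<EXPOSED_NODE_PORT>", priority: 4 },
    { _id: 1, host: "<SECONDARY_1_IP>:<EXPOSED_PORT>", priority: 2 },
    { _id: 2, host: "<SECONDARY_2_IP>:<EXPOSED_PORT>", priority: 1 },
  ],
};

// Every member (including the external secondaries) must be able to
// reach each of these host:port pairs, not just the primary.
const hostStrings = conf.members.map((m) => `${m._id}: ${m.host}`);
hostStrings.forEach((h) => console.log(h));
```

The host strings look correct to me, which is why I'm puzzled.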

Observations:

  • Secondaries never progress beyond STARTUP.
  • No obvious network issues detected via nc.

What could keep external secondaries stuck in STARTUP even though the ports are reachable, and where should I look next?
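One thing I've started inspecting is the heartbeat fields of replSetGetStatus, which I believe can explain *why* a member is unreachable. In mongosh this would be `db.adminCommand({ replSetGetStatus: 1 })`; the literal below is an illustrative sample of the relevant fields (the heartbeat message shown is hypothetical, not my actual output):

```javascript
// Collect non-empty heartbeat messages, which usually describe the
// connection failure. In mongosh:
//   const status = db.adminCommand({ replSetGetStatus: 1 });
// The literal below is an illustrative sample, not real output.
const status = {
  members: [
    {
      name: "<PRIMARY_IP>:<EXPOSED_NODE_PORT>",
      lastHeartbeatMessage: "", // empty when heartbeats succeed
    },
    {
      name: "<SECONDARY_1_IP>:<EXPOSED_PORT>",
      lastHeartbeatMessage: "hypothetical example of a failure message",
    },
  ],
};

const problems = status.members
  .filter((m) => m.lastHeartbeatMessage)
  .map((m) => `${m.name}: ${m.lastHeartbeatMessage}`);

console.log(problems);
```

If anyone can tell me what heartbeat messages to expect for members stuck in STARTUP, that would help a lot.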