How can I connect to my replica set via DNS?

I have a 2-node setup:

10.1.0.7  - mongodb-1.com (primary)
10.1.0.8  - mongodb-2.com (secondary)

Here is my mongod.conf:

net:
  port: 27017
  bindIp: 0.0.0.0
  tls:
    mode: allowTLS
    certificateKeyFile: /etc/mongodb/certificates/mongodb.pem
security:
  authorization: enabled
  keyFile: /home/dbuser/repset
replication:
  replSetName: "repset"

Here is my rs.conf():

{
  _id: 'repset',
  version: 4,
  term: 3,
  members: [
    {
      _id: 0,
      host: '10.1.0.7:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: '10.1.0.8:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("641aebeb4623f6fd0e700cf2")
  }
}

My proposed setup has 2 application servers connecting to this replica set: one via private IP and the other via DNS name. Connecting via private IP works fine, but I can't do the same when connecting via DNS name.

Any help here, guys? :smiley:

Are you trying the two methods from the same computer, or from two separate ones in two separate networks? Can you connect to any other service through DNS? The issue might be there, or a firewall setting might be preventing the connection.

You could spin up an HTTP server and try DNS against it first (nginx? Apache? Python or Node.js?). If connections on the HTTP port succeed, then check the MongoDB ports you have set up.
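A minimal sketch of that test, assuming Python 3 is available on the database host (port 8080 is an arbitrary choice):

python3 -m http.server 8080 --bind 0.0.0.0     # on the database host
curl http://mongodb-1.com:8080/                # from the client; success means DNS and routing work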

By the way, a 2-member replica set with these settings might also be an issue. An even number of members is not recommended: if one member goes down, the remaining one cannot form a majority to elect a primary.

Our ideal setup uses the two methods from two separate computers: we will have 2 app servers, one in the same VNet as our DB, communicating via private IP, while the other will connect via DNS, since the 2nd app server is on-premises.

You might be confused about what DNS is. It is just a way to connect to an IP address using a name such as “my.private.me”.

The name-to-address mapping has to be set somewhere in the chain of name resolvers, and in turn the target machine at that address must be reachable through however many routers sit in between, with correct forwarding.

The simplest DNS setup is editing the /etc/hosts file on the client machine, so that a name resolves immediately to the IP address of another machine on the same network.
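For example (a sketch using the names and addresses from this thread; adjust to your client's network):

# /etc/hosts on the client machine
10.1.0.7   mongodb-1.com
10.1.0.8   mongodb-2.com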

Have you put this simplest setup to the test? It will tell you whether you have problems at the network level. If it succeeds, you can move up to the next level on your router, if it has its own external IP (you will need port forwarding). If it fails, it means you cannot connect at any level of name resolution, and you first have to check the firewall and related security settings of your servers.

PS: A VNet provides a common virtual network, so you won't see this mess, and I think you should succeed at the network level. But I could be wrong, and your VNet setup might have already added the required firewall settings for its own network. You can peek at its settings if you fail at network-level access.


No, I'm not confused about the DNS part. What I need is to enable TLS for my second app server (which is located in our on-prem environment) when communicating with my MongoDB servers.

Excuse my diagram :smiley: (correction on the connection string on the ON PREM side: it should be replicaSet=repset)

TLS is not required for my 1st app server, which is in the same environment as my MongoDB servers, hence the allowTLS mode; its connection strings use private IPs only.

Each of my 2 MongoDB nodes has its own Let's Encrypt certificate.

The diagram is great :wink: but now I am confused by your word choices: have you solved the issue?

I have another suggestion to try first, to identify the point of failure. If it is not tied too tightly to your setup, remove the certificate requirements from the config, restart your servers, and then retry the connection from the on-prem app. (You can stop these servers and create/run test servers for the purpose; tell me if you need that but can't figure it out.)

If this fails, we can say the problem is in the network settings. If you can connect, then we can look into the use of the certificates.
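The retry from the on-prem app could use your usual connection string minus the tls option (a sketch, assuming the TLS section has been removed from mongod.conf on both members):

mongodb://username:password@mongodb-1.com,mongodb-2.com/?authSource=admin&replicaSet=repset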

My problem is that in my rs.conf() the hosts point to the private IPs of the replica set nodes.

{
  _id: 'repset',
  version: 4,
  term: 3,
  members: [
    {
      _id: 0,
      host: '10.1.0.7:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: '10.1.0.8:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("641aebeb4623f6fd0e700cf2")
  }
}

With this setup, I cannot connect to my replica set from my on-prem servers using this connection string:

mongodb://username:password@mongodb-1.com,mongodb-2.com/?authSource=admin&tls=true&replicaSet=repset

But when I try to connect to either node 1 or node 2 as a standalone, it works fine:

mongodb://username:password@mongodb-1.com/?authSource=admin&tls=true
mongodb://username:password@mongodb-2.com/?authSource=admin&tls=true

The fact that you cannot connect with the option replicaSet=repset but can without it is explained in https://www.mongodb.com/docs/drivers/node/current/fundamentals/connection/connect/#connect-to-a-replica-set.

With replicaSet=repset, once an initial connection to mongodb-1.com or mongodb-2.com is established, the driver reads the replica set configuration and then tries to reconnect to all members of the replica set. In your case, the configuration specifies IPs in the 10.x.x.x network, which is not globally routable, so the on-prem driver cannot reach the members.
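You can see the host list the driver discovers by querying one node directly; a quick check from the on-prem machine, assuming mongosh is installed there:

mongosh "mongodb://username:password@mongodb-1.com/?authSource=admin&tls=true" \
  --eval "db.hello().hosts"
# prints [ '10.1.0.7:27017', '10.1.0.8:27017' ], the private IPs from rs.conf(),
# which are exactly the addresses the driver then fails to reach from on-prem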


Nice, this eliminates the DNS problems, and the remaining possibility is your choice of node count, namely two members.

Your clients may also refuse to connect to the replica set because the set's configuration is not in an acceptable state. Your two servers have the same priority, and with only two voting members an election needs both of them up to reach a majority; if they cannot see each other, no primary is elected, and hence there is effectively no replica set running.

Try connecting with "mongo", "mongosh", or "Compass"; the node you connect to will show "secondary", and it will show "primary" once one of them has won the election.

Try changing their priority levels first to see whether the election resolves with only 2 members. But the recommended setup is at least 3 members: 2 of them data-bearing, and a 3rd that is only there for voting (an arbiter, which holds no data). There are many other possible configurations, including making one of them a low-priority delayed node that is not actively used (a passive backup).
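A mongosh sketch of both options, run on the primary (the arbiter host name below is a placeholder):

cfg = rs.conf()
cfg.members[0].priority = 2   // prefer this node as primary
cfg.members[1].priority = 1   // the other keeps the default
rs.reconfig(cfg)

// or add a third, data-less voting member on a separate machine:
rs.addArb("arbiter-host.example:27017")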

Hello @Yilmaz_Durmaz @steevej

I have an update. Basically, I tried adding the domain names to the replica set members, and here is my replica set config now:

members: [
    {
      _id: 0,
      name: '10.1.0.7:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 11860,
      optime: { ts: Timestamp({ t: 1682488120, i: 1 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1682488120, i: 1 }), t: Long("5") },
      optimeDate: ISODate("2023-04-26T05:48:40.000Z"),
      optimeDurableDate: ISODate("2023-04-26T05:48:40.000Z"),
      lastAppliedWallTime: ISODate("2023-04-26T05:48:40.274Z"),
      lastDurableWallTime: ISODate("2023-04-26T05:48:40.274Z"),
      lastHeartbeat: ISODate("2023-04-26T05:48:40.318Z"),
      lastHeartbeatRecv: ISODate("2023-04-26T05:48:40.267Z"),
      pingMs: Long("1"),
      lastHeartbeatMessage: '',
      syncSourceHost: '10.1.0.10:27017',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 12,
      configTerm: 5
    },
    {
      _id: 1,
      name: '10.1.0.8:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 11928,
      optime: { ts: Timestamp({ t: 1682488120, i: 1 }), t: Long("5") },
      optimeDate: ISODate("2023-04-26T05:48:40.000Z"),
      lastAppliedWallTime: ISODate("2023-04-26T05:48:40.274Z"),
      lastDurableWallTime: ISODate("2023-04-26T05:48:40.274Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1682476269, i: 1 }),
      electionDate: ISODate("2023-04-26T02:31:09.000Z"),
      configVersion: 12,
      configTerm: 5,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: 'mongodb-1.com:27017',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDate: ISODate("1970-01-01T00:00:00.000Z"),
      optimeDurableDate: ISODate("1970-01-01T00:00:00.000Z"),
      lastAppliedWallTime: ISODate("1970-01-01T00:00:00.000Z"),
      lastDurableWallTime: ISODate("1970-01-01T00:00:00.000Z"),
      lastHeartbeat: ISODate("2023-04-26T05:48:32.849Z"),
      lastHeartbeatRecv: ISODate("1970-01-01T00:00:00.000Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: "Couldn't get a connection within the time limit",
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: -1,
      configTerm: -1
    },
    {
      _id: 3,
      name: 'mongodb-2.com:27017',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDate: ISODate("1970-01-01T00:00:00.000Z"),
      optimeDurableDate: ISODate("1970-01-01T00:00:00.000Z"),
      lastAppliedWallTime: ISODate("1970-01-01T00:00:00.000Z"),
      lastDurableWallTime: ISODate("1970-01-01T00:00:00.000Z"),
      lastHeartbeat: ISODate("2023-04-26T05:48:29.994Z"),
      lastHeartbeatRecv: ISODate("1970-01-01T00:00:00.000Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: "Couldn't get a connection within the time limit",
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: -1,
      configTerm: -1
    }
  ]

Now when I try to connect from my app outside the VNet using this connection string, it works:

mongodb://user:password@mongodb-1.com,mongodb-2.com/?authSource=admin&tls=true&replicaSet=repset

And it also works via my app within the network (through private IP):

mongodb://user:password@10.1.0.7,10.1.0.8/?authSource=admin&tls=true&replicaSet=repset

My question is whether my replica set config is valid. I don't remember the link where I found the suggestion to add each domain name to the replica set.


You have corrected some parts of your configuration, so the set now has stateStr: 'PRIMARY', meaning the main conditions to start are met.

But some other parts are not working yet, hence stateStr: '(not reachable/healthy)'.

From the _id fields, you now seem to have somehow created a 4-member set, 2 by IP and 2 by DNS name, but only the IP ones are reachable.

I don't think this was your intention anyway, and this output is collective info from within the server (rs.status()?).

So, can you share the "actual" config files for each member (removing sensitive info, if any)?

Yes, but somehow my actual use case is now working, lol. I just do not know the drawbacks of this.

Here is my rs.status():

{
  set: 'repset',
  date: ISODate("2023-04-26T08:03:44.377Z"),
  myState: 2,
  term: Long("6"),
  syncSourceHost: '10.1.0.7:27017',
  syncSourceId: 0,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 2,
  writableVotingMembersCount: 2,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1682496214, i: 1 }), t: Long("6") },
    lastCommittedWallTime: ISODate("2023-04-26T08:03:34.364Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1682496214, i: 1 }), t: Long("6") },
    appliedOpTime: { ts: Timestamp({ t: 1682496224, i: 1 }), t: Long("6") },
    durableOpTime: { ts: Timestamp({ t: 1682496224, i: 1 }), t: Long("6") },
    lastAppliedWallTime: ISODate("2023-04-26T08:03:44.364Z"),
    lastDurableWallTime: ISODate("2023-04-26T08:03:44.364Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1682496184, i: 1 }),
  electionParticipantMetrics: {
    votedForCandidate: true,
    electionTerm: Long("6"),
    lastVoteDate: ISODate("2023-04-26T06:02:50.797Z"),
    electionCandidateMemberId: 0,
    voteReason: '',
    lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1682488970, i: 1 }), t: Long("5") },
    maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1682488970, i: 1 }), t: Long("5") },
    priorityAtElection: 1,
    newTermStartDate: ISODate("2023-04-26T06:02:54.146Z"),
    newTermAppliedDate: ISODate("2023-04-26T06:02:54.769Z")
  },
  members: [
    {
      _id: 0,
      name: '10.1.0.7:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 19964,
      optime: { ts: Timestamp({ t: 1682496214, i: 1 }), t: Long("6") },
      optimeDurable: { ts: Timestamp({ t: 1682496214, i: 1 }), t: Long("6") },
      optimeDate: ISODate("2023-04-26T08:03:34.000Z"),
      optimeDurableDate: ISODate("2023-04-26T08:03:34.000Z"),
      lastAppliedWallTime: ISODate("2023-04-26T08:03:34.364Z"),
      lastDurableWallTime: ISODate("2023-04-26T08:03:34.364Z"),
      lastHeartbeat: ISODate("2023-04-26T08:03:43.485Z"),
      lastHeartbeatRecv: ISODate("2023-04-26T08:03:44.069Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1682488970, i: 2 }),
      electionDate: ISODate("2023-04-26T06:02:50.000Z"),
      configVersion: 12,
      configTerm: 6
    },
    {
      _id: 1,
      name: '10.1.0.8:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 20032,
      optime: { ts: Timestamp({ t: 1682496224, i: 1 }), t: Long("6") },
      optimeDate: ISODate("2023-04-26T08:03:44.000Z"),
      lastAppliedWallTime: ISODate("2023-04-26T08:03:44.364Z"),
      lastDurableWallTime: ISODate("2023-04-26T08:03:44.364Z"),
      syncSourceHost: '10.1.0.7:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 12,
      configTerm: 6,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: 'mongodb-1.com:27017',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDate: ISODate("1970-01-01T00:00:00.000Z"),
      optimeDurableDate: ISODate("1970-01-01T00:00:00.000Z"),
      lastAppliedWallTime: ISODate("1970-01-01T00:00:00.000Z"),
      lastDurableWallTime: ISODate("1970-01-01T00:00:00.000Z"),
      lastHeartbeat: ISODate("2023-04-26T08:03:32.851Z"),
      lastHeartbeatRecv: ISODate("1970-01-01T00:00:00.000Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: "Couldn't get a connection within the time limit",
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: -1,
      configTerm: -1
    },
    {
      _id: 3,
      name: 'mongodb-2.com:27017',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDate: ISODate("1970-01-01T00:00:00.000Z"),
      optimeDurableDate: ISODate("1970-01-01T00:00:00.000Z"),
      lastAppliedWallTime: ISODate("1970-01-01T00:00:00.000Z"),
      lastDurableWallTime: ISODate("1970-01-01T00:00:00.000Z"),
      lastHeartbeat: ISODate("2023-04-26T08:03:40.158Z"),
      lastHeartbeatRecv: ISODate("1970-01-01T00:00:00.000Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: "Couldn't get a connection within the time limit",
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: -1,
      configTerm: -1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1682496224, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("ccd6e271e288862a6e4176fdcfa5d1055cf169c2", "hex"), 0),
      keyId: Long("7216257184332513282")
    }
  },
  operationTime: Timestamp({ t: 1682496224, i: 1 })
}

Only 2 members should be in that replica set, not 4. I would suggest fixing that.

Before you can use a DNS name (FQDN or similar), you need to make sure the mapped IP is reachable from app2. Once it is, you will need to set up a DNS record mapping the domain name to the IP address, and make sure mongod is listening on that address (bindIp).
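A quick way to verify that mapping from app2 (assuming dig is available; nslookup works too):

dig +short mongodb-1.com    # should print an IP that app2 can actually reach
dig +short mongodb-2.com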

I'm guessing you also need to use the same name when adding the member with rs.add() or rs.initiate().
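A hypothetical mongosh sketch of the cleanup, run on the primary. Note that since MongoDB 4.4 a reconfig may change at most one voting member at a time, so the removals go one by one, and each name must resolve correctly from every member before the rename:

rs.remove("mongodb-1.com:27017")   // drop the unreachable duplicate entries
rs.remove("mongodb-2.com:27017")
cfg = rs.conf()
cfg.members[0].host = "mongodb-1.com:27017"   // was 10.1.0.7:27017
cfg.members[1].host = "mongodb-2.com:27017"   // was 10.1.0.8:27017
rs.reconfig(cfg)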

Yes. But my app2 would not be able to connect to the replica set if the hosts in rs.conf() point to 10.1.0.7 and 10.1.0.8. And if I change the config to the domain names (mongodb-1.com, mongodb-2.com), my app1 would be using DNS instead of the private IPs (since they share the same VNet), which would affect its latency; that's why we prefer the current setup.

For this part, we do not have much control, since we are using the DNS of our cloud provider (Azure). It is, however, reachable from my app2.

I just want to understand why my current setup in rs.conf() works (having 4 members, 2 by private IP and 2 by domain name) even though the 2 domain-name members are in an unhealthy state.

This Link describes a setup similar to my post here, in case I have caused any confusion.