Cannot connect to replica set via MongoDB Compass

I have a MongoDB cluster on 3 different VMs. When I try to access the replica set via Compass using this URI:

mongodb://at192.168.20.1:27017,192.168.20.2:27017,192.168.20.3:27017/?replicaSet=rs0&authSource=admin&appName=mongosh+1.4.1

it says getaddrinfo ENOTFOUND masternode, or sometimes getaddrinfo ENOTFOUND client1.

However, if I try to connect to each one separately using:
mongodb://at192.168.20.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.1&authMechanism=DEFAULT
mongodb://at192.168.20.2:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.1&authMechanism=DEFAULT
mongodb://at192.168.20.3:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.1&authMechanism=DEFAULT
It works just fine.

Is there anything wrong with my replica set URI, and how can I fix this?
FYI, 192.168.20.1, 192.168.20.2 and 192.168.20.3 are associated with masternode, client1 and client2 respectively. Also, I wrote at instead of @ because of the new-user link restriction.
Any reply is very much appreciated.

I think your appName should be Compass.
Why is it showing mongosh?

I tried it and the outcome is the same as I mentioned in the post: I am able to access each node separately, but if I try to access the whole replica set it shows the same error.

The error is related to missing DNS information. The following is certainly wrong:

Hello,
I mistyped the URI. The actual one that I used was mongodb://@192.168.20.1:27017,192.168.20.2:27017,192.168.20.3:27017/?replicaSet=rs0&authSource=admin&appName=mongosh+1.4.1 for the replica set, and

mongodb://@192.168.20.X:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.1&authMechanism=DEFAULT for a separate connection.

Before the @ there is user:password.

Post a screenshot that shows exactly what you are doing and the error you are getting.

Also post a screenshot that shows mongosh being used with the same connection string.

Please do not redact or obfuscate the user name and password you use since the redaction might hide the error you make. If you are afraid to share the password for a server running on a private network you may always create a dummy user that only has read access on a dummy database.

The first one is the one that I cannot access, and the error is as follows.

This one works just fine.


I think you have an issue with your replica set configuration.

I suspect that your replica set configuration uses host names rather than the IP addresses you use to connect, and that some of those are not resolved correctly by your DNS.
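A quick way to verify that suspicion from the machine running Compass is to resolve the member host names exactly the way the driver does (via getaddrinfo). This is a small Python sketch; the host names are the ones from this thread, and an unresolved name corresponds to the ENOTFOUND error Compass reports:

```python
import socket

def resolve_members(hosts, port=27017):
    """Resolve each replica-set member host name the way the driver would."""
    results = {}
    for host in hosts:
        try:
            # getaddrinfo is exactly what the driver calls; ENOTFOUND in
            # Compass means this lookup failed on the client machine.
            results[host] = socket.getaddrinfo(host, port)[0][4][0]
        except socket.gaierror:
            results[host] = None  # name does not resolve from this machine
    return results

print(resolve_members(["masternode", "client1", "client2"]))
```

If any of the three names maps to None on your PC, that is the node Compass cannot reach.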

Using a command-line terminal, share the output of the following:

ping masternode

Then connect with mongosh to a single node, 192.168.20.1:27017 for example, and share the output of the command:

rs.status()

Here is the result of rs.status()

This is the result of ping masternode

If this is a replica set it should show all 3 members' info, but it is showing as standalone.
Have you run rs.initiate() and added the other 2 nodes?
Please show the rs.conf() output.

{
  set: 'rs0',
  date: ISODate("2022-05-30T07:31:56.942Z"),
  myState: 1,
  term: Long("47"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
    lastCommittedWallTime: ISODate("2022-05-30T07:31:48.203Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
    appliedOpTime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
    durableOpTime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
    lastAppliedWallTime: ISODate("2022-05-30T07:31:48.203Z"),
    lastDurableWallTime: ISODate("2022-05-30T07:31:48.203Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1653895860, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'stepUpRequestSkipDryRun',
    lastElectionDate: ISODate("2022-05-25T02:51:46.241Z"),
    electionTerm: Long("47"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1653447105, i: 1 }), t: Long("46") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1653447105, i: 1 }), t: Long("46") },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    priorPrimaryMemberId: 1,
    numCatchUpOps: Long("0"),
    newTermStartDate: ISODate("2022-05-25T02:51:46.260Z"),
    wMajorityWriteAvailabilityDate: ISODate("2022-05-25T02:51:47.302Z")
  },
  members: [
    {
      _id: 0,
      name: 'masternode:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 607291,
      optime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
      optimeDate: ISODate("2022-05-30T07:31:48.000Z"),
      lastAppliedWallTime: ISODate("2022-05-30T07:31:48.203Z"),
      lastDurableWallTime: ISODate("2022-05-30T07:31:48.203Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1653447106, i: 1 }),
      electionDate: ISODate("2022-05-25T02:51:46.000Z"),
      configVersion: 1,
      configTerm: 47,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'client1:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 448792,
      optime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
      optimeDurable: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
      optimeDate: ISODate("2022-05-30T07:31:48.000Z"),
      optimeDurableDate: ISODate("2022-05-30T07:31:48.000Z"),
      lastAppliedWallTime: ISODate("2022-05-30T07:31:48.203Z"),
      lastDurableWallTime: ISODate("2022-05-30T07:31:48.203Z"),
      lastHeartbeat: ISODate("2022-05-30T07:31:55.333Z"),
      lastHeartbeatRecv: ISODate("2022-05-30T07:31:55.333Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'masternode:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 1,
      configTerm: 47
    },
    {
      _id: 2,
      name: 'client2:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 607285,
      optime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
      optimeDurable: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long("47") },
      optimeDate: ISODate("2022-05-30T07:31:48.000Z"),
      optimeDurableDate: ISODate("2022-05-30T07:31:48.000Z"),
      lastAppliedWallTime: ISODate("2022-05-30T07:31:48.203Z"),
      lastDurableWallTime: ISODate("2022-05-30T07:31:48.203Z"),
      lastHeartbeat: ISODate("2022-05-30T07:31:55.295Z"),
      lastHeartbeatRecv: ISODate("2022-05-30T07:31:54.984Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'masternode:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 1,
      configTerm: 47
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1653895908, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("ca77f671a7f355a16649c47ff0d4f500f38d0e0a", "hex"), 0),
      keyId: Long("7097159497856057348")
    }
  },
  operationTime: Timestamp({ t: 1653895908, i: 1 })
}

Here is my rs.status()

{
  _id: 'rs0',
  version: 1,
  term: 47,
  members: [
    {
      _id: 0,
      host: 'masternode:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: 'client1:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 2,
      host: 'client2:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("627e2cebd23c7aae01154b0b")
  }
}
This is rs.config()

Can you connect by hostname instead of IP?
Are the other 2 nodes pingable, and do the hostnames resolve to the IPs you are using?
Also post the output of cat /etc/hosts.

Here is the output of cat /etc/hosts.

I have tried using the hostnames instead of IPs and it resulted in the same error.

Yes, every node can ping each of the others using ping hostname, and the IPs are all correct.

Looks OK.
Did you try to connect to your replica set using hostnames in your connection string?

Yes, I did. It gives me the same error.

ENOTFOUND means that the host name masternode cannot be found.

In one of your previous posts, you showed that you can ping masternode and the other 2 hosts of your replica set.

The only conclusion I can think of, is that you are not running Compass from the same machine as the one you used to run the ping commands.

The host names of your replica set must be DNS resolvable from all machines you are using to access the replica set. They should all resolve to IP addresses that are routed from all machines you are using to access the replica set.
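Concretely, one way to make those names resolvable on the machine running Compass is to add entries to its hosts file (/etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows). The names and addresses below are the ones from this thread; adjust to match your setup:

```
192.168.20.1 masternode
192.168.20.2 client1
192.168.20.3 client2
```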

That might be the case, because I run the replica set on 3 separate VMs. Is there any way to allow Compass on my PC to access the set? I tried ufw allow from 0.0.0.0/0 on every single VM but it still doesn't work.

Sure there is. But I can only rephrase what I wrote in my previous post. The host names used in the replica set must be known by your PC, and your PC must be able to route traffic to the corresponding IP addresses.
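Once the names resolve from your PC, a replica-set connection string along these lines should work (user and password are placeholders; the hosts and options are the ones from this thread):

```
mongodb://user:password@masternode:27017,client1:27017,client2:27017/?replicaSet=rs0&authSource=admin
```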

Networking, including host and domain name resolution and routing, is complex. I recommend using Atlas; you would be up and running with a replica set in no time.