MongoDB replicaSet error AuthenticationFailed (code 18)

Hi everyone,

I received the error below:

{"t":{"$date":"2022-12-24T11:00:54.895+00:00"},"s":"I",  "c":"NETWORK",  "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"{Replset_name}","host":"{VPS_IP}:27019","error":{"code":18,"codeName":"AuthenticationFailed","errmsg":"Authentication failed."},"action":{"dropConnections":false,"requestImmediateCheck

That error appears in mongod.log when I add a Secondary to an existing MongoDB replica set that spans Docker containers on two server machines.

My replica set structure includes the following:

  • Primary on VPS1:container1 (active) (same overlay-network)
  • Secondary1 on VPS1:container2 (active) (same overlay-network)
  • Secondary2 on VPS2:container1 (error)

Details from rs.status():

"members" : [
                        "_id" : 0,
                        "name" : "",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 258904,
                        "optime" : {
                                "ts" : Timestamp(1672026076, 1),
                                "t" : NumberLong(67)
                        "optimeDurable" : {
                                "ts" : Timestamp(1672026076, 1),
                                "t" : NumberLong(67)
                        "optimeDate" : ISODate("2022-12-26T03:41:16Z"),
                        "optimeDurableDate" : ISODate("2022-12-26T03:41:16Z"),
                        "lastAppliedWallTime" : ISODate("2022-12-26T03:41:16.739Z"),
                        "lastDurableWallTime" : ISODate("2022-12-26T03:41:16.739Z"),
                        "lastHeartbeat" : ISODate("2022-12-26T03:41:17.962Z"),
                        "lastHeartbeatRecv" : ISODate("2022-12-26T03:41:18.521Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 17,
                        "configTerm" : 67
                        "_id" : 1,
                        "name" : "",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 589529,
                        "optime" : {
                                "ts" : Timestamp(1672026076, 1),
                                "t" : NumberLong(67)
                        "optimeDate" : ISODate("2022-12-26T03:41:16Z"),
                        "lastAppliedWallTime" : ISODate("2022-12-26T03:41:16.739Z"),
                        "lastDurableWallTime" : ISODate("2022-12-26T03:41:16.739Z"),
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1671767185, 1),
                        "electionDate" : ISODate("2022-12-23T03:46:25Z"),
                        "configVersion" : 17,
                        "configTerm" : 67,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                        "_id" : 2,
                        "name" : "",
                        "health" : 0,
                        "state" : 6,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        "optimeDurable" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastAppliedWallTime" : ISODate("1970-01-01T00:00:00Z"),
                        "lastDurableWallTime" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2022-12-26T03:41:17.427Z"),
                        "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "authenticated" : false,
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "configVersion" : -1,
                        "configTerm" : -1

And I made sure to follow these rules:

  • Same MongoDB version
  • Same mongod.conf file (same replSet name)
  • Same keyFile with chmod 600 (or 400)
  • Can ping each other (ICMP inbound port open)
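As a sketch of the keyfile rule above (the path /tmp/secret.kf is a placeholder; the compose file later in the thread mounts it as /etc/secret.kf), a valid keyfile can be generated and locked down like this:

```shell
# Generate a random keyfile (MongoDB accepts 6-1024 base64 characters)
# and restrict its permissions; /tmp/secret.kf is a placeholder path.
openssl rand -base64 756 > /tmp/secret.kf
chmod 400 /tmp/secret.kf
ls -l /tmp/secret.kf
```

The identical file, byte for byte, must then be copied to every member before restarting mongod.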

Many thanks !!

Thanks for pinging me on the other post; I would not have seen this otherwise.

I suspect your servers start without proper IP whitelisting. Instead of listing addresses in net.bindIp, either use 0.0.0.0 or use net.bindIpAll: true. As the primary may change at any time, apply this to all members and restart them.
Configuration File Options — MongoDB Manual
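For reference, a minimal sketch of what that suggestion looks like in mongod.conf (for diagnosis only, since it disables interface filtering entirely):

```yaml
# Diagnostic sketch only: listen on all interfaces.
net:
  port: 27017
  bindIpAll: true   # binds all IPv4 (and IPv6 if net.ipv6 is true); replaces any bindIp list
```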

If this change solves the issue, then you need to follow up by setting a proper IP list.

If not, then share your config file here (with the sensitive parts removed).

It seems my PRIMARY is running on a subnet with a static IP, and SECONDARY_02 on another VPS can't ping it. ChatGPT advised me to set up an overlay network between the 2 VPSs so the containers can communicate with each other directly. But could I set up direct communication without an overlay network?

Direct communication requires you to open ports on each VPS, forward those ports to the containers, allow the containers to access the outside network (network type, so not just the containers’ localhost resources), and set MongoDB to also listen on the IPs of your VPSs.

You will need to set it to listen on localhost (127.0.0.1), the local Docker network (10.xx.xx.xx or 172.xx.xx.xx), and the VPS network (192.168.xx.xx).

You can listen to all IPs in a network with a single entry, but I don’t know (for now) what to use. It might be ending the address with 0 or 1, or maybe something else entirely; I haven’t tried this many variations, so it is a chance for you to try it out on your side if you don’t already know the answer :wink:

Is your idea to set up a VPN?

If not: for secondary02, I exposed the container outside VPS2, and I can ping directly from the Docker container in VPS1 (the subnet with a static IP) to VPS2 (the address that was exposed from container:27017).

But in the opposite direction, I worry that the Secondary can’t ping back to the Primary.

Please clarify your idea, many thanks!

You have not given the result of having net.bindIpAll: true in your config files. This is an important step in identifying possible problems.

And a TL;DR for my above post would be: if you need to connect from multiple networks, the mongodb server should be set to listen on:

  • localhost/127.0.0.1, so you can log in inside the container (not needed if the port is forwarded to the host)
  • any other IP address this container is set to have, including the host VPS’s external IP. You may cover a whole address range with what I described earlier.

Otherwise, the server will just reject all incoming connections that are not included in the whitelist.
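As an illustration, those listen addresses go into net.bindIp in mongod.conf. Every address below is a placeholder, not taken from this thread:

```yaml
# Sketch only: mongod accepts connections solely on the interfaces listed here.
net:
  port: 27017
  # localhost for in-container logins, plus the container's Docker-network IP
  # and the VPS's external IP (all placeholder values):
  bindIp: localhost,172.20.0.2,203.0.113.10
```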

You seem to have done most of the job, but seeing your config file would really help us find a solution faster.

In other words, to make a replica set across different networks, say A and B, you need a two-way connection between them. Both the A-to-B and B-to-A connections must be clear.

Other than knowing the IP addresses and ports, you also need to have them in the config so that when “mongod” starts, it will allow connections from them.

PS: by the way, what I am describing is not a VPN. A VPN takes time to set up, but it gives the containers a single IP range, which would then allow a simpler mongo config. Again, setting up a VPN has its own overhead; the decision is yours.

This is my simple config file:

# mongod.conf

# for documentation of all options, see:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /data/db
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
 # fork: true  # fork and run in background
 # pidFilePath: /var/run/mongodb/  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp:  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

security:
  authorization: "enabled"
  keyFile: /etc/secret.kf

replication:
  replSetName: "marketplace_nft"

## Enterprise-Only Options

And I think I don’t need to add IPs to the whitelist; A and B in the replica set just need to share the same key file for authentication. Then the whitelist only needs to be configured, for safety, with the IPs of the services using this DB (if needed).

By the way, I think A (static IP, subnet container) can ping B (public IP, forwarded outside), but B can’t ping back to A.

Moreover, this is my docker-compose.yaml config file. I run the container from a custom image built from mongo:5.0.6 (mongodb_local:latest):

version: "3.9"
    image: mongodb_local:latest
    container_name: stag_marketplace_nft_mongodb01
      - "27019:27017"
        ipv4_address: ""
   - .env
      - "./mongodb_configuration/:/docker-entrypoint-initdb.d/:ro"
      - "./mongodb_configuration/"
      - "./config/mongod.conf:/data/configdb/mongod.conf:ro"
      - "./config/mongod.conf:/etc/mongod.conf.orig:ro"
      - "./config/secret.kf:/etc/secret.kf:ro"
      - "./data:/data/db"
      - "./log:/var/log/mongodb"
    command: ["/usr/bin/mongod","-f","/data/configdb/mongod.conf"]
    restart: on-failure
  log: null
      name: marketplace

That eliminates the possibility of a whitelist problem. Do you have the nerve to try some more possibilities? (If not, you may try the VPN option; setting it up may prove difficult, but it should just work. I can’t say whether it performs better or worse.)

There can be more indicators in the mongod log file if this relates to the server. Log in to the container on VPS2, stop the server, remove the mongod log file, restart the server, and wait for a while, say 30 seconds (3 times the heartbeat timeout should be enough). Check whether you can read any errors that are present; if the log has sensitive information, make redactions, then share the log file here so we can check.

Another possible cause is the firewall settings on those VPSs prohibiting these ports to outside sources. I believe you have admin control over the ports they expose to the outside world. Can you try connecting to all servers from the outside world, preferably from your own PC if it sits outside the VPSs? Because of this problem I believe you don’t have valuable data yet, so remove authentication from the config and restart all containers, but do not initiate the replica set yet (rebuild images if your customization requires it), then try connecting from outside with mongo shell or Compass.

In order to resemble the actual network communication more closely, you should instead use telnet or netcat to see whether you can access the listening port from each host to the others.
Ping only proves that ICMP communication works between those hosts; it doesn’t prove that you can access the service listening on the specified port. If this is a firewall issue, for example, the firewall might allow ICMP but not the TCP service.
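For example, against a real peer the check would be `nc -vz <VPS_IP> 27019` (or `telnet <VPS_IP> 27019`, using the published port from the compose file). The self-contained sketch below probes a throwaway local listener instead, using bash's built-in /dev/tcp in case netcat isn't installed:

```shell
#!/usr/bin/env bash
# Self-contained demo: start a throwaway TCP listener, then verify the
# port answers. Against a real peer you would run:  nc -vz <VPS_IP> 27019
python3 -m http.server 27019 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
# bash's /dev/tcp opens a TCP connection; success means the port is reachable
if (exec 3<>/dev/tcp/127.0.0.1/27019) 2>/dev/null; then
  echo "tcp open"
else
  echo "tcp closed"
fi
kill "$srv"
```

A "tcp closed" result where ping succeeds points at a firewall or port-forwarding problem rather than DNS or routing.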

Was any OS user password change done?
Try to ssh as the mongod user.

Whenever I see a replication error due to bad auth, it’s because the keyfile is not the same on each server. The keyFile has to match exactly on each of the replica set members. I had this issue previously (a copy error), and fixing it resolved the problem.

I would double-check the keyfile; if the copies don’t match, make them match, then restart any node you had to update and check again.

The keyfile is how the nodes authenticate to each other internally, so a bad auth during replication is why I suspect this could be the issue.

I usually use MongoDB on VMs, and I see this is in Docker, but I would assume the same holds true.
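One quick way to verify the keyfiles match byte for byte is to compare checksums. In practice you would run `sha256sum /etc/secret.kf` inside each container and compare the output; this local sketch simulates two copies with placeholder paths:

```shell
# Simulate a keyfile and its copy on another member; identical content
# must yield identical checksums. Paths are placeholders.
openssl rand -base64 756 > /tmp/kf_member_a
cp /tmp/kf_member_a /tmp/kf_member_b
sha256sum /tmp/kf_member_a /tmp/kf_member_b
# A subtle copy error (e.g. a trailing newline or CRLF conversion)
# changes the hash and breaks intra-cluster authentication.
```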


Hi @Khiem_Nguy_n,

looking at {"t":{"$date":"2022-12-24T11:00:54.895+00:00"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"{Replset_name}","host":"{VPS_IP}:27019","error":{"code":18,"codeName":"AuthenticationFailed","errmsg":"Authentication failed."},"action":{"dropConnections":false,"requestImmediateCheck

It seems like the {Replset_name} and {VPS_IP} variables were not expanded to their actual values.
Please check that your parameterization is set up correctly.

Yeah, that’s intentional. It’s my log file and I hid the sensitive values.