Can't connect to MongoDB replica set locally using docker

I am attempting to run a MongoDB cluster locally to test transactions.

I’m using the Bitnami docker-compose file:

version: '2'
services:
  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-primary
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
    ports:
      - 27017:27017

    volumes:
      - 'mongodb_master_data:/bitnami'

  mongodb-secondary:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-secondary
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
      - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
    ports:
      - 27027:27017

  mongodb-arbiter:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
      - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
    ports:
      - 27037:27017

volumes:
  mongodb_master_data:
    driver: local

The cluster runs successfully, and I’m able to run rs.status() and rs.config().

rs.config():

{
  _id: 'replicaset',
  version: 5,
  term: 2,
  members: [
    {
      _id: 0,
      host: 'mongodb-primary:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 5,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: 'mongodb-arbiter:27017',
      arbiterOnly: true,
      buildIndexes: true,
      hidden: false,
      priority: 0,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 2,
      host: 'mongodb-secondary:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("636ad53c134a3f3884836da1")
  }
}

rs.status():

{
  set: 'replicaset',
  date: ISODate("2022-11-08T22:58:23.847Z"),
  myState: 1,
  term: Long("2"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 2,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long("2") },
    lastCommittedWallTime: ISODate("2022-11-08T22:58:22.005Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long("2") },
    appliedOpTime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long("2") },
    durableOpTime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long("2") },
    lastAppliedWallTime: ISODate("2022-11-08T22:58:22.005Z"),
    lastDurableWallTime: ISODate("2022-11-08T22:58:22.005Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1667948242, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2022-11-08T22:16:31.521Z"),
    electionTerm: Long("2"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1667945788, i: 17 }), t: Long("1") },
    numVotesNeeded: 1,
    priorityAtElection: 5,
    electionTimeoutMillis: Long("10000"),
    newTermStartDate: ISODate("2022-11-08T22:16:31.531Z"),
    wMajorityWriteAvailabilityDate: ISODate("2022-11-08T22:16:31.540Z")
  },
  members: [
    {
      _id: 0,
      name: 'mongodb-primary:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 2513,
      optime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long("2") },
      optimeDate: ISODate("2022-11-08T22:58:22.000Z"),
      lastAppliedWallTime: ISODate("2022-11-08T22:58:22.005Z"),
      lastDurableWallTime: ISODate("2022-11-08T22:58:22.005Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1667945791, i: 1 }),
      electionDate: ISODate("2022-11-08T22:16:31.000Z"),
      configVersion: 5,
      configTerm: 2,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongodb-arbiter:27017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 2493,
      lastHeartbeat: ISODate("2022-11-08T22:58:22.069Z"),
      lastHeartbeatRecv: ISODate("2022-11-08T22:58:22.068Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    },
    {
      _id: 2,
      name: 'mongodb-secondary:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 2454,
      optime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long("2") },
      optimeDurable: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long("2") },
      optimeDate: ISODate("2022-11-08T22:58:22.000Z"),
      optimeDurableDate: ISODate("2022-11-08T22:58:22.000Z"),
      lastAppliedWallTime: ISODate("2022-11-08T22:58:22.005Z"),
      lastDurableWallTime: ISODate("2022-11-08T22:58:22.005Z"),
      lastHeartbeat: ISODate("2022-11-08T22:58:22.069Z"),
      lastHeartbeatRecv: ISODate("2022-11-08T22:58:22.069Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongodb-primary:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1667948302, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("7c40430db9f17606a984ed8d4e9359e1141366f3", "hex"), 0),
      keyId: Long("7163772610960949254")
    }
  },
  operationTime: Timestamp({ t: 1667948302, i: 1 })
}

I’m able to connect to the nodes individually using

mongodb://root:password123@localhost:27017/?authMechanism=DEFAULT

but I get a timeout when attempting to connect with a replica set connection string.
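For example, a replica set URI built from the published ports and the set name from the rs.config() output below would look like this:

```
mongodb://root:password123@localhost:27017,localhost:27027,localhost:27037/?replicaSet=replicaset
```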

Can somebody please help me understand what I’m missing?

Hey Adam, could you please share how you are trying to connect to the replica set?

I hope you are following the recommended way from the official docs:

mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=replicaset-name

OR

mongo --replSet replicaset-name/morton.local:27018,morton.local:27019

What is the exact error you are facing? You can increase the verbosity of the mongod logs and validate from there.
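One way to do that is with mongosh’s db.setLogLevel() on each member (a sketch; it requires a live connection with sufficient privileges):

```
// in mongosh, connected to a member
db.setLogLevel(2)            // raise global verbosity
db.setLogLevel(3, "network") // or raise only the "network" component
db.setLogLevel(0)            // reset when done
```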

From what you are saying, it seems that all the replica set members are up and running, with none impaired or down.
Check whether any of the nodes is unreachable (prima facie this looks like a networking issue, but we can’t be sure until we verify it with some logs), and also check that all the hostnames resolve.

I don’t remember the exact names, but here is a quick note: local servers start by listening on localhost only, and there is a config key to make them listen on outside IPs as well. Sorry, I could remember only this part.

But since you already have port mappings set up, the above suggestion should work fine.

Edit: I found the names: net.bindIp and net.bindIpAll. See “IP Binding” in the MongoDB Manual.
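For reference, the relevant mongod.conf section uses standard mongod options (not Bitnami-specific):

```yaml
# mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0    # or a comma-separated list of addresses
  # bindIpAll: true  # alternative: bind to all interfaces (use one or the other)
```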

I ran into this problem as well while trying to create a local replica set for testing. Connecting to a single node worked, but in replica set mode the driver tried to connect to the advertised hostnames, which don’t exist on the Docker host.

The simple solution I came up with is to just set the advertised name to localhost!

MONGODB_ADVERTISED_HOSTNAME: localhost
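In compose terms, the change for the primary is just this (a sketch; the remaining settings stay as in the original file):

```yaml
  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=localhost
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
    ports:
      - 27017:27017
```

Note that with several members all advertising localhost:27017 they can no longer be told apart, so this works most cleanly with a single-node replica set.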

The other option is to add the hostnames to your hosts file, but I disliked that approach: since I’m using this as a devcontainer, it should be self-contained.
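For reference, the hosts-file variant on the Docker host would be:

```
127.0.0.1 mongodb-primary
127.0.0.1 mongodb-secondary
127.0.0.1 mongodb-arbiter
```

One caveat: all three members advertise port 27017, while only the primary is published on host port 27017, so name resolution alone does not fully cover a three-node set.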

GitHub: containers/bitnami/mongodb at main · bitnami/containers (github.com)

Subfolders to check: {VERSION}/rootfs/opt/bitnami/

I wrote about IP binding earlier (two posts above). From the scripts on Bitnami’s GitHub page (links above), I can see they did not expose this binding through environment variables. If it is important to you, you may open a feature request there.

However, you can still control that, and many other settings, through a customized config file. A longer template file resides under mongodb/templates/mongodb.conf.tpl in those subfolders.
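For example, a customized file can be mounted over the container’s config; the path below is my assumption based on the Bitnami image layout, so verify it against the repo linked above:

```yaml
  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    volumes:
      - ./mongodb.conf:/opt/bitnami/mongodb/conf/mongodb.conf:ro
```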

By the way, I am guessing that setting “localhost” as the advertised name is also only a temporary solution until you ship your work to the cloud, as it likely only allows the host to connect to the containers without a naming problem. So tinkering with the config file would be the better solution.

I was facing the same issue while trying to dockerize a full-stack app originally developed with React, Tailwind, Next.js, Prisma, MongoDB Atlas, and NextAuth.

The docker-compose.yml spins up the MongoDB replica set along with the Next.js frontend.
This setup fixes the connectivity issue between Prisma and MongoDB replica sets.

Here’s how I got it to work:

mongodb-primary:
    image: 'bitnami/mongodb:latest'
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-primary
.
.
.

This is my environment variable for Prisma:

DATABASE_URL="mongodb://root:prisma@mongodb-primary:27017/test?authSource=admin&retryWrites=false"

I used the value of the MONGODB_ADVERTISED_HOSTNAME env var in the database URL to connect from Prisma, and it connected instantly. I’m fairly sure this will work for others with the same setup.
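For completeness, the matching Prisma datasource block (standard Prisma schema syntax) reads the URL from that variable:

```
// schema.prisma
datasource db {
  provider = "mongodb"
  url      = env("DATABASE_URL")
}
```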