Unable to connect to local MongoDB replica set after restarting my PC

Summary:

Hi, I followed Deploying A MongoDB Cluster With Docker | MongoDB to create a local replica set, which I need for Prisma.

The steps

Create the replica set nodes:

  1. docker run -d -p 27017:27017 --name mongo1 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo1

  2. docker run -d -p 27018:27017 --name mongo2 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo2

  3. docker run -d -p 27019:27017 --name mongo3 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo3
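These commands assume the mongoCluster Docker network already exists; the tutorial creates it first with something like:

docker network create mongoCluster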

Initiate the replica set

docker exec -it mongo1 mongosh --eval "rs.initiate({
  _id: \"myReplicaSetName\",
  members: [
    {_id: 0, host: \"mongo1\"},
    {_id: 1, host: \"mongo2\"},
    {_id: 2, host: \"mongo3\"}
  ]
})"

Update /etc/hosts to include entries for mongo1, mongo2, and mongo3

Edit /etc/hosts and append the following:


127.0.0.1 mongo1

127.0.0.1 mongo2

127.0.0.1 mongo3

Check status

You can run docker exec -it mongo1 mongosh --eval "rs.status()".
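If you only want each member's state instead of the full status document, a one-liner like this should also work:

docker exec -it mongo1 mongosh --quiet --eval "rs.status().members.map(m => m.name + ': ' + m.stateStr)"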

Connect

Connect using the following connection string:


mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=myReplicaSetName
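For a quick test, the same string can be passed directly to mongosh (assuming mongosh is installed on the host):

mongosh "mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=myReplicaSetName"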

I was able to connect to it when I first created the replica set, BUT when I restart my computer and then try to connect again, the connection times out.

The problem

I am unable to connect to it through the connection string I pasted above. I’m getting Unable to connect: Server selection timed out after 30000 ms

Important details

  1. I am able to connect to the replica set ONLY right after creating it. Once I restart my computer, I get Unable to connect: Server selection timed out after 30000 ms

  2. I am able to connect through the direct connection string mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000 – but as far as I understand, this does not connect to the replica set as a whole. This also works for mongodb://127.0.0.1:27018 and mongodb://127.0.0.1:27019 (see the check after this list).

  3. I am using MongoDB for VS Code - Visual Studio Marketplace to connect to it, but my NestJS backend is also unable to connect, so the problem is probably not the extension.
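As far as I understand, directConnection=true makes the client talk to that single mongod and skip replica set topology discovery, which would explain why it works while the replicaSet string does not. Checking what a node advertises to drivers should show the host names from the replica set config, for example:

mongosh "mongodb://127.0.0.1:27017/?directConnection=true" --quiet --eval "db.hello().hosts"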

Research

I’ve read other related topics but still can’t fix it, so I’m asking for help here.

Logs

docker exec -it mongo1 mongosh --eval "rs.status()"
Current Mongosh Log ID:	652b5d5d4c291f3fae6c9c71
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.0.1
Using MongoDB:		7.0.2
Using Mongosh:		2.0.1

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

------
   The server generated these startup warnings when booting
   2023-10-15T02:57:26.046+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
   2023-10-15T02:57:29.189+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
   2023-10-15T02:57:29.190+00:00: vm.max_map_count is too low
------

{
  set: 'myReplicaSetName',
  date: ISODate("2023-10-15T03:32:45.931Z"),
  myState: 2,
  term: Long("3"),
  syncSourceHost: 'mongo2:27017',
  syncSourceId: 1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
    lastCommittedWallTime: ISODate("2023-10-15T03:32:40.088Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
    appliedOpTime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
    durableOpTime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
    lastAppliedWallTime: ISODate("2023-10-15T03:32:40.088Z"),
    lastDurableWallTime: ISODate("2023-10-15T03:32:40.088Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1697340740, i: 1 }),
  electionParticipantMetrics: {
    votedForCandidate: true,
    electionTerm: Long("3"),
    lastVoteDate: ISODate("2023-10-15T02:57:39.872Z"),
    electionCandidateMemberId: 1,
    voteReason: '',
    lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1697284799, i: 1 }), t: Long("2") },
    maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1697284799, i: 1 }), t: Long("2") },
    priorityAtElection: 1,
    newTermStartDate: ISODate("2023-10-15T02:57:39.888Z"),
    newTermAppliedDate: ISODate("2023-10-15T02:57:39.913Z")
  },
  members: [
    {
      _id: 0,
      name: 'mongo1:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 2120,
      optime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
      optimeDate: ISODate("2023-10-15T03:32:40.000Z"),
      lastAppliedWallTime: ISODate("2023-10-15T03:32:40.088Z"),
      lastDurableWallTime: ISODate("2023-10-15T03:32:40.088Z"),
      syncSourceHost: 'mongo2:27017',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 3,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongo2:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 2116,
      optime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
      optimeDurable: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
      optimeDate: ISODate("2023-10-15T03:32:40.000Z"),
      optimeDurableDate: ISODate("2023-10-15T03:32:40.000Z"),
      lastAppliedWallTime: ISODate("2023-10-15T03:32:40.088Z"),
      lastDurableWallTime: ISODate("2023-10-15T03:32:40.088Z"),
      lastHeartbeat: ISODate("2023-10-15T03:32:44.335Z"),
      lastHeartbeatRecv: ISODate("2023-10-15T03:32:44.147Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1697338659, i: 1 }),
      electionDate: ISODate("2023-10-15T02:57:39.000Z"),
      configVersion: 1,
      configTerm: 3
    },
    {
      _id: 2,
      name: 'mongo3:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 2116,
      optime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
      optimeDurable: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long("3") },
      optimeDate: ISODate("2023-10-15T03:32:40.000Z"),
      optimeDurableDate: ISODate("2023-10-15T03:32:40.000Z"),
      lastAppliedWallTime: ISODate("2023-10-15T03:32:40.088Z"),
      lastDurableWallTime: ISODate("2023-10-15T03:32:40.088Z"),
      lastHeartbeat: ISODate("2023-10-15T03:32:45.828Z"),
      lastHeartbeatRecv: ISODate("2023-10-15T03:32:44.688Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongo2:27017',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 3
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1697340760, i: 1 }),
    signature: {
      hash: Binary.createFromBase64("AAAAAAAAAAAAAAAAAAAAAAAAAAA=", 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1697340760, i: 1 })
}
docker exec -it mongo2 mongosh --eval "rs.config()"
Current Mongosh Log ID:	652b5fc0f0f4fded3c2eb5c3
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.0.1
Using MongoDB:		7.0.2
Using Mongosh:		2.0.1

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

------
   The server generated these startup warnings when booting
   2023-10-15T02:57:26.057+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
   2023-10-15T02:57:29.224+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
   2023-10-15T02:57:29.224+00:00: vm.max_map_count is too low
------

{
  _id: 'myReplicaSetName',
  version: 1,
  term: 3,
  members: [
    {
      _id: 0,
      host: 'mongo1:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: 'mongo2:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 2,
      host: 'mongo3:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("6529f995dd3d2ac1a3eedb38")
  }
}

You can’t port-forward to a replica set like this. The host:port pairs in rs.conf() are what the client will connect to once the topology is discovered from the first seed it reaches.

So the client will actually try to connect to mongo1:27017, mongo2:27017, and mongo3:27017.
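You can see the exact list the client will be handed by dumping the member hosts from the config, e.g.:

docker exec -it mongo1 mongosh --quiet --eval "rs.conf().members.map(m => m.host)"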

I would recreate the containers, binding each one to a different loopback IP on port 27017:

-p 127.0.0.1:27017:27017
-p 127.0.0.2:27017:27017
-p 127.0.0.3:27017:27017

And update the hosts file:

127.0.0.1 mongo1
127.0.0.2 mongo2
127.0.0.3 mongo3
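Putting the two pieces together, a minimal sketch of that recreation (assuming no volumes were attached, so the old data is discarded and rs.initiate must be run again; on macOS the extra loopback addresses usually need to be aliased first, e.g. sudo ifconfig lo0 alias 127.0.0.2 up):

docker rm -f mongo1 mongo2 mongo3

docker run -d -p 127.0.0.1:27017:27017 --name mongo1 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo1
docker run -d -p 127.0.0.2:27017:27017 --name mongo2 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo2
docker run -d -p 127.0.0.3:27017:27017 --name mongo3 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo3

With the hosts file entries above, the connection string then matches what the replica set advertises:

mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=myReplicaSetName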