Mongodb-atlas-local reconfiguring replica sets on subsequent runs

The mongodb-atlas-local image stores the replica set information in the /data/db volume. However, each time the container is restarted, it regenerates a replica set config instead of reading the old one and reusing it. This causes a configuration error that prevents the instance from functioning normally.

Steps to reproduce:

  1. Run a mongodb-atlas-local container (bound to /data/db and /data/configdb)
  2. Make note of the replica set id via rs.config() or a similar command
  3. Terminate the docker instance.
  4. Rerun mongodb-atlas-local
  5. Get the error Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat

The node will be unable to elect a primary as a result and eventually terminate.
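
For reference, here is roughly how I ran it with the plain Docker CLI (the container name and bind-mount paths are just examples, and this assumes mongosh is available inside the image):

# first run: start the container with persisted volumes
docker run -d --name atlas-local -p 27017:27017 \
  -v "$PWD/data/db:/data/db" \
  -v "$PWD/data/config:/data/configdb" \
  mongodb/mongodb-atlas-local

# note the replica set id
docker exec atlas-local mongosh --eval "rs.config()"

# terminate and remove the container; the bind-mounted data stays on disk
docker rm -f atlas-local

# second run: same command as the first run, which then hits the error shown below
docker run -d --name atlas-local -p 27017:27017 \
  -v "$PWD/data/db:/data/db" \
  -v "$PWD/data/config:/data/configdb" \
  mongodb/mongodb-atlas-local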

The originally generated config set name in this case was “e78f2c1395b4”.

[W] REPL ReplCoord-0 Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat {"error":{"code":74,"codeName":"NodeNotFound","errmsg":"No host described in new configuration with {version: 1, term: 2} for replica set e78f2c1395b4 maps to this node"},"localConfig":{"_id":"e78f2c1395b4","version":1,"term":2,"members":[{"_id":0,"host":"e78f2c1395b4:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1,"tags":{},"secondaryDelaySecs":0,"votes":1}],"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"67eda70babf0b748a47c3c94"}}}}
[W] REPL ReplCoord-0 Local replica set configuration document set name differs from command line set name; waiting for reconfig or remote heartbeat {"localConfigSetName":"e78f2c1395b4","commandLineSetName":"77ed7c7a97aa"}

Outputs of rs commands on the second run:
rs.status():

MongoServerError[InvalidReplicaSetConfig]: Our replica set config is invalid or we are not a member of it

rs.initiate():

MongoServerError[AlreadyInitialized]: already initialized

rs.config() will still show the original replica set id.
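
For reference, those outputs came from running the rs helpers through mongosh inside the container (container name as in the sketch above):

docker exec atlas-local mongosh --eval "rs.status()"     # InvalidReplicaSetConfig
docker exec atlas-local mongosh --eval "rs.initiate()"   # AlreadyInitialized
docker exec atlas-local mongosh --eval "rs.config()"     # still shows the original set name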

@Sterling_Larson what method are you using in step 3 to “terminate the docker instance”?

I am facing the same issue. It only happens when I do a docker compose down followed by a docker compose up. If I just use a stop and start, everything is fine. It is clearly related to creating a new container.
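
For clarity, these are the two sequences I compared (a minimal sketch; assuming detached mode):

# works: stop/start reuses the existing container
docker compose stop
docker compose start

# breaks the replica set: down removes the container,
# and the next up creates a brand-new one
docker compose down
docker compose up -d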

@Will_Smith2 are you able to share your compose file for me to try and replicate, please?

Yes, I absolutely can share. I am told that new users cannot upload attachments, but I have pasted it here: Goods docker-compose.yml - Pastebin.com (the paste will expire in 2 weeks).


Hi @Sterling_Larson and @Will_Smith2,
Thank you for reaching out about this MongoDB Atlas Local container issue!

TL;DR

Add a fixed hostname to your MongoDB container in the docker-compose.yaml file to solve the re-create issue:

services:
  mongo:
    image: mongodb/mongodb-atlas-local
    hostname: mongodb  # This is the key fix
    ports:
      - 27017:27017
    volumes:
      - './data/db:/data/db'
      - './data/config:/data/configdb'

Detailed Explanation

Reproducer

I was able to reproduce the exact issue you’re experiencing using a simple docker compose setup:

services:
  mongo:
    image: mongodb/mongodb-atlas-local
    ports:
      - 27017:27017
    volumes:
      - './data/db:/data/db'
      - './data/config:/data/configdb'

Steps to reproduce:

  1. docker compose up
    • connect and verify that the cluster is working
  2. docker compose down
  3. docker compose up
    • connect and notice that the cluster is broken
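
In shell form, assuming the compose file above is in the current directory and mongosh is installed on the host (you can also docker exec into the container instead):

# first run: everything works
docker compose up -d
mongosh "mongodb://localhost:27017/?directConnection=true" --eval "db.runCommand({ ping: 1 })"

# remove the container; the ./data bind mounts survive on disk
docker compose down

# second run: a new container is created and the cluster is broken
docker compose up -d
mongosh "mongodb://localhost:27017/?directConnection=true" --eval "rs.status()"   # InvalidReplicaSetConfig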

What’s Happening

The problem occurs because of how MongoDB’s replica set configuration works:

  1. On the first run, MongoDB initializes a replica set using the container’s auto-generated hostname
  2. This hostname is stored in the replica set configuration in the persisted volume
  3. When you remove and recreate the container with docker compose down/up:
    • Docker creates an entirely new container with a new hostname
    • MongoDB tries to use the stored replica set config, but can’t find itself in the member list
    • The error occurs: Locally stored replica set configuration does not have a valid entry for the current node

This is exactly what @Sterling_Larson observed in the logs: the replica set config had hostname e78f2c1395b4, but the new container was 77ed7c7a97aa.
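
You can see this for yourself: Docker sets a container's hostname to its short container ID by default, and the value changes across a down/up cycle:

# show the hostname Docker assigned to the running service container
docker inspect --format '{{ .Config.Hostname }}' "$(docker compose ps -q mongo)"

# run it again after docker compose down && docker compose up -d
# and you will get a different value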

Why the Fix Works

By adding a fixed hostname, mongodb, to your docker-compose.yaml file:

  1. The first time MongoDB starts, it creates a replica set config using mongodb as the hostname
  2. When you restart the container, Docker will assign the same hostname mongodb to the new container
  3. MongoDB can now find itself in the stored replica set configuration and start normally

You can verify this works by checking the replica set config after applying the fix:

rs.config()

The output will show:

{
  // ...
  members: [
    {
      // ...
      host: 'mongodb:27017',  // Fixed hostname instead of container ID
      // ...
    }
  ],
  // ...
}
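
To check this in one shot from the host (directConnection avoids replica set discovery through the mongodb hostname, which typically does not resolve outside Docker):

mongosh "mongodb://localhost:27017/?directConnection=true" --quiet \
  --eval "rs.config().members.map(m => m.host)"   # expect [ 'mongodb:27017' ]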

This solution ensures that the hostname in the replica set configuration remains consistent across container restarts, allowing MongoDB to function correctly.

Hope this helps! Let me know if you have any questions.


I have implemented the solution outlined here in my docker-compose.yml file and have tested it. Works like a charm. Thank you for finding the solution and posting such a great walkthrough of what is ultimately happening.