Issue with a MongoDB ReplicaSet in Docker Swarm Mode

Greetings to all,

I am having trouble configuring a MongoDB replica set within a Swarm cluster. Here is my docker-compose file:

version: "3.8"

x-node-env: &node-env
  MONGO_INITDB_ROOT_USERNAME: ${DATABASE_USER}
  MONGO_INITDB_ROOT_PASSWORD: ${DATABASE_PASSWORD}

services:
  mongo1:
    image: mongo:4.4
    networks:
      net:
        aliases:
          - mongo1
    command: bash -c "
      chmod 400 /var/local/company/keys/mongodb_openssl.key &&
      mongod --config /data/config/cluster.conf
      "
    environment:
      <<: *node-env
    volumes:
      - "${DATABASE_PATH}:/data/db:rw"
      - "${KEYFILE_DIR}:/var/local/company/keys:r"
      - "${CONFIG_DIR}:/data/config:r"
    deploy:
      placement:
        constraints: [node.hostname == swarm-master1]
      mode: 'replicated'
      replicas: 1
    ports:
      - 27051:27017


  mongo2:
    image: mongo:4.4
    networks:
      net:
        aliases:
          - mongo2
    command: bash -c "
      chmod 400 /var/local/company/keys/mongodb_openssl.key &&
      mongod --config /data/config/cluster.conf
      "
    environment:
      <<: *node-env
    volumes:
      - "/tmp/db2:/data/db:rw"
      - "${KEYFILE_DIR}:/var/local/company/keys:r"
      - "${CONFIG_DIR}:/data/config:r"
    deploy:
      placement:
        constraints: [node.hostname == swarm-node1]
      mode: 'replicated'
      replicas: 1
    ports:
      - 27052:27017
    depends_on:
      - mongo1

  mongo3:
    image: mongo:4.4
    networks:
      net:
        aliases:
          - mongo3
    command: bash -c "
      chmod 400 /var/local/company/keys/mongodb_openssl.key &&
      mongod --config /data/config/cluster.conf
      "
    environment:
      <<: *node-env
    volumes:
      - "/tmp/db3:/data/db:rw"
      - "${KEYFILE_DIR}:/var/local/company/keys:r"
      - "${CONFIG_DIR}:/data/config:r"
    deploy:
      placement:
        constraints: [node.hostname == swarm-node2]
      mode: 'replicated'
      replicas: 1
    ports:
      - 27053:27017
    depends_on:
      - mongo1

networks:
  net:
    driver: overlay

My cluster.conf file is:

storage:
  dbPath: /data/db
replication:
  replSetName: test1
net:
  port: 27017
  bindIpAll: true
security:
  keyFile: /var/local/company/keys/mongodb_openssl.key

So my primary problem (IMHO) is authentication. It looks like, when I deal with the replica set, MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD are ignored. When I enter the master container and try to connect to mongod using the user/password provided via the environment variables, I get rejected. I can connect to mongo without passing user/password and initialize the replica set (sketched after the log line below), but the issue remains the same: the cluster is unusable due to the missing credentials. For this example I’ve used admin/admin as credentials, and this is what I can observe in the logs:

{ "c":"ACCESS", "id":20249, "ctx":"conn1","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"admin","authenticationDatabase":"admin","remote":"127.0.0.1:45482","extraInfo":{},"error":"UserNotFound: Could not find user \"admin\" for db \"admin\""}

If I remove the replica set config, everything works like a charm: user/pass are created and one can connect to each separate mongo instance.

Many thanks for your help.

I am facing the same problem. Does anyone know the solution?

It helped when I added “MONGO_INITDB_DATABASE: admin” to “environment”. However, the user only has access to the “admin” database this way; other databases cannot be accessed. Still, this was the only way I found to get the “root” user created at all.
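For anyone who needs more than the “admin” database, a sketch of the manual route, assuming the admin/admin credentials from the example above: after rs.initiate(), create the root user yourself over the localhost exception on the primary. The root role then covers every database, not just admin:

use admin
db.createUser({
  user: "admin",                       // credentials from the example above
  pwd: "admin",
  roles: [ { role: "root", db: "admin" } ]  // root grants access to all databases
})
// if the user already exists but lacks rights on another database
// ("appdb" is only a placeholder name):
db.grantRolesToUser("admin", [ { role: "readWrite", db: "appdb" } ])

Note that this has to happen while the localhost exception is still open, i.e. before any other user exists on the deployment.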

I use the image “mongo:5.0”.