Adding a new user to admin db in replica set forces one secondary offline

I have a 3 node replica set.
Twice today I’ve had a node fall out of the set after adding a user to the admin DB.

The node fails with this error log:

{
  "t": {
    "$date": "2023-09-18T20:46:10.853+00:00"
  },
  "s": "F",
  "c": "-",
  "id": 23095,
  "ctx": "OplogApplier-0",
  "msg": "Fatal assertion",
  "attr": {
    "msgid": 34437,
    "error": "NamespaceNotFound: Failed to apply operation: { op: \"i\", ns: \"admin.system.users\", ui: UUID(\"d14e0fd3-a568-471f-9e75-4667babdb3ae\"), o: { _id: \"admin.monitoring\", userId: UUID(\"6918d85d-0a31-440a-9f18-0c71e79cea8c\"), user: \"monitoring\", db: \"admin\", credentials: { SCRAM-SHA-1: { iterationCount: 10000, salt: \"...==\", storedKey: \"...=\", serverKey: \"...=\" }, SCRAM-SHA-256: { iterationCount: 15000, salt: \"...==\", storedKey: \"...=\", serverKey: \"...=\" } }, roles: [ { role: \"clusterMonitor\", db: \"admin\" } ] }, ts: Timestamp(1695069016, 9), t: 141, v: 2, wall: new Date(1695069016310) } :: caused by :: Unable to resolve d14e0fd3-a568-471f-9e75-4667babdb3ae",
    "file": "src/mongo/db/repl/oplog_applier_impl.cpp",
    "line": 343
  }
}

The commands I used were:

mongosh -u root -p --host RS_NAME/127.0.0.1 --port 27018 admin 
...
db.createUser(
  {
    user: "monitoring",
    pwd: 'the-password',
    roles: [ { role: "clusterMonitor", db: "admin" } ]
  }
) // create succeeds
db.auth('monitoring','the-password') // auth check succeeds

After adding the user, it replicates from the primary to one of the secondaries (I can use it locally on that node).
The other secondary fails and stops. After restarting it, it simply dies again.
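
By "use it locally" I mean connecting to the member directly instead of through the replica set name, for example (27019 here is a placeholder for the healthy secondary's port):

mongosh -u root -p --host 127.0.0.1 --port 27019 admin
...
db.auth('monitoring','the-password') // succeeds on the secondary that received the user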

What am I doing wrong here?

If normal CRUD operations are applied and replicated to your secondary nodes fine and only the createUser command fails to apply, you may need to check the security configuration in the mongod.conf of the failing node: is the authentication mode identical across all the nodes?
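
For example, you can compare the effective security settings of every member from mongosh without opening the config files. This is only a sketch, and it assumes each mongod was started with a security section in its config:

// run this against each member and compare the output
db.adminCommand({ getCmdLineOpts: 1 }).parsed.security
// the keyFile / authorization / clusterAuthMode values should match on every node,
// e.g. { keyFile: '/etc/mongod.key', authorization: 'enabled' }

If one member differs, that is the first place to look.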
