"Client has attempted to reauthenticate as a single user" messages from replica set members filling our logs

I’m running MongoDB 5.0.6 as a 3-member replica set in our newly set up staging environment.

Compared to our ancient production replica set, which runs a much older version of MongoDB, we noticed our logs keep filling with entries like these:

Logs

{"t":{"$date":"2022-03-30T13:42:57.825+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn20","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:57.825+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn20","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.3:37808","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:57.825+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn21","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:57.825+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn21","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.3:37810","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:57.902+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn20","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:57.902+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn20","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.3:37808","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:57.979+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn20","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:57.979+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn20","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.3:37808","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:58.056+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn20","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:58.056+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn20","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.3:37808","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:58.382+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn23","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:58.382+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn22","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:58.382+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn22","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.2:46998","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:58.382+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn23","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.2:47000","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:58.454+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn23","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:58.455+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn23","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.2:47000","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:58.527+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn23","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:58.527+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn23","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.2:47000","extraInfo":{}}}
{"t":{"$date":"2022-03-30T13:42:58.600+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn23","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2022-03-30T13:42:58.600+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn23","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.0.1.2:47000","extraInfo":{}}}

Googling for any explanation yields zero results, so it seems I’m the only person in the universe with this problem.

Any help in understanding the root cause is much appreciated and I will gladly attempt to provide any information about our setup.

I noticed these log messages appear at exactly 5 min intervals :thinking:
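The cadence is easy to confirm by pulling the warning timestamps out of the log. A sketch (the `LOG` path is an assumption; point it at your actual mongod.log):

```shell
# List the timestamps of warning id 5626700 to eyeball the interval.
# Adjust LOG to your log file location; this only reads, never writes.
LOG="${LOG:-mongod.log}"
grep -F '"id":5626700' "$LOG" 2>/dev/null \
  | sed -n 's/.*"\$date":"\([^"]*\)".*/\1/p'
```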

Anyone? This seems odd, since my setup was rather simple - just following the replica set deployment guide and applying some of the “production notes” optimisations.

I switched the replica set from keyfile auth to clusterAuthMode x509 and the errors disappeared :person_shrugging:
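For reference, the switch amounted to a mongod.conf change along these lines (the certificate paths below are placeholders, not my actual ones; the member and CA certificates must already exist on every node):

```yaml
# mongod.conf fragment (sketch) for x509 membership auth
net:
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/mongodb/tls/member.pem   # placeholder path
    CAFile: /etc/mongodb/tls/ca.pem                   # placeholder path
security:
  clusterAuthMode: x509
```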


Hello, were you ever able to resolve this using a keyfile? I’m looking at the same thing. I tried x509 certs briefly, but saw similarly verbose logs. Made me think it was a network-related thing, like a drop and reconnect, but I see no evidence of that in my network logs.

I converted a 5.0.14 standalone instance with authorization enabled into a replica set. I used a dead-simple configuration: the replica set config, a keyfile on all nodes, binding via DNS name, and matching the primary’s auth settings.
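The keyfile itself was generated the standard way per the docs (the filename here is arbitrary): random base64 content, readable only by the mongod user, identical on every member.

```shell
# Generate a keyfile for internal/membership authentication and
# restrict it to owner-read-only, as the deployment guide requires.
openssl rand -base64 756 > mongo.keyfile
chmod 400 mongo.keyfile
```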

Everything is working as expected, but there are endless “Client has attempted to reauthenticate as a single user” entries in the ACCESS component, coming from the IPs of the secondary nodes.

So it is not an issue with keyfile auth itself: the replica set has started, the nodes have initiated, and the data is replicating. The admin DB users replicate to all nodes as well.

I tried turning off authorization on all nodes and restarting, and it still logs these reauthentication messages from the nodes, for the __system user and the local db.

There is minimal documentation on the internal SCRAM communication between replica nodes. I looked at the source code, and there is one reference to this log as a warning that the same credentials were reused. So it seems 100% benign in nature. I’d like to figure out what I did wrong, if anything.

{"t":{"$date":"2023-02-27T19:45:33.609+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn37","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2023-02-27T19:45:33.609+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn37","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.2.1.5:58803","extraInfo":{}}}

I set the ACCESS, NETWORK, and REPL log components to verbose, and I do see “Error while waiting for hello response” (time limit exceeded) in the replication log. I’m assuming that just means the request timed out and nothing changed in the topology, though.
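For anyone wanting to reproduce this, raising component verbosity can be done in mongod.conf; a sketch (level 3 is very noisy, so revert when done):

```yaml
# mongod.conf fragment (sketch): verbose logging for the components above
systemLog:
  component:
    accessControl:
      verbosity: 3
    network:
      verbosity: 3
    replication:
      verbosity: 3
```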

Would adjusting timeout settings from the default help here, or is this normal behavior?

I tried doubling the values a few times, then reverted to the defaults in the replication config, as this seemed unrelated to the reauth messages and made no difference to the hello messages.

I also see good information in the level-3 ACCESS logs. The session appears to start with no user/pass (empty strings in both fields), then falls back to the system default and is able to proceed? The nodes don’t seem to be authenticating cleanly. Did I miss some step when setting up the nodes?

{"t":{"$date":"2023-02-27T21:25:25.184+00:00"},"s":"D3", "c":"ACCESS",   "id":5286300, "ctx":"conn33","msg":"Starting authentication step","attr":{"step":"SaslSupportedMechanisms"}}
{"t":{"$date":"2023-02-27T21:25:25.184+00:00"},"s":"D3", "c":"ACCESS",   "id":5859101, "ctx":"conn33","msg":"Set user name for session","attr":{"userName":{"user":"__system","db":"local"},"oldName":{"user":"","db":""}}}
{"t":{"$date":"2023-02-27T21:25:25.184+00:00"},"s":"D3", "c":"ACCESS",   "id":5286301, "ctx":"conn33","msg":"Finished authentication step","attr":{"step":"SaslSupportedMechanisms"}}
{"t":{"$date":"2023-02-27T21:25:25.185+00:00"},"s":"D3", "c":"ACCESS",   "id":5286300, "ctx":"conn32","msg":"Starting authentication step","attr":{"step":"SaslSupportedMechanisms"}}
{"t":{"$date":"2023-02-27T21:25:25.185+00:00"},"s":"D3", "c":"ACCESS",   "id":5859101, "ctx":"conn32","msg":"Set user name for session","attr":{"userName":{"user":"__system","db":"local"},"oldName":{"user":"","db":""}}}
{"t":{"$date":"2023-02-27T21:25:25.185+00:00"},"s":"D3", "c":"ACCESS",   "id":5286301, "ctx":"conn32","msg":"Finished authentication step","attr":{"step":"SaslSupportedMechanisms"}}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286300, "ctx":"conn33","msg":"Starting authentication step","attr":{"step":"SaslStart"}}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286203, "ctx":"conn33","msg":"Updating user name for session","attr":{"userName":{"user":"","db":"local"},"oldName":{"user":"","db":""}}}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286200, "ctx":"conn33","msg":"Setting mechanism name","attr":{"mechanism":"SCRAM-SHA-256"}}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286304, "ctx":"conn33","msg":"Determined mechanism for authentication"}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286301, "ctx":"conn33","msg":"Finished authentication step","attr":{"step":"SaslStart"}}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286300, "ctx":"conn32","msg":"Starting authentication step","attr":{"step":"SaslStart"}}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286203, "ctx":"conn32","msg":"Updating user name for session","attr":{"userName":{"user":"","db":"local"},"oldName":{"user":"","db":""}}}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286200, "ctx":"conn32","msg":"Setting mechanism name","attr":{"mechanism":"SCRAM-SHA-256"}}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286304, "ctx":"conn32","msg":"Determined mechanism for authentication"}
{"t":{"$date":"2023-02-27T21:25:25.191+00:00"},"s":"D3", "c":"ACCESS",   "id":5286301, "ctx":"conn32","msg":"Finished authentication step","attr":{"step":"SaslStart"}}
{"t":{"$date":"2023-02-27T21:25:25.196+00:00"},"s":"D3", "c":"ACCESS",   "id":5286300, "ctx":"conn33","msg":"Starting authentication step","attr":{"step":"SaslContinue"}}
{"t":{"$date":"2023-02-27T21:25:25.196+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn33","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2023-02-27T21:25:25.196+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn33","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.2.1.4:56091","extraInfo":{}}}
{"t":{"$date":"2023-02-27T21:25:25.196+00:00"},"s":"D3", "c":"ACCESS",   "id":5286305, "ctx":"conn33","msg":"Marking as cluster member"}

I am using Windows for my deployment (not by choice), so it seemed my issue was probably related to that.

I tried forcing SCRAM-SHA-1 authentication because I was seeing SASL mechanisms in use (despite not setting those up), but the nodes still authenticate with SCRAM-SHA-256 regardless, so I can’t even rule that out. I also tried half a dozen other configuration settings (mongod.cfg, plus Windows security and networking) to get rid of these messages, but they appear no matter what. I even upgraded from 5.0.14 to MongoDB 6 to see whether it was a bug.

Below is the node auth process I am seeing.

{"t":{"$date":"2023-02-28T17:03:27.528+00:00"},"s":"D3", "c":"ACCESS",   "id":5286300, "ctx":"conn33","msg":"Starting authentication step","attr":{"step":"SaslStart"}}
{"t":{"$date":"2023-02-28T17:03:27.529+00:00"},"s":"D3", "c":"ACCESS",   "id":5286203, "ctx":"conn33","msg":"Updating user name for session","attr":{"userName":{"user":"","db":"local"},"oldName":{"user":"","db":""}}}
{"t":{"$date":"2023-02-28T17:03:27.529+00:00"},"s":"D3", "c":"ACCESS",   "id":5286200, "ctx":"conn33","msg":"Setting mechanism name","attr":{"mechanism":"SCRAM-SHA-256"}}
{"t":{"$date":"2023-02-28T17:03:27.529+00:00"},"s":"D3", "c":"ACCESS",   "id":5286304, "ctx":"conn33","msg":"Determined mechanism for authentication"}
{"t":{"$date":"2023-02-28T17:03:27.529+00:00"},"s":"D3", "c":"ACCESS",   "id":5286301, "ctx":"conn33","msg":"Finished authentication step","attr":{"step":"SaslStart"}}
{"t":{"$date":"2023-02-28T17:03:27.533+00:00"},"s":"D3", "c":"ACCESS",   "id":5286300, "ctx":"conn34","msg":"Starting authentication step","attr":{"step":"SaslContinue"}}
{"t":{"$date":"2023-02-28T17:03:27.533+00:00"},"s":"W",  "c":"ACCESS",   "id":5626700, "ctx":"conn34","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}
{"t":{"$date":"2023-02-28T17:03:27.533+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn34","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.2.1.5:52348","extraInfo":{}}}
{"t":{"$date":"2023-02-28T17:03:27.534+00:00"},"s":"D3", "c":"ACCESS",   "id":5286305, "ctx":"conn34","msg":"Marking as cluster member"}
{"t":{"$date":"2023-02-28T17:03:27.534+00:00"},"s":"D3", "c":"ACCESS",   "id":5286203, "ctx":"conn34","msg":"Updating user name for session","attr":{"userName":{"user":"__system","db":"local"},"oldName":{"user":"","db":"local"}}}
{"t":{"$date":"2023-02-28T17:03:27.534+00:00"},"s":"D3", "c":"ACCESS",   "id":5286306, "ctx":"conn34","msg":"Successfully authenticated","attr":{"client":"10.2.1.5:52348","isSpeculative":false,"isClusterMember":true,"mechanism":"SCRAM-SHA-256","user":"__system","db":"local"}}
{"t":{"$date":"2023-02-28T17:03:27.534+00:00"},"s":"D3", "c":"ACCESS",   "id":5286301, "ctx":"conn34","msg":"Finished authentication step","attr":{"step":"SaslContinue"}}

I just upgraded my MongoDB replica set from v4.4.16 to MongoDB v5.0.14, and these messages started showing up in mongod.log after the upgrade:

{"t":{"$date":"2023-03-07T15:21:02.259-05:00"},"s":"W", "c":"ACCESS", "id":5626700, "ctx":"conn29","msg":"Client has attempted to reauthenticate as a single user","attr":{"user":{"user":"__system","db":"local"}}}

Any idea how to resolve this ?

Please help.

Sally

Hi,

Did this issue go away for you?
I just upgraded my replica set to 5.0.14 and I got the same warning messages showing up every 5 min in mongod.log.

Please let me know.

Thanks !

Sally

The tl;dr here is that this appears to be normal behavior in all MongoDB replica set deployments using a keyfile with auth on MongoDB 5 and beyond, both Windows and Linux.

Per the documentation, the nodes use the keyfile to initiate and gain access to the replica set, but not for further auth; and in MongoDB 5 and beyond, authorization is required whenever a keyFile is used.

I see a reauthentication log entry for each secondary node and for each DB in my deployment, so it appears the nodes reauth to each DB every 5 minutes.

The documentation does not explain the keyfile SCRAM internal reauthentication mechanism between replica set nodes in any great detail. The source code references this warning in one place, in a way that suggests it could or should be avoidable. There are no steps I am aware of for “authenticating” the nodes internally. Technically they all share the admin DB, so you would assume they would default to that for credentials, but there is no config or service flag value you can specify anywhere.

Even x509 is chatty in this regard, logging constant messages every 5 minutes when the nodes connect.

I had upgraded from MongoDB 4.4 to 5.0.14 before converting my standalone instance into a replica set. However, even a fresh install of 3 locally hosted MongoDB instances using auth and a keyfile, on either Windows or Linux, produces these log messages endlessly on MongoDB 5 or greater, even when following the deployment instructions exactly. In MongoDB 4 you could use a keyfile without auth; in MongoDB 5 and greater auth is required, so we will see these logs.
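Since the messages look unavoidable, one workaround sketch is to drop the benign warning when tailing or shipping logs (this filters the stream only; mongod still writes the entries, and the `LOG` path is an assumption):

```shell
# View the log with the benign id 5626700 warnings filtered out.
LOG="${LOG:-mongod.log}"
grep -vF '"id":5626700' "$LOG" 2>/dev/null || true
```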

I tried things like altering all kinds of config values, manual auth from the secondary nodes, creating new users with various roles on the primary in multiple ways, reconfiguring the replica set settings, etc. Unless there is some step to authorize these nodes to each other that I am missing, chasing this is a lost cause.

I’m guessing nobody cares about this because keyfile is not used in enterprise, and not recommended in production environments. In my particular case, this is a 100% internal use db and I am using TLS already, so I didn’t want to go through the trouble/complication of using x509, as they would be self-signed anyway. Keeping things simple makes it much easier to maintain and debug.

Thanks for the info Allen.
I had opened a Jira ticket with MongoDB and they are now investigating.
I will share the info with you when I have the result from MongoDB support.

Thanks !
Sally


Thanks Sally, appreciated. Please share any information you receive from that ticket in this thread to help direct anyone else with this issue as well.

Anything new on this, Sally?

Thanks

I am assuming this isn’t going anywhere. I am planning to move away from using MongoDB now anyway.

So what did MongoDB support say?

Also wondering: did MongoDB support provide a solution to remove the “Client has attempted to reauthenticate as a single user” messages?

I’m getting the authentication messages 5 times in a row within the same second (different milliseconds) every 5 minutes.
And this is happening on both SECONDARY nodes.