MongoDB connection timing out with ReasonChanged: "InvalidatedBecause:NoLongerPrimary"

I am using the MongoDB C# driver 2.11.5, and I store a single static MongoClient globally. Sometimes read operations fail with this error:

A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = WritableServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }

Client view of cluster state is:

{ ClusterId : "1",
  ConnectionMode : "ReplicaSet",
  Type : "ReplicaSet",
  State : "Connected",
  Servers : [
    { ServerId: "{ ClusterId : 1, EndPoint : "mongo atlas host 1" }",
      EndPoint: "mongo atlas host 1",
      ReasonChanged: "Heartbeat",
      State: "Connected",
      ServerVersion: 4.2.11,
      TopologyVersion: ,
      Type: "ReplicaSetSecondary" },
    { ServerId: "{ ClusterId : 1, EndPoint : "mongo atlas host 2" }",
      EndPoint: "mongo atlas host 2",
      ReasonChanged: "Heartbeat",
      State: "Connected",
      ServerVersion: 4.2.11,
      TopologyVersion: ,
      Type: "ReplicaSetSecondary" },
    { ServerId: "{ ClusterId : 1, EndPoint : "mongo atlas host 3" }",
      EndPoint: "mongo atlas host 3",
      ReasonChanged: "InvalidatedBecause:NoLongerPrimary",
      State: "Disconnected",
      ServerVersion: ,
      TopologyVersion: ,
      Type: "Unknown",
      LastHeartbeatTimestamp: null,
      LastUpdateTimestamp: "2021-01-15T16:35:37.9491599Z" } ] }

So the main error I am getting here is:

InvalidatedBecause: NoLongerPrimary.

I have not added a readPreference to my connection string, so it defaults to a read preference of primary.
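For reference, the read preference can be set explicitly in the connection string. A minimal sketch (the host, credentials, and database name are placeholders, not the poster's actual values):

```
mongodb+srv://user:pass@cluster0.example.mongodb.net/mydb?readPreference=primaryPreferred
```

With primaryPreferred, reads go to the primary when one is available but can fall back to a secondary during a failover; the default, primary, fails reads while no primary exists.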

Could someone help me resolve this issue, or point out if there is something I am not doing correctly?

From the wording of the error, the primary stepped down or failed, and based on the topology a new primary was being elected.

Catch this exception and retry. Replica sets provide high availability (HA), so recovering from this error lets your application ride out maintenance and failure scenarios.
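The catch-and-retry advice above can be sketched as a small helper. This is a minimal sketch, not driver code: the `RunWithRetry` name, attempt count, and delay are assumptions, and in a real application you would catch the driver's specific exceptions (e.g. TimeoutException or MongoConnectionException) rather than all exceptions:

```csharp
using System;
using System.Threading;

// Retry a flaky operation a few times, pausing between attempts so a
// replica-set election has time to finish before we give up.
static T RunWithRetry<T>(Func<T> operation, int maxAttempts = 3, int delayMs = 1000)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return operation();
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Transient failure (e.g. a timeout while a new primary is
            // being elected): wait, then retry with the same client.
            Thread.Sleep(delayMs);
        }
    }
}
```

With the driver you would wrap the read itself, e.g. `RunWithRetry(() => collection.Find(filter).ToList())`, reusing the same MongoClient rather than creating a new one.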

This error says the primary was down. The documentation says a new primary would be elected based on the topology. What could be the reason a new primary was not elected this time? Is this a frequent error, or does it seem to be a temporary issue?

Hi Chris! Thanks for your help. When we say reconnect, does that mean creating a new MongoClient? And if so, should I perform read operations on the primary only, or try reconnecting to a secondary?
Also, do you think this "no longer primary" error means a primary node failure happened? If so, Atlas should ideally track that in the server logs it provides, right?

I updated this post with the Atlas tag.

With Atlas, first check the cluster activity. Automatic updates are one of the things that happen with Atlas, so check the project's activity feed before delving into the logs; many events will show up there.

An election for a new primary can take some time; in my experience, usually single-digit to tens of seconds.

You should be able to reuse your existing MongoClient. Subsequent calls, e.g. GetDatabase, will succeed once the driver can reconnect.
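As a configuration sketch of reusing one client (the URI and database name are placeholders): modern 2.x C# drivers also expose RetryReads/RetryWrites on MongoClientSettings, which let the driver itself re-run a failed operation once after a failover, so many step-down errors never surface to your code at all:

```csharp
using MongoDB.Driver;

var settings = MongoClientSettings.FromConnectionString(
    "mongodb+srv://user:pass@cluster0.example.mongodb.net");
settings.RetryReads = true;   // re-run a failed read once after a failover
settings.RetryWrites = true;  // likewise for supported write operations

// One client for the whole application; the driver monitors the replica
// set and routes operations to the new primary after an election.
var client = new MongoClient(settings);
var db = client.GetDatabase("mydb");
```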


Thanks a ton! I will check the cluster activity logs.



We saw a similar issue on our server too. Is there a solution for this?