Invalid URL Error When One Replica Set Member Is Unreachable

Hey,
I’m currently facing an issue where Mongoose stops working if one of the replica set members listed in the connection string is offline.
I currently have 2 replica set members on two different servers, and I just want Mongoose to use whichever one is reachable. The problem is that as soon as I stop one of the servers, it throws the following error:

node:internal/errors:465
    ErrorCaptureStackTrace(err);
    ^

TypeError [ERR_INVALID_URL]: Invalid URL
    at new NodeError (node:internal/errors:372:5)
    at URL.onParseError (node:internal/url:563:9)
    at new URL (node:internal/url:643:5)
    at isAtlas (/test/node_modules/mongoose/lib/helpers/topology/isAtlas.js:17:17)
    at MongooseServerSelectionError.assimilateError (/test/node_modules/mongoose/lib/error/serverSelection.js:35:35)
    at /test/node_modules/mongoose/lib/connection.js:813:36
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  input: 'host4.example.com:27017',
  code: 'ERR_INVALID_URL'
}

When I start the “host4.example.com” server again, everything works.
Any idea how to tell Mongoose to ignore that one of the servers is unavailable?

Thanks in advance for the help :slight_smile:

Hi @Xge_N_A and welcome to the MongoDB Community :muscle: !

Can you please share the connection string (redact the user, password, and any sensitive data) so we can have an idea of what it looks like?

If you could also share the piece of code that helps you connect (the options used, etc) this could help.
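For reference, a Replica Set connection with Mongoose usually looks something like this (the hostnames, database name, and options below are placeholders, not your actual values):

```javascript
const mongoose = require("mongoose");

// Both members are listed as seeds; the driver discovers the rest of
// the topology from whichever seed it can reach.
mongoose
  .connect(
    "mongodb://host3.example.com:27017,host4.example.com:27017/mydb?replicaSet=rs0",
    { serverSelectionTimeoutMS: 5000 } // fail fast instead of hanging
  )
  .then(() => console.log("connected"))
  .catch((err) => console.error("connection failed:", err));
```

Seeing your actual string and options would confirm whether the hosts are formatted correctly and whether a `replicaSet` name is set.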

Also, and most importantly: a 2-node Replica Set provides no fault tolerance. It isn’t a valid cluster architecture and should never exist in production.

MongoDB performs elections to elect a Primary node when a majority of the voting members of the Replica Set can be reached.

In a 2-node RS, majority = floor(2/2) + 1 = 2, so you need both of the 2 nodes to be up and running to elect a Primary. If one of these 2 nodes goes down, the remaining node cannot become Primary (because the majority can’t be reached anymore), and if the remaining node happens to be the Primary, it will immediately perform a Step Down operation to become Secondary and prevent any write operations.
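The majority rule above can be sketched as a plain calculation (not a MongoDB API call):

```javascript
// A Replica Set can elect a Primary only while a strict majority of
// voting members is reachable: majority = floor(n / 2) + 1.
function electionMajority(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

console.log(electionMajority(2)); // 2 -> both nodes must be up; no fault tolerance
console.log(electionMajority(3)); // 2 -> one node can fail and a Primary survives
console.log(electionMajority(5)); // 3 -> two nodes can fail
```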

If you are in a testing environment, setting up a single-node RS is perfectly fine, and it will unlock all the cool features like Change Streams or multi-document ACID transactions.
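For a test environment, a single-node RS can be initiated along these lines (a mongosh sketch; the RS name and dbpath are just examples, and it assumes `mongod` was started with `--replSet rs0`):

```javascript
// In mongosh, after starting: mongod --replSet rs0 --dbpath /data/rs0
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] });
rs.status(); // should show this node as PRIMARY once the election completes
```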

But a production environment always needs a minimum of 3 data-bearing nodes (i.e., one Primary and 2 Secondaries). With 3 nodes, the majority is floor(3/2) + 1 = 2, so you can afford to lose one node.

I suggest you have a look at this free training on MongoDB University, which explains all the details and subtleties of MongoDB clusters.

Cheers,
Maxime.


This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.