Primary node going down/failing once a week

Hello, greetings!

I have a 3-node MongoDB replica set running in production, but about once a week the primary node goes down/fails. I went through the /var/logs/mongodb.log file, but all the entries are informational and I can't find the reason why the mongodb service went down/failed (systemctl status mongodb). Please advise how I can troubleshoot the issue.

Thanks

Do you have a preferred primary set (via priority), or does the primary change and the failure occur on all 3 nodes? If it is the second case, you might be looking at the wrong server's log files.
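If you're not sure how the priorities are set, one quick way to check is to read the replica set config from any member. A minimal PyMongo sketch; the host names and the set name rs0 are placeholders for your own deployment:

```python
from pymongo import MongoClient

# Connect through the replica set; node1..node3 and "rs0" are placeholders.
client = MongoClient("mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0")

# replSetGetConfig returns the current replica set configuration,
# including each member's election priority (the default is 1).
config = client.admin.command("replSetGetConfig")["config"]
for member in config["members"]:
    print(member["host"], "priority =", member.get("priority", 1))
```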

You may check the logs of all 3 nodes, find the last time the connection was lost and a new primary was elected, and then look around that same timestamp on each node for the cause of the failure.
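For example, on MongoDB 4.4+ the log file is structured JSON, so you can filter the replication (REPL) component for election-related messages and note their timestamps. A rough sketch, assuming that log format; adjust the path to whatever systemLog.path is on each node:

```python
import json

# Path is an assumption; use your node's actual systemLog.path.
LOG_PATH = "/var/log/mongodb/mongod.log"

with open(LOG_PATH) as log:
    for line in log:
        try:
            entry = json.loads(line)  # 4.4+ writes one JSON document per line
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (older format, truncated writes)
        # "c" is the log component; REPL covers heartbeats and elections.
        if entry.get("c") == "REPL" and "elect" in entry.get("msg", "").lower():
            print(entry["t"]["$date"], entry["msg"])
```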

Thank you for the information. Mine is set up to promote a secondary to primary once the primary is down. A follow-up question: when the primary is down and a secondary has become primary, how do I set up the MongoDB connection string? Please advise…

I guess the way to go is a replica set connection string where you supply the addresses of all 3 servers.
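Something like this with PyMongo, for instance; the host names and the set name rs0 are placeholders for your deployment:

```python
from pymongo import MongoClient

# Listing all 3 members plus the replicaSet option lets the driver start
# from whichever nodes are reachable and discover the current primary itself.
uri = "mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0"
client = MongoClient(uri)

# Operations are routed to whichever member is primary at the moment,
# so the same connection string keeps working after a failover.
client.admin.command("ping")
```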

Other than the official documentation, this might also help, as it has many practical examples in it:
How do you connect to a replicaset from a MongoDB shell? - Stack Overflow

Your app will try all 3 addresses if any of them fails; then, if it hits a secondary, it will be redirected to the current primary (as long as a secondary read preference is not set).
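If you want to see that in action, the PyMongo client exposes the topology it has discovered; the hosts, the set name rs0, and the test database below are placeholders again:

```python
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0")

# The driver tracks the topology: which member is currently primary and
# which are secondaries, refreshing it as elections happen.
print("primary:    ", client.primary)       # (host, port), or None mid-election
print("secondaries:", client.secondaries)   # set of (host, port) tuples

# With a secondary read preference, reads may go to secondaries;
# writes still always go to the primary.
db = client.get_database("test", read_preference=ReadPreference.SECONDARY_PREFERRED)
```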

Thank you… I appreciate the quick help!