How to handle MongoDB pod or node failures in Kubernetes?

I am exploring deploying MongoDB in a Kubernetes cluster (standalone or as a StatefulSet). I deployed a standalone MongoDB pod and used an NFS share for the database volume mount. While testing a node failure scenario, I observed that the MongoDB pod remained stuck in the Terminating state, and the replacement MongoDB pod scheduled on a healthy worker node stayed in a retry loop because the mongod instance could not come online due to the existing lock file in the database directory. Is this expected?
What should the deployment model for MongoDB on Kubernetes be? Does a standalone deployment handle a node/pod failure by bringing the pod up on another node in the cluster, or do I need to use the MongoDB replica set feature?
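
Roughly, the standalone setup I tested looks like the sketch below. The names, the NFS server address and export path, and the image tag are placeholders, not my exact values:

```yaml
# Illustrative standalone mongod deployment backed by an NFS PersistentVolume.
# All names, addresses and the image tag are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 10.0.0.10          # placeholder NFS server
    path: /exports/mongodb     # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""
  volumeName: mongodb-nfs-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels: { app: mongodb }
  template:
    metadata:
      labels: { app: mongodb }
    spec:
      containers:
        - name: mongod
          image: mongo:6.0
          volumeMounts:
            - name: data
              mountPath: /data/db   # default mongod dbPath, where mongod.lock lives
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mongodb-data
```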


Hi @Sudhir_Harikant
Welcome to the MongoDB Community!!

Yes, this is expected behaviour from MongoDB, as the mongod.lock file prevents

  1. two mongod instances from writing to the same filesystem, and
  2. the data path (dbPath) from being accessed by multiple mongod processes simultaneously.

It is recommended to use the replica set feature for redundancy, or else to shut mongod down gracefully before the pod is stopped so that the lock file is released cleanly.
Please find the documentation here: Graceful Shut down
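
For the replica set route, a minimal sketch of a StatefulSet is below. The names, image tag, replica set name, storage size and the preStop command are assumptions for illustration, not an official manifest:

```yaml
# Illustrative 3-member replica set run as a StatefulSet.
# Names, image tag and storage settings are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb                  # headless Service giving each member a stable DNS name
  replicas: 3
  selector:
    matchLabels: { app: mongodb }
  template:
    metadata:
      labels: { app: mongodb }
    spec:
      terminationGracePeriodSeconds: 30   # give mongod time to shut down cleanly
      containers:
        - name: mongod
          image: mongo:6.0
          command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
          volumeMounts:
            - name: data
              mountPath: /data/db
          lifecycle:
            preStop:
              exec:
                # one way to request a clean shutdown so mongod.lock is released
                command: ["mongod", "--shutdown", "--dbpath", "/data/db"]
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

With a replica set, each member runs against its own volume, so a node failure is handled by an election among the remaining members rather than by rescheduling a single pod onto the same, still-locked data directory.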

Let us know if you have any more questions.

Regards
Aasawari


Hi @Aasawari ,

Thank you for confirming that behavior. This makes it clear to me now.

Regards,
Sudhir

