MongoDB Pods are not rescheduling in Kubernetes

I have deployed a MongoDB replica set using a StatefulSet in Kubernetes, with PersistentVolumes (PVs) backed by an NFS server. My cluster consists of 3 master nodes and 3 worker nodes. When I shut down worker1, the MongoDB Pods running on that node remain stuck in the Terminating state and are not rescheduled to another available worker node. Can someone help identify the issue and suggest how to resolve it?
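For context, these are the commands I am running to observe the state after shutting down worker1 (they assume the Pods are in the default namespace; adjust for your setup):

```shell
# Confirm that worker1 is reported NotReady
kubectl get nodes

# Watch the MongoDB Pods; the ones from worker1 stay in Terminating
kubectl get pods -o wide --watch
```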

It seems this issue is related to how Kubernetes handles scheduling and terminating Pods rather than to the Operator itself. Workloads will not be rescheduled while their Pods are in the Terminating state: a StatefulSet guarantees at most one Pod per identity, so the controller will not create a replacement until the old Pod is confirmed deleted. When a node is shut down abruptly, the kubelet on that node can no longer confirm termination, so the Pods stay in Terminating indefinitely.
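You can verify this yourself: a Terminating Pod already has a deletion timestamp set, and the unreachable node carries NoExecute taints. A quick check, assuming your Pods follow the usual `mongodb-0`, `mongodb-1`, ... naming (swap in your actual names):

```shell
# A non-empty deletionTimestamp means the Pod is marked for deletion,
# but the kubelet on worker1 can never confirm its containers stopped
kubectl get pod mongodb-0 -o jsonpath='{.metadata.deletionTimestamp}'

# The shut-down node should show node.kubernetes.io/unreachable taints
kubectl describe node worker1 | grep -i taint
```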

I suggest checking the Events associated with the Pods and verifying what is keeping them in the Terminating state; that is very likely the root of the problem you are facing. Alternatively, you can delete the Pods with the `--force` option, but I am not sure how your NFS-backed PersistentVolumes will behave.
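For example (`mongodb-0` and `default` below are placeholders; swap in your actual Pod name and namespace):

```shell
# Inspect the Events and conditions for one of the stuck Pods
kubectl describe pod mongodb-0 -n default

# Or list recent Events in the namespace, newest last
kubectl get events -n default --sort-by=.lastTimestamp

# Force-delete a stuck Pod. Use with care: this only removes the API
# object without waiting for kubelet confirmation; the StatefulSet
# controller will then create a replacement on another node
kubectl delete pod mongodb-0 -n default --grace-period=0 --force
```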