Mongo 7 SOMETIMES starts with no replica set in OpenShift

Hi folks,

I’m struggling with one thing.

I’ve deployed my Docker Mongo 7 app to an OpenShift cluster. Initially everything seems to work just fine, but SOMETIMES after killing the pod I receive the following log message:

{"t":{"$date":"2024-01-30T09:15:03.377+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":400}}

Mongo starts without the replica set, which ends up corrupting my system. Interestingly, if I kill the pod a few times, it eventually gets “fixed” and the replica set starts normally.

I’ve created the replica set like this:

rs.initiate({
  _id: 'rs0',
  members: [
    { _id: 0, host: 'mongodb-server:27017' }
  ]
});

I’ve looked through multiple threads related to this issue, but all of them were left without a solution.

Thanks in advance


Hi @Maciek_Langvaille and welcome to the community!
If you want to automate the initialization of the replica set after the pod is killed, you could make use of the rs.initiate() command in a startup script.
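A minimal sketch of the idea, assuming mongosh is available in the container and a file such as init-replica.js (the file name is my assumption) is executed on every pod start:

// init-replica.js (sketch): initiate the replica set only if it has not
// been initiated yet, so the script is safe to run on every pod start.
// Run with: mongosh --quiet --file init-replica.js
try {
  // rs.status() throws NotYetInitialized (code 94) on a fresh node
  rs.status();
  print('Replica set already initiated, nothing to do');
} catch (e) {
  if (e.codeName === 'NotYetInitialized') {
    rs.initiate({
      _id: 'rs0',
      members: [{ _id: 0, host: 'mongodb-server:27017' }]
    });
    print('Replica set rs0 initiated');
  } else {
    throw e; // anything else is a different problem, surface it
  }
}

Because the script is idempotent, it does no harm on pods where the config already persisted.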

Regards

Are you only creating a replica set with a single node, or are you adding more members with rs.add() afterwards?

It’s only initiated; I don’t use rs.add(), and it was working until the OpenShift deployment.

I believe I need to initiate the replica set only once, during the initial setup. After that, if the pod is restarted, the data should still be available, as I’m using a PersistentVolume to store /data/db.
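If that’s right, the config written by rs.initiate() should survive the restart too, since it lives in the local database on that same volume. A quick way to verify (a sketch to run in mongosh against the restarted pod):

// If the PersistentVolume kept /data/db, the config written by
// rs.initiate() should still be present in local.system.replset.
db.getSiblingDB('local').system.replset.findOne()
// → the { _id: 'rs0', ... } config document if it persisted,
//   or null if the data directory came up empty.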

Hi @Maciek_Langvaille,
Normally yes, but not knowing exactly how a pod works, I assumed that when it is killed, the process is permanently deleted (as if starting from a machine that has the data, but none of the previously existing processes), i.e. it “starts in a clean state”.
Consequently, I assume it is necessary to re-initialize the replica set, since it is not enough to have the parameter in the configuration file.
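For clarity, the parameter I mean is the replica set name in mongod.conf (a sketch; the name must match the _id passed to rs.initiate()):

replication:
  replSetName: rs0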

Let me know if what I wrote makes sense!

Regards

Hi,

I still don’t understand why it works only sometimes. As far as I checked, rs.status() does return info even when mongod is running “without” the replica set. Moreover, I don’t think it’s about the shutdown signal:
I’ve been testing both SIGKILL (delete now) and SIGTERM (wait until mongod has shut down), and in both cases Mongo can come up corrupted in the new pod.
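In case it helps, this is the kind of check I run after each restart to see what state mongod came up in (a sketch in mongosh; the wording of the messages is mine):

// db.hello() reports setName only once the node has a replica set
// config; a standalone (or not-yet-initiated) node has no setName.
const hello = db.hello();
if (hello.setName === undefined) {
  print('mongod came up WITHOUT the replica set');
} else {
  print('mongod is a member of replica set: ' + hello.setName);
}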