[Sharded cluster] Upgrading docker image is not re-generating indexes (mongo v5)

Hello Mongo Folks,

I am deploying a mongo-sharded cluster using the bitnami mongo-sharded helm chart for Kubernetes.

The configuration is the following:

2 mongoS
2 replicas per shard, 1 arbiter
3 replicas for config servers

Mongo version: 5
Docker image: bitnami mongo-sharded - 5.0.5-debian-10-r13

Our indexes total ~60GB across the cluster, with one database per client (~100 databases). Some are a few MBs, some a few GBs.
The cluster runs on nodes with 32GB of RAM and 4 CPUs.

Problem: When I bumped the docker image from 5.0.5-debian-10-r0 to 5.0.5-debian-10-r13,
the rolling update took down the arbiter and the secondary of each shard. Once they were back up, the primary was taken down and updated as well.
I would expect the indexes to be back in RAM shortly afterwards, but no increase is noticeable when I check the memory usage.

The indexes are still present in the database if I list them, but the RAM usage of the pods is very low and not increasing. See attached picture:

Description of the picture:

  1. Yesterday, I manually re-created the indexes of one database running on one shard (after a rolling update, the indexes are not recreated). That’s why the memory is low.
  2. From 9:50 to 11:05, I manually recreated all the indexes of one database running on the shard. 5–10 minutes are enough to generate the indexes.
  3. From 11:05 on, I started a rolling update to change the docker image from 5.0.5-debian-10-r13 to 5.0.5-debian-10-r12. After each pod restart, the indexes seem “stale”: the memory does not increase as it is supposed to.
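For reference, these are the kind of mongosh commands I run against a shard member to list the indexes and check memory (database and collection names here are placeholders, not my real ones):

```javascript
// List the indexes of one client database's collection --
// they are all still there after the rolling update
db.getSiblingDB("client_db_01").orders.getIndexes()

// On-disk size of each index for that collection, in bytes
db.getSiblingDB("client_db_01").orders.stats().indexSizes

// Memory usage as reported by mongod itself (resident set, in MB)
db.serverStatus().mem
```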

Am I missing something?

Best Regards

PS: I opened a GitHub issue on the bitnami mongo-sharded helm chart, and one user advised me to post here as well, as it may be a more general question.

Welcome to the MongoDB Community Forums @Francois_LP !

There is no need to regenerate indexes after an upgrade outside of the very rare possibility of an index format change which would be mentioned as part of a major version upgrade. If I’m inferring versions correctly from the Bitnami naming it looks like your upgrade was from MongoDB 5.0.5 => 5.0.5 with something in the Debian 10 image changing from r0 to r13.

I believe your described behaviour may be as expected: indexes are not loaded into RAM until required by queries or updates. Was your sharded cluster being actively used during the period where RAM usage was not changing significantly?
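One way to verify this is to compare the on-disk index sizes with what the WiredTiger cache currently holds, and to check whether the indexes have been touched at all since startup. A rough sketch, run against a shard member (database and collection names are illustrative):

```javascript
// Total on-disk size of all indexes for a collection, in bytes
db.getSiblingDB("someDb").someColl.stats().totalIndexSize

// How many bytes WiredTiger currently holds in RAM; this grows
// only as queries and updates touch data and index pages
db.serverStatus().wiredTiger.cache["bytes currently in the cache"]

// Per-index usage counters since the last restart: if
// "accesses.ops" is 0, nothing has read through that index yet
db.getSiblingDB("someDb").someColl.aggregate([{ $indexStats: {} }])
```

If `$indexStats` shows zero accesses after the restart, the low and flat RAM usage is simply the cache waiting for traffic rather than an indexing problem.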

Note: in MongoDB 4.4+ the Mirrored Reads feature can be used to pre-warm the caches of electable secondary replica set members by mirroring a sample of supported operations from the primary. The mirrorReads feature is enabled by default with a low sampling rate (0.01) that you could adjust to mitigate the impact of planned failover for maintenance/upgrades.
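For example, you could check the current mirrored reads setting and raise the sampling rate on the primary (0.10 below is just an illustrative value, not a recommendation):

```javascript
// Check the current mirrored reads configuration
db.adminCommand({ getParameter: 1, mirrorReads: 1 })

// Mirror ~10% of supported reads to electable secondaries
// so their caches stay warm ahead of a planned failover
db.adminCommand({ setParameter: 1, mirrorReads: { samplingRate: 0.10 } })
```

This is a runtime parameter, so it can also be set via `setParameter` in the configuration file if you want it to persist across restarts.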



Thanks for your answer, I am going to do more extensive tests and come back to you with updates :slight_smile: