vm.max_map_count is too low in a replica set running in K8s

I have deployed MongoDB to K8s with the Bitnami Helm charts, and it is running as a replica set. When I log into the MongoDB shell on a secondary, I see the warning: “vm.max_map_count is too low”.

When checking inside the container, I see:

$ sysctl vm.max_map_count
vm.max_map_count = 26214

Per the documentation, the setting should be:

vm.max_map_count value of 128000

So why am I getting this warning? And should I increase vm.max_map_count?

Yes. As you’re running Bitnami, it’s best to check their documentation.
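For reference, on a plain Linux host the knob is raised with sysctl; in K8s the change has to happen on the node itself (or via a privileged init container, sketched further down in this thread). The value 262144 here is just a commonly used one, not an official MongoDB recommendation:

# Raise the limit now (needs root); this is a node-level kernel setting,
# not something mongod can change for itself:
sudo sysctl -w vm.max_map_count=262144

# Persist it across reboots:
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/90-max-map-count.conf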

Thanks, but there was nothing in the Bitnami MongoDB documentation about vm.max_map_count.
I only managed to find the MongoDB doc that mentions the “default”.

Also, I found this topic about how to increase the value with Helm and an init container: kubernetes - Setting vm.max_map_count for mongodb with helm chart - Stack Overflow.
But I’m confused about what the value should be, and if it is meant to be configured, why is it not documented or handled in the Bitnami Helm charts?
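For anyone landing here, the init-container pattern from that Stack Overflow topic looks roughly like this. It is only a sketch: it assumes your chart version exposes an initContainers value (check helm show values bitnami/mongodb), the release name is an example, and the cluster must allow privileged init containers:

# Write a values override with a privileged init container that sets
# the kernel parameter on the node before mongod starts:
cat > sysctl-values.yaml <<'EOF'
initContainers:
  - name: set-max-map-count
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true
EOF

helm upgrade my-mongodb bitnami/mongodb -f sysctl-values.yaml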

How do I work out what the value should be if the provided default is not enough? For reference, I get two different readings:

$ sysctl vm.max_map_count
vm.max_map_count = 26214

$ cat /proc/sys/vm/max_map_count
262144
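Side note: those two commands read the same kernel value, since sysctl is just a front end for /proc/sys, so they cannot legitimately disagree; one of the readings was presumably mistyped or taken on a different node. Easy to confirm side by side:

# Both read /proc/sys/vm/max_map_count, so the outputs must match:
sysctl -n vm.max_map_count
cat /proc/sys/vm/max_map_count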

I found this Jira ticket, but I don’t understand what the “2x max connections” is referring to:
[SERVER-51233] Warn on startup if vm.max_map_count < 2 * max connections - MongoDB Jira
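If I read that ticket right, “max connections” means the server’s net.maxIncomingConnections setting, i.e. the warning fires when vm.max_map_count < 2 * maxIncomingConnections. A quick sanity check against the documented default; the mongosh one-liner is my assumption about how to read the effective limit back:

# Threshold with the documented default of 65536 connections:
echo $((2 * 65536))   # 131072

# Ask the server what it is actually using: current + available should
# roughly add up to the configured maximum:
mongosh --quiet --eval 'db.serverStatus().connections'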

Oh, it seems to be a bug in the documented default value: [DOCS-14280] Documentation incorrectly states that configuration parameter net.maxIncomingConnections has a default value of 65536 - MongoDB Jira
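That would explain it: per that ticket the built-in default is reportedly 1,000,000 rather than the documented 65536, which puts the warning threshold far above both readings seen in this thread:

# Threshold with the actual (undocumented) default of 1,000,000 connections:
echo $((2 * 1000000))   # 2000000 -- both 26214 and 262144 fall below this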

Anyhow, we have now limited connections on the application side, so this is irrelevant for us.
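For completeness, the server-side equivalent of that application-side limit is capping net.maxIncomingConnections (the --maxConns flag) so that twice the cap stays at or below the host’s vm.max_map_count (262144 / 2 = 131072). A sketch only; the extraFlags value and release name are assumptions about the Bitnami chart, so verify against helm show values bitnami/mongodb:

# Cap incoming connections so 2x the cap fits within vm.max_map_count:
cat > conn-values.yaml <<'EOF'
extraFlags:
  - "--maxConns=131072"
EOF

helm upgrade my-mongodb bitnami/mongodb -f conn-values.yaml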