High TCMalloc thread cache usage on Kubernetes

Hi, apologies if this has already been answered somewhere, but searching for “memory usage” issues has only turned up a huge number of posts about setting the WiredTiger cache size. I thought that was our issue too, until I dug a little deeper…

We’re running MongoDB 4.2.18 on Kubernetes using a StatefulSet with 3 pods. The pod memory limit is set to 8GB, and we can see this has been detected correctly, as db.hostInfo() reports system.memSizeMB as 8192. As expected, the WiredTiger cache size has also been set automatically to 3.5GB. However, we’re only working with a small dataset, so only a few hundred MB of that cache is actually in use. What we see instead is TCMalloc’s “Bytes in thread cache freelists” gradually increasing until the pod is inevitably OOM-killed. This seems strange: in every other report of a similar memory issue, that value is on the order of MB, whereas ours just keeps climbing into multiple GB.
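For context, the 3.5GB figure matches the documented default: WiredTiger gets the larger of 50% of (RAM − 1 GB) or 256 MB. A quick sketch of that arithmetic (function name is ours, just for illustration):

```python
def default_wt_cache_gb(ram_gb: float) -> float:
    """Default WiredTiger cache size per the MongoDB docs:
    the larger of 50% of (RAM - 1 GB) or 256 MB."""
    return max(0.5 * (ram_gb - 1), 0.25)

print(default_wt_cache_gb(8))  # 3.5 -- matches what we see with an 8GB pod limit
```

So the cache sizing itself is behaving correctly; the growth we’re seeing is outside the WiredTiger cache.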

We had hoped to limit the total memory used by MongoDB and set the Kubernetes limit a bit higher to allow some room for overhead, but that doesn’t appear to be possible. The best we’ve come up with so far is trying to tame TCMalloc by setting tcmallocReleaseRate. According to the MongoDB docs, a value of 10 is the top end of the reasonable range, but setting it that high doesn’t make any noticeable difference in our case.
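In case it helps anyone reproduce this, these are the commands we’ve been using in the mongo shell against a live mongod (the parameter requires 4.2.3 or later):

```javascript
// Inspect TCMalloc stats, including "Bytes in thread cache freelists"
db.serverStatus().tcmalloc.tcmalloc.formattedString

// Raise the release rate at runtime; 1.0-10.0 is the documented reasonable range
db.adminCommand({ setParameter: 1, tcmallocReleaseRate: 10 })
```

The same parameter can also be set at startup with `mongod --setParameter tcmallocReleaseRate=10`.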

Anyone seen this behaviour before and/or have any pointers on what to try next?

Hi, we’re having similar issues ourselves. We found this article that might be useful to you.