Linux oom-killer

Hello,

I use the WiredTiger engine and have the following storage configuration:

storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 9

Server RAM: 24 GB.
Information from systemctl status mongod
Memory: 19.8 GB (limit 20.1GB).
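
Since the limit shown by systemctl status comes from the service's cgroup, it is worth confirming what limit systemd applies to the mongod unit. A minimal sketch, assuming the service is managed by systemd as above (the relevant property name depends on the cgroup version):

    systemctl show mongod --property=MemoryLimit   # cgroup v1
    systemctl show mongod --property=MemoryMax     # cgroup v2
    systemctl cat mongod                           # unit file plus any drop-ins that may set the limit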

Sometimes, without any apparent reason, I get the following in the kernel log:

Jan  4 09:27:26 s kernel: [21037324.308350] conn315770 invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0

Jan  4 09:27:26 s kernel: [21037324.308627] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/system.slice/mongod.service,task_memcg=/system.slice/mongod.service,task=mongod,pid=237767,uid=107
Jan  4 09:27:26 s kernel: [21037324.308731] Memory cgroup out of memory: Killed process 237767 (mongod) total-vm:22557452kB, anon-rss:21047624kB, file-rss:4536kB, shmem-rss:0kB, UID:107 pgtables:41772kB oom_score_adj:0

How can I identify why MongoDB uses more RAM than it is allowed to use?
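
One way to start is to compare mongod's own memory accounting with the configured cache size. A minimal sketch using the legacy mongo shell (mongosh works the same way; no authentication options shown), based on fields reported by serverStatus:

    # Resident and virtual memory of mongod, in MB
    mongo --quiet --eval 'printjson(db.serverStatus().mem)'
    # Bytes currently held in the WiredTiger cache
    mongo --quiet --eval 'print(db.serverStatus().wiredTiger.cache["bytes currently in the cache"])'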

free -m output

               total        used        free      shared  buff/cache   available
Mem:           24048       16848        2020           0        5179        6807
Swap:              0           0           0

Hello @Staff_IT ,

Welcome to The MongoDB Community Forums!

The most common reason for an OOM-killed process is that the process is using more RAM than the server has available, and the server has no swap configured. Anecdotally, this also typically means that the hardware is under-provisioned for the workload.

Setting the WiredTiger cache size does not mean that the whole mongod process will stay within that amount of memory. MongoDB uses memory on top of the WiredTiger cache for other database purposes, e.g. query processing, incoming connections, etc. Currently there is no method to limit this additional memory usage.
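
For illustration only, one common mitigation when the memory limit on the service cannot be raised is to lower cacheSizeGB so that the non-cache overhead has more headroom; the 6 GB value below is an assumption, not a sizing recommendation for this workload:

    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 6   # assumed value, smaller than the original 9 to leave headroom below the ~20 GB limit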

One straightforward way to prevent this OOM kill is to provision swap space. If the hardware is actually under-provisioned for the workload, swapping will make things very slow, but there is less chance of mongod being OOM-killed by the kernel.
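
A minimal sketch of provisioning a swap file on Linux (run as root; the 4 GB size and the /swapfile path are assumptions):

    fallocate -l 4G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # make it persistent across reboots
    echo '/swapfile none swap sw 0 0' >> /etc/fstab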

Regards,
Tarun
