Due to the nature of my data (a very large number of collections) I have a lot of problems starting mongod. I have about 140k *.wt files on disk, although during normal running of the server only about 2k file handles are open at once. So this large number of files is only a problem on startup, which takes about 5 minutes.
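For reference, this is roughly how I count the files. The dbPath `/var/lib/mongodb` is the distro default and an assumption here; substitute your own `storage.dbPath` from `/etc/mongod.conf`.

```shell
#!/bin/sh
# Count WiredTiger table files directly under a dbPath.
# The directory argument is an assumption -- pass your actual dbPath.
count_wt_files() {
    find "$1" -maxdepth 1 -name '*.wt' 2>/dev/null | wc -l
}

count_wt_files /var/lib/mongodb
```

(The ~2k figure under normal load came from counting entries in `/proc/<mongod pid>/fd`.)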
I’ve posted about ulimits before and have resolved that problem in systemd by using a drop-in config that sets LimitNOFILE=200000, plus the corresponding nproc setting as mentioned here. I think this side of things is resolved.
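In case it helps anyone hitting the same ulimit wall, the drop-in looks roughly like this. The unit name/path and the LimitNPROC value shown are assumptions to illustrate the shape; check them against your own setup.

```ini
# /etc/systemd/system/mongod.service.d/limits.conf
# (unit name and the LimitNPROC value are illustrative -- adjust to your system)
[Service]
LimitNOFILE=200000
LimitNPROC=64000
```

After adding it, run `systemctl daemon-reload` and restart mongod for the new limits to take effect.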
However, I have hit a new ceiling. Today (amid an upgrade to 4.4), starting mongod causes the server to run out of memory. The box has 4G of RAM and a 2G swap disk. It only runs MongoDB, and under normal operation the mongod process seems to use about half the available RAM.
So I suppose my question is: what can I do to get the server to start with my 4G of RAM? Do I need a bigger box just to get it started? If so, is it possible to calculate the RAM I’ll need based on the number of files? Perhaps increasing the size of the swap disk would fix it? Disk space is not a problem.
The next size up of my Linode VPS is 8G and would double the monthly cost for all my MongoDB servers. Not the end of the world, but as I only seem to need this RAM on startup, I’m wondering if I can hold off on that upgrade.
Suggestions much appreciated.