Too many open files

When dealing with very large collections, I ran into timeout issues when querying and inserting, even though the collections were indexed and optimized to combine small documents together.

The insertion rate for a given collection is 172,800 documents per day.

I solved this by partitioning the DB by month:
ASDF -> ASDF_10_2018, ASDF_11_2018 … ASDF_05_2020

By doing so, the collection size under each DB was reduced, and I no longer encountered any query or insertion issues.
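The monthly naming scheme above can be sketched as a small helper that maps a base collection name and a date to its partition; the function name is my own illustration, not the poster's code:

```python
from datetime import datetime

def partition_name(base: str, when: datetime) -> str:
    """Build the monthly partition name, e.g. ASDF + Oct 2018 -> ASDF_10_2018."""
    return f"{base}_{when.month:02d}_{when.year}"
```

At write time, the application would look up (or create) the collection named `partition_name("ASDF", now)` and insert into that, so each month's documents land in their own collection.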

The thing is that there are multiple DBs like ASDF, and this has been running for more than two years.
MongoDB now has an issue with the number of open files it has to hold in order to maintain my partitioned DBs and collections.

Questions:

  1. Is there a better way of solving the problems that occurred due to the large collections?
  2. Is there a way to tell Mongo not to open so many files?
     Most of the DBs are old and rarely queried, so they can stay dormant on my part. Is there a way to tell Mongo not to open the files related to them?

If this is a Linux server, increasing the limits may be enough. By default, the number of open files per user is 1024. You can check it with the command `ulimit -a`. You can increase it in /etc/security/limits.conf (for the user that runs the mongod service). For application servers I usually set it to something like 65000.
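As a sketch of the steps above (the service user name `mongodb` is an assumption; check which account actually runs mongod on your system):

```shell
# Check current limits for the user running mongod
ulimit -a          # look at the "open files" line
ulimit -n          # just the open-files limit

# Raise the limit in /etc/security/limits.conf
# (assuming the service user is "mongodb"):
#   mongodb  soft  nofile  65000
#   mongodb  hard  nofile  65000
# The change takes effect on the next login session / service restart.
```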
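Regarding question 2: with the WiredTiger storage engine, the file manager can be tuned to close idle data handles instead of keeping every file open. The parameter values below are illustrative assumptions, not tested recommendations; verify the syntax and defaults against the documentation for your MongoDB version before applying this on a live server:

```shell
# Assumes WiredTiger; run against the admin database of a live mongod.
# close_idle_time: seconds a data handle may sit idle before being closed
# close_scan_interval: seconds between sweeps (values here are illustrative)
mongo admin --eval 'db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig:
    "file_manager=(close_idle_time=600,close_scan_interval=60)"
})'
```

This would let the files backing your dormant monthly partitions be closed after a period of inactivity, at the cost of reopening them on the next access.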
