Our application is still being assembled, and as more pieces are added we've started seeing the total number of connections to MongoDB spike when certain pods restart. We've traced it to how MongoClient handles the connection pool. To meet SLAs, our app needs many available connections in the pool for brief periods, so the pools are set to a minimum of 10, a maximum of 100, and an idle timeout of 10 minutes. For security reasons the app uses multiple MongoClient instances, one per logical collection of DBs/collections. On app startup all of these instances are created, and each creates its pool, which starts with 100 connections. Ten minutes after startup we see the connection count drop dramatically when the pools are first pruned.

In the app's current state there are 8 components (and this will grow a lot) using an average of 13 MongoClient instances each, so at system startup we're creating over 10,000 connections to MongoDB. We need a way to better control connection creation at startup, e.g. create only the minimum in each pool, or time out the extra connections very quickly. It has to be a simple solution, like config changes or small code changes, since we don't have time for any redesign or reimplementation at this point.

Our SEs are insisting on one MongoDB configuration for the entire system. We're arguing to tune each service's min/max/timeout values independently, but I doubt we'll win. If we truly have to keep high max pool values and idle timeouts of many minutes, are there any ways we can better control the creation of connections at startup?
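For reference, a minimal sketch of how the pool values described above are expressed with the driver's `MongoClientSettings` API; the host names and replica-set name in the connection string are placeholders, and building the settings object does not open any connections:

```java
import java.util.concurrent.TimeUnit;

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.connection.ConnectionPoolSettings;

public class PoolConfig {
    public static void main(String[] args) {
        // The pool values described above: min 10, max 100, 10-minute idle timeout.
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(
                        "mongodb://host1.example.com,host2.example.com/?replicaSet=rs0"))
                .applyToConnectionPoolSettings(b -> b
                        .minSize(10)
                        .maxSize(100)
                        .maxConnectionIdleTime(10, TimeUnit.MINUTES))
                .build();

        ConnectionPoolSettings pool = settings.getConnectionPoolSettings();
        System.out.println("min=" + pool.getMinSize()
                + " max=" + pool.getMaxSize()
                + " idleMs=" + pool.getMaxConnectionIdleTime(TimeUnit.MILLISECONDS));
        // prints: min=10 max=100 idleMs=600000
    }
}
```

The same three values can also be set in the connection string itself (`minPoolSize`, `maxPoolSize`, `maxIdleTimeMS`), which is one way to keep them in config rather than code.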
Please confirm the specific driver and version you are using; the MongoClient class name is a standard convention used by several drivers. Also, what sort of deployment do you have: standalone, replica set, or sharded cluster?
We are using the Java driver version 3.11 and a replica set.
See this recent StackOverflow post on managing MongoDB connections in Java in an object-oriented way. It has useful information on connection pool settings and on applying those settings to a MongoClient in Java code that accesses a MongoDB server.
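If you are constructing clients through the legacy `com.mongodb.MongoClient` class rather than `MongoClientSettings`, the same three values map onto `MongoClientOptions`. A sketch, assuming the values from the original question; note that `maxConnectionIdleTime` takes milliseconds in this API:

```java
import java.util.concurrent.TimeUnit;

import com.mongodb.MongoClientOptions;

public class LegacyPoolConfig {
    public static void main(String[] args) {
        // Same min/max/idle values via the legacy 3.x options API;
        // building the options object does not open any connections.
        MongoClientOptions options = MongoClientOptions.builder()
                .minConnectionsPerHost(10)
                .maxConnectionsPerHost(100)
                .maxConnectionIdleTime((int) TimeUnit.MINUTES.toMillis(10)) // milliseconds
                .build();

        System.out.println("min=" + options.getMinConnectionsPerHost()
                + " max=" + options.getMaxConnectionsPerHost()
                + " idleMs=" + options.getMaxConnectionIdleTime());
        // prints: min=10 max=100 idleMs=600000
    }
}
```

The options are then passed to the client constructor, e.g. `new MongoClient(serverAddress, options)`.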
That is exactly what we are already doing: we share one MongoClient for each DB/collection that requires a unique userid/password for security reasons, and our application is spread across many pods. We cannot change the security requirements, so we have to keep a separate MongoClient for each unique userid/password.

The problem stems from the fact that when a MongoClient creates its underlying connection pool, it always creates the maximum number of connections. Multiply this by the number of MongoClients we need across all the pods that are starting, and you get well over 10,000 connections at system startup just with the parts we have running so far. As the rest of the system comes online, that number would easily triple or more.
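Since the per-userid clients have to stay, one low-effort lever is centralizing the pool tuning so every client is built from one settings factory, with a much shorter idle timeout so any extra connections are pruned quickly instead of lingering for 10 minutes. A sketch with hypothetical names (`settingsFor`, `svc-orders`, the 30-second idle value) that are illustrations, not part of the thread:

```java
import java.util.concurrent.TimeUnit;

import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCredential;

public class PerUserSettings {
    // Hypothetical helper: one settings object per userid/password, so all
    // MongoClient instances share the same pool tuning in a single place.
    static MongoClientSettings settingsFor(String user, char[] password, String authDb) {
        return MongoClientSettings.builder()
                .credential(MongoCredential.createCredential(user, authDb, password))
                .applyToConnectionPoolSettings(b -> b
                        .minSize(10)                                  // keep the required floor
                        .maxSize(100)                                 // mandated ceiling stays
                        .maxConnectionIdleTime(30, TimeUnit.SECONDS)) // prune extras quickly
                .build();
    }

    public static void main(String[] args) {
        MongoClientSettings s = settingsFor("svc-orders", "secret".toCharArray(), "admin");
        System.out.println(s.getCredential().getUserName() + " "
                + s.getConnectionPoolSettings().getMaxConnectionIdleTime(TimeUnit.SECONDS));
        // prints: svc-orders 30
    }
}
```

Each component would then call `MongoClients.create(settingsFor(...))` instead of configuring pools locally; whether this reduces the startup spike depends on the pre-creation behavior you are observing, but it at least bounds how long the excess connections survive.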