Sudden drop in MongoDB connections with ConnectionPoolExpired

Hi mongo geeks,
I hope you are doing well.
I have a storage issue where MongoDB opens a lot of connections and then they all get dropped suddenly. I found these log lines:


{"t":{"$date":"2023-09-12T19:16:41.306+00:00"},"s":"I",  "c":"CONNPOOL", "id":22572,   "ctx":"MirrorMaestro","msg":"Dropping all pooled connections","attr":{"hostAndPort":"10.10.0.175:27017","error":"ConnectionPoolExpired: Pool for 10.10.0.175:27017 has expired."}}
{"t":{"$date":"2023-09-12T19:16:44.242+00:00"},"s":"I",  "c":"CONNPOOL", "id":22572,   "ctx":"MirrorMaestro","msg":"Dropping all pooled connections","attr":

P.S. I’m running MongoDB 4.4.
Seeking your usual support :pray:

Why do you think it’s a problem? What bad things are you seeing?

@Kobe_W not sure, but does this graph look normal to you?

This sudden drop in the connections is not normal from my point of view.

I see a similar question on Stack Overflow, but no answers.

You can try tuning connection pool settings such as the min or max values. This might also be internal logic in MongoDB.
After all, MongoDB drivers take good care of connection pooling for most use cases by default.

I personally wouldn’t spend too much time on this if I’m not noticing any other issues at the application level (e.g. high latency on requests).
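If you do want to experiment with those settings, here is a minimal sketch using the Node.js driver; the host address and the numbers are placeholders, not recommendations:

// Driver-side pool tuning sketch (Node.js driver); adjust values to your workload.
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://10.10.0.175:27017", {
  maxPoolSize: 50,       // upper bound on connections per host in this client's pool
  minPoolSize: 5,        // keep a few connections open even when idle
  maxIdleTimeMS: 60000,  // close pooled connections that sit idle for more than 60s
});

async function main() {
  await client.connect();
  console.log(await client.db("admin").command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);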

Hi @Ahmed_Asim.

Try to observe memory usage during the spikes. There may be some correlation; if you could post the graphs, it would be interesting.

Best!


@Kobe_W it’s my question, I believe; I posted it on Stack Overflow as well :smiley:

Does anyone know which TCP parameters of the host OS might affect the number of connections?
I mean, we might be hitting the limit of TCP/IP connections.

OS-related parameters can be complicated to tune, e.g. this. You can try the MongoDB configuration first, as that’s easier.

Thanks @Kobe_W, any idea which config could be related to this?
As far as I know, net.maxIncomingConnections is limited by the OS limits, which in my case are:

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 126920
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 32768
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
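
As a sanity check, this is roughly how I compare the live connection counts with the effective limit from the shell (assuming current + available roughly adds up to the effective net.maxIncomingConnections):

// mongo shell: compare live connection counts to the server's limit
var conns = db.serverStatus().connections;
print("current:       " + conns.current);       // open incoming connections right now
print("available:     " + conns.available);     // unused connection slots remaining
print("totalCreated:  " + conns.totalCreated);  // connections created since startup
print("approx. limit: " + (conns.current + conns.available));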

See this: it reached 8.94K and then dropped suddenly, which is very weird!

I suspected the idle connections, and look what I found here:

var idleConnections = db.serverStatus().connections.available;
print("Number of idle connections: " + idleConnections);

Number of idle connections: 837677

Not sure why all of these connections are idle, or how to check it?
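
I was thinking of trying something like this with the $currentOp aggregation stage to see which clients the idle connections belong to, though I’m not sure it’s the right approach:

// List idle connections and group them by client address (run against the admin database)
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: true } },
  { $match: { active: false } },
  { $group: { _id: "$client", count: { $sum: 1 } } },
  { $sort: { count: -1 } }
])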

If your deployment is big or you have many connections from those drivers, 837k might be fine. It only means that, at the time you ran that query, those connections were idle.

Did you try setting a max connection number for the pools from the driver side?
Try using a very small number and then see if anything changes in the server-side numbers.
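For example, most drivers accept it as a connection string option; the value 5 here is just an arbitrarily small number for the experiment:

mongodb://10.10.0.175:27017/?maxPoolSize=5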

I have no experience with Atlas, so I don’t know if those numbers are aggregated from all nodes or not.