Golang maxPoolSize monitoring?


Not sure if I should open a support ticket or ask in the community forums, so I will try here first, so hopefully others can search and find this. Thanks in advance.

The main question is: how can we monitor, from within a Go process, the number of MongoDB connections in use?

The reason is that I would like to be able to monitor and alarm before we reach maxPoolSize.
Ideally, we could add a flag to enable Prometheus metrics, but otherwise it would help if we could somehow query the mongo client to find out how many connections it has.

We are using go.mongodb.org/mongo-driver v1.10.1, and recently I've been increasing maxPoolSize, which has helped performance a lot.

Current config we’re using is:

MongoMaxConnIdleTimeMins = 5 // The default is 0 (indefinite)
MongoMaxConnecting       = 0 // The default is 2
MongoMaxPoolSize         = 200 // The default is 100
MongoMinPoolSize         = 10  // The default is 0

However, recently we started seeing error messages like this:

rpc error: code = DeadlineExceeded desc = timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 200, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 200.

( We do have known issues with our Mongo DB performance, which we are working on. )

It would be awesome if we could monitor these three numbers from the error message:

cursors: 0
transactions: 0
other operations: 200

Also, what is “other operations”, and how can I debug that to find out more?


Hi @Dave_Seddon,

All official MongoDB drivers (including the Go driver) implement the Connection Monitoring and Pooling specification, which defines the various events that should be raised during the operational lifecycle of a connection pool.

I have a short post that relates to MongoDB Go monitoring; however, the pool counters are not exposed publicly, which may make the type of reporting you're trying to do a little more difficult.

Creating a PoolMonitor with some custom counter tracking, however, should enable you to do the type of reporting you are after.

For example, below in db_client.go we define a structure that contains a mongo.Client instance and some counters that are managed by connection pool events:

// db_client.go
package main

import (
  "context"
  "fmt"

  "go.mongodb.org/mongo-driver/bson/primitive"
  "go.mongodb.org/mongo-driver/event"
  "go.mongodb.org/mongo-driver/mongo"
  "go.mongodb.org/mongo-driver/mongo/options"
  "go.mongodb.org/mongo-driver/mongo/readpref"
)

type dbClient struct {
  ID                       primitive.ObjectID // the Client ID
  client                   *mongo.Client
  ConnectionCreated        int
  ConnectionPoolCreated    int
  ConnectionClosed         int
  ConnectionReady          int
  ConnectionCheckOutFailed int
  ConnectionCheckedOut     int
  ConnectionCheckedIn      int
  ConnectionPoolCleared    int
  ConnectionPoolClosed     int
  checkedOut               []uint64
}

func newDbClient(ctx context.Context, uri string) (*dbClient, error) {
  newClient := &dbClient{
    ID: primitive.NewObjectID(),
  }

  // register the wrapper's event handler with the connection pool
  monitor := &event.PoolMonitor{
    Event: newClient.HandlePoolEvent,
  }

  // set additional options (read preference, read concern, etc) as needed
  opts := options.Client().ApplyURI(uri).SetPoolMonitor(monitor)
  var err error
  newClient.client, err = mongo.Connect(ctx, opts)
  if err != nil {
    return nil, err
  }
  _ = newClient.client.Ping(ctx, readpref.Nearest())
  return newClient, nil
}

func (d *dbClient) HandlePoolEvent(evt *event.PoolEvent) {
  switch evt.Type {
  case event.ConnectionCreated:
    d.ConnectionCreated++
  case event.PoolCreated:
    d.ConnectionPoolCreated++
  case event.ConnectionClosed:
    d.ConnectionClosed++
  case event.ConnectionReady:
    d.ConnectionReady++
  case event.GetFailed:
    d.ConnectionCheckOutFailed++
  case event.GetSucceeded:
    d.ConnectionCheckedOut++
    d.checkedOut = append(d.checkedOut, evt.ConnectionID)
  case event.ConnectionReturned:
    d.ConnectionCheckedIn++
  case event.PoolCleared:
    d.ConnectionPoolCleared++
  case event.PoolClosedEvent:
    d.ConnectionPoolClosed++
  }
}

func (d *dbClient) Close(ctx context.Context) {
  _ = d.client.Disconnect(ctx)
}

// UniqueConnections reports how many distinct connection IDs have been
// checked out over the client's lifetime.
func (d *dbClient) UniqueConnections() int {
  u := 0
  m := make(map[uint64]bool)

  for _, val := range d.checkedOut {
    if _, ok := m[val]; !ok {
      m[val] = true
      u++
    }
  }
  return u
}

func (d *dbClient) PrintStats(section string) {
  fmt.Printf("-- %s --\n", section)
  fmt.Printf("Pools: Created[%d] Cleared[%d] Closed[%d]\n", d.ConnectionPoolCreated, d.ConnectionPoolCleared, d.ConnectionPoolClosed)
  fmt.Printf("Conns: Created[%d] Ready[%d] Ch-in[%d] Ch-out[%d] Ch-out-fail[%d] Ch-out-uniq[%d] Closed[%d]\n", d.ConnectionCreated, d.ConnectionReady, d.ConnectionCheckedIn, d.ConnectionCheckedOut, d.ConnectionCheckOutFailed, d.UniqueConnections(), d.ConnectionClosed)
}

This wrapper can be used to print out the internal counters at any point by calling PrintStats("..."):

URI := "mongodb://.../test?...&minPoolSize=1&maxPoolSize=100"
ctx := context.Background()

mongoClient, err := newDbClient(ctx, URI)
if err != nil {
  panic(err)
}
defer func() {
  mongoClient.Close(ctx)
}()

mongoClient.PrintStats("after connect")
Hopefully the above example helps illustrate one possible approach and enables you to move forward with a solution appropriate for your use case.


From mongo/driver/topology/errors.go, it appears "connections in use by other operations" is the result of totalConnectionCount - PinnedCursorConnections - PinnedTransactionConnections.
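If that reading is right, the three figures in the error message relate by simple subtraction. A tiny illustrative sketch (the function name is mine, not the driver's internals):

```go
package main

import "fmt"

// otherInUse is a hypothetical re-statement of how the driver appears to
// derive the "connections in use by other operations" figure: total
// checked-out connections minus those pinned by cursors or transactions.
func otherInUse(total, pinnedByCursors, pinnedByTransactions int) int {
	return total - pinnedByCursors - pinnedByTransactions
}

func main() {
	// The numbers from the error message above: 200 in use, none pinned.
	fmt.Println(otherInUse(200, 0, 0)) // 200
}
```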

@alexbevi Thanks for the reply and for your great blogs!

This looks like a reasonable approach, although I can't help but feel that these counters must already exist within the driver, so this is double handling.

Are those increments in HandlePoolEvent concurrency safe? I would have thought atomic increments are required (not sure if you've seen this talk: Bjorn Rabenstein - Prometheus: Designing and Implementing a Modern Monitoring Solution in Go - YouTube). We might try using Prometheus counters, I guess. Something like:

import (
	// hmmm this forum thingy won't let me post links
	// (the prometheus and promauto packages from prometheus/client_golang)
)

var (
	pC = promauto.NewCounterVec(
		prometheus.CounterOpts{
			Subsystem: "mongo_counters",
			Name:      "my_service",
			Help:      "my_service mongo_counters counts",
		},
		[]string{"event"},
	)
)

func (d *dbClient) HandlePoolEvent(evt *event.PoolEvent) {
	// evt.Type is a string, so it can be used directly as a label value
	pC.WithLabelValues(evt.Type).Inc()
}

Although it would be kind of nice to have increments and decrements, so we know the current number:

var pG = promauto.NewGauge(
	prometheus.GaugeOpts{
		Subsystem: "connections_gauge",
		Name:      "my_service",
		Help:      "my_service connection gauge",
	},
)

func (d *dbClient) HandlePoolEvent(evt *event.PoolEvent) {
	switch evt.Type {
	case event.ConnectionCreated:
		pG.Inc()
	case event.ConnectionClosed:
		pG.Dec()
	}
}
I will play around and see what I can come up with.

Thanks again!