MongoDB Node.js Driver vs AWS Lambda with Provisioned Concurrency

We’re experiencing connectivity issues between AWS Lambda (Node.js) and MongoDB Atlas. We use the official MongoDB Node.js driver in our back-end built on AWS Lambda. We establish the connection at the INIT phase, then use and re-use it in subsequent INVOKE phases. For better responsiveness, we also use provisioned concurrency of AWS Lambda. The latter means that sometimes the INIT phase completes up to several hours before the first invocation.

// index.mjs
import { MongoClient } from 'mongodb';

// INIT
const client = new MongoClient('mongodb://');
await client.connect();
const collection = client.db('foo').collection('bar');
// end of INIT

export default async function () {
  // INVOKE (up to 3 hrs after "end of INIT" or previous INVOKE)
  const doc = await collection.findOne({});
  return doc;
  // end of INVOKE
}

Recently (mid-Jun 2025) we began noticing two types of errors at findOne from the example code above:

  1. This socket has been ended by the other party
  2. PoolClearedError

These errors don’t occur every time. The longer the “dormancy” time between invocations (or before the first one), the greater the likelihood, but it’s never 100%. And we have yet to see an error when the idle time is under 1000 seconds.

So far we’ve been able to mitigate this as follows: if more than 1000 seconds of inactivity is detected, we run client.db().admin().ping(), and if that fails, we also reconnect with await client.close(); await client.connect();. After that, the client is usable again. It doesn’t look like an optimal solution, though, if only because of the ping() overhead added to every function that the provisioned concurrency feature kept dormant for more than 1000 seconds.
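The mitigation above can be sketched as a small wrapper. This is a hypothetical illustration, not the poster’s actual code: IDLE_LIMIT_MS, needsHealthCheck, and withFreshClient are made-up names, and the 1000-second threshold is just the empirical value reported above.

```javascript
// Hypothetical wrapper around the ping-then-reconnect mitigation.
const IDLE_LIMIT_MS = 1000 * 1000; // ~1000 s, the empirical threshold

let lastUsedAt = Date.now();

// Pure helper: has the client been idle long enough to warrant a health check?
function needsHealthCheck(lastUsed, now, limit = IDLE_LIMIT_MS) {
  return now - lastUsed > limit;
}

// `client` is expected to look like a MongoClient (db().admin().ping(),
// close(), connect()); a stub with the same shape works for testing.
async function withFreshClient(client, op) {
  if (needsHealthCheck(lastUsedAt, Date.now())) {
    try {
      await client.db().admin().ping();
    } catch {
      // Stale socket: rebuild the pool before running the operation.
      await client.close();
      await client.connect();
    }
  }
  lastUsedAt = Date.now();
  return op();
}
```

In the handler, each operation would then go through withFreshClient(client, () => collection.findOne({})), paying the ping() cost only after long dormancy.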

MongoDB driver version: 6.17.0
Node.js 22 in AWS Lambda

Hey @Egor_Petrov, since the drivers have retryability baked in and pooled connections will be recreated on certain types of failure, if you didn’t add the additional db().admin().ping(), would the operation in the Lambda still execute successfully (even though the error may be logged)?

No. When I say “noticed errors at findOne”, I mean that the operation failed and threw one of the two errors. So no, it’s not simply about errors/warnings being logged somewhere.

I would also expect the driver to have some reconnection/retry logic. Could it be “misguided” by the AWS Lambda runtime suppressing all activity before/between invocations?
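Until the root cause is clear, another stopgap would be a single application-level retry when one of the two observed errors surfaces on the first operation after a long freeze. A minimal sketch, with the caveat that the error-matching predicate below is an assumption based on the logged messages, not a documented driver contract:

```javascript
// Heuristic: does this look like one of the two transient errors seen
// after long Lambda dormancy? The name/message checks are assumptions
// derived from the logs above, not a stable driver API.
function isLikelyStaleConnectionError(err) {
  return (
    err?.name === 'PoolClearedError' ||
    /socket has been ended by the other party/i.test(err?.message ?? '')
  );
}

// Run the operation; retry exactly once if it fails with a
// likely-transient stale-connection error, otherwise rethrow.
async function retryOnceOnStaleConnection(op) {
  try {
    return await op();
  } catch (err) {
    if (!isLikelyStaleConnectionError(err)) throw err;
    // By the second attempt the driver should have cleared the pool
    // and checked out a fresh connection.
    return op();
  }
}
```

This trades the unconditional ping() overhead for one extra round trip only in the failing case, at the cost of matching on error shapes that could change between driver versions.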

One interesting occurrence of the socket error before ping+close+connect were added was like this:

  • Init at T
  • Invoked at T+2hrs > “This socket has been ended by the other party”
  • Invoked again T+2h0m7s > Everything worked fine

Here’s the stack trace of the socket error from that occurrence:

Error: This socket has been ended by the other party
    at genericNodeError (node:internal/errors:983:15)
    at wrappedFn (node:internal/errors:537:14)
    at TLSSocket.writeAfterFIN [as write] (node:net:580:14)
    at Connection.writeCommand (/opt/nodejs/node_modules/mongodb/src/cmap/connection.ts:718:21)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async Connection.sendWire (/opt/nodejs/node_modules/mongodb/src/cmap/connection.ts:464:7)
    at async Connection.sendCommand (/opt/nodejs/node_modules/mongodb/src/cmap/connection.ts:542:18)
    at async Connection.command (/opt/nodejs/node_modules/mongodb/src/cmap/connection.ts:633:22)
    at async Server.command (/opt/nodejs/node_modules/mongodb/src/sdam/server.ts:342:21)
    at async FindOperation.execute (/opt/nodejs/node_modules/mongodb/src/operations/find.ts:130:12)

Just FYI I’ve filed https://jira.mongodb.org/browse/NODE-7067 to have the team look into this.