MongoDB timeout error on AWS Lambda

I have a Node.js Lambda function that queries MongoDB using mongoose.

About 50% of the time, seemingly at random, I get the following error when trying to connect: MongoNetworkTimeoutError: connection timed out

While MongoDB seems to recommend setting context.callbackWaitsForEmptyEventLoop = false and reusing the same connection between invocations, I read other posts saying the fix is to actively re-open a connection every time. I tried that, but it's still happening. I also tried adjusting the values of socketTimeoutMS and connectTimeoutMS, to no avail.
Does anyone have any ideas? This is a significant blocker for me right now - thanks!

Here’s my code:

    let conn = mongoose.createConnection(process.env.MONGO_URI, {
      bufferCommands: false, // Disable mongoose buffering
      bufferMaxEntries: 0, // and MongoDB driver buffering
      useNewUrlParser: true,
      useUnifiedTopology: true,
      socketTimeoutMS: 45000,
    })

    try {
      await conn
      console.log('Connected correctly to server')
    } catch (err) {
      console.log('Error connecting to DB')
    }

And here’s the full error output from Cloudwatch:

    "errorType": "Runtime.UnhandledPromiseRejection",
    "errorMessage": "MongoNetworkTimeoutError: connection timed out",
    "reason": {
        "errorType": "MongoNetworkTimeoutError",
        "errorMessage": "connection timed out",
        "name": "MongoNetworkTimeoutError",
        "stack": [
            "MongoNetworkTimeoutError: connection timed out",
            "    at connectionFailureError (/var/task/node_modules/mongodb/lib/core/connection/connect.js:342:14)",
            "    at TLSSocket.<anonymous> (/var/task/node_modules/mongodb/lib/core/connection/connect.js:310:16)",
            "    at Object.onceWrapper (events.js:420:28)",
            "    at TLSSocket.emit (events.js:314:20)",
            "    at TLSSocket.EventEmitter.emit (domain.js:483:12)",
            "    at TLSSocket.Socket._onTimeout (net.js:484:8)",
            "    at listOnTimeout (internal/timers.js:554:17)",
            "    at processTimers (internal/timers.js:497:7)"
    "promise": {},
    "stack": [
        "Runtime.UnhandledPromiseRejection: MongoNetworkTimeoutError: connection timed out",
        "    at process.<anonymous> (/var/runtime/index.js:35:15)",
        "    at process.emit (events.js:326:22)",
        "    at process.EventEmitter.emit (domain.js:483:12)",
        "    at processPromiseRejections (internal/process/promises.js:209:33)",
        "    at processTicksAndRejections (internal/process/task_queues.js:98:32)",
        "    at runNextTicks (internal/process/task_queues.js:66:3)",
        "    at listOnTimeout (internal/timers.js:523:9)",
        "    at processTimers (internal/timers.js:497:7)"

@Boris_Wexler Did you find a solution for this? I am stuck in the same scenario. I have already tried connectTimeoutMS and socketTimeoutMS, but that doesn't seem to work. Next I am thinking of giving connection pooling a try.

@Boris_Wexler or @Avani_Khabiya were you able to find a solution for this? I am stuck on the same issue as well.

Do you have access restrictions on your cluster, namely an IP access list?

Timeout errors are mostly related to:

  • no running instance
  • a wrong DB address/port
  • a strict access list.

Your connection works half the time, so it might be the last one.

If you do not have a static IP contract on AWS, the host IP of your app may change during its lifetime. If you have also restricted access to your MongoDB cluster, this could cause the timeouts you are seeing.

To eliminate this possibility, or to confirm it is the culprit, temporarily edit your access list to allow access from anywhere (0.0.0.0/0), then monitor your app to see whether the same error recurs.


@Avani_Khabiya @Shawn_Varughese, were you able to find a solution for this?

What version of MongoDB is this? And @Bruno_Feltrin what version are you using? Can I see your script/config?

No, I was never able to find a solution; this ultimately boiled down to a connection max limit. Because Lambda functions are short-lived and constantly spin up new instances, each instance opened new connections. Adjusting the timeouts and so on did not work. We have tried many options and still keep hitting the connection max issue due to the nature of Lambda. Any tips would help here.
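One mitigation worth trying for the connection-limit problem, sketched below with mongoose 5.x option names (this is an assumption on my part, not something from the thread; in mongoose 6+ / driver 4 the option is maxPoolSize instead of poolSize): cap the per-container connection pool so that N concurrent Lambda containers hold at most N connections, and fail fast on server selection instead of hanging.

```javascript
// Sketch: one small pool per Lambda container (option names assume mongoose 5.x).
const conn = mongoose.createConnection(process.env.MONGO_URI, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  poolSize: 1,                    // one socket per container (maxPoolSize in mongoose 6+)
  serverSelectionTimeoutMS: 5000, // give up quickly instead of timing out slowly
})
```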

Alright, at least we have a clue to follow.

There is this statement on the following page: Manage Connections with AWS Lambda — MongoDB Atlas

Don’t define a new MongoClient object each time you invoke your function. Doing so causes the driver to create a new database connection with each function call. This can be expensive and can result in your application exceeding database connection limits.

If you haven't tried it yet, check if it helps.
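To make that advice concrete, here is a minimal sketch of the connection-reuse pattern: cache the connection (promise) in module scope so only cold starts connect, and warm invocations reuse the frozen socket. The makeConnector helper and its names are illustrative, not from the thread; createConn stands in for a real driver call such as mongoose.createConnection.

```javascript
// Returns a connect() function that creates the connection at most once
// per container and hands the same cached promise to every invocation.
function makeConnector(createConn) {
  let cached = null; // module scope in a real Lambda: survives warm starts
  return async function connect(uri) {
    if (cached === null) {
      cached = createConn(uri); // only the first (cold) invocation connects
    }
    return cached;
  };
}

// Illustrative handler wiring (assumes mongoose is available):
// const connect = makeConnector((uri) =>
//   mongoose.createConnection(uri, { useNewUrlParser: true, useUnifiedTopology: true }));
// exports.handler = async (event, context) => {
//   // Don't make Lambda wait for the open socket before freezing the container.
//   context.callbackWaitsForEmptyEventLoop = false;
//   const conn = await connect(process.env.MONGO_URI);
//   // ...run queries on conn here...
// };
```

The key point is that the cached value lives outside the handler, so defining a new client inside the handler (which the Atlas page warns against) never happens after the first invocation.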

I am facing the same issue. The root cause does not appear to be the connection maximum limit: I create only one MongoDB client per Lambda, and the total Lambda concurrency for my account is around 20.