Hello,
I am getting the error logs below after a MongoDB election occurs.
(node:9837) UnhandledPromiseRejectionWarning: MongoServerSelectionError: connect ECONNREFUSED 192.168.50.50:27019
at Timeout._onTimeout
(node:9837) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 52)
192.168.50.50 was the primary node before the MongoDB election. After the election, the Node.js application kept trying to connect to the old primary (192.168.50.50), which had been shut down for some reason. Some time later, when the 192.168.50.50 node came back up, it rejoined as a secondary, and the application still tried to connect to the same IP (192.168.50.50, now a secondary node). At that point the application gets the error below.
(node:9837) UnhandledPromiseRejectionWarning: MongoServerSelectionError: not primary
at Timeout._onTimeout
(node:9837) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode)
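To find out which promise is actually rejecting, I am planning to add a process-level handler like the following (this is my own sketch, not something from the driver; `formatRejection` is a helper name I made up):

```javascript
// Sketch: log the full stack of every unhandled rejection so the offending
// call site shows up in the logs instead of only the warning text.
function formatRejection(reason) {
  // Prefer the stack trace when the rejection reason is an Error object.
  return reason instanceof Error
    ? `${reason.name}: ${reason.message}\n${reason.stack}`
    : String(reason);
}

process.on('unhandledRejection', (reason) => {
  console.error(`Unhandled rejection: ${formatRejection(reason)}`);
});
```

With this in place the log should point at the exact line that produced the MongoServerSelectionError.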
Note: this issue occurs randomly, and on random Node.js servers.
Why does the MongoDB Node driver connect to the old mongod instance?
Why does it behave randomly?
Application structure:
- Socket.IO cluster of 9 Node.js servers deployed in a Kubernetes environment
- MongoDB replica set with 3 nodes, deployed on bare metal
- MongoDB version: 4.4.8
- Node.js version: 12.16.1
- MongoDB Node driver version: 4.9.1
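For context, I am not posting my exact connection string, but a replica-set connection of the shape the driver documentation recommends looks roughly like this (the hosts, port, database name, set name, and option values here are placeholders, not my real config):

```javascript
// Illustrative only: a replica-set URI should list every member and name the
// set, so the driver can rediscover the new primary after an election.
const uri =
  'mongodb://192.168.50.50:27019,192.168.50.51:27019,192.168.50.52:27019' +
  '/mydb?replicaSet=rs0';

// Options of the kind I would pass as `new MongoClient(uri, options)`:
const options = {
  serverSelectionTimeoutMS: 30000, // wait up to 30s for a primary after an election
  retryWrites: true,               // retry a write once after a transient election error
};
```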
async function runAdapter() {
  try {
    const { MongoClient } = require('mongodb');
    const client = new MongoClient(common.appConfigObj.mongoConfig.url, common.MongoClientOption);
    await client.connect();
    // db() is synchronous, so no await is needed here
    const db = client.db(common.appConfigObj.mongoConfig.dbName);

    const connectAdapter = (coll) => {
      socketIO.adapter(createAdapter(coll));
      coll.isCapped().then((data) => {
        logger.info(`In HapiServer : hapiServerFun : mongo : cappedCollection : data : ${data}`);
      }).catch((err) => {
        logger.error(`In HapiServer : hapiServerFun : mongo : cappedCollection : error : ${err}`);
      });
      logger.info(`In HapiServer : hapiServerFun : mongo : mongo connection done`);
    };

    db.createCollection(common.appConfigObj.mongoConfig.cappedCollection, {
      capped: true,
      size: common.appConfigObj.mongoConfig.size ? common.appConfigObj.mongoConfig.size : 100000,
      max: common.appConfigObj.mongoConfig.max ? common.appConfigObj.mongoConfig.max : 5000,
    }, (err, data) => {
      if (err) {
        logger.info(`In HapiServer : hapiServerFun : mongo : error while creating the capped collection : ${err}`);
        if (err.message === `Collection already exists. NS: ${common.appConfigObj.mongoConfig.dbName}.${common.appConfigObj.mongoConfig.cappedCollection}`) {
          connectAdapter(db.collection(common.appConfigObj.mongoConfig.cappedCollection));
        } else {
          runAdapter();
        }
      } else if (data) {
        logger.info(`In HapiServer : hapiServerFun : mongo : capped collection successfully created`);
        connectAdapter(db.collection(common.appConfigObj.mongoConfig.cappedCollection));
      }
    });
  } catch (err) {
    logger.error(`In HapiServer : hapiServerFun : mongo connection Exception : ${err}`);
    return runAdapter();
  }
}
runAdapter();
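Right now the catch block calls runAdapter() again immediately, with no delay, which could hammer a node that is mid-election. As a workaround I am considering wrapping the retry in a small backoff helper of this shape (the helper name and the numbers are my own, not from any library):

```javascript
// Sketch: retry an async operation with exponential backoff, e.g. to ride out
// the window while a replica-set election is in progress.
async function withRetry(fn, attempts = 5, baseDelayMs = 100) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Back off: 100 ms, 200 ms, 400 ms, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr; // all attempts failed; surface the last error
}
```

The idea would be to call `withRetry(() => runAdapter())` instead of recursing directly, so transient "not primary" errors get absorbed.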
Can anyone help me find the issue? Please let me know if you need anything else.