MongoDB source connector: resume error when running the connector at the deployment level

Setup:
MongoDB source connector v1.5.1
Sharded cluster v4.2

We are running the connector at the deployment level and filtering to specific collections with a $match stage in the pipeline. The write load we are seeing on these collections is very low.

"pipeline": "[ { \"$project\": { \"updateDescription\": 0 } }, { \"$match\": { \"ns.coll\": { \"$in\": [\"coll1\", \"coll2\", \"coll3\", \"coll4\"] } } } ]",

We are seeing the change stream resume failure roughly once every two days, which matches the oplog window/limit we have:

An exception occurred when trying to get the next item from the Change Stream: Query failed with error code 286 and error message 'Error on remote shard server_name:27018 :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.' on server server_name:27017
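Since error 286 points at a specific remote shard, one thing worth checking is the actual oplog window on that shard's primary. A minimal sketch (the host name is a placeholder for one of your shard members):

```shell
# Hedged sketch: inspect the oplog window on one shard's primary.
# rs.printReplicationInfo() prints the oplog size and the time span it covers;
# if "log length start to end" is roughly two days, it matches the failure cadence above.
mongosh "mongodb://server_name:27018" --eval "rs.printReplicationInfo()"
```

If the window on any single shard is shorter than the gap between events the connector sees for its filtered namespaces, that shard's resume token can age out even while other shards are fine.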
We have experience running connectors targeted at specific, high-volume collections and have never seen this issue; the difference is that this connector runs against the whole deployment.

P.S.: We also looked into the docs and are testing the heartbeat.interval.ms configuration, since we suspect this may be caused by infrequently updated namespaces: https://docs.mongodb.com/kafka-connector/current/troubleshooting/recover-from-invalid-resume-token/#std-label-kafka-troubleshoot-invalid-resume-token
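For reference, this is roughly the configuration we are testing. It is a sketch, not a recommendation: the 60-second interval and the heartbeat topic name are values we chose ourselves, not values from the docs for our workload.

```json
{
  "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
  "pipeline": "[ { \"$project\": { \"updateDescription\": 0 } }, { \"$match\": { \"ns.coll\": { \"$in\": [\"coll1\", \"coll2\", \"coll3\", \"coll4\"] } } } ]",
  "heartbeat.interval.ms": "60000",
  "heartbeat.topic.name": "__mongodb_heartbeats"
}
```

The idea is that heartbeats force the connector to advance its stored resume token even when the matched collections are idle, so the token stays within the oplog window.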

Are we missing any specific configuration on the connector or cluster? Thanks in advance.