Sharded cluster performance issues with Node.js driver

We have a sharded cluster across two regions, currently with one shard. Sharding is not enabled on the collections and the balancer is off. There are two servers in Europe and one in Asia, and every server runs the same setup: a router (mongos), a config server and a data node.
The data nodes form one replica set with the primary in Europe.
The config servers form one replica set with the primary in Europe.
MongoDB 4.4.0 is used. The data nodes have tags named region: "europe" and region: "asia", depending on the region where the node is located.
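For reference, the region tags were assigned roughly like this (a sketch in the mongo shell against the data-node replica set; member order and indexes are illustrative, not our exact configuration):
cfg = rs.conf()
cfg.members[0].tags = { region: "europe" }   // Europe data node (primary)
cfg.members[1].tags = { region: "europe" }   // Europe data node (secondary)
cfg.members[2].tags = { region: "asia" }     // Asia data node (secondary)
rs.reconfig(cfg)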

The client is a Node.js application with the 3.6 driver, located in the Asia region, and it connects to that region's router. The client uses readPreference=nearest&readPreferenceTags=role:asia
The client makes one specific findOne query. In the router logs we can see that the read preference is set. However, the read request takes ~200 ms, which suggests it is fetching the data from the Europe region.
When we point the client connection directly at the data nodes, listing all replica set members, and add the same readPreference and readPreferenceTags, the read request takes ~3 ms and the data is retrieved from Asia, as it is supposed to be.
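To make the two variants concrete, a minimal sketch of both connections (host names, ports and the replica set name are placeholders; the tag key assumes the region tag from the data-node configuration above):
const { MongoClient } = require('mongodb');

// Variant 1: through the local mongos router (observed ~200 ms)
const viaRouter = new MongoClient(
  'mongodb://mongos-asia:27017/mydb?readPreference=nearest&readPreferenceTags=region:asia'
);

// Variant 2: directly against all replica set members (observed ~3 ms)
const direct = new MongoClient(
  'mongodb://data-eu-1:27017,data-eu-2:27017,data-asia-1:27017/mydb?replicaSet=rs0&readPreference=nearest&readPreferenceTags=region:asia'
);
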
The query the client is making:
db.getCollection('myTest').find({userId: 'SrPxPwXXSDqO7ede3y', _id: ObjectId("5f61fc091e244c157f43401"), deletedAt: null})
What can be done to debug this further, or is there any reasonable explanation for this issue?

The problem I described is in our production environment. We also tested this behaviour with a small Node.js script. When using only one readPreferenceTag, response times are always good (~3 ms). But when adding multiple tags and a failover entry to the client configuration, some requests are routed to other mongo data nodes, and from time to time we get ~200 ms responses.
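The "multiple tags and failover" configuration we tested was roughly of this form (hosts are placeholders); repeating readPreferenceTags builds an ordered tag-set list, and the trailing empty entry falls back to any member:
const uri = 'mongodb://mongos-asia:27017/mydb'
  + '?readPreference=nearest'
  + '&readPreferenceTags=region:asia'   // prefer members tagged region:asia
  + '&readPreferenceTags=';             // empty tag set = fall back to any member
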
Furthermore, one interesting observation: when using maxStalenessSeconds in the client (for example &maxStalenessSeconds=120), the mongos router crashes and does not come up, even after this parameter is removed.
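The parameter was appended to the same connection string, roughly like this (hosts are placeholders):
mongodb://mongos-asia:27017/mydb?readPreference=nearest&readPreferenceTags=region:asia&maxStalenessSeconds=120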

Update:
We are using connection-level read preference, and it seems those parameters are not picked up. In the mongos router logs we can see the tags, because the connection-level settings are logged there. From a network dump, however, we can see that the client sends neither those tags nor the readPreference to the router at the query level; we only see readPreference: secondaryPreferred there.
We needed to specify query-level read preferences.
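A minimal sketch of the query-level read preference with the 3.6 driver, assuming the region tag described above (URI, database name and the ids are placeholders):
const { MongoClient, ReadPreference } = require('mongodb');

async function findMyTest(uri, userId, id) {
  const client = await MongoClient.connect(uri, { useUnifiedTopology: true });
  try {
    // Tag-set list tried in order: prefer region:asia, then fall back to any member.
    const readPreference = new ReadPreference('nearest', [{ region: 'asia' }, {}]);
    // Passing the read preference on the operation itself is what made the
    // tags reach the router in our case.
    return await client
      .db('mydb')
      .collection('myTest')
      .findOne({ userId, _id: id, deletedAt: null }, { readPreference });
  } finally {
    await client.close();
  }
}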

Based on the observed behaviour, it remains unclear how members are considered eligible. According to the documentation there is a latency consideration, but it is not clear whether it is the latency between the driver and the mongos router or the latency between the primary and the secondaries. The issue is that when querying from a region which has one local secondary, while the primary and the other secondary are in a remote region, the query seems to end up randomly in both regions.
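As far as we can tell from the docs, the driver-side knob for the "nearest" latency window is localThresholdMS (default 15 ms); narrowing it for a test might help show which component is doing the selection. A sketch (hosts are placeholders):
const uri = 'mongodb://mongos-asia:27017/mydb'
  + '?readPreference=nearest'
  + '&readPreferenceTags=region:asia'
  + '&localThresholdMS=5';   // only members within 5 ms of the fastest are eligible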