Can't connect via SRV to multi-region cluster with AWS VPC peering

Hi all, I have a cluster with the following electable nodes, peering connections, and access entries

As you can see, I have 2 separate VPCs (one for east one for west), each with their own peering connection to the corresponding Atlas VPC (with different CIDRs).

Connections from us-east-1 work perfectly, but connections from us-west-1 time out trying to reach the server. I have triple-checked the config on both ends (including route tables etc.).

I have read about SRV but still don’t fully understand it - based on the nodes configured above, can I use the same connection string for both East and West? Or will West only work if East fails over?

My SRV connection host looks like this:

Are there any other suggestions for how I can debug the connection from us-west-1?

I came across a similar question here, but I have all the recommended settings.

Any help would be great


Got some help from MongoDB support - for anyone else having the same issue:

In short, you will need to peer all VPCs that need to connect to your Atlas cluster to both the us-east-1 VPC and us-west-1 VPC in your Atlas project so that your applications can reach all nodes in the cluster.

Let me provide some additional context to frame the issue a bit more.

Because the cluster is a replica set, MongoDB drivers by default will attempt to connect to every node in the cluster in order to maintain high availability. With the default read preference of primary, at minimum, the driver will need to be able to connect to the cluster PRIMARY node in order to successfully establish a connection.
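To make the SRV part of the original question concrete: a `mongodb+srv://` connection string is resolved via a DNS SRV query against `_mongodb._tcp.<cluster host>`, and that query returns the hostnames of every node in the cluster, in both regions. So the same connection string is used from East and West; what differs is which of the returned hosts your VPC can actually reach. Here is a small illustrative sketch of that expansion (the hostnames and the fake DNS table are made up for illustration, not real Atlas records; a real driver performs an actual DNS lookup):

```python
# Sketch of how a driver expands an SRV connection string into a seed list.
# FAKE_DNS stands in for a real DNS SRV lookup; hostnames are placeholders.

SRV_PREFIX = "_mongodb._tcp."

# Hypothetical DNS zone: the one SRV record lists every node in BOTH regions,
# which is why there is no separate per-region connection string.
FAKE_DNS = {
    "_mongodb._tcp.cluster0.example.mongodb.net": [
        ("cluster0-shard-00-00.example.mongodb.net", 27017),  # us-east-1
        ("cluster0-shard-00-01.example.mongodb.net", 27017),  # us-east-1
        ("cluster0-shard-00-02.example.mongodb.net", 27017),  # us-west-1
    ],
}

def resolve_srv(uri: str) -> list[tuple[str, int]]:
    """Derive the SRV record name from the URI and return the full host list."""
    assert uri.startswith("mongodb+srv://")
    host = uri.removeprefix("mongodb+srv://").split("/")[0]
    return FAKE_DNS[SRV_PREFIX + host]

seeds = resolve_srv("mongodb+srv://cluster0.example.mongodb.net/test")
print(seeds)  # all three nodes, regardless of which region the client is in
```

The driver then tries to monitor and connect to every host in that seed list, which is why partial peering shows up as a timeout rather than an immediate failure.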

This cluster’s primary region is us-east-1. So, the cluster’s PRIMARY node will most likely be in this region (as it is currently). When your applications in the VPCs that are peered to the us-east-1 Atlas region attempt to connect, they are able to successfully reach the nodes in us-east-1, but will fail to reach the node(s) in us-west-1, as there is no peering connection between the application VPC and the us-west-1 region. However, because the PRIMARY node is in us-east-1, the driver deems the connection a success because it was able to reach the PRIMARY.

Applications in the VPC peered to the us-west-1 region only have access to the nodes in us-west-1; they cannot reach the nodes in us-east-1, because no peering connection exists between their VPC and us-east-1. So when they attempt to connect, they can reach the SECONDARY node(s) in their region but cannot find the PRIMARY node, which causes the connection to fail/time out.
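The two scenarios above can be sketched as a toy server-selection check. This is only an illustrative model (the node names and reachability sets are invented, and real drivers do full topology discovery and monitoring), but it captures why the East VPC succeeds and the West VPC times out under the default `primary` read preference:

```python
# Toy model of driver server selection against a replica set.
# Node names are illustrative; roles mirror the cluster described above.

NODES = {
    "east-node-1": "PRIMARY",    # us-east-1 (primary region)
    "east-node-2": "SECONDARY",  # us-east-1
    "west-node-1": "SECONDARY",  # us-west-1
}

def can_connect(reachable: set[str], read_preference: str = "primary") -> bool:
    """Return True if some reachable node satisfies the read preference."""
    roles = {NODES[name] for name in reachable}
    if read_preference == "primary":
        return "PRIMARY" in roles
    if read_preference == "secondary":
        return "SECONDARY" in roles
    raise ValueError(f"unsupported read preference: {read_preference}")

# VPC peered only to us-east-1: can reach the PRIMARY -> connection succeeds.
print(can_connect({"east-node-1", "east-node-2"}))

# VPC peered only to us-west-1: only a SECONDARY is reachable -> with the
# default read preference of primary, server selection fails (times out).
print(can_connect({"west-node-1"}))
```

Once both VPCs are peered to both Atlas regions, every node is in the reachable set from either side, and the check succeeds everywhere.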
