Atlas deployments in multiple regions must have a peering connection for each Atlas region.
For example: If you have a VPC in Sydney and Atlas deployments in Sydney and Singapore, create two peering connections.
Right now we have a very simple setup: just 3 nodes in us-east-1 with VPC peering set up.
Is this a single VPC peering connection from your prod app/env in us-east-1 to the 3-node Atlas cluster in us-east-1? Or do you also mean to include the backup prod app/env as well? i.e. 2 peering connections to the Atlas cluster, 1 for each environment.
Correct. Right now we just have a Node.js app in a private us-east-1 VPC that connects via peering to an Atlas 3-node cluster that is also in us-east-1.
We want to move to a 2:2:1 Atlas setup: 2 nodes in us-east-1, 2 in us-east-2, and 1 in us-west-2 (five electable nodes, so the replica set keeps a voting majority through any single-region outage).
In the above case, we’d have a hot-standby (active-passive) setup:
Our prod Node.js app stays in us-east-1 (the same setup as we have today), and
we run a duplicate prod Node.js app in us-east-2.
The “backup/disaster recovery” app in us-east-2 would always be running but wouldn’t receive any traffic. Then, if us-east-1 ever went down, we could divert traffic from our us-east-1 app to our us-east-2 app via DNS.
We’re hoping that the 2:2:1 layout means the apps in both regions will already be connected to the same MongoDB Atlas replica set, so nothing needs to be done on the database side during a us-east-1 outage – it can handle connections from our app in us-east-2 or us-east-1 without additional work.
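One way to picture that: both regional apps would be configured with the identical SRV connection string, and the official driver handles member discovery and failover by itself. The hostname, user, and database name below are hypothetical placeholders, not real values:

```shell
# Hypothetical example – same value in us-east-1 and us-east-2.
# The driver resolves this one URI to all five replica set members
# and reroutes automatically during a regional outage.
MONGODB_URI="mongodb+srv://appuser:<password>@cluster0.abcde.mongodb.net/prod?retryWrites=true&w=majority"
```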
In this case, do we need VPC peering with us-west-2 even though we have no application running in a VPC there? I would have thought we’d need just two peerings, for the Node.js apps in us-east-1 and us-east-2.
This was the simplest setup (in terms of DevOps labor) we could come up with that would also keep RPO/RTO under an hour for a regional outage.
I’m assuming your app connects to MongoDB using an official MongoDB driver, but please correct me if I’m wrong here.
With that said, one of the main requirements for an official MongoDB driver is that it can connect directly to each replica set member. The applications on your end in the us-east-1 and us-east-2 regions must be able to connect to all members of the replica set – in this particular case, the members that exist in Atlas VPCs in us-east-1, us-east-2, and us-west-2.
Additionally, the Atlas documentation advises that a peering connection must be made for each region that your cluster is deployed in. Based on this, you will need a peering connection with the us-west-2 VPC where the 1 node exists as well.
I’ve set up 3 peering connections to us-east-1, us-east-2, and us-west-2, but no luck.
I presume you’ve set up the 3 peering connections above to/from your us-east-1 application first, to test before replicating a similar setup for the us-east-2 backup prod app (or vice versa), but correct me if I’m wrong here.
Any docs on how to debug peering connections?
Unfortunately, to my knowledge there isn’t any specific documentation on troubleshooting AWS peering connections. However, can you advise on the following:
If all 3 peering connections show as Available in the Atlas UI
If you deselected the Same as application VPC region box in the Atlas VPC peering modal, where applicable
If you’ve added the application VPC CIDRs / Security Group IDs to the Network Access List
If you receive any error messages when setting up the connections on Atlas
If the application returns any error messages when attempting to connect over the peering connections
The following pages may help troubleshoot the issue: