Hello,
I’m trying to set up VPC peering between my MongoDB Atlas database (a 3-node replica set spanning 3 regions: EU / US / Asia) and my application, which runs in 3 AWS EKS clusters (EU / US / Asia).
I created the first VPC peering between Atlas & AWS Asia (accepted the peering + added the CIDR to the route table).
From a pod running in the Asia cluster, I can run the dig
command on the Atlas Asia replica set hostname, and it resolves to an internal IP ==> ok, it worked.
But I cannot connect to my cluster URL (I get a “failed: Name has no usable address” error).
What is the best way of achieving this setup (i.e., VPC peering between an Atlas multi-region replica set and multi-region app EKS clusters)?
Do I need to create 9 VPC peerings?
Thanks!
What I mean is that I will have to:
For ap-southeast-1 EKS:
- set up VPC peering in Atlas using my app AWS VPC ID from ap-southeast-1 with Atlas VPC region ap-southeast-1
- set up VPC peering in Atlas using my app AWS VPC ID from ap-southeast-1 with Atlas VPC region us-east-1
- set up VPC peering in Atlas using my app AWS VPC ID from ap-southeast-1 with Atlas VPC region eu-central-1
and then repeat for all 3 EKS clusters, correct?
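For clarity, the full mesh described above can be enumerated with a few lines of Python (a minimal sketch — the region names match this setup, but the pairing logic is just illustration):

```python
from itertools import product

# Regions hosting the app-tier EKS VPCs and the Atlas replica set members.
eks_regions = ["ap-southeast-1", "us-east-1", "eu-central-1"]
atlas_regions = ["ap-southeast-1", "us-east-1", "eu-central-1"]

# One peering connection is needed for every (EKS VPC, Atlas regional VPC) pair.
peerings = [(eks, atlas) for eks, atlas in product(eks_regions, atlas_regions)]

for eks, atlas in peerings:
    print(f"peer EKS VPC in {eks} <-> Atlas VPC in {atlas}")

print(len(peerings))  # 9 peering connections in total
```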
Yes, you would want to ensure each app-tier VPC can reach each Atlas-side regional VPC. That is because you are connecting directly to a replica set in MongoDB Atlas, where MongoDB’s drivers use client-side load balancing and discovery to identify which member of the set is the primary and route to it. For completeness, if you’re connecting to a sharded cluster the requirements are simpler: you only need to reach part of the cluster (for example, within the local region), which in turn can route to the rest of the cluster. There are availability risks with this approach unless there is more than one mongos in the region.
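To illustrate why a replica-set connection needs every member reachable: the driver learns the full member list from any seed it can reach, then must connect directly to the member it needs (e.g., the primary). A toy model of that discovery logic (the hostnames and reachability set are made up; real drivers implement the Server Discovery and Monitoring spec):

```python
# Toy model of driver-side discovery. Hostnames are hypothetical.
TOPOLOGY = {  # what any member reports back to the driver
    "members": [
        "cluster0-shard-00-00.mongodb.net",  # ap-southeast-1 (secondary)
        "cluster0-shard-00-01.mongodb.net",  # us-east-1 (PRIMARY)
        "cluster0-shard-00-02.mongodb.net",  # eu-central-1 (secondary)
    ],
    "primary": "cluster0-shard-00-01.mongodb.net",
}

def select_primary(seed, reachable):
    """Return the primary if the client can (a) reach a seed to discover
    the topology and (b) reach the primary itself."""
    if seed not in reachable:
        raise ConnectionError("cannot reach any seed")
    primary = TOPOLOGY["primary"]
    if primary not in reachable:
        # The multi-region failure mode: DNS resolves, discovery works
        # via the local member, but the primary is unroutable.
        raise ConnectionError(f"discovered {primary} but cannot route to it")
    return primary

# With only the local (Asia) member reachable over a single peering:
try:
    select_primary("cluster0-shard-00-00.mongodb.net",
                   reachable={"cluster0-shard-00-00.mongodb.net"})
except ConnectionError as e:
    print("write attempt fails:", e)
```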
You may find that you’ll prefer to use Atlas private endpoints (AWS PrivateLink), which have a few advantages here. First of all, they avoid the need for a large number of peering connections, which carry the baggage of requiring that none of the peered VPCs have overlapping CIDR blocks.
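The overlapping-CIDR constraint can be checked ahead of time with Python’s standard library (the example ranges below are hypothetical):

```python
from ipaddress import ip_network

# Hypothetical VPC CIDRs: app VPCs stamped out from the same template.
vpc_cidrs = {
    "eks-ap-southeast-1":   "10.0.0.0/16",
    "eks-us-east-1":        "10.0.0.0/16",  # identical CIDR -> cannot peer with the same target
    "atlas-ap-southeast-1": "192.168.248.0/21",
}

def overlapping_pairs(cidrs):
    """Return every pair of named networks whose address ranges overlap."""
    names = sorted(cidrs)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if ip_network(cidrs[a]).overlaps(ip_network(cidrs[b]))
    ]

print(overlapping_pairs(vpc_cidrs))
# -> [('eks-ap-southeast-1', 'eks-us-east-1')]: the two app VPCs collide
```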
You may also find that AWS services like Transit Gateway make setting up your multi-region network easier.
Thanks a lot for your answer! I haven’t posted an update on my current situation, but somehow you managed to understand my current issue:
- I created VPC peering between 1 EKS region (ap) and the 3 Atlas regions of my cluster. Worked fine, perfect!
- I wanted to create the 2nd VPC peering with another EKS region, but I couldn’t, as ALL my app VPCs use the same CIDR.
Soooo … it seems that I will have to look into this PrivateLink solution to get around the CIDR limitation.
How should I set it up? Any existing doc?
Should I create 1 private endpoint per region?
Does it mean that each EKS cluster (1 in each region) will have a different connection URL?
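i.e., if I understand correctly, each EKS cluster would end up with something like its own region-specific SRV URI (hostnames below are purely made up, just to show what I mean):

```
# ap-southeast-1 cluster:
mongodb+srv://cluster0-pl-0.ab1cd.mongodb.net/
# us-east-1 cluster:
mongodb+srv://cluster0-pl-1.ab1cd.mongodb.net/
# eu-central-1 cluster:
mongodb+srv://cluster0-pl-2.ab1cd.mongodb.net/
```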
Thanks!
After testing it out, I have to set up 1 private endpoint per region of my cluster (so 3 private endpoints total) AND an AWS site-to-site VPN, based on this: https://www.mongodb.com/docs/atlas/security-private-endpoint/?_ga=2.78915092.1245902753.1708940147-670633815.1705595053#limitations
Any better solution to avoid IP whitelisting?
An alternative to site-to-site VPN is to peer your app VPCs in region_a|b|c to each other, so that an app client in region_a can leverage the transitive nature of PrivateLink over a peering connection to access Atlas in region_b or region_c. With peering, you could avoid IP allowlisting between your VPCs.