Serverless instance connection issue with C# driver through AWS VPC

Hi all,

I’m running into some trouble with an AWS Lambda function written in C# that attempts to connect to a MongoDB serverless instance and perform a basic CRUD operation.

My Lambda function uses the latest MongoDB C# driver (2.19.1) and the target framework/runtime is .NET 6.

I am able to connect to the serverless instance and successfully insert/delete when I allow connections from all IP addresses (0.0.0.0/0), so I don’t believe the code is the problem. I’ve followed all the instructions for creating a private endpoint with AWS PrivateLink, both in the Network Access tab in Atlas and on the AWS side (the VPC configuration of the Lambda function itself). Both the endpoint and endpoint service statuses show as Available in Atlas, and the AWS VPC endpoint status is also Available. However, when I remove the 0.0.0.0/0 entry from the IP access list, my function times out when attempting to connect to the serverless instance.
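For context, the connection logic is roughly the following (a simplified sketch with the Lambda handler scaffolding omitted; the connection string and database/collection names are placeholders, not my actual values):

```csharp
using System;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public class MongoSmokeTest
{
    // Placeholder URI - in the real function this is the mongodb+srv:// string copied from Atlas.
    private const string ConnectionString =
        "mongodb+srv://<user>:<password>@<serverless-instance>.example.mongodb.net/?retryWrites=true&w=majority";

    public static async Task RunAsync()
    {
        var client = new MongoClient(ConnectionString);
        var collection = client.GetDatabase("test").GetCollection<BsonDocument>("items");

        // Basic CRUD round trip: insert a document, then delete it again.
        var doc = new BsonDocument { { "createdAt", DateTime.UtcNow } };
        await collection.InsertOneAsync(doc);   // this is where it times out once 0.0.0.0/0 is removed
        await collection.DeleteOneAsync(new BsonDocument("_id", doc["_id"]));
    }
}
```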

After tweaking countless AWS settings with no success, I tried replacing the C# Lambda with a Node.js script. I didn’t change any of the VPC settings or connection strings, and the updated Lambda ran successfully using the private endpoint.

Has anyone experienced something similar, or had success using C# to connect to a serverless instance via a private endpoint? I’m wondering if there may be a driver issue specific to VPC connections, since I know the C# code works and the private endpoint configuration works when using Node.js.

Thanks in advance!

Hi, @Cameron_McNair,

I am not aware of any issues with connecting to Atlas via a VPC using the .NET/C# Driver. When you connect to a private endpoint, the driver performs an SRV lookup to determine the FQDNs of the cluster members. Those FQDNs are then used to create SslStream instances to connect to the individual cluster members. Internally, .NET performs a DNS lookup to resolve the FQDNs to A records. Those A records will be the private IP addresses of the cluster members. This is the same process that all drivers use to resolve mongodb+srv:// connection strings to a list of IP addresses for cluster members, including the Node.js Driver. So it is not immediately obvious why you can connect using the Node.js Driver but not the .NET/C# Driver with the same network settings.
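In the meantime, one thing you could try from inside the Lambda/VPC is resolving one of the cluster member FQDNs yourself to confirm that you get private IP addresses back. A rough sketch (the hostname is a placeholder for one of the FQDNs returned by the SRV lookup on your mongodb+srv:// host):

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

public static class DnsCheck
{
    public static async Task Main()
    {
        // Placeholder: substitute one of the cluster member FQDNs from the SRV lookup.
        const string host = "<cluster-member-fqdn>.mongodb.net";

        // With PrivateLink configured, these A records should resolve to private
        // addresses from inside the VPC rather than public IPs.
        IPAddress[] addresses = await Dns.GetHostAddressesAsync(host);
        foreach (var address in addresses)
        {
            Console.WriteLine(address);
        }
    }
}
```

If those come back as public addresses (or the lookup fails), that would point to DNS/VPC configuration rather than the driver.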

Please provide the complete error message with stack trace (removing any usernames/passwords and other sensitive data) so that we can investigate further.

Sincerely,
James

Thanks @James_Kovacs for the quick response. I was finally able to figure out the problem and, unsurprisingly, it was user error. I didn’t realize that the connection strings for private endpoints are slightly different from the connection strings for standard connections. I had written and tested my C# code before I set up the private endpoint, and it didn’t cross my mind that the connection string would be different. Then, when I wrote the Node.js script, I copied the correct connection string from Atlas and didn’t notice the difference.

Glad that you resolved your issue.

A short explanation of why we use different FQDNs for public versus PrivateLink connections. When PrivateLink was first deployed, I recall that we used the same connection string for both public and private connections and relied on the split-horizon DNS capabilities of Route 53 to resolve public versus private IP addresses. In theory this works great. In practice, due to DNS caching, it was problematic. If you were initially connected via the public network, your DNS stack would cache the public IPs. When you then enabled PrivateLink, you would keep using those cached public IPs until the TTL expired. Our solution was to differentiate PrivateLink connection strings by adding -pri to the FQDN, thus creating two sets of FQDNs that would be cached independently and correctly.
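Purely for illustration (made-up hostnames), the two connection strings end up along these lines, so the public and PrivateLink FQDNs are cached as separate DNS entries:

```
# Standard (public) connection string - illustrative hostname only
mongodb+srv://user:pass@cluster0.ab1cd.mongodb.net/?retryWrites=true&w=majority

# Private endpoint connection string - note the -pri in the FQDN (illustrative)
mongodb+srv://user:pass@cluster0-pri.ab1cd.mongodb.net/?retryWrites=true&w=majority
```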