New to this. Can't connect to Atlas from GCP

Hi all

This is completely new territory for me. I’ve set up an Atlas M10 instance, established VPC peering between Atlas and my GCP custom network, and created a Kubernetes cluster in that same custom network.

Now, I create a busybox pod in my cluster, and launch nslookup against my Atlas cluster name. It says it can’t resolve the name. Am I missing something? If it can’t resolve the FQDN, how will my applications even be able to connect using the connection string generated in the Atlas GUI?
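For reference, this is roughly how I am testing from inside the cluster (the pod name dnstest is just something I picked; the hostname is my Atlas cluster address):

kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup tyk-mongodb-pri.8sjy7.mongodb.net
# resolving the plain cluster name fails for me, even though DNS is otherwise working in the pod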

Please help.

By the way, SRV records seem to work fine though - I am able to resolve them via DNS. I must be doing something royally stupid, but I don’t know what it is. :frowning:

Hi @Jesum_Yip,

Welcome to the community! Thanks for contributing.

Glad to hear that using the SRV record works.

Now, I create a busybox pod in my cluster, and launch nslookup against my Atlas cluster name. It says it can’t resolve the name.

Are you able to provide the nslookup command being used as well as the full output?

Look forward to hearing from you.

Kind Regards,
Jason


I am new to this and spent the last few hours trying and learning, but I think I understand it better now.

When I create a cluster, I should connect to it using the SRV record as this is the cluster’s name. The individual shard names are the nodes in the cluster. The SRV record has a reference to these shard names. By specifying mongodb+srv:// in the connection string, I am telling the driver to please use the SRV record. Hence, the URI that comes after that is the SRV record. Is my understanding correct?


The image shows I am able to connect to it after I deployed a pod with mongodb-clients in it. (Don’t worry, I already changed the password.)
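In case the screenshot doesn’t come through, the connection test was roughly this (run from that pod; the username is a placeholder and the shell prompts for the password):

mongo "mongodb+srv://tyk-mongodb-pri.8sjy7.mongodb.net/test" --username <username>
# the +srv scheme makes the shell look up the SRV record and connect to the node hostnames it returns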

Here is the nslookup output

/ # nslookup -debug -type=SRV _mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net
Server:         10.9.80.10
Address:        10.9.80.10:53

Query #0 completed in 10ms:
Non-authoritative answer:
_mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net service = 0 0 27017 tyk-mongodb-shard-00-00-pri.8sjy7.mongodb.net
_mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net service = 0 0 27017 tyk-mongodb-shard-00-01-pri.8sjy7.mongodb.net
_mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net service = 0 0 27017 tyk-mongodb-shard-00-02-pri.8sjy7.mongodb.net

So by connecting this way, using the private connection (you can see the -pri in the names above), all my connectivity is flowing via the VPC peering that I have established. Is this correct? That would mean less risk of data being sniffed across the wire.
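As a quick sanity check, I can also resolve one of the -pri node hostnames and confirm it points at a private address, which should only be reachable over the peering:

nslookup tyk-mongodb-shard-00-00-pri.8sjy7.mongodb.net
# expecting a private (e.g. 10.x.x.x) address here rather than a public one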

So I think the problem I am now facing is with the helm chart of an app developed using Go. This app is called Tyk (it’s an API gateway).

In the YAML file, I have specified the value of the MongoDB connection string exactly as in the screenshot I provided above (mongodb+srv://…).

And during the installation of the components referenced by the helm chart, I am seeing connectivity failures to MongoDB. My manual tests connecting to the Atlas instance were done from a pod deployed in the same K8s cluster, so I think networking and IP address whitelisting are sorted out. This means my next step is to ask Tyk why this is failing and how to troubleshoot it further.
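To rule out the connection string itself before going to Tyk, the exact URI from the chart values can be exercised from a throwaway pod. A rough sketch (any image that ships the mongo shell would do; username, password and database name are placeholders taken from my setup):

kubectl run -it --rm mongotest --image=mongo:4.4 --restart=Never -- \
  mongo "mongodb+srv://username:password@tyk-mongodb-pri.8sjy7.mongodb.net/tyk-dashboard?authSource=admin" \
  --eval 'db.runCommand({ ping: 1 })'
# if this ping succeeds, the cluster, DNS and peering are fine and the failure is on the Tyk side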

Hi @Jesum_Yip,

Thanks for getting back to me with that information and the nslookup output.

The SRV record has a reference to these shard names. By specifying mongodb+srv:// in the connection string, I am telling the driver to please use the SRV record. Hence, the URI that comes after that is the SRV record. Is my understanding correct?

Yes, your understanding here is correct. However, the shard names you are referencing are specific to Sharded Clusters. The SRV record references the hostnames of the nodes within your cluster. Since you’ve mentioned this is an M10 cluster, I would assume that this is a standard replica set and not a sharded cluster.

So by connecting this way, using the private connection (you can see the -pri in the names above), all my connectivity is flowing via the VPC peering that I have established. Is this correct?

Yes, this is also correct.

And during the installation of the components referenced by the helm chart, I am seeing connectivity failures to MongoDB. My manual tests connecting to the Atlas instance were done from a pod deployed in the same K8s cluster, so I think networking and IP address whitelisting are sorted out.

It does sound like there are no network, Atlas configuration, or cluster issues from your description at this stage. However, to troubleshoot this further, would you be able to provide the full connectivity failure errors you’re receiving?

Kind Regards,
Jason


All I see in the pod logs is an infinite loop trying to connect to MongoDB. I don’t see the reason for the failure. Let me speak with a Tyk representative to see how I can get more detailed error logs.


Thanks for the update @Jesum_Yip. Please update here if you find a resolution from the Tyk representatives, so that future users may also be able to implement the same fix.


I finally got it working. Looks like you are right - the driver doesn’t understand the +SRV keyword in the URI.

I finally had to list all the individual node hostnames instead. I also couldn’t use ssl=true in the URI - Tyk didn’t like it. Instead, I had to set the helm chart useSSL value to true.

This is the final URI I used (I am quite sure 27017 is not required because https://docs.mongodb.com/manual/reference/connection-string/ says that it will default to 27017 if no port is specified).

mongodb://username:password@tyk-mongodb-shard-00-00-pri.8sjy7.mongodb.net:27017,tyk-mongodb-shard-00-01-pri.8sjy7.mongodb.net:27017,tyk-mongodb-shard-00-02-pri.8sjy7.mongodb.net:27017/tyk-dashboard?authSource=admin
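A note for anyone reusing this standard-format URI for a quick manual test: without +srv, TLS is not turned on automatically, so the shell needs ssl=true appended (Tyk gets the same effect from the useSSL chart value mentioned above). Something like:

mongo "mongodb://username:password@tyk-mongodb-shard-00-00-pri.8sjy7.mongodb.net:27017,tyk-mongodb-shard-00-01-pri.8sjy7.mongodb.net:27017,tyk-mongodb-shard-00-02-pri.8sjy7.mongodb.net:27017/tyk-dashboard?authSource=admin&ssl=true" --eval 'db.runCommand({ ping: 1 })'
# ssl=true (or tls=true) is required here because Atlas only accepts TLS connections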

I also did a double check to ensure the connection was not going through the public internet - I had a look at the database access history and I can see the incoming connections are from a 10.x.x.x private subnet range.

Thank you!


I finally got it working.

Glad to hear @Jesum_Yip! Thanks for the update.

As an additional note, while connecting without SRV works, all official MongoDB drivers compatible with MongoDB server v3.6+ should support the SRV connection URI. The real issue could be related to the network configuration of the deployment environment.
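If you do want to revisit the SRV URI later, one thing worth checking from the deployment environment is that both lookups a driver performs for a +srv URI succeed - the SRV record for the host list and the TXT record for the default connection options:

nslookup -type=SRV _mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net
nslookup -type=TXT tyk-mongodb-pri.8sjy7.mongodb.net
# if either query is blocked by the cluster's DNS, a +srv connection can fail even though the nodes themselves are reachable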

Kind Regards,
Jason
