This version of the documentation is archived and no longer supported.

Connect to a MongoDB Database Resource from Outside Kubernetes

The following procedure describes how to connect to a MongoDB resource deployed in Kubernetes from outside of the Kubernetes cluster.

Prerequisite

Compatible MongoDB Versions

For your databases to be accessed outside of Kubernetes, they must run MongoDB 4.2.3 or later.

Procedure

How you connect from outside of the Kubernetes cluster to a MongoDB resource deployed by the Kubernetes Operator depends on the resource type.

This procedure uses the following example:

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-standalone>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
            # Must match metadata.name in ConfigMap file
  credentials: <mycredentials>
  type: Standalone
  persistent: true
  exposedExternally: true
...

To connect to your Kubernetes Operator-deployed MongoDB standalone resource from outside of the Kubernetes cluster:

1. Open your standalone resource YAML file.

2. Copy the sample standalone resource.

Change the settings of this YAML file to match your desired standalone configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-standalone>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
            # Must match metadata.name in ConfigMap file
  credentials: <mycredentials>
  type: Standalone
  persistent: true
  exposedExternally: true
...
3. Paste the copied example section into your existing standalone resource.

Open your preferred text editor and paste the object specification at the end of your resource file in the spec section.

4. Change the following settings to your preferred values.

Key: spec.exposedExternally
Type: Boolean
Necessity: Optional
Description: Set this value to true to allow external services to connect to the MongoDB deployment. This results in Kubernetes creating a NodePort service.
Example: true
5. Save your standalone config file.

6. Update and restart your standalone deployment.

In any directory, invoke the following Kubernetes command to update and restart your standalone:

kubectl apply -f <standalone-conf>.yaml
7. Discover the dynamically assigned NodePort:

kubectl get services -n <metadata.namespace>

The list output should contain an entry similar to the following:

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE

<my-standalone>           NodePort    10.102.27.116   <none>        27017:30994/TCP   8m30s
  • Kubernetes exposes mongod on port 27017 within the Kubernetes container.
  • The NodePort service exposes the mongod via port 30994. NodePorts range from 30000 to 32767, inclusive.
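The PORT(S) column encodes the mapping as <container-port>:<node-port>/<protocol>. As a sketch, you can extract the node port from such a value with plain shell string operations (the value 27017:30994/TCP is taken from the example output above):

```shell
# Parse the node port out of a Kubernetes PORT(S) value such as "27017:30994/TCP".
ports="27017:30994/TCP"       # example value from the service listing above
node_port="${ports#*:}"       # strip the container port -> "30994/TCP"
node_port="${node_port%/*}"   # strip the protocol       -> "30994"
echo "$node_port"             # prints 30994
```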
8. Test the connection to the standalone.

To connect to your deployment from outside of the Kubernetes cluster, run the mongosh command with the external FQDN of a node as the value of the --host option.

Example

If a node in the Kubernetes cluster has an external FQDN of ec2-54-212-23-143.us-west-2.compute.amazonaws.com, you can connect to this standalone instance from outside of the Kubernetes cluster using the following command:

mongosh --host ec2-54-212-23-143.us-west-2.compute.amazonaws.com \
  --port 30994

Tip

To obtain the external DNS of your Kubernetes cluster, you can run the following command:

kubectl describe nodes

This command displays the external DNS in the Addresses.ExternalDNS section of the output.

Alternatively, you can output the external DNS directly by running:

kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalDNS")].address }'

Important

This procedure describes the simplest way to enable external connectivity. Production deployments typically use other utilities.

To connect to your Kubernetes Operator-deployed MongoDB replica set resource from outside of the Kubernetes cluster:

1. Deploy a replica set with the Kubernetes Operator.

If you haven’t deployed a replica set, follow the instructions to deploy one.

You must enable TLS for the replica set by providing a value for the spec.security.certsSecretPrefix setting. The replica set must use a custom CA certificate stored with spec.security.tls.ca.

2. Add Subject Alternative Names to your TLS certificates.

Add each external DNS name to the certificate SAN.
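To confirm that a certificate carries the external DNS names, you can inspect its SAN extension with openssl. The sketch below generates a throwaway self-signed certificate purely for illustration (the hostnames are hypothetical, and OpenSSL 1.1.1+ is assumed for -addext), then prints its SAN list; run the same x509 check against your real member certificates:

```shell
# Generate a disposable self-signed cert with external DNS names in the SAN
# (illustration only; inspect your real member certificates in practice).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 \
  -subj "/CN=web1.example.com" \
  -addext "subjectAltName=DNS:web1.example.com,DNS:web2.example.com,DNS:web3.example.com" \
  2>/dev/null

# Print the SAN extension; each external DNS name must appear here.
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```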

3. Create a NodePort for each Pod.

Invoke the following commands to create the NodePorts:

kubectl expose pod/<my-replica-set>-0 --type="NodePort" --port 27017
kubectl expose pod/<my-replica-set>-1 --type="NodePort" --port 27017
kubectl expose pod/<my-replica-set>-2 --type="NodePort" --port 27017
4. Discover the dynamically assigned NodePorts:

$ kubectl get svc | grep <my-replica-set>
<my-replica-set>-0     NodePort   172.30.39.228   <none>  27017:30907/TCP  16m
<my-replica-set>-1     NodePort   172.30.185.136  <none>  27017:32350/TCP  16m
<my-replica-set>-2     NodePort   172.30.84.192   <none>  27017:31185/TCP  17m
<my-replica-set>-svc   ClusterIP  None            <none>  27017/TCP        38m

NodePorts range from 30000 to 32767, inclusive.
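Each discovered NodePort must end up in the matching spec.connectivity.replicaSetHorizons entry in a later step. As a sketch, you can generate the horizon entries from the discovered ports (the hostnames web1–web3.example.com and the ports are the example values used in this procedure; substitute your own):

```shell
# Generate replicaSetHorizons YAML entries from the discovered NodePorts.
# Hostnames and ports below are the example values; substitute your own.
hosts=(web1.example.com web2.example.com web3.example.com)
ports=(30907 32350 31185)
for i in 0 1 2; do
  echo "      - \"example-website\": \"${hosts[$i]}:${ports[$i]}\""
done
```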

5. Open your replica set resource YAML file.

6. Copy the sample replica set resource.

Change the settings of this YAML file to match your desired replica set configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.2.2-ent"
  type: ReplicaSet
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
  credentials: <mycredentials>
  persistent: true
  security:
    tls:
      enabled: true
  connectivity:
    replicaSetHorizons:
      - "example-website": "web1.example.com:30907"
      - "example-website": "web2.example.com:32350"
      - "example-website": "web3.example.com:31185"
...
7. Paste the copied example section into your existing replica set resource.

Open your preferred text editor and paste the object specification at the end of your resource file in the spec section.

8. Change the following settings to your preferred values.

Key: spec.connectivity.replicaSetHorizons
Type: collection
Necessity: Conditional
Description: Add this parameter and values if you need your database to be accessed outside of Kubernetes. This setting allows you to provide different DNS settings within the Kubernetes cluster and from outside the Kubernetes cluster. The Kubernetes Operator uses split horizon DNS for replica set members. This feature allows communication both within the Kubernetes cluster and from outside Kubernetes.

You may add multiple external mappings per host.

Split Horizon Requirements

  • Make sure that each value in this array is unique.
  • Make sure that the number of entries in this array matches the value given in spec.members.
  • Provide a value for the spec.security.certsSecretPrefix setting to enable TLS. This method to use split horizons requires the Server Name Indication extension of the TLS protocol.

Key: spec.security.certsSecretPrefix
Type: string
Necessity: Required
Description: Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates.
Example: devDb
9. Confirm the external hostnames and NodePort values in your replica set resource.

Confirm that the external hostnames in the spec.connectivity.replicaSetHorizons setting are correct.

External hostnames should match the DNS names of Kubernetes worker nodes. These can be any nodes in the Kubernetes cluster; if a Pod runs on a different node, Kubernetes routes the traffic to it internally.

Set the ports in spec.connectivity.replicaSetHorizons to the NodePort values that you discovered.

Example

  security:
    tls:
      enabled: true
  connectivity:
    replicaSetHorizons:
      - "example-website": "web1.example.com:30907"
      - "example-website": "web2.example.com:32350"
      - "example-website": "web3.example.com:31185"
...
10. Save your replica set config file.

11. Update and restart your replica set deployment.

In any directory, invoke the following Kubernetes command to update and restart your replica set:

kubectl apply -f <replica-set-conf>.yaml
12. Test the connection to the replica set.

In the development environment, for each host in a replica set, run the following command:

mongosh --host <my-replica-set>/web1.example.com \
  --port 30907 \
  --tls \
  --tlsAllowInvalidCertificates

Note

Don't use the --tlsAllowInvalidCertificates flag in production.

In production, for each host in a replica set, specify the TLS certificate and the CA to securely connect to client tools or applications:

mongosh --host <my-replica-set>/web1.example.com \
  --port 30907 \
  --tls \
  --tlsCertificateKeyFile server.pem \
  --tlsCAFile ca.pem

If the connection succeeds, you should see:

Enterprise <my-replica-set> [primary]
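Drivers can express the same production connection as a single seed-list connection string. A sketch assuming the example hostnames and NodePorts from this procedure (replace <my-replica-set> with your replica set name):

```shell
# Build a seed-list connection string from the external horizon addresses.
# Hostnames and ports are the example values from this procedure.
uri="mongodb://web1.example.com:30907,web2.example.com:32350,web3.example.com:31185/?replicaSet=<my-replica-set>&tls=true"
echo "$uri"
```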

To connect to your Kubernetes Operator-deployed MongoDB replica set resource from outside of the Kubernetes cluster with OpenShift:

1. Deploy a replica set with the Kubernetes Operator.

If you haven’t deployed a replica set, follow the instructions to deploy one.

You must enable TLS for the replica set by providing a value for the spec.security.certsSecretPrefix setting. The replica set must use a custom CA certificate stored with spec.security.tls.ca.

2. Configure services to ensure connectivity.

  1. Paste the following example services into a text editor:

    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: my-external-0
    spec:
      ports:
        - name: mongodb
          protocol: TCP
          port: 443
          targetPort: 27017
      selector:
        statefulset.kubernetes.io/pod-name: my-external-0
    
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: my-external-1
    spec:
      ports:
        - name: mongodb
          protocol: TCP
          port: 443
          targetPort: 27017
      selector:
        statefulset.kubernetes.io/pod-name: my-external-1
    
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: my-external-2
    spec:
      ports:
        - name: mongodb
          protocol: TCP
          port: 443
          targetPort: 27017
      selector:
        statefulset.kubernetes.io/pod-name: my-external-2
    
    ...
    

    Note

    If the spec.selector has entries that target headless services or applications, OpenShift may create a software firewall rule that explicitly drops connectivity. Review the selectors carefully and consider targeting the StatefulSet Pod members directly, as in the example. Routes in OpenShift offer only port 80 or port 443; this example service uses port 443.

  2. Change the settings to your preferred values.

  3. Save this file with a .yaml file extension.

  4. To create the services, invoke the following kubectl command on the services file you created:

    kubectl apply -f <my-external-services>.yaml
    
3. Configure routes to ensure TLS termination passthrough.

  1. Paste the following example routes into a text editor:

    ---
    apiVersion: v1
    kind: Route
    metadata:
      name: my-external-0
    spec:
      host: my-external-0.{redacted}
      to:
        kind: Service
        name: my-external-0
      tls:
        termination: passthrough
    ---
    apiVersion: v1
    kind: Route
    metadata:
      name: my-external-1
    spec:
      host: my-external-1.{redacted}
      to:
        kind: Service
        name: my-external-1
      tls:
        termination: passthrough
    ---
    apiVersion: v1
    kind: Route
    metadata:
      name: my-external-2
    spec:
      host: my-external-2.{redacted}
      to:
        kind: Service
        name: my-external-2
      tls:
        termination: passthrough
    
    ...
     
    

    Note

    You must set TLS termination to passthrough so that the TLS SNI negotiation reaches mongod. mongod needs the SNI value to respond with the correct horizon replica set topology for the drivers to use.

  2. Change the settings to your preferred values.

  3. Save this file with a .yaml file extension.

  4. To create the routes, invoke the following kubectl command on the routes file you created:

    kubectl apply -f <my-external-routes>.yaml
    
4. Add Subject Alternative Names to your TLS certificates.

Add each external DNS name to the certificate SAN.

5. Open your replica set resource YAML file.

6. Configure your replica set resource YAML file.

Use the following example to edit your replica set resource YAML file:

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-external
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 3
  version: 4.2.2-ent
  opsManager:
    configMapRef:
      name: {redacted}
  credentials: {redacted}
  persistent: false
  security:
    tls:
      # TLS must be enabled to allow external connectivity
      enabled: true
    authentication:
      enabled: true
      modes: ["SCRAM","X509"]
  connectivity:
    # The "localhost" routes are included to enable the creation of localhost
    # TLS SAN in the CSR, per OpenShift route requirements.
    # "ocroute" is the configured route in OpenShift.
    replicaSetHorizons:
      - "ocroute": "my-external-0.{redacted}:443"
        "localhost": "localhost:27017"
      - "ocroute": "my-external-1.{redacted}:443"
        "localhost": "localhost:27018"
      - "ocroute": "my-external-2.{redacted}:443"
        "localhost": "localhost:27019"

...

Note

OpenShift clusters require localhost horizons if you intend to use the Kubernetes Operator to create each CSR. If you manually create your TLS certificates, ensure you include localhost in the SAN list.

7. Change the following settings to your preferred values.

Key: spec.connectivity.replicaSetHorizons
Type: collection
Necessity: Conditional
Description: Add this parameter and values if you need your database to be accessed outside of Kubernetes. This setting allows you to provide different DNS settings within the Kubernetes cluster and from outside the Kubernetes cluster. The Kubernetes Operator uses split horizon DNS for replica set members. This feature allows communication both within the Kubernetes cluster and from outside Kubernetes.

You may add multiple external mappings per host.

Split Horizon Requirements

  • Make sure that each value in this array is unique.
  • Make sure that the number of entries in this array matches the value given in spec.members.
  • Provide a value for the spec.security.certsSecretPrefix setting to enable TLS. This method to use split horizons requires the Server Name Indication extension of the TLS protocol.

Key: spec.security.certsSecretPrefix
Type: string
Necessity: Required
Description: Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates.
Example: devDb
8. Save your replica set config file.

9. Create the necessary TLS certificates and Kubernetes secrets.

Configure TLS for your replica set. Create one secret for the MongoDB replica set and one for the certificate authority. The Kubernetes Operator uses these secrets to place the TLS files in the pods for MongoDB to use.

10. Update and restart your replica set deployment.

In any directory, invoke the following Kubernetes command to update and restart your replica set:

kubectl apply -f <replica-set-conf>.yaml
11. Test the connection to the replica set.

The Kubernetes Operator should deploy the MongoDB replica set, configured with the horizon routes created for ingress. After the deployment completes, you can connect through the horizon routes using TLS. If the certificate authority is not present on your workstation, you can view and copy it from a MongoDB pod using the following command:

oc exec -it my-external-0 -- cat /mongodb-automation/ca.pem

To test the connections, run the following command:

Note

In the following example, for each member of the replica set, use your replica set names and replace {redacted} with the domain that you manage.

mongosh --host my-external/my-external-0.{redacted} \
  --port 443 \
  --tls \
  --tlsAllowInvalidCertificates

Warning

Don’t use the --tlsAllowInvalidCertificates flag in production.

In production, for each host in a replica set, specify the TLS certificate and the CA to securely connect to client tools or applications:

mongosh --host my-external/my-external-0.{redacted} \
  --port 443 \
  --tls \
  --tlsCertificateKeyFile server.pem \
  --tlsCAFile ca.pem

If the connection succeeds, you should see:

Enterprise <my-replica-set> [primary]

For this procedure, you must deploy a TLS-enabled sharded MongoDB cluster in the Kubernetes Operator. Provide the external DNS names (SANs) for each member of the MongoDB sharded cluster.

The SAN for each MongoDB host corresponds to:

<mdb-resource-name>-<shard-index>-<pod-index>.<external-domain>
<mdb-resource-name>-config-<pod-index>.<external-domain>
<mdb-resource-name>-mongos-<pod-index>.<external-domain>

Each TLS certificate must include, as a SAN, the FQDN that the host uses outside of the Kubernetes cluster.
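As an illustration of the resulting SAN list, the sketch below enumerates the external FQDNs for the example resource (shardCount: 2, mongodsPerShardCount: 3, configServerCount: 3, mongosCount: 2). The hyphen-separated naming of the form <resource>-<shard-index>-<pod-index> and the domain example.com are assumptions; adjust both for your deployment:

```shell
# Enumerate the external SANs for each sharded cluster member.
# Naming scheme and domain here are assumptions; adjust for your deployment.
resource="my-sharded-cluster"
domain="example.com"
for shard in 0 1; do                 # shardCount: 2
  for pod in 0 1 2; do               # mongodsPerShardCount: 3
    echo "${resource}-${shard}-${pod}.${domain}"
  done
done
for pod in 0 1 2; do                 # configServerCount: 3
  echo "${resource}-config-${pod}.${domain}"
done
for pod in 0 1; do                   # mongosCount: 2
  echo "${resource}-mongos-${pod}.${domain}"
done
```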

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-sharded-cluster>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
            # Must match metadata.name in ConfigMap file
  shardCount: 2
  mongodsPerShardCount: 3
  mongosCount: 2
  configServerCount: 3
  credentials: my-secret
  type: ShardedCluster
  persistent: true
  exposedExternally: true
  security:
    tls:
      enabled: true
      additionalCertificateDomains:
        - "additional-cert-test.com"
...

To connect to your Kubernetes Operator-deployed MongoDB sharded cluster resource from outside of the Kubernetes cluster:

1. Open your sharded cluster resource YAML file.

2. Copy the sample sharded cluster resource.

Change the settings of this YAML file to match your desired sharded cluster configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-sharded-cluster>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
            # Must match metadata.name in ConfigMap file
  shardCount: 2
  mongodsPerShardCount: 3
  mongosCount: 2
  configServerCount: 3
  credentials: my-secret
  type: ShardedCluster
  persistent: true
  exposedExternally: true
  security:
    tls:
      enabled: true
      additionalCertificateDomains:
        - "additional-cert-test.com"
...
3. Paste the copied example section into your existing sharded cluster resource.

Open your preferred text editor and paste the object specification at the end of your resource file in the spec section.

4. Change the following settings to your preferred values.

Key: spec.exposedExternally
Type: Boolean
Necessity: Optional
Description: Set this value to true to allow external services to connect to the MongoDB deployment. This results in Kubernetes creating a NodePort service.
Example: true

Key: spec.security.tls.additionalCertificateDomains
Type: collection
Necessity: Optional
Description: List of every domain that should be added to the TLS certificates for each pod in this deployment. When you set this parameter, every CSR that the Kubernetes Operator transforms into a TLS certificate includes a SAN in the form <pod name>.<additional cert domain>.
Example: additional-cert-test.com

Key: spec.security.certsSecretPrefix
Type: string
Necessity: Required
Description: Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates.
Example: devDb
5. Save your sharded cluster config file.

6. Update and restart your sharded cluster deployment.

In any directory, invoke the following Kubernetes command to update and restart your sharded cluster:

kubectl apply -f <sharded-cluster-conf>.yaml
7. Discover the dynamically assigned NodePort:

kubectl get services -n <metadata.namespace>

The list output should contain an entry similar to the following:

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE

<my-sharded-cluster>      NodePort    10.106.44.30    <none>        27017:30078/TCP   10s
  • Kubernetes exposes mongod on port 27017 within the Kubernetes container.
  • The NodePort service exposes the mongod via port 30078. NodePorts range from 30000 to 32767, inclusive.
8. Test the connection to the sharded cluster.

To connect to your deployment from outside of the Kubernetes cluster, run the mongosh command with the external FQDN of a node as the value of the --host option.

Example

If a node in the Kubernetes cluster has an external FQDN of ec2-54-212-23-143.us-west-2.compute.amazonaws.com, you can connect to this sharded cluster instance from outside of the Kubernetes cluster using the following command:

mongosh --host ec2-54-212-23-143.us-west-2.compute.amazonaws.com \
  --port 30078

Tip

To obtain the external DNS of your Kubernetes cluster, you can run the following command:

kubectl describe nodes

This command displays the external DNS in the Addresses.ExternalDNS section of the output.

Alternatively, you can output the external DNS directly by running:

kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalDNS")].address }'