Multi-Kubernetes cluster MongoDB deployments allow you to add MongoDB instances in global clusters that span multiple geographic regions for increased availability and global distribution of data.
Prerequisites
Before you begin this procedure, complete the following actions:
Install kubectl.
Install mongosh.
Complete the GKE Clusters procedure or the equivalent.
Complete the TLS Certificates procedure or the equivalent.
Complete the Istio Service mesh procedure or the equivalent.
Complete the Deploy the MongoDB Operator procedure.
Complete the Multi-Cluster Ops Manager procedure. You can skip this step if you use Cloud Manager instead of Ops Manager.
Set the required environment variables as follows:
# This script builds on top of the environment configured in the setup guides.
# It depends on the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here:
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${MDB_NAMESPACE}

export RS_RESOURCE_NAME=mdb
export MONGODB_VERSION="8.0.5-ent"
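The steps that follow assume these variables are present in your shell. A minimal pre-flight check, assuming bash (the check_env helper name is my own, not part of the setup guides):

```shell
#!/bin/bash
# check_env: fail fast if any environment variable the tutorial
# depends on is unset or empty, before running any kubectl steps.
check_env() {
  local var
  for var in K8S_CLUSTER_0_CONTEXT_NAME K8S_CLUSTER_1_CONTEXT_NAME \
             K8S_CLUSTER_2_CONTEXT_NAME MDB_NAMESPACE \
             RS_RESOURCE_NAME MONGODB_VERSION; do
    if [ -z "${!var:-}" ]; then   # bash indirect expansion
      echo "Missing required environment variable: ${var}" >&2
      return 1
    fi
  done
  echo "All required environment variables are set."
}
```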
Source Code
You can find all included source code in the MongoDB Kubernetes Operator repository.
Procedure
Create a CA certificate.
Run the following command to create the required CA certificate with your certificate issuer.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-cert
  usages:
    - server auth
    - client auth
EOF
Deploy the MongoDBMultiCluster resource.
Set spec.credentials and spec.opsManager.configMapRef.name, which you defined in the Multi-Cluster Ops Manager procedure, then define your security settings and deploy the MongoDBMultiCluster resource. In the following code sample, duplicateServiceObjects is set to false to enable DNS proxying in Istio.
Note
To enable cross-cluster DNS resolution by the Istio service mesh, this tutorial creates service objects with a single ClusterIP address for each Kubernetes Pod.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBMultiCluster
metadata:
  name: ${RS_RESOURCE_NAME}
spec:
  type: ReplicaSet
  version: ${MONGODB_VERSION}
  opsManager:
    configMapRef:
      name: mdb-org-project-config
  credentials: mdb-org-owner-credentials
  duplicateServiceObjects: false
  persistent: true
  backup:
    mode: enabled
  externalAccess: {}
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
    authentication:
      enabled: true
      modes: ["SCRAM"]
  clusterSpecList:
    - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
      members: 2
    - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
      members: 1
    - clusterName: ${K8S_CLUSTER_2_CONTEXT_NAME}
      members: 2
EOF
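The 2-1-2 member split in clusterSpecList is deliberate: with five voting members spread across three clusters, losing any single cluster still leaves a majority able to elect a primary. A quick arithmetic sketch, assuming bash:

```shell
# Compute the replica set total and the strict majority needed to
# elect a primary from the per-cluster member counts used above.
members=(2 1 2)
total=0
for m in "${members[@]}"; do
  total=$((total + m))
done
majority=$((total / 2 + 1))
echo "total=${total} majority=${majority}"
# → total=5 majority=3
# Losing any one cluster removes at most 2 members, so at least
# 3 of 5 voters remain and the majority holds.
```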
Verify that the MongoDBMultiCluster resource is running.
Run the following command to confirm that the MongoDBMultiCluster resource is running.
echo; echo "Waiting for MongoDB to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" wait --for=jsonpath='{.status.phase}'=Running "mdbmc/${RS_RESOURCE_NAME}" --timeout=900s
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
Create a MongoDB user and password.
Run the following command to create a MongoDB user and password. The sample value below is a placeholder; use a strong password in your own deployments.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: rs-user-password
type: Opaque
stringData:
  password: password
---
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: rs-user
spec:
  passwordSecretKeyRef:
    name: rs-user-password
    key: password
  username: "rs-user"
  db: "admin"
  mongodbResourceRef:
    name: ${RS_RESOURCE_NAME}
  roles:
    - db: "admin"
      name: "root"
EOF

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" wait --for=jsonpath='{.status.phase}'=Updated -n "${MDB_NAMESPACE}" mdbu/rs-user --timeout=300s
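Rather than the literal password in the sample above, you can generate a strong random value and load it into the secret. A sketch, assuming bash with openssl on your PATH (the RS_USER_PASSWORD variable name is my own):

```shell
# Generate a 24-byte random password, base64-encoded to 32 characters.
RS_USER_PASSWORD="$(openssl rand -base64 24 | tr -d '\n')"
echo "Generated a ${#RS_USER_PASSWORD}-character password."

# Then point the rs-user-password secret at the generated value, e.g.:
# kubectl create secret generic rs-user-password \
#   --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" \
#   --from-literal=password="${RS_USER_PASSWORD}" \
#   --dry-run=client -o yaml | \
#   kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f -
```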
Verify connectivity.
Run the following mongosh command to verify that you can access your running MongoDB instance.
external_ip="$(kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" svc "${RS_RESOURCE_NAME}-0-0-svc-external" -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")"

mkdir -p certs
kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" cm/ca-issuer -o=jsonpath='{.data.ca-pem}' > certs/ca.crt

mongosh --host "${external_ip}" --username rs-user --password password --tls --tlsCAFile certs/ca.crt --tlsAllowInvalidHostnames --eval "db.runCommand({connectionStatus : 1})"
{
  authInfo: {
    authenticatedUsers: [ { user: 'rs-user', db: 'admin' } ],
    authenticatedUserRoles: [ { role: 'root', db: 'admin' } ]
  },
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1747925179, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('T1ZP+QUFgBXayfOsRI6XFdEmjKI=', 0),
      keyId: Long('7507281432415305733')
    }
  },
  operationTime: Timestamp({ t: 1747925179, i: 1 })
}
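If mongosh fails to connect, a common cause is that the cloud LoadBalancer has not yet assigned an external IP, in which case the kubectl query in the previous step returns an empty string. A hedged polling helper, assuming bash (the wait_for_external_ip name and retry counts are my own, not part of this tutorial):

```shell
# Poll the external service until its LoadBalancer ingress IP appears,
# retrying a fixed number of times before giving up.
wait_for_external_ip() {
  local svc="$1" tries="${2:-30}" ip=""
  while [ "${tries}" -gt 0 ]; do
    ip="$(kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" \
        -n "${MDB_NAMESPACE}" svc "${svc}" \
        -o=jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)"
    if [ -n "${ip}" ]; then
      echo "${ip}"
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  echo "Timed out waiting for external IP of ${svc}" >&2
  return 1
}
```

For example, run external_ip="$(wait_for_external_ip "${RS_RESOURCE_NAME}-0-0-svc-external")" before invoking mongosh.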