To make your multi-cluster Ops Manager and the Application Database deployment resilient to entire data center or zone failures, deploy the Ops Manager Application and the Application Database on multiple Kubernetes clusters.
To learn more about the architecture, networking, limitations, and performance of multi-Kubernetes cluster deployments for Ops Manager resources, see:
Overview
Note
The following procedure requires that you deploy a service mesh across all of your Kubernetes clusters. If you need to deploy Ops Manager across multiple Kubernetes clusters without a service mesh, please see Multi-Cluster Ops Manager Without a Service Mesh to learn more.
When you deploy the Ops Manager Application and the Application Database using the procedure in this section, you:
- Use GKE (Google Kubernetes Engine) and Istio service mesh as tools that help demonstrate the multi-Kubernetes cluster deployment. 
- Install the Kubernetes Operator on one of the member Kubernetes clusters known as the operator cluster. The operator cluster acts as a Hub in the "Hub and Spoke" pattern used by the Kubernetes Operator to manage deployments on multiple Kubernetes clusters. 
- Deploy the Kubernetes Operator in the $OPERATOR_NAMESPACE namespace and configure it to watch $NAMESPACE and manage all member Kubernetes clusters.
- Deploy the Application Database and the Ops Manager Application on a single member Kubernetes cluster to demonstrate the similarity of a multi-cluster deployment to a single-cluster deployment. A single-cluster deployment with spec.topology and spec.applicationDatabase.topology set to MultiCluster prepares the deployment for adding more Kubernetes clusters to it.
- Deploy an additional Application Database replica set on the second member Kubernetes cluster to improve the Application Database's resiliency. You also deploy an additional Ops Manager Application instance in the second member Kubernetes cluster.
- Create valid certificates for TLS encryption, and establish TLS-encrypted connections to and from the Ops Manager Application and between the Application Database's replica set members. When running over HTTPS, Ops Manager uses port 8443 by default.
- Enable backup using S3-compatible storage and deploy the Backup Daemon on the third member Kubernetes cluster. To simplify setting up S3-compatible storage buckets, you deploy the MinIO Operator. You enable the Backup Daemon only on one member cluster in your deployment. However, you can configure other member clusters to host the Backup Daemon resources as well. Only S3 backups are supported in multi-cluster Ops Manager deployments. 
Prerequisites
Install Tools
Before you can begin the deployment, install the following required tools:
- Install Helm. Installing Helm is required for the installation of the Kubernetes Operator. 
- Prepare the GCP project so that you can use it to create GKE (Google Kubernetes Engine) clusters. In the following procedure, you create three new GKE clusters, with a total of seven e2-standard-4 low-cost Spot VMs.
Authorize the gcloud CLI
Install the gcloud CLI and authorize it:
gcloud auth login 
Install the kubectl mongodb plugin
The kubectl mongodb plugin automates the configuration of the Kubernetes clusters. This allows the Kubernetes Operator to deploy resources, necessary roles, and service accounts for the Ops Manager Application, Application Database, and MongoDB resources on these clusters.
To install the kubectl mongodb plugin:
Download your desired Kubernetes Operator package version.
Download your desired Kubernetes Operator package version from the Release Page of the MongoDB Controllers for Kubernetes Operator Repository.
The package's name uses this pattern:
kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.
Use one of the following packages:
- kubectl-mongodb_{{ .Version }}_darwin_amd64.tar.gz
- kubectl-mongodb_{{ .Version }}_darwin_arm64.tar.gz
- kubectl-mongodb_{{ .Version }}_linux_amd64.tar.gz
- kubectl-mongodb_{{ .Version }}_linux_arm64.tar.gz
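For example, a minimal download-and-unpack sketch for the Linux amd64 package might look like the following. The release tag format and asset URL are assumptions, so verify them on the release page for the version you want:

# Hypothetical example: adjust VERSION, OS, and ARCH to match the release asset you need.
VERSION="1.5.0"   # check whether the release tag uses a leading "v" on the release page
OS="linux"
ARCH="amd64"
curl -fL -o "kubectl-mongodb_${VERSION}_${OS}_${ARCH}.tar.gz" \
  "https://github.com/mongodb/mongodb-kubernetes/releases/download/${VERSION}/kubectl-mongodb_${VERSION}_${OS}_${ARCH}.tar.gz"
tar -xzf "kubectl-mongodb_${VERSION}_${OS}_${ARCH}.tar.gz"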
Locate the kubectl mongodb plugin binary and copy it to its desired destination.
Find the kubectl-mongodb binary in the unpacked directory and move it
to its desired destination, inside the PATH for the Kubernetes Operator user,
as shown in the following example:
mv kubectl-mongodb /usr/local/bin/kubectl-mongodb 
Now you can run the kubectl mongodb plugin using the following commands:
kubectl mongodb multicluster setup
kubectl mongodb multicluster recover
To learn more about the supported flags, see the MongoDB kubectl plugin Reference.
Clone the MongoDB Controllers for Kubernetes Operator Repository
Clone the MongoDB Controllers for Kubernetes Operator repository, change into the mongodb-kubernetes
directory, and check out the current version.
git clone https://github.com/mongodb/mongodb-kubernetes.git
cd mongodb-kubernetes
git checkout 1.5.0
cd public/architectures
Important
Some steps in this guide work only if you run them from
the public/samples/ops-manager-multi-cluster directory.
Set up Environment Variables
All steps in this guide reference the environment variables defined in env_variables.sh.
export MDB_GKE_PROJECT="### Set your GKE project name here ###"

export NAMESPACE="mongodb"
export OPERATOR_NAMESPACE="mongodb-operator"

# comma-separated key=value pairs
export OPERATOR_ADDITIONAL_HELM_VALUES=""

# Adjust the values for each Kubernetes cluster in your deployment.
# The deployment script references the following variables to get values for each cluster.
export K8S_CLUSTER_0="k8s-mdb-0"
export K8S_CLUSTER_0_ZONE="europe-central2-a"
export K8S_CLUSTER_0_NUMBER_OF_NODES=3
export K8S_CLUSTER_0_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_0_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_0_ZONE}_${K8S_CLUSTER_0}"

export K8S_CLUSTER_1="k8s-mdb-1"
export K8S_CLUSTER_1_ZONE="europe-central2-b"
export K8S_CLUSTER_1_NUMBER_OF_NODES=3
export K8S_CLUSTER_1_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_1_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_1_ZONE}_${K8S_CLUSTER_1}"

export K8S_CLUSTER_2="k8s-mdb-2"
export K8S_CLUSTER_2_ZONE="europe-central2-c"
export K8S_CLUSTER_2_NUMBER_OF_NODES=1
export K8S_CLUSTER_2_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_2_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_2_ZONE}_${K8S_CLUSTER_2}"

# Comment out the following line so that the script does not create preemptible nodes.
# DO NOT USE preemptible nodes in production.
export GKE_SPOT_INSTANCES_SWITCH="--preemptible"

export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store

# minio defaults
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"

export OFFICIAL_OPERATOR_HELM_CHART="mongodb/mongodb-kubernetes"
export OPERATOR_HELM_CHART="${OFFICIAL_OPERATOR_HELM_CHART}"

# (Optional) Change the following setting when using the external URL.
# This env variable is used in OpenSSL configuration to generate
# server certificates for the Ops Manager Application.
export OPS_MANAGER_EXTERNAL_DOMAIN="om-svc.${NAMESPACE}.svc.cluster.local"

export OPS_MANAGER_VERSION="7.0.4"
export APPDB_VERSION="7.0.9-ubi8"
Adjust the settings in the previous example for your needs as instructed in the comments and source them into your shell as follows:
source env_variables.sh 
Important
Each time you update env_variables.sh, run source env_variables.sh again
to ensure that the scripts in this section use the updated variables.
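To confirm that the variables are available in your current shell, you can print a few of them. This optional check is not part of the original script:

# Empty output for any of these means env_variables.sh has not been sourced in this shell.
echo "GKE project: ${MDB_GKE_PROJECT}"
echo "Operator namespace: ${OPERATOR_NAMESPACE}"
echo "Cluster 0 context: ${K8S_CLUSTER_0_CONTEXT_NAME}"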
Procedure
This procedure applies to deploying an Ops Manager instance on multiple Kubernetes clusters.
Create Kubernetes clusters.
You may skip this step if you already have installed and configured your own Kubernetes clusters with a service mesh.
- Create three GKE (Google Kubernetes Engine) clusters:

gcloud container clusters create "${K8S_CLUSTER_0}" \
  --zone="${K8S_CLUSTER_0_ZONE}" \
  --num-nodes="${K8S_CLUSTER_0_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_0_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

gcloud container clusters create "${K8S_CLUSTER_1}" \
  --zone="${K8S_CLUSTER_1_ZONE}" \
  --num-nodes="${K8S_CLUSTER_1_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_1_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}

gcloud container clusters create "${K8S_CLUSTER_2}" \
  --zone="${K8S_CLUSTER_2_ZONE}" \
  --num-nodes="${K8S_CLUSTER_2_NUMBER_OF_NODES}" \
  --machine-type "${K8S_CLUSTER_2_MACHINE_TYPE}" \
  ${GKE_SPOT_INSTANCES_SWITCH:-""}
- Set your default gcloud project:

gcloud config set project "${MDB_GKE_PROJECT}"
- Obtain credentials and save contexts to the current kubeconfig file. By default, this file is located at ~/.kube/config and referenced by the $KUBECONFIG environment variable.

gcloud container clusters get-credentials "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}"
gcloud container clusters get-credentials "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}"
gcloud container clusters get-credentials "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}"

  All kubectl commands reference these contexts using the following variables:

  - $K8S_CLUSTER_0_CONTEXT_NAME
  - $K8S_CLUSTER_1_CONTEXT_NAME
  - $K8S_CLUSTER_2_CONTEXT_NAME
- Verify that kubectl has access to the Kubernetes clusters:

echo "Nodes in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get nodes
echo; echo "Nodes in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" get nodes
echo; echo "Nodes in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" get nodes

  Example output:

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-0-default-pool-267f1e8f-d0dg   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-0-default-pool-267f1e8f-pmgh   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-0-default-pool-267f1e8f-vgj9   Ready    <none>   38m   v1.29.7-gke.1104000

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-1-default-pool-263d341f-3tbp   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-1-default-pool-263d341f-4f26   Ready    <none>   38m   v1.29.7-gke.1104000
gke-k8s-mdb-1-default-pool-263d341f-z751   Ready    <none>   38m   v1.29.7-gke.1104000

Nodes in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
NAME                                       STATUS   ROLES    AGE   VERSION
gke-k8s-mdb-2-default-pool-d0da5fd1-chm1   Ready    <none>   38m   v1.29.7-gke.1104000
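If any of the kubectl commands above fail because a context is missing, you can list the contexts that gcloud added to your kubeconfig. This quick check is not part of the documented script:

# List kubeconfig contexts for this GKE project (optional sanity check).
kubectl config get-contexts -o name | grep "${MDB_GKE_PROJECT}"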
- Install the Istio service mesh to allow cross-cluster DNS resolution and network connectivity between the Kubernetes clusters:

CTX_CLUSTER1=${K8S_CLUSTER_0_CONTEXT_NAME} \
CTX_CLUSTER2=${K8S_CLUSTER_1_CONTEXT_NAME} \
CTX_CLUSTER3=${K8S_CLUSTER_2_CONTEXT_NAME} \
ISTIO_VERSION="1.20.2" \
../multi-cluster/install_istio_separate_network.sh
Create namespaces.
Note
To enable sidecar injection in Istio, the following commands add
the istio-injection=enabled labels to the $OPERATOR_NAMESPACE
and the mongodb namespaces on each member cluster.
If you use another service mesh, configure it to handle network
traffic in the created namespaces.
- Create a separate namespace, mongodb-operator, referenced by the $OPERATOR_NAMESPACE environment variable, for the Kubernetes Operator deployment.
- Create the same $OPERATOR_NAMESPACE namespace on each member Kubernetes cluster. This is needed so that the kubectl mongodb plugin can create a service account for the Kubernetes Operator on each member cluster. The Kubernetes Operator uses these service accounts on the operator cluster to perform operations on each member cluster.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite
- On each member cluster, including the member cluster that serves as the operator cluster, create another, separate namespace, mongodb. The Kubernetes Operator uses this namespace for Ops Manager resources and components.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${NAMESPACE}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite
Optional. Authorize clusters to pull secrets from private image registries.
This step is optional if you use official Helm charts and images from the Quay registry.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
Optional. Check cluster connectivity.
The following optional scripts verify whether the service mesh is configured correctly for cross-cluster DNS resolution and connectivity.
- Run this script on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver0
  template:
    metadata:
      labels:
        app: echoserver0
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver0
          ports:
            - containerPort: 8080
EOF
- Run this script on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver1
  template:
    metadata:
      labels:
        app: echoserver1
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver1
          ports:
            - containerPort: 8080
EOF
- Run this script on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver2
  template:
    metadata:
      labels:
        app: echoserver2
    spec:
      containers:
        - image: k8s.gcr.io/echoserver:1.10
          imagePullPolicy: Always
          name: echoserver2
          ports:
            - containerPort: 8080
EOF
- Run this script to wait for the creation of the StatefulSets:

kubectl wait --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver0-0 --timeout=60s
kubectl wait --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver1-0 --timeout=60s
kubectl wait --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver2-0 --timeout=60s
- Create Pod service on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver0-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver0-0"
EOF
- Create Pod service on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver1-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver1-0"
EOF
- Create Pod service on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver2-0
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: "echoserver2-0"
EOF
- Create round robin service on cluster 0:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver0
EOF
- Create round robin service on cluster 1:

kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver1
EOF
- Create round robin service on cluster 2:

kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: echoserver2
EOF
- Verify Pod 0 from cluster 1:

source_cluster=${K8S_CLUSTER_1_CONTEXT_NAME}
target_pod="echoserver0-0"
source_pod="echoserver1-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

  Example output:

Checking cross-cluster DNS resolution and connectivity from echoserver1-0 in gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1 to echoserver0-0
SUCCESS
- Verify Pod 1 from cluster 0:

source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
target_pod="echoserver1-0"
source_pod="echoserver0-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

  Example output:

Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver1-0
SUCCESS
- Verify Pod 1 from cluster 2:

source_cluster=${K8S_CLUSTER_2_CONTEXT_NAME}
target_pod="echoserver1-0"
source_pod="echoserver2-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

  Example output:

Checking cross-cluster DNS resolution and connectivity from echoserver2-0 in gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2 to echoserver1-0
SUCCESS
- Verify Pod 2 from cluster 0:

source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
target_pod="echoserver2-0"
source_pod="echoserver0-0"
target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
  /bin/bash -c "curl -v ${target_url}" 2>&1);
grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

  Example output:

Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver2-0
SUCCESS
- Run the cleanup script:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver0
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver1
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver2
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver0-0
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver1-0
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver2-0
Deploy a multi-cluster configuration.
In this step, you use the kubectl mongodb plugin to automate the Kubernetes cluster
configuration that is necessary for the Kubernetes Operator to manage workloads
on multiple Kubernetes clusters.
Because you configure the Kubernetes clusters before you install the Kubernetes Operator, when you deploy the Kubernetes Operator for the multi-Kubernetes cluster operation, all the necessary multi-cluster configuration is already in place.
As stated in the Overview, the Kubernetes Operator has the configuration for three member clusters that you can use to deploy Ops Manager and MongoDB databases. The first cluster is also used as the operator cluster, where you install the Kubernetes Operator and deploy the custom resources.
kubectl mongodb multicluster setup \
  --central-cluster="${K8S_CLUSTER_0_CONTEXT_NAME}" \
  --member-clusters="${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}" \
  --member-cluster-namespace="${NAMESPACE}" \
  --central-cluster-namespace="${OPERATOR_NAMESPACE}" \
  --create-service-account-secrets \
  --install-database-roles=true \
  --image-pull-secrets=image-registries-secret
Example output:

Ensured namespaces exist in all clusters.
creating operator cluster roles in cluster: gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
Ensured ServiceAccounts and Roles.
Creating KubeConfig secret mongodb-operator/mongodb-enterprise-operator-multi-cluster-kubeconfig in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
Ensured database Roles in member clusters.
Creating Member list Configmap mongodb-operator/mongodb-kubernetes-operator-member-list in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
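As an optional check, you can confirm that the objects the plugin reports creating exist. The names below are taken from the output above:

# Service account and roles created for the Operator in a member cluster namespace.
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get serviceaccounts,roles
# Kubeconfig secret and member list ConfigMap created in the operator cluster.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get secret mongodb-enterprise-operator-multi-cluster-kubeconfig
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get configmap mongodb-kubernetes-operator-member-list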
Install the Kubernetes Operator using the Helm chart.
- Add and update the MongoDB Helm repository. Verify that the local Helm cache refers to the correct Kubernetes Operator version:

helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update mongodb
helm search repo "${OFFICIAL_OPERATOR_HELM_CHART}"

  Example output:

"mongodb" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "mongodb" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME                         CHART VERSION   APP VERSION   DESCRIPTION
mongodb/mongodb-kubernetes   1.0.0                         MongoDB Kubernetes Enterprise Operator
- Install the Kubernetes Operator into the $OPERATOR_NAMESPACE, configured to watch $NAMESPACE and to manage three member Kubernetes clusters. At this point in the procedure, ServiceAccounts and roles are already deployed by the kubectl mongodb plugin. Therefore, the following scripts skip configuring them and set operator.createOperatorServiceAccount=false and operator.createResourcesServiceAccountsAndRoles=false. The scripts specify the multiCluster.clusters setting to instruct the Helm chart to deploy the Kubernetes Operator in multi-cluster mode.

helm upgrade --install \
  --debug \
  --kube-context "${K8S_CLUSTER_0_CONTEXT_NAME}" \
  mongodb-kubernetes-operator-multi-cluster \
  "${OPERATOR_HELM_CHART}" \
  --namespace="${OPERATOR_NAMESPACE}" \
  --set namespace="${OPERATOR_NAMESPACE}" \
  --set operator.namespace="${OPERATOR_NAMESPACE}" \
  --set operator.watchNamespace="${NAMESPACE}" \
  --set operator.name=mongodb-kubernetes-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set operator.createResourcesServiceAccountsAndRoles=false \
  --set "multiCluster.clusters={${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}}" \
  --set "${OPERATOR_ADDITIONAL_HELM_VALUES:-"dummy=value"}"

  Example output:

Release "mongodb-kubernetes-operator-multi-cluster" does not exist. Installing it now.
name: mongodb-kubernetes-operator-multi-cluster
LAST DEPLOYED: Mon Aug 26 10:55:49 2024
NAMESPACE: mongodb-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
dummy: value
multiCluster:
  clusters:
  - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
  - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
  - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
namespace: mongodb-operator
operator:
  createOperatorServiceAccount: false
  createResourcesServiceAccountsAndRoles: false
  name: mongodb-kubernetes-operator-multi-cluster
  namespace: mongodb-operator
  watchNamespace: mongodb

COMPUTED VALUES:
agent:
  name: mongodb-agent
  version: 107.0.0.8502-1
database:
  name: mongodb-kubernetes-database
  version: 1.27.0
dummy: value
initAppDb:
  name: mongodb-kubernetes-init-appdb
  version: 1.27.0
initDatabase:
  name: mongodb-kubernetes-init-database
  version: 1.27.0
initOpsManager:
  name: mongodb-kubernetes-init-ops-manager
  version: 1.27.0
managedSecurityContext: false
mongodb:
  appdbAssumeOldFormat: false
  imageType: ubi8
  name: mongodb-enterprise-server
  repo: quay.io/mongodb
mongodbLegacyAppDb:
  name: mongodb-kubernetes-appdb-database-ubi
  repo: quay.io/mongodb
multiCluster:
  clusterClientTimeout: 10
  clusters:
  - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
  - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
  - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
  kubeConfigSecretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
  performFailOver: true
namespace: mongodb-operator
operator:
  additionalArguments: []
  affinity: {}
  createOperatorServiceAccount: false
  createResourcesServiceAccountsAndRoles: false
  deployment_name: mongodb-kubernetes-operator
  env: prod
  maxConcurrentReconciles: 1
  mdbDefaultArchitecture: non-static
  name: mongodb-kubernetes-operator-multi-cluster
  namespace: mongodb-operator
  nodeSelector: {}
  operator_image_name: mongodb-kubernetes-operator
  replicas: 1
  resources:
    limits:
      cpu: 1100m
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 200Mi
  tolerations: []
  vaultSecretBackend:
    enabled: false
    tlsSecretRef: ""
  version: 1.27.0
  watchNamespace: mongodb
  watchedResources:
  - mongodb
  - opsmanagers
  - mongodbusers
  webhook:
    installClusterRole: true
    registerConfiguration: true
opsManager:
  name: mongodb-enterprise-ops-manager-ubi
registry:
  agent: quay.io/mongodb
  appDb: quay.io/mongodb
  database: quay.io/mongodb
  imagePullSecrets: null
  initAppDb: quay.io/mongodb
  initDatabase: quay.io/mongodb
  initOpsManager: quay.io/mongodb
  operator: quay.io/mongodb
  opsManager: quay.io/mongodb
  pullPolicy: Always
subresourceEnabled: true

HOOKS:
MANIFEST:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongodb-kubernetes-operator-mongodb-webhook
rules:
- apiGroups:
  - "admissionregistration.k8s.io"
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - create
  - update
  - delete
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - delete
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongodb-kubernetes-operator-multi-cluster-mongodb-operator-webhook-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mongodb-kubernetes-operator-mongodb-webhook
subjects:
- kind: ServiceAccount
  name: mongodb-kubernetes-operator-multi-cluster
  namespace: mongodb-operator
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-kubernetes-operator-multi-cluster
  namespace: mongodb-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: mongodb-kubernetes-operator-multi-cluster
      app.kubernetes.io/instance: mongodb-kubernetes-operator-multi-cluster
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: mongodb-kubernetes-operator-multi-cluster
        app.kubernetes.io/instance: mongodb-kubernetes-operator-multi-cluster
    spec:
      serviceAccountName: mongodb-kubernetes-operator-multi-cluster
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      containers:
        - name: mongodb-kubernetes-operator-multi-cluster
          image: "quay.io/mongodb/mongodb-kubernetes-operator:1.27.0"
          imagePullPolicy: Always
          args:
            - -watch-resource=mongodb
            - -watch-resource=opsmanagers
            - -watch-resource=mongodbusers
            - -watch-resource=mongodbmulticluster
          command:
            - /usr/local/bin/mongodb-kubernetes-operator
          volumeMounts:
            - mountPath: /etc/config/kubeconfig
              name: kube-config-volume
          resources:
            limits:
              cpu: 1100m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 200Mi
          env:
            - name: OPERATOR_ENV
              value: prod
            - name: MDB_DEFAULT_ARCHITECTURE
              value: non-static
            - name: WATCH_NAMESPACE
              value: "mongodb"
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CLUSTER_CLIENT_TIMEOUT
              value: "10"
            - name: IMAGE_PULL_POLICY
              value: Always
            # Database
            - name: MONGODB_ENTERPRISE_DATABASE_IMAGE
              value: quay.io/mongodb/mongodb-kubernetes-database
            - name: INIT_DATABASE_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-kubernetes-init-database
            - name: INIT_DATABASE_VERSION
              value: 1.27.0
            - name: DATABASE_VERSION
              value: 1.27.0
            # Ops Manager
            - name: OPS_MANAGER_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-enterprise-ops-manager-ubi
            - name: INIT_OPS_MANAGER_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-kubernetes-init-ops-manager
            - name: INIT_OPS_MANAGER_VERSION
              value: 1.27.0
            # AppDB
            - name: INIT_APPDB_IMAGE_REPOSITORY
              value: quay.io/mongodb/mongodb-kubernetes-init-appdb
            - name: INIT_APPDB_VERSION
              value: 1.27.0
            - name: OPS_MANAGER_IMAGE_PULL_POLICY
              value: Always
            - name: AGENT_IMAGE
              value: "quay.io/mongodb/mongodb-agent:107.0.0.8502-1"
            - name: MDB_AGENT_IMAGE_REPOSITORY
              value: "quay.io/mongodb/mongodb-agent"
            - name: MONGODB_IMAGE
              value: mongodb-enterprise-server
            - name: MONGODB_REPO_URL
              value: quay.io/mongodb
            - name: MDB_IMAGE_TYPE
              value: ubi8
            - name: PERFORM_FAILOVER
              value: 'true'
            - name: MDB_MAX_CONCURRENT_RECONCILES
              value: "1"
      volumes:
        - name: kube-config-volume
          secret:
            defaultMode: 420
            secretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
- Check the Kubernetes Operator deployment:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" rollout status deployment/mongodb-kubernetes-operator-multi-cluster
echo "Operator deployment in ${OPERATOR_NAMESPACE} namespace"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get deployments
echo; echo "Operator pod in ${OPERATOR_NAMESPACE} namespace"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get pods

  Example output:

Waiting for deployment "mongodb-kubernetes-operator-multi-cluster" rollout to finish: 0 of 1 updated replicas are available...
deployment "mongodb-kubernetes-operator-multi-cluster" successfully rolled out
Operator deployment in mongodb-operator namespace
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
mongodb-kubernetes-operator-multi-cluster   1/1     1            1           10s

Operator pod in mongodb-operator namespace
NAME                                                          READY   STATUS    RESTARTS     AGE
mongodb-kubernetes-operator-multi-cluster-54d786b796-7l5ct    2/2     Running   1 (4s ago)   10s
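If the rollout does not complete, the Operator logs usually point to the cause. A minimal check, using the container name from the Helm manifest shown above:

# Tail the Kubernetes Operator logs in the operator cluster.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" \
  logs deployment/mongodb-kubernetes-operator-multi-cluster \
  -c mongodb-kubernetes-operator-multi-cluster --tail=100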
Prepare TLS certificates.
In this step, you enable TLS for the Application Database and the Ops Manager Application.
If you don't want to use TLS, remove the following fields from the MongoDBOpsManager
resources used in this procedure:
- spec.security
- spec.applicationDatabase.security
- Optional. Generate keys and certificates:

  Use the openssl command line tool to generate self-signed CAs and certificates for testing purposes.

mkdir certs || true

cat <<EOF >certs/ca.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
x509_extensions = v3_ca

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=exampleCA

[ v3_ca ]
basicConstraints = CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
EOF

cat <<EOF >certs/om.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=${OPS_MANAGER_EXTERNAL_DOMAIN}

[ req_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ alt_names ]
DNS.1 = ${OPS_MANAGER_EXTERNAL_DOMAIN}
DNS.2 = om-svc.${NAMESPACE}.svc.cluster.local
EOF

cat <<EOF >certs/appdb.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
C=US
ST=New York
L=New York
O=Example Company
OU=IT Department
CN=AppDB

[ req_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ alt_names ]
# multi-cluster mongod hostnames from service for each pod
DNS.1 = *.${NAMESPACE}.svc.cluster.local
# single-cluster mongod hostnames from headless service
DNS.2 = *.om-db-svc.${NAMESPACE}.svc.cluster.local
EOF

# generate CA keypair and certificate
openssl genrsa -out certs/ca.key 2048
openssl req -x509 -new -nodes -key certs/ca.key -days 1024 -out certs/ca.crt -config certs/ca.cnf

# generate OpsManager's keypair and certificate
openssl genrsa -out certs/om.key 2048
openssl req -new -key certs/om.key -out certs/om.csr -config certs/om.cnf

# generate AppDB's keypair and certificate
openssl genrsa -out certs/appdb.key 2048
openssl req -new -key certs/appdb.key -out certs/appdb.csr -config certs/appdb.cnf

# generate certificates signed by CA for OpsManager and AppDB
openssl x509 -req -in certs/om.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/om.crt -days 365 -sha256 -extfile certs/om.cnf -extensions req_ext
openssl x509 -req -in certs/appdb.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/appdb.crt -days 365 -sha256 -extfile certs/appdb.cnf -extensions req_ext
- Create secrets with TLS keys:

  If you prefer to use your own keys and certificates, skip the previous generation step and put the keys and certificates into the following files:

  - certs/ca.crt - CA certificates. These are not necessary when using trusted certificates.
  - certs/appdb.key - private key for the Application Database.
  - certs/appdb.crt - certificate for the Application Database.
  - certs/om.key - private key for Ops Manager.
  - certs/om.crt - certificate for Ops Manager.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-cert \
  --cert=certs/om.crt \
  --key=certs/om.key

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-db-cert \
  --cert=certs/appdb.crt \
  --key=certs/appdb.key

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap om-cert-ca --from-file="mms-ca.crt=certs/ca.crt"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap appdb-cert-ca --from-file="ca-pem=certs/ca.crt"
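Optionally, you can confirm that the certificate secrets and CA ConfigMaps exist before installing Ops Manager. This check is not part of the documented procedure:

# The two TLS secrets and two CA ConfigMaps created above should be listed.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secrets cert-prefix-om-cert cert-prefix-om-db-cert
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get configmaps om-cert-ca appdb-cert-ca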
Install Ops Manager.
At this point, you have prepared the environment and the Kubernetes Operator to deploy the Ops Manager resource.
- Create the necessary credentials for the Ops Manager admin user that the Kubernetes Operator will create after deploying the Ops Manager Application instance:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${NAMESPACE}" create secret generic om-admin-user-credentials \
  --from-literal=Username="admin" \
  --from-literal=Password="Passw0rd@" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe"
- Deploy the simplest MongoDBOpsManager custom resource possible (with TLS enabled) on a single member cluster, which is also known as the operator cluster.

  This deployment is almost the same as for the single-cluster mode, but with spec.topology and spec.applicationDatabase.topology set to MultiCluster.

  Deploying this way shows that a single Kubernetes cluster deployment is a special case of a multi-Kubernetes cluster deployment on a single Kubernetes member cluster. You can start deploying the Ops Manager Application and the Application Database on as many Kubernetes clusters as necessary from the beginning, and don't have to start with a deployment on only a single member Kubernetes cluster.

  At this point, you have prepared the Ops Manager deployment to span more than one Kubernetes cluster, which you will do later in this procedure.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
  backup:
    enabled: false
EOF
- Wait for the Kubernetes Operator to pick up the work and reach the status.applicationDatabase.phase=Pending state. Wait for both the Application Database and Ops Manager deployments to complete.

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

  Example output:

Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
- Deploy Ops Manager. The Kubernetes Operator deploys Ops Manager by performing the following steps. It:

  - Deploys the Application Database's replica set nodes and waits for the MongoDB processes in the replica set to start running.
  - Deploys the Ops Manager Application instance with the Application Database's connection string and waits for it to become ready.
  - Adds the Monitoring MongoDB Agent containers to each Application Database Pod.
  - Waits for both the Ops Manager Application and the Application Database Pods to start running.

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Application Database to reach Pending phase (enabling monitoring)..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

  Example output:

Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Application Database to reach Pending phase (enabling monitoring)...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Disabled         13m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          10m
om-db-0-0   4/4     Running   0          69s
om-db-0-1   4/4     Running   0          2m12s
om-db-0-2   4/4     Running   0          3m32s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1

  Now that you have deployed a single-member cluster in a multi-cluster mode, you can reconfigure this deployment to span more than one Kubernetes cluster.
- On the second member cluster, deploy two additional Application Database replica set members and one additional instance of the Ops Manager Application:

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
  backup:
    enabled: false
EOF
- Wait for the Kubernetes Operator to pick up the work (Pending phase):

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

  Example output:

Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
- Wait for the Kubernetes Operator to finish deploying all components:

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

  Example output:

Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Disabled         20m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          2m56s
om-db-0-0   4/4     Running   0          7m48s
om-db-0-1   4/4     Running   0          8m51s
om-db-0-2   4/4     Running   0          10m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME        READY   STATUS    RESTARTS   AGE
om-1-0      2/2     Running   0          3m27s
om-db-1-0   4/4     Running   0          6m32s
om-db-1-1   4/4     Running   0          5m5s
Enable backup.
In a multi-Kubernetes cluster deployment of the Ops Manager Application, you can configure
only S3-based backup storage. This procedure refers to the S3_* variables
defined in env_variables.sh.
- Optional. Install the MinIO Operator.

  This procedure deploys S3-compatible storage for your backups using the MinIO Operator. You can skip this step if you have AWS S3 or other S3-compatible buckets available. Adjust the S3_* variables accordingly in env_variables.sh in this case.

kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

# add two buckets to the tenant config
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
  --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
- Before you configure and enable backup, create secrets:

  - s3-access-secret - contains S3 credentials.
  - s3-ca-cert - contains a CA certificate that issued the bucket's server certificate. In the case of the sample MinIO deployment used in this procedure, the default Kubernetes root CA certificate is used to sign the certificate. Because it's not a publicly trusted CA certificate, you must provide it so that Ops Manager can trust the connection.

  If you use publicly trusted certificates, you may skip this step and remove the values from the spec.backup.s3Stores.customCertificateSecretRefs and spec.backup.s3OpLogStores.customCertificateSecretRefs settings.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-access-secret \
  --from-literal=accessKey="${S3_ACCESS_KEY}" \
  --from-literal=secretKey="${S3_SECRET_KEY}"

# minio TLS secrets are signed with the default k8s root CA
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-ca-cert \
  --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
Re-deploy Ops Manager with backup enabled.
- The Kubernetes Operator can configure and deploy all components, the Ops Manager Application, the Backup Daemon instances, and the Application Database's replica set nodes in any combination on any member clusters for which you configure the Kubernetes Operator.

  To illustrate the flexibility of the multi-Kubernetes cluster deployment configuration, deploy only one Backup Daemon instance on the third member cluster and specify zero Backup Daemon members for the first and second clusters.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: om-cert-ca
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
      members: 0
      backup:
        members: 1
  configuration: # to avoid configuration wizard on first login
    mms.adminEmailAddr: email@example.com
    mms.fromEmailAddr: email@example.com
    mms.ignoreInitialUiSetup: "true"
    mms.mail.hostname: smtp@example.com
    mms.mail.port: "465"
    mms.mail.ssl: "true"
    mms.mail.transport: smtp
    mms.minimumTLSVersion: TLSv1.2
    mms.replyToEmailAddr: email@example.com
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: appdb-cert-ca
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 3
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
  backup:
    enabled: true
    s3Stores:
      - name: my-s3-block-store
        s3SecretRef:
          name: "s3-access-secret"
        pathStyleAccessEnabled: true
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    s3OpLogStores:
      - name: my-s3-oplog-store
        s3SecretRef:
          name: "s3-access-secret"
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
        pathStyleAccessEnabled: true
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
EOF
- Wait until the Kubernetes Operator finishes its configuration:

echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get pods

  Example output:

Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
om                7.0.4     Running              Running         Running          26m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
NAME        READY   STATUS    RESTARTS   AGE
om-0-0      2/2     Running   0          5m11s
om-db-0-0   4/4     Running   0          13m
om-db-0-1   4/4     Running   0          14m
om-db-0-2   4/4     Running   0          16m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
NAME        READY   STATUS    RESTARTS   AGE
om-1-0      2/2     Running   0          5m12s
om-db-1-0   4/4     Running   0          12m
om-db-1-1   4/4     Running   0          11m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
NAME                   READY   STATUS    RESTARTS   AGE
om-2-backup-daemon-0   2/2     Running   0          3m8s
Optional. Delete the GKE (Google Kubernetes Engine) clusters and all their associated resources (VMs).
Run the following script to delete the GKE clusters and clean up your environment.
Important
The following commands are not reversible. They delete all clusters
referenced in env_variables.sh. Don't run these commands if you
wish to retain the GKE clusters, for example, if you didn't create
the GKE clusters as part of this procedure.
yes | gcloud container clusters delete "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}" &
wait