
Multi-Cluster Ops Manager

Ops Manager facilitates workloads such as backing up data and monitoring database performance. To make your multi-cluster Ops Manager and Application Database deployment resilient to entire data center or zone failures, deploy the Ops Manager Application and the Application Database on multiple Kubernetes clusters.

Before you begin this procedure, complete the following actions:

  • Install kubectl.

  • Complete the GKE Clusters procedure or the equivalent.

  • Complete the TLS Certificates procedure or the equivalent.

  • Complete the Istio Service mesh procedure or the equivalent.

  • Complete the Deploy the MongoDB Operator procedure.

  • Set the required environment variables as follows:

# This script builds on top of the environment configured in the setup guides.
# It depends on the following environment variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${OM_NAMESPACE}
export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store
# If you use your own S3 storage, set the values accordingly.
# By default, this guide installs MinIO to handle S3 storage; the default credentials are set here.
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"
export OPS_MANAGER_VERSION="8.0.5"
export APPDB_VERSION="8.0.5-ent"
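If you bootstrap the environment yourself rather than following the setup guides, it can help to verify that the variables the later commands depend on are actually set before proceeding. A minimal bash sketch (the variable list is an assumption derived from the commands used in this guide; adjust it as needed):

```shell
# Sketch: report any unset environment variables that the later steps rely on.
missing=""
for var in K8S_CLUSTER_0_CONTEXT_NAME K8S_CLUSTER_1_CONTEXT_NAME \
           K8S_CLUSTER_2_CONTEXT_NAME OM_NAMESPACE \
           S3_OPLOG_BUCKET_NAME S3_SNAPSHOT_BUCKET_NAME; do
  # ${!var} is bash indirect expansion: the value of the variable named by "var"
  if [ -z "${!var:-}" ]; then
    missing="${missing} ${var}"
  fi
done
if [ -n "${missing}" ]; then
  echo "Missing required variables:${missing}" >&2
fi
```

Running this before applying any manifests catches a misconfigured shell early, instead of failing partway through a `kubectl apply`.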

You can find all included source code in the MongoDB Kubernetes Operator repository.

1. Generate TLS certificates for the Ops Manager Application and the Application Database:
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-cert
spec:
  dnsNames:
    - om-svc.${OM_NAMESPACE}.svc.cluster.local
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-db-cert
spec:
  dnsNames:
    - "*.${OM_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-db-cert
  usages:
    - server auth
    - client auth
EOF
2. Deploy the Ops Manager Application and the Application Database:

At this point, you have prepared the environment and the Kubernetes Operator to deploy the Ops Manager resource.

  1. Create the necessary credentials for the Ops Manager admin user that the Kubernetes Operator will create after deploying the Ops Manager Application instance:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${OM_NAMESPACE}" create secret generic om-admin-user-credentials \
      --from-literal=Username="admin" \
      --from-literal=Password="Passw0rd@" \
      --from-literal=FirstName="Jane" \
      --from-literal=LastName="Doe"
  2. Deploy the simplest MongoDBOpsManager custom resource possible (with TLS enabled) on a single member cluster, which is also known as the operator cluster.

    This deployment is almost the same as the deployment for the single-cluster mode, but with spec.topology and spec.applicationDatabase.topology set to MultiCluster.

    Deploying this way shows that a single Kubernetes cluster deployment is a special case of a multi-Kubernetes cluster deployment on a single member cluster. You can deploy the Ops Manager Application and the Application Database on as many Kubernetes clusters as necessary from the beginning; you don't have to start with only a single member Kubernetes cluster.

    At this point, you have prepared the Ops Manager deployment to span more than one Kubernetes cluster, which you will do later in this procedure.

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      externalConnectivity:
        type: LoadBalancer
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: ca-issuer
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: ca-issuer
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
      backup:
        enabled: false
    EOF
  3. Wait for the Kubernetes Operator to pick up the work and for the resource to reach the status.applicationDatabase.phase=Pending state. The Operator then proceeds to deploy both the Application Database and the Ops Manager Application.

    echo "Waiting for Application Database to reach Pending phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

    Example output:

    Waiting for Application Database to reach Pending phase...
    mongodbopsmanager.mongodb.com/om condition met
  4. Deploy Ops Manager. To do so, the Kubernetes Operator performs the following steps. It:

    • Deploys the Application Database's replica set nodes and waits for the MongoDB processes in the replica set to start running.

    • Deploys the Ops Manager Application instance with the Application Database's connection string and waits for it to become ready.

    • Adds the Monitoring MongoDB Agent containers to each Application Database Pod.

    • Waits for both the Ops Manager Application and the Application Database Pods to start running.

    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods

    Example output:

    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                8.0.5     Running              Running         Disabled         12m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-682f2df6e1745e000788a1d5-24552
    NAME        READY   STATUS    RESTARTS   AGE
    om-0-0      2/2     Running   0          9m41s
    om-db-0-0   4/4     Running   0          51s
    om-db-0-1   4/4     Running   0          2m25s
    om-db-0-2   4/4     Running   0          4m16s

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-682f2df6e1745e000788a1d5-24552

    Now that you have deployed a single-member cluster in a multi-cluster mode, you can reconfigure this deployment to span more than one Kubernetes cluster.

  5. On the second member cluster, deploy two additional Application Database replica set members and one additional instance of the Ops Manager Application:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      externalConnectivity:
        type: LoadBalancer
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: ca-issuer
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
        - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
          members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: ca-issuer
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
          - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
            members: 2
      backup:
        enabled: false
    EOF
  6. Wait for the Kubernetes Operator to pick up the work (Pending phase):

    echo "Waiting for Application Database to reach Pending phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

    echo "Waiting for Ops Manager to reach Pending phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Pending opsmanager/om --timeout=600s

    Example output:

    Waiting for Application Database to reach Pending phase...
    mongodbopsmanager.mongodb.com/om condition met
    Waiting for Ops Manager to reach Pending phase...
    mongodbopsmanager.mongodb.com/om condition met
  7. Wait for the Kubernetes Operator to finish deploying all components:

    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods

    Example output:

    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                8.0.5     Running              Running         Disabled         20m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-682f2df6e1745e000788a1d5-24552
    NAME        READY   STATUS    RESTARTS   AGE
    om-0-0      2/2     Running   0          2m53s
    om-db-0-0   4/4     Running   0          8m42s
    om-db-0-1   4/4     Running   0          10m
    om-db-0-2   4/4     Running   0          12m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-682f2df6e1745e000788a1d5-24552
    NAME        READY   STATUS    RESTARTS   AGE
    om-1-0      2/2     Running   0          3m24s
    om-db-1-0   4/4     Running   0          7m43s
    om-db-1-1   4/4     Running   0          5m31s
3. Prepare S3-compatible backup storage:

In a multi-Kubernetes cluster deployment of the Ops Manager Application, you can configure only S3-based backup storage. This procedure refers to the S3_* environment variables defined in env_variables.sh.

  1. Optional. Install the MinIO Operator.

    This procedure deploys S3-compatible storage for your backups using the MinIO Operator. If you already have AWS S3 or other S3-compatible buckets available, you can skip this step; in that case, adjust the S3_* variables in env_variables.sh accordingly.

    kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
      kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

    kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
      kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

    # add two buckets to the tenant config
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
      --type='json' \
      -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "tenant-tiny" istio-injection=enabled --overwrite
  2. Before you configure and enable backup, create secrets:

    • s3-access-secret - contains S3 credentials.

    • s3-ca-cert - contains the CA certificate that issued the bucket's server certificate. For the sample MinIO deployment used in this procedure, the certificate is signed by the default Kubernetes root CA. Because that CA is not publicly trusted, you must provide its certificate so that Ops Manager can trust the connection.

    If you use publicly trusted certificates, you may skip this step and remove the values from the spec.backup.s3Stores.customCertificateSecretRefs and spec.backup.s3OpLogStores.customCertificateSecretRefs settings.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-access-secret \
      --from-literal=accessKey="${S3_ACCESS_KEY}" \
      --from-literal=secretKey="${S3_SECRET_KEY}"

    # MinIO TLS secrets are signed with the default Kubernetes root CA
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-ca-cert \
      --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
4. Enable backup:
  1. The Kubernetes Operator can configure and deploy all components (the Ops Manager Application, the Backup Daemon instances, and the Application Database's replica set nodes) in any combination on any member clusters for which you configure the Kubernetes Operator.

    To illustrate the flexibility of the multi-Kubernetes cluster deployment configuration, deploy only one Backup Daemon instance on the third member cluster and specify zero Backup Daemon members for the first and second clusters.

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      externalConnectivity:
        type: LoadBalancer
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: ca-issuer
      clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 1
          backup:
            members: 0
        - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
          members: 1
          backup:
            members: 0
        - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
          members: 0
          backup:
            members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: ca-issuer
        clusterSpecList:
          - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
            members: 3
          - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
            members: 2
      backup:
        enabled: true
        s3Stores:
          - name: my-s3-block-store
            s3SecretRef:
              name: "s3-access-secret"
            pathStyleAccessEnabled: true
            s3BucketEndpoint: "${S3_ENDPOINT}"
            s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
            customCertificateSecretRefs:
              - name: s3-ca-cert
                key: ca.crt
        s3OpLogStores:
          - name: my-s3-oplog-store
            s3SecretRef:
              name: "s3-access-secret"
            s3BucketEndpoint: "${S3_ENDPOINT}"
            s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
            pathStyleAccessEnabled: true
            customCertificateSecretRefs:
              - name: s3-ca-cert
                key: ca.crt
    EOF
  2. Wait until the Kubernetes Operator finishes its configuration:

    echo; echo "Waiting for Backup to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
    echo "Waiting for Application Database to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "Waiting for Ops Manager to reach Running phase..."
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
    echo; echo "MongoDBOpsManager resource"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
    echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
    echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods

    Example output:

    Waiting for Backup to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met
    Waiting for Application Database to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    Waiting for Ops Manager to reach Running phase...
    mongodbopsmanager.mongodb.com/om condition met

    MongoDBOpsManager resource
    NAME   REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE   WARNINGS
    om                8.0.5     Running              Running         Running          23m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-682f2df6e1745e000788a1d5-24552
    NAME        READY   STATUS    RESTARTS   AGE
    om-0-0      2/2     Running   0          5m46s
    om-db-0-0   4/4     Running   0          11m
    om-db-0-1   4/4     Running   0          13m
    om-db-0-2   4/4     Running   0          15m

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-682f2df6e1745e000788a1d5-24552
    NAME        READY   STATUS    RESTARTS   AGE
    om-1-0      2/2     Running   0          6m17s
    om-db-1-0   4/4     Running   0          10m
    om-db-1-1   4/4     Running   0          8m24s

    Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2-682f2df6e1745e000788a1d5-24552
    NAME                   READY   STATUS    RESTARTS   AGE
    om-2-backup-daemon-0   2/2     Running   0          2m31s
5. Configure credentials for the Kubernetes Operator:

To configure credentials, you must create an Ops Manager organization, generate programmatic API keys in the Ops Manager UI, and create a secret with your Load Balancer IP. See Create Credentials for the Kubernetes Operator to learn more.
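Once you have the programmatic API key pair from the Ops Manager UI, storing it for the Kubernetes Operator can be sketched as follows. This is an illustrative fragment, not a complete credentials setup: the secret name `om-org-credentials` is an example, and the placeholder values are assumptions you must replace with the key pair you generated. See Create Credentials for the Kubernetes Operator for the authoritative steps.

```shell
# Sketch: store the Ops Manager programmatic API key pair as a secret that the
# Kubernetes Operator can reference. Replace the placeholders with the key pair
# generated in the Ops Manager UI; the secret name here is an example.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" \
  create secret generic om-org-credentials \
  --from-literal=publicKey="<public-api-key>" \
  --from-literal=privateKey="<private-api-key>"
```

You would then reference this secret from the project ConfigMap and MongoDB resources that the Operator manages, as described in the credentials guide.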
