
Multi-Cluster Ops Manager Without a Service Mesh

Ops Manager is responsible for facilitating workloads such as backing up data, monitoring database performance, and more. To make your multi-cluster Ops Manager and Application Database deployment resilient to the failure of an entire data center or zone, deploy the Ops Manager Application and the Application Database on multiple Kubernetes clusters.

Before you begin this procedure, perform the following actions:

  • Install kubectl.

  • Complete the GKE Clusters procedure or the equivalent.

  • Complete the TLS Certificates procedure or the equivalent.

  • Complete the ExternalDNS procedure or the equivalent.

  • Complete the Deploy the MongoDB Operator procedure.

  • Set the required environment variables as follows:

# This script builds on top of the environment configured in the setup guides.
# It relies on the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${OM_NAMESPACE}
export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store
# If you use your own S3-compatible storage, set these values accordingly.
# By default, this guide installs MinIO to handle S3 storage, with the following default credentials.
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"
export OPS_MANAGER_VERSION="8.0.5"
export APPDB_VERSION="8.0.5-ent"
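
Before continuing, it can help to confirm that the variables inherited from the setup guides are actually set. A minimal bash sketch (not part of the original scripts):

for var in K8S_CLUSTER_0_CONTEXT_NAME K8S_CLUSTER_1_CONTEXT_NAME \
           K8S_CLUSTER_2_CONTEXT_NAME OM_NAMESPACE; do
  # ${!var} is bash indirect expansion: the value of the variable named by var.
  [ -n "${!var}" ] || echo "Missing required environment variable: ${var}" >&2
done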

You can find all included source code in the MongoDB Kubernetes Operator repository.

1. Generate TLS certificates for the Ops Manager Application and the Application Database.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-cert
spec:
  dnsNames:
    - ${OPS_MANAGER_EXTERNAL_DOMAIN}
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-db-cert
spec:
  dnsNames:
    - "*.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-db-cert
  usages:
    - server auth
    - client auth
EOF
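
Optionally, wait for cert-manager to mark both certificates as Ready before moving on; the secrets referenced in the next step exist only after issuance succeeds:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=condition=Ready certificate/om-cert --timeout=120s
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=condition=Ready certificate/om-db-cert --timeout=120s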
2. Create a Google Cloud SSL certificate resource from the Ops Manager TLS key pair.
mkdir -p certs
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.crt']}" | base64 --decode > certs/tls.crt
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.key']}" | base64 --decode > certs/tls.key
gcloud compute ssl-certificates create om-certificate --certificate=certs/tls.crt --private-key=certs/tls.key
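
Optionally, inspect the extracted certificate to confirm it covers the expected Ops Manager domain before uploading it:

# Print the certificate's Subject Alternative Names.
openssl x509 -in certs/tls.crt -noout -text | grep -A1 "Subject Alternative Name"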
3. Create a global load balancer for Ops Manager.

This load balancer distributes traffic across all Ops Manager replicas in the member Kubernetes clusters.

gcloud compute firewall-rules create fw-ops-manager-hc \
  --action=allow \
  --direction=ingress \
  --target-tags=mongodb \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --rules=tcp:8443
gcloud compute health-checks create https om-healthcheck \
  --use-serving-port \
  --request-path=/monitor/health
gcloud compute backend-services create om-backend-service \
  --protocol HTTPS \
  --health-checks om-healthcheck \
  --global
gcloud compute url-maps create om-url-map \
  --default-service om-backend-service
gcloud compute target-https-proxies create om-lb-proxy \
  --url-map om-url-map \
  --ssl-certificates=om-certificate
gcloud compute forwarding-rules create om-forwarding-rule \
  --global \
  --target-https-proxy=om-lb-proxy \
  --ports=443
NAME               NETWORK  DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
fw-ops-manager-hc  default  INGRESS    1000      tcp:8443        False
NAME            PROTOCOL
om-healthcheck  HTTPS
NAME                BACKENDS  PROTOCOL
om-backend-service            HTTPS
NAME        DEFAULT_SERVICE
om-url-map  backendServices/om-backend-service
NAME         SSL_CERTIFICATES  URL_MAP     REGION  CERTIFICATE_MAP
om-lb-proxy  om-certificate    om-url-map
4. Create a DNS record that points the Ops Manager external domain at the load balancer's IP address.
ip_address=$(gcloud compute forwarding-rules describe om-forwarding-rule --global --format="get(IPAddress)")
gcloud dns record-sets create "${OPS_MANAGER_EXTERNAL_DOMAIN}" --zone="${DNS_ZONE}" --type="A" --ttl="300" --rrdatas="${ip_address}"
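
DNS propagation can take a few minutes. To confirm that the new record resolves to the load balancer's address, you can run (assuming dig is available):

dig +short "${OPS_MANAGER_EXTERNAL_DOMAIN}"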
5. Create a secret containing the credentials for the first Ops Manager admin user.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${OM_NAMESPACE}" create secret generic om-admin-user-credentials \
  --from-literal=Username="admin" \
  --from-literal=Password="Passw0rd@" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe"
6. Deploy the MongoDBOpsManager resource.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: false
EOF
7. Wait for the deployments to start.

Wait for the Operator to begin reconciling both the Application Database and Ops Manager deployments. Reaching the Pending phase indicates that the Operator has picked up the resource and started working on it.

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s
echo "Waiting for Ops Manager to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Pending opsmanager/om --timeout=600s
Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
8. Attach the network endpoint groups (NEGs) created by GKE to the load balancer's backend service.
svcneg0=$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')
gcloud compute backend-services add-backend om-backend-service \
  --global \
  --network-endpoint-group="${svcneg0}" \
  --network-endpoint-group-zone="${K8S_CLUSTER_0_ZONE}" \
  --balancing-mode RATE --max-rate-per-endpoint 5
svcneg1=$(kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')
gcloud compute backend-services add-backend om-backend-service \
  --global \
  --network-endpoint-group="${svcneg1}" \
  --network-endpoint-group-zone="${K8S_CLUSTER_1_ZONE}" \
  --balancing-mode RATE --max-rate-per-endpoint 5
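
With both backends attached, you can optionally verify that the load balancer's health checks pass; endpoints report HEALTHY only once the Ops Manager pods are serving on port 8443:

gcloud compute backend-services get-health om-backend-service --global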
9. Wait for the Application Database and Ops Manager to reach the Running phase.
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Disabled        15m
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         12m
om-db-0-0  3/3    Running  0         3m50s
om-db-0-1  3/3    Running  0         4m38s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         12m
om-1-1     1/1    Running  0         8m46s
om-db-1-0  3/3    Running  0         2m2s
om-db-1-1  3/3    Running  0         2m54s
10. Deploy MinIO to provide S3-compatible storage for backups.
kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -
kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -
# Add the two backup buckets to the tenant config.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
  --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
11. Create secrets containing the S3 credentials and the CA certificate.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-access-secret \
  --from-literal=accessKey="${S3_ACCESS_KEY}" \
  --from-literal=secretKey="${S3_SECRET_KEY}"
# MinIO TLS certificates are signed with the default Kubernetes root CA.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-ca-cert \
  --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
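
To optionally confirm that the CA certificate stored in the secret is readable and not expired, you can decode and inspect it:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret s3-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 --decode | openssl x509 -noout -subject -enddate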
12. Enable backup in the MongoDBOpsManager resource.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
      members: 0
      backup:
        members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: true
    s3Stores:
      - name: my-s3-block-store
        s3SecretRef:
          name: "s3-access-secret"
        pathStyleAccessEnabled: true
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    s3OpLogStores:
      - name: my-s3-oplog-store
        s3SecretRef:
          name: "s3-access-secret"
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
        pathStyleAccessEnabled: true
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
EOF
13. Wait for all components, including Backup, to reach the Running phase.
echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Running         14m
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         11m
om-db-0-0  3/3    Running  0         5m35s
om-db-0-1  3/3    Running  0         6m20s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         11m
om-1-1     1/1    Running  0         8m28s
om-db-1-0  3/3    Running  0         3m52s
om-db-1-1  3/3    Running  0         4m48s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2-lucian
NAME                  READY  STATUS   RESTARTS  AGE
om-2-backup-daemon-0  1/1    Running  0         2m
om-db-2-0             3/3    Running  0         2m55s
14. Configure credentials for the Kubernetes Operator.

To configure credentials, you must create an Ops Manager organization, generate programmatic API keys in the Ops Manager UI (adding your load balancer IP to the key's API access list), and create a secret containing those keys. See Create Credentials for the Kubernetes Operator to learn more.
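As a sketch of that final step: the secret name om-operator-credentials and the placeholder values below are illustrative, and recent Operator versions expect the publicKey and privateKey fields; check the linked page for your version.

# Hypothetical example; replace the placeholders with the keys generated in the Ops Manager UI.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic om-operator-credentials \
  --from-literal=publicKey="<public-api-key>" \
  --from-literal=privateKey="<private-api-key>"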
