On this page
- Multi-Kubernetes-Cluster Limitations
- Multi-Kubernetes-Cluster Deployment Capabilities
- Connect with DNS SRV Records
- Manage Security for Database Users
- Deploy an Ops Manager Resource, deploy the Application Database, and Connect to Ops Manager
- Set up Queryable Backups for Ops Manager Resources
- Deployment Architecture and Diagrams
- Diagram: Multi-Kubernetes Cluster Deployment with a Service Mesh
- Diagram: Multi-Kubernetes Cluster Deployment without a Service Mesh
The following limitations exist for multi-Kubernetes-cluster deployments:
Deploy only replica sets. Sharded cluster deployments aren't supported.
Use Ops Manager versions later than 5.0.7.
Deploy Ops Manager on a single cluster, and if deploying in Kubernetes, deploy Ops Manager on a central cluster. To learn more, see Using Ops Manager with Multi-Kubernetes-Cluster Deployments.
The MongoDB Enterprise Kubernetes Operator doesn't support highly-available deployments of Ops Manager across multiple Kubernetes clusters. If your deployment requires multi-site resilience for Ops Manager, deploy Ops Manager with High Availability outside of Kubernetes, with both the Application Database replica set and Ops Manager spanning all sites that host your multi-Kubernetes-cluster deployment. Then, in a disaster recovery scenario, you can redeploy the multi-Kubernetes-cluster deployment on another Kubernetes cluster on a remaining site and connect it to the Ops Manager instance running on that healthy site outside of Kubernetes.
If you host Ops Manager in the same Kubernetes cluster as the Kubernetes Operator and the cluster fails, you can restore the multi-Kubernetes-cluster deployment to a new Kubernetes cluster. However, restoring Ops Manager into another cluster in this case is a lengthy manual process.
In addition to deploying the Application Database outside of Kubernetes, you can deploy the Application Database on selected member Kubernetes clusters in your multi-Kubernetes-cluster deployment. This mitigates some disaster recovery scenarios, such as regional failures, and increases the Application Database's resilience and availability in Ops Manager. To learn more, see Deploy an Ops Manager Resource, deploy the Application Database, and Connect to Ops Manager.
For deployments where the same Kubernetes Operator instance is not managing both the MongoDBOpsManager and MongoDB custom resources, you must manually configure KMIP backup encryption client settings in Ops Manager. To learn more, see Manually Configure KMIP Backup Encryption.
Don't add a ServiceMonitor to your MongoDBMultiCluster resources. The Kubernetes Operator doesn't support integration with Prometheus.
You can create a new multi-Kubernetes-cluster deployment and contact MongoDB Support to help you migrate data from your existing Kubernetes deployment to a multi-Kubernetes-cluster deployment. You can't extend an existing single-Kubernetes cluster deployment to new Kubernetes clusters.
This section describes the multi-Kubernetes-cluster deployment capabilities that you can configure using the same procedures as the procedures for single clusters deployed with the Kubernetes Operator. Other multi-Kubernetes-cluster deployment capabilities have their own documentation in this guide.
To connect to the multi-Kubernetes-cluster deployment database as a user, you can use the connectionString.standardSrv DNS seed list connection string. This string is included in the secret that the Kubernetes Operator creates for your multi-Kubernetes-cluster deployment.
Use the same procedure for connecting to the multi-Kubernetes-cluster deployment as for single clusters deployed with the Kubernetes Operator. See Connect to a MongoDB Database Resource from Inside Kubernetes and select the Using the Kubernetes Secret tab.
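As a sketch of how that secret can be consumed, the following assumes the Kubernetes convention that Secret data is base64-encoded; the secret contents and URI shown are hypothetical, not values the Kubernetes Operator is guaranteed to produce:

```python
import base64

# Hypothetical contents of the connection-string secret created by the
# Kubernetes Operator. Kubernetes stores Secret data base64-encoded.
secret_data = {
    "connectionString.standardSrv": base64.b64encode(
        b"mongodb+srv://multi-replica-set-svc.mongodb.svc.cluster.local"
    ).decode(),
}

# Decode the DNS seed list connection string before passing it to a driver.
uri = base64.b64decode(secret_data["connectionString.standardSrv"]).decode()
print(uri)
```

Note that `kubectl get secret <name> -o jsonpath` performs the same extraction from the command line.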
Use these methods to manage security for database users:
These procedures are the same as the procedures for single clusters deployed with the Kubernetes Operator, with the following exceptions:
The procedures apply to replica sets only. Multi-Kubernetes-cluster deployments don't support creating sharded clusters.
In mongodbResourceRef, specify the name of the multi-Kubernetes-cluster deployment replica set.
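A minimal sketch of a database user resource under these assumptions (the user name, namespace, password secret, and the MongoDBMultiCluster resource name multi-replica-set are all hypothetical):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: my-scram-user            # hypothetical user resource name
  namespace: mongodb
spec:
  username: my-scram-user
  db: admin
  mongodbResourceRef:
    name: multi-replica-set      # name of the MongoDBMultiCluster replica set
  passwordSecretKeyRef:
    name: my-scram-user-password # hypothetical secret holding the password
    key: password
  roles:
    - db: admin
      name: readWriteAnyDatabase
```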
To deploy an Ops Manager instance in the central cluster and connect to it, use the following procedures:
These procedures are the same as the procedures for single clusters deployed with the Kubernetes Operator with the following exceptions:
Set the context and the namespace.
If you are deploying an Ops Manager resource in a multi-Kubernetes-cluster deployment:
Set context to the name of the central cluster. For example, run:
kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME"
Set --namespace to the same scope that you used for your multi-Kubernetes-cluster deployment. For example, run:
kubectl config set-context --current --namespace "mongodb"
Configure external connectivity for Ops Manager.
To connect member clusters to the Ops Manager resource's deployment in the central cluster in a multi-Kubernetes-cluster deployment, use one of the following methods:
Enable the Ops Manager resource's external connectivity setting (spec.externalConnectivity) and specify the Ops Manager port in it. Use the ops-manager-external.yaml example script, modify it to your needs, and apply the configuration. For example, run:
kubectl apply \
  --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
  --namespace "mongodb" \
  -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
Add the central cluster and all member clusters to the same service mesh. The service mesh establishes communication from the central and all member clusters to the Ops Manager instance. To learn more, see the Multi-Kubernetes-Cluster Quick Start procedures, specifically the step that references the istio-injection=enabled label for Istio. Also, see Automatic sidecar injection in the Istio documentation.
Deploy Ops Manager and the Application Database on the central cluster.
You can choose to deploy Ops Manager and the Application Database only on the central cluster, using the same procedure as for single Kubernetes clusters. To learn more, see Deploy an Ops Manager instance on the central cluster with TLS encryption.
Deploy Ops Manager on the central cluster and the Application Database on selected member clusters.
You can choose to deploy Ops Manager on the central cluster and the Application Database on a subset of selected member clusters, to increase the Application Database's resilience and availability in Ops Manager. Configure the following settings in the Ops Manager CRD:
Set topology to specify the multi-cluster topology for the Application Database.
Set clusterSpecList and include in it the clusterName of each selected Kubernetes member cluster on which you want to deploy the Application Database, and the number of members (MongoDB nodes) in each Kubernetes member cluster.
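A partial MongoDBOpsManager resource sketching these settings; the resource name, versions, cluster names, and member counts below are hypothetical:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager              # hypothetical Ops Manager resource name
  namespace: mongodb
spec:
  replicas: 1
  version: "6.0.5"               # hypothetical Ops Manager version
  applicationDatabase:
    version: "5.0.7-ent"         # hypothetical Application Database version
    topology: MultiCluster       # deploy the Application Database across member clusters
    clusterSpecList:
      - clusterName: member-cluster-1  # hypothetical member cluster names
        members: 2                     # MongoDB nodes in this cluster
      - clusterName: member-cluster-2
        members: 1
```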
If you deploy the Application Database on selected member clusters in your multi-Kubernetes-cluster deployment, you must include the central cluster and member clusters in the same service mesh configuration. This enables bi-directional communication from Ops Manager to the Application Database.
If you deploy Ops Manager with the Kubernetes Operator, the central cluster may also host Ops Manager. In this case, you can configure queryable backups for Ops Manager resources.
You can create multi-Kubernetes-cluster deployments with or without relying on a service mesh. To learn more, see Plan for External Connectivity: Should You Use a Service Mesh?
In both the following diagrams, the MongoDB Enterprise Kubernetes Operator performs these actions:
Watches for the MongoDBMultiCluster resource spec creation in the central cluster.
Uses the mounted kubeconfig file to communicate with member clusters.
Creates the necessary resources, such as ConfigMaps, Secrets, Services, and StatefulSet Kubernetes objects, in each member cluster, corresponding to the number of replica set members in the MongoDB cluster.
Identifies the cluster for deploying each MongoDB replica set using the corresponding MongoDBMultiCluster resource spec, and deploys the MongoDB replica sets.
Watches for changes to the resource spec and reconciles the resources it created to confirm that the multi-Kubernetes-cluster deployment is in the desired state.
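These actions are driven by a MongoDBMultiCluster resource spec such as the following sketch; the names, version, project ConfigMap, and member counts are hypothetical:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBMultiCluster
metadata:
  name: multi-replica-set        # hypothetical resource name
  namespace: mongodb
spec:
  version: "6.0.5-ent"           # hypothetical MongoDB version
  type: ReplicaSet               # only replica sets are supported
  credentials: my-credentials    # hypothetical Ops Manager API key secret
  opsManager:
    configMapRef:
      name: multi-project        # hypothetical project ConfigMap
  clusterSpecList:
    - clusterName: member-cluster-1  # hypothetical member cluster names
      members: 3                     # replica set members in this cluster
    - clusterName: member-cluster-2
      members: 2
```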
A multi-Kubernetes-cluster deployment that uses the MongoDB Enterprise Kubernetes Operator consists of one central cluster and one or more member clusters in Kubernetes:
The central cluster has the following role:
Hosts the MongoDB Enterprise Kubernetes Operator
Acts as the control plane for the multi-Kubernetes-cluster deployment
Hosts the MongoDBMultiCluster resource spec for the MongoDB replica set
Hosts Ops Manager, if you deploy Ops Manager with the Kubernetes Operator
Can also host members of the MongoDB replica set
Member clusters host the MongoDB replica sets.
Note that if the central cluster fails, you can't use the Kubernetes Operator to change your deployment until you restore access to this cluster or until you redeploy the Kubernetes Operator to another available Kubernetes cluster. To learn more, see Disaster Recovery.
The following diagram shows the high-level architecture of a multi-Kubernetes-cluster deployment across regions and availability zones. This deployment uses a service mesh, such as Istio. The service mesh:
Manages the discovery of MongoDB nodes deployed in different Kubernetes member clusters.
Handles communication between replica set members.
You can host your application anywhere inside the service mesh, such as:
On Kubernetes clusters outside of the ones that you deploy with the Kubernetes Operator, or
On the member clusters in a multi-Kubernetes-cluster deployment.
The following diagram shows the high-level architecture of a multi-Kubernetes-cluster deployment across regions and availability zones. This deployment doesn't rely on a service mesh for connectivity between the Kubernetes clusters hosting Pods with MongoDB instances.
To handle external communication between MongoDB replica set members hosted on Pods in distinct Kubernetes clusters, use external domains and DNS zones.
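One hedged way to express this in the MongoDBMultiCluster spec is through per-cluster external access settings; the field layout follows the clusterSpecList convention shown elsewhere in this guide, and the domains below are hypothetical DNS zones that your per-Pod hostnames must resolve through:

```yaml
spec:
  clusterSpecList:
    - clusterName: member-cluster-1      # hypothetical member cluster names
      members: 2
      externalAccess:
        externalDomain: cluster-1.example.com  # hypothetical DNS zone
    - clusterName: member-cluster-2
      members: 1
      externalAccess:
        externalDomain: cluster-2.example.com
```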
You can host your application on any of the member clusters.