
Prerequisites

On this page

  • Review Supported Hardware Architectures
  • Clone the MongoDB Enterprise Kubernetes Operator Repository
  • Set Environment Variables and GKE Zones
  • Set up GKE Clusters
  • Obtain User Authentication Credentials for Central and Member Clusters
  • Install Go and Helm
  • Understand Kubernetes Roles and Role Bindings
  • Set the Deployment's Scope
  • Plan for External Connectivity: Should You Use a Service Mesh?
  • Check Connectivity Across Clusters
  • Review the Requirements for Deploying Ops Manager
  • Prepare for TLS-Encrypted Connections
  • Choose GitOps or the kubectl MongoDB Plugin
  • Install the kubectl MongoDB Plugin
  • Configure Resources for GitOps

Before you create a multi-Kubernetes-cluster deployment using either the quick start or a deployment procedure, complete the following tasks:

See supported hardware architectures.

Clone the MongoDB Enterprise Kubernetes Operator repository:

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git

Set the environment variables with cluster names and the available GKE zones where you deploy the clusters, as in this example:

export MDB_GKE_PROJECT={GKE project name}
export MDB_CENTRAL_CLUSTER="mdb-central"
export MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"
export MDB_CLUSTER_1="mdb-1"
export MDB_CLUSTER_1_ZONE="us-west1-b"
export MDB_CLUSTER_2="mdb-2"
export MDB_CLUSTER_2_ZONE="us-east1-b"
export MDB_CLUSTER_3="mdb-3"
export MDB_CLUSTER_3_ZONE="us-central1-a"
export MDB_CENTRAL_CLUSTER_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CENTRAL_CLUSTER_ZONE}_${MDB_CENTRAL_CLUSTER}"
export MDB_CLUSTER_1_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_1_ZONE}_${MDB_CLUSTER_1}"
export MDB_CLUSTER_2_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_2_ZONE}_${MDB_CLUSTER_2}"
export MDB_CLUSTER_3_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_3_ZONE}_${MDB_CLUSTER_3}"
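The `*_FULL_NAME` variables compose the kubectl context names that GKE generates, following the `gke_<project>_<zone>_<cluster>` pattern. You can print them to confirm they look correct before proceeding:

```shell
# Print the derived kubectl context names; each should follow the
# gke_<project>_<zone>_<cluster> pattern used by GKE-generated contexts
echo "${MDB_CENTRAL_CLUSTER_FULL_NAME}"
echo "${MDB_CLUSTER_1_FULL_NAME}"
echo "${MDB_CLUSTER_2_FULL_NAME}"
echo "${MDB_CLUSTER_3_FULL_NAME}"
```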

Set up GKE (Google Kubernetes Engine) clusters:

1

If you have not done so already, create a Google Cloud project, enable billing on the project, enable the Artifact Registry and GKE APIs, and launch Cloud Shell by following the relevant procedures in the Google Kubernetes Engine Quickstart in the Google Cloud documentation.

2

Create one central cluster and one or more member clusters, specifying the GKE zones, the number of nodes, and the instance types, as in these examples:

gcloud container clusters create $MDB_CENTRAL_CLUSTER \
--zone=$MDB_CENTRAL_CLUSTER_ZONE \
--num-nodes=5 \
--machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_1 \
--zone=$MDB_CLUSTER_1_ZONE \
--num-nodes=5 \
--machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_2 \
--zone=$MDB_CLUSTER_2_ZONE \
--num-nodes=5 \
--machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_3 \
--zone=$MDB_CLUSTER_3_ZONE \
--num-nodes=5 \
--machine-type "e2-standard-2"

Obtain user authentication credentials for the central and member Kubernetes clusters and save the credentials. You will later use these credentials for running kubectl commands on these clusters.

Run the following commands:

gcloud container clusters get-credentials $MDB_CENTRAL_CLUSTER \
--zone=$MDB_CENTRAL_CLUSTER_ZONE
gcloud container clusters get-credentials $MDB_CLUSTER_1 \
--zone=$MDB_CLUSTER_1_ZONE
gcloud container clusters get-credentials $MDB_CLUSTER_2 \
--zone=$MDB_CLUSTER_2_ZONE
gcloud container clusters get-credentials $MDB_CLUSTER_3 \
--zone=$MDB_CLUSTER_3_ZONE
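To confirm that the credentials were saved, you can list the contexts now present in your kubeconfig; the names should match the `*_FULL_NAME` variables you exported earlier:

```shell
# List the kubectl contexts created by get-credentials; expect entries
# matching $MDB_CENTRAL_CLUSTER_FULL_NAME and the member cluster names
kubectl config get-contexts -o name
```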

Install the following tools:

  1. Install Go v1.17 or later.

  2. Install Helm.
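You can confirm the installed versions from your shell. Go 1.17 is a minimum, so any later release also works:

```shell
go version           # expect go1.17 or later
helm version --short # prints the installed Helm version
```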

To use a multi-Kubernetes-cluster deployment, you must have a specific set of Kubernetes Roles, ClusterRoles, RoleBindings, ClusterRoleBindings, and ServiceAccounts, which you can configure in any of the following ways:

  • Follow the Multi-Kubernetes-Cluster Quick Start, which tells you how to use the kubectl mongodb plugin to automatically create the required objects and apply them to the appropriate clusters within your multi-Kubernetes-cluster deployment.

  • Use Helm to configure the required Kubernetes Roles and service accounts for each member cluster:

    helm template --show-only \
    templates/database-roles.yaml \
    mongodb/enterprise-operator \
    --set namespace=mongodb | \
    kubectl apply -f - \
    --context=$MDB_CLUSTER_1_FULL_NAME \
    --namespace mongodb
    helm template --show-only \
    templates/database-roles.yaml \
    mongodb/enterprise-operator \
    --set namespace=mongodb | \
    kubectl apply -f - \
    --context=$MDB_CLUSTER_2_FULL_NAME \
    --namespace mongodb
    helm template --show-only \
    templates/database-roles.yaml \
    mongodb/enterprise-operator \
    --set namespace=mongodb | \
    kubectl apply -f - \
    --context=$MDB_CLUSTER_3_FULL_NAME \
    --namespace mongodb
  • Manually create Kubernetes object .yaml files and add the required Kubernetes Roles and service accounts to your multi-Kubernetes-cluster deployment with the kubectl apply command. This may be necessary for certain highly automated workflows. MongoDB provides sample configuration files.

    For custom resources scoped to a subset of namespaces:

    For custom resources scoped cluster-wide:

    Each file defines multiple resources. To support your deployment, you must replace the placeholder values in the following fields:

    • subjects.namespace in each RoleBinding or ClusterRoleBinding resource

    • metadata.namespace in each ServiceAccount resource

    After modifying the definitions, apply them by running the following command for each file:

    kubectl apply -f <fileName>

By default, the multi-cluster Kubernetes Operator is scoped to the namespace in which you install it. The Kubernetes Operator reconciles the MongoDBMultiCluster resource deployed in the same namespace as the Kubernetes Operator.

When you run the kubectl mongodb plugin as part of the multi-cluster quick start and don't modify the plugin's default settings, the plugin:

  • Creates a default ConfigMap named mongodb-enterprise-operator-member-list that contains all the member clusters of the multi-Kubernetes-cluster deployment. This name is hard-coded and you can't change it. See Known Issues.

  • Creates ServiceAccounts, Roles, ClusterRoles, RoleBindings and ClusterRoleBindings in the central cluster and each member cluster.

  • Applies the correct permissions for service accounts.

  • Uses the preceding settings to create your multi-Kubernetes-cluster deployment.

Once the Kubernetes Operator creates the multi-Kubernetes-cluster deployment, the Kubernetes Operator starts watching MongoDB resources in the mongodb namespace.

To configure the Kubernetes Operator with the correct permissions to deploy in a subset or all namespaces, run the following command and specify the namespaces that you would like the Kubernetes Operator to watch.

kubectl mongodb multicluster setup \
--central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
--member-clusters="${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
--member-cluster-namespace="mongodb2" \
--central-cluster-namespace="mongodb2" \
--cluster-scoped="true"

When you install the multi-Kubernetes-cluster deployment to multiple or all namespaces, you can configure the Kubernetes Operator as follows:

  • If you set the scope for the multi-Kubernetes-cluster deployment to many namespaces, you can configure the Kubernetes Operator to watch MongoDB resources in these namespaces in the multi-Kubernetes-cluster deployment.

  • If you set the scope for the multi-Kubernetes-cluster deployment to all namespaces instead of the default mongodb namespace, you can configure the Kubernetes Operator to watch MongoDB resources in all namespaces in the multi-Kubernetes-cluster deployment.

Note

Install and set up a single Kubernetes Operator instance and configure it to watch one, many, or all custom resources in different, non-overlapping subsets of namespaces. See also Does MongoDB support running more than one Kubernetes Operator instance?

A service mesh enables inter-cluster communication between the replica set members deployed in different Kubernetes clusters. Using a service mesh greatly simplifies creating multi-Kubernetes-cluster deployments and is the recommended way of deploying MongoDB across multiple Kubernetes clusters. However, if your IT organization doesn't use a service mesh, you can deploy a replica set in a multi-Kubernetes-cluster deployment without it.

Depending on your environment, complete the steps in the relevant section below.

Regardless of the deployment type, a MongoDB deployment in Kubernetes must establish the following connections:

  • From the Ops Manager MongoDB Agent in the Pod to its mongod process, to enable MongoDB deployment's lifecycle management and monitoring.

  • From the Ops Manager MongoDB Agent in the Pod to the Ops Manager instance, to enable automation.

  • Between all mongod processes, to allow replication.

When the Kubernetes Operator deploys the MongoDB resources, it treats these connectivity requirements in the following ways, depending on the type of deployment:

  • In a single Kubernetes cluster deployment, the Kubernetes Operator configures the replica set's hostnames as FQDNs of a Headless Service. This single service resolves each Pod's FQDN directly to the IP address of the Pod hosting a MongoDB instance, as follows: <pod-name>.<replica-set-name>-svc.<namespace>.svc.cluster.local.

  • In a multi-Kubernetes-cluster deployment that uses a service mesh, the Kubernetes Operator creates a separate StatefulSet for each MongoDB replica set member in the Kubernetes cluster. A service mesh allows communication between mongod processes across distinct Kubernetes clusters.

    Using a service mesh allows the multi-Kubernetes-cluster deployment to:

    • Achieve global DNS hostname resolution across Kubernetes clusters and establish connectivity between them. For each MongoDB deployment Pod in each Kubernetes cluster, the Kubernetes Operator creates a ClusterIP service through the spec.duplicateServiceObjects: true configuration in the MongoDBMultiCluster resource. Each process has a hostname set to this service's FQDN: <pod-name>-svc.<namespace>.svc.cluster.local. These hostnames resolve from DNS to the service's ClusterIP in each member cluster.

    • Establish communication between Pods in different Kubernetes clusters. As a result, replica set members hosted on different clusters form a single replica set across these clusters.

  • In a multi-Kubernetes-cluster deployment without a service mesh, the Kubernetes Operator uses the following MongoDBMultiCluster resource settings to expose all its mongod processes externally. This enables DNS resolution of hostnames between distinct Kubernetes clusters, and establishes connectivity between Pods routed through the networks that connect these clusters.

Install Istio in a multi-primary mode on different networks, using the Istio documentation. Istio is a service mesh that simplifies DNS resolution and helps establish inter-cluster communication between the member Kubernetes clusters in a multi-Kubernetes-cluster deployment. If you choose to use a service mesh, you must install it. If you can't use a service mesh, skip this section; instead, use external domains and configure DNS to enable external connectivity.

In addition, we offer the install_istio_separate_network example script. This script is based on Istio documentation and provides an example installation that uses the multi-primary mode on different networks. We don't guarantee the script's maintenance with future Istio releases. If you choose to use the script, review the latest Istio documentation for installing a multicluster, and, if necessary, adjust the script to match the documentation and your deployment. If you use another service mesh solution, create your own script for configuring separate networks to facilitate DNS resolution.

If you don't use a service mesh, do the following to enable external connectivity to and between mongod processes and the Ops Manager MongoDB Agent:

  • When you create a multi-Kubernetes-cluster deployment, use the spec.clusterSpecList.externalAccess.externalDomain setting to specify an external domain and instruct the Kubernetes Operator to configure hostnames for mongod processes in the following pattern:

    <pod-name>.<externalDomain>

    Note

    You can specify external domains only for new deployments. You can't change external domains after you configure a multi-Kubernetes-cluster deployment.

    After you configure an external domain in this way, the Ops Manager MongoDB Agents and mongod processes use this domain to connect to each other.

  • Customize external services that the Kubernetes Operator creates for each Pod in the Kubernetes cluster. Use the global configuration in the spec.externalAccess settings and Kubernetes cluster-specific overrides in the spec.clusterSpecList.externalAccess.externalService settings.

  • Configure Pod hostnames in a DNS zone to ensure that each Kubernetes Pod hosting a mongod process allows establishing an external connection to the other mongod processes in a multi-Kubernetes-cluster deployment. A Pod is considered "exposed externally" when you can connect to a mongod process by using the <pod-name>.<externalDomain> hostname on ports 27017 (the default database port) and 27018 (the database port + 1). You may also need to configure firewall rules to allow TCP traffic on ports 27017 and 27018.
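For example, on GKE you might open these ports with a firewall rule similar to the following sketch. The rule name and source ranges are placeholders that you must adapt to your network:

```shell
# Hypothetical GKE firewall rule allowing inter-cluster MongoDB traffic
gcloud compute firewall-rules create allow-mongodb-multi-cluster \
  --allow=tcp:27017,tcp:27018 \
  --source-ranges=<your-cluster-cidr-ranges>
```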

After you complete these prerequisites, you can deploy a multi-Kubernetes cluster without a service mesh.

Follow the steps in this procedure to verify that service FQDNs are reachable across Kubernetes clusters.

In this example, you deploy a sample application defined in sample-service.yaml across two Kubernetes clusters.

1

Create a namespace in each of the Kubernetes clusters to deploy the sample-service.yaml.

kubectl create --context="${CTX_CLUSTER_1}" namespace sample
kubectl create --context="${CTX_CLUSTER_2}" namespace sample

Note

In certain service mesh solutions, you might need to annotate or label the namespace.
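For example, if you use Istio with automatic sidecar injection, you might label the namespaces as follows. This labeling step is an Istio convention, not part of the MongoDB procedure:

```shell
# Istio example: enable automatic sidecar injection in the sample namespace
kubectl label --context="${CTX_CLUSTER_1}" namespace sample istio-injection=enabled
kubectl label --context="${CTX_CLUSTER_2}" namespace sample istio-injection=enabled
```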

2

Create the sample services in both Kubernetes clusters:
kubectl apply --context="${CTX_CLUSTER_1}" \
-f sample-service.yaml \
-l service=helloworld1 \
-n sample
kubectl apply --context="${CTX_CLUSTER_2}" \
-f sample-service.yaml \
-l service=helloworld2 \
-n sample
3

Deploy the v1 version of the sample application in CLUSTER_1:
kubectl apply --context="${CTX_CLUSTER_1}" \
-f sample-service.yaml \
-l version=v1 \
-n sample
4

Check that the Pod hosting the sample application in CLUSTER_1 is in the Running state.

kubectl get pod --context="${CTX_CLUSTER_1}" \
-n sample \
-l app=helloworld
5

Deploy the v2 version of the sample application in CLUSTER_2:
kubectl apply --context="${CTX_CLUSTER_2}" \
-f sample-service.yaml \
-l version=v2 \
-n sample
6

Check that the Pod hosting the sample application in CLUSTER_2 is in the Running state.

kubectl get pod --context="${CTX_CLUSTER_2}" \
-n sample \
-l app=helloworld
7

Deploy the Pod in CLUSTER_1 and check that you can reach the sample application in CLUSTER_2.

kubectl run --context="${CTX_CLUSTER_1}" \
-n sample \
curl --image=radial/busyboxplus:curl \
-i --tty -- \
curl -sS helloworld2.sample:5000/hello

You should see output similar to this example:

Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
8

Deploy the Pod in CLUSTER_2 and check that you can reach the sample application in CLUSTER_1.

kubectl run --context="${CTX_CLUSTER_2}" \
-n sample \
curl --image=radial/busyboxplus:curl \
-i --tty -- \
curl -sS helloworld1.sample:5000/hello

You should see output similar to this example:

Hello version: v1, instance: helloworld-v1-758dd55874-6x4t8

As part of the Quick Start, you deploy an Ops Manager resource on the central cluster. To learn more, see Deploy an Ops Manager Resource, Deploy the Application Database, and Connect to Ops Manager.

If you plan to secure your multi-Kubernetes-cluster deployment using TLS encryption, complete the following tasks to enable internal cluster authentication and generate TLS certificates for member clusters and the MongoDB Agent:

Note

You must possess the CA certificate and the key that you used to sign your TLS certificates.

Important

The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn't support concatenated PEM files stored as Opaque secrets.
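For example, you can create a kubernetes.io/tls secret from a separate certificate and key pair with kubectl. The secret name and file paths here are placeholders:

```shell
# Create a kubernetes.io/tls secret (not a concatenated-PEM Opaque secret)
kubectl create secret tls <tls-secret-name> \
  --cert=<path-to-certificate-file> \
  --key=<path-to-private-key-file> \
  -n mongodb
```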

You can choose to create and maintain the resource files needed to deploy MongoDBMultiCluster resources in a GitOps environment.

If you use a GitOps workflow, you can't use the kubectl mongodb plugin, which automatically configures role-based access control (RBAC) and creates the kubeconfig file that allows the central cluster to communicate with its member clusters. Instead, you must manually configure or build your own automation for configuring the RBAC and kubeconfig files based on the procedure and examples in Configure Resources for GitOps.

The following prerequisite sections describe how to install the kubectl MongoDB plugin if you don't use GitOps or configure resources for GitOps if you do.

Use the kubectl mongodb plugin to:

Note

If you use GitOps, you can't use the kubectl mongodb plugin. Instead, follow the procedure in Configure Resources for GitOps.

To install the kubectl mongodb plugin:

1

Download your desired Kubernetes Operator package version from the Release Page of the MongoDB Enterprise Kubernetes Operator Repository.

The package's name uses this pattern: kubectl-mongodb-multicluster_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.

Use one of the following packages:

  • kubectl-mongodb-multicluster_{{ .Version }}_darwin_amd64.tar.gz

  • kubectl-mongodb-multicluster_{{ .Version }}_darwin_arm64.tar.gz

  • kubectl-mongodb-multicluster_{{ .Version }}_linux_amd64.tar.gz

  • kubectl-mongodb-multicluster_{{ .Version }}_linux_arm64.tar.gz

2

Unpack the package, as in the following example:

tar -zxvf kubectl-mongodb-multicluster_<version>_darwin_amd64.tar.gz
3

Find the kubectl-mongodb binary in the unpacked directory and move it to its desired destination, inside the PATH for the Kubernetes Operator user, as shown in the following example:

mv kubectl-mongodb /usr/local/bin/kubectl-mongodb
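You can verify that kubectl discovers the plugin by listing installed plugins:

```shell
# kubectl discovers any executable named kubectl-* on the PATH as a plugin
kubectl plugin list
# The output should include the path to kubectl-mongodb
```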

Now you can run the kubectl mongodb plugin using the following commands:

kubectl mongodb multicluster setup
kubectl mongodb multicluster recover

To learn more about the supported flags, see the MongoDB kubectl plugin Reference.

If you use a GitOps workflow, you won't be able to use the kubectl mongodb plugin to automatically configure role-based access control (RBAC) or the kubeconfig file that allows the central cluster to communicate with its member clusters. Instead, you must manually configure and apply the following resource files or build your own automation based on the information below.

Note

To learn how the kubectl mongodb plugin automates the following steps, view the code in GitHub.

To configure RBAC and the kubeconfig for GitOps:

1

Use these RBAC resource examples to create your own. To learn more about these RBAC resources, see Understand Kubernetes Roles and Role Bindings.

To apply them to your central and member clusters with GitOps, you can use a tool like Argo CD.

2

The Kubernetes Operator keeps track of its member clusters in a ConfigMap. Copy, modify, and apply the following example ConfigMap:

apiVersion: v1
kind: ConfigMap
data:
  cluster1: ""
  cluster2: ""
metadata:
  namespace: <namespace>
  name: mongodb-enterprise-operator-member-list
  labels:
    multi-cluster: "true"
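Then apply the ConfigMap to the Kubernetes Operator's namespace in the central cluster. The filename here is a hypothetical name for the manifest above:

```shell
# Apply the member-list ConfigMap to the central cluster
kubectl apply -f member-list.yaml \
  --context="${CTX_CENTRAL_CLUSTER}" \
  -n <namespace>
```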
3

The Kubernetes Operator, which runs in the central cluster, communicates with the Pods in the member clusters through the Kubernetes API. For this to work, the Kubernetes Operator needs a kubeconfig file that contains the service account tokens of the member clusters. Create this kubeconfig file by following these steps:

  1. Obtain a list of service accounts configured in the Kubernetes Operator's namespace. For example, if you chose to use the default mongodb namespace, then you can obtain the service accounts using the following command:

    kubectl get serviceaccounts -n mongodb
  2. Get the secret for each service account that belongs to a member cluster.

    kubectl get secret <service-account-name> -n mongodb -o yaml
  3. In each service account secret, copy the CA certificate and token. For example, copy <ca_certificate> and <token> from the secret, as shown in the following example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-service-account
      namespace: mongodb
    data:
      ca.crt: <ca_certificate>
      token: <token>
  4. Copy the following kubeconfig example for the central cluster and replace the placeholders with the <ca_certificate> and <token> you copied from the service account secrets.

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <cluster-1-ca.crt>
        server: https://:
      name: kind-e2e-cluster-1
    - cluster:
        certificate-authority-data: <cluster-2-ca.crt>
        server: https://:
      name: kind-e2e-cluster-2
    contexts:
    - context:
        cluster: kind-e2e-cluster-1
        namespace: mongodb
        user: kind-e2e-cluster-1
      name: kind-e2e-cluster-1
    - context:
        cluster: kind-e2e-cluster-2
        namespace: mongodb
        user: kind-e2e-cluster-2
      name: kind-e2e-cluster-2
    kind: Config
    users:
    - name: kind-e2e-cluster-1
      user:
        token: <cluster-1-token>
    - name: kind-e2e-cluster-2
      user:
        token: <cluster-2-token>
  5. Save the kubeconfig file.

  6. Create a secret in the central cluster that you mount in the Kubernetes Operator as illustrated in the reference Helm chart. For example:

    kubectl --context="${CTX_CENTRAL_CLUSTER}" -n <operator-namespace> create secret generic <kubeconfig-secret-name> --from-file=kubeconfig=<path-to-kubeconfig-file>
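Note that the data values in a service account secret are base64-encoded. The certificate-authority-data field in a kubeconfig also expects base64, so you can copy ca.crt as-is, but the token in the users section must be decoded first, for example:

```shell
# Decode the base64-encoded token from a member cluster's
# service account secret before pasting it into the kubeconfig
kubectl get secret <service-account-name> -n mongodb \
  -o jsonpath='{.data.token}' | base64 --decode
```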