
Prerequisites

This quick start tutorial requires that you complete the following tasks:

Review Supported Hardware Architectures

See supported hardware architectures.

Set Environment Variables and GKE zones

Set the environment variables with cluster names and the available GKE zones where you deploy the clusters, as in this example:

export MDB_GKE_PROJECT={GKE project name}

export MDB_CENTRAL_CLUSTER="mdb-central"
export MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"

export MDB_CLUSTER_1="mdb-1"
export MDB_CLUSTER_1_ZONE="us-west1-b"

export MDB_CLUSTER_2="mdb-2"
export MDB_CLUSTER_2_ZONE="us-east1-b"

export MDB_CLUSTER_3="mdb-3"
export MDB_CLUSTER_3_ZONE="us-central1-a"

export MDB_CENTRAL_CLUSTER_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CENTRAL_CLUSTER_ZONE}_${MDB_CENTRAL_CLUSTER}"

export MDB_CLUSTER_1_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_1_ZONE}_${MDB_CLUSTER_1}"
export MDB_CLUSTER_2_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_2_ZONE}_${MDB_CLUSTER_2}"
export MDB_CLUSTER_3_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_3_ZONE}_${MDB_CLUSTER_3}"
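
Optionally, echo the derived kubectl context names to confirm that the variables expanded as expected (a convenience check, not a required step):

echo "$MDB_CENTRAL_CLUSTER_FULL_NAME"
echo "$MDB_CLUSTER_1_FULL_NAME" "$MDB_CLUSTER_2_FULL_NAME" "$MDB_CLUSTER_3_FULL_NAME"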

Set up GKE clusters

Set up GKE (Google Kubernetes Engine) clusters:

  1. Set up your Google Cloud account and the gcloud tool, using the Google Kubernetes Engine Quickstart.

  2. Create one central cluster and one or more member clusters, specifying the GKE zones, the number of nodes, and the instance types, as in these examples:

    gcloud container clusters create $MDB_CENTRAL_CLUSTER \
      --zone=$MDB_CENTRAL_CLUSTER_ZONE \
      --num-nodes=5 \
      --machine-type=e2-standard-2

    gcloud container clusters create $MDB_CLUSTER_1 \
      --zone=$MDB_CLUSTER_1_ZONE \
      --num-nodes=5 \
      --machine-type=e2-standard-2

    gcloud container clusters create $MDB_CLUSTER_2 \
      --zone=$MDB_CLUSTER_2_ZONE \
      --num-nodes=5 \
      --machine-type=e2-standard-2

    gcloud container clusters create $MDB_CLUSTER_3 \
      --zone=$MDB_CLUSTER_3_ZONE \
      --num-nodes=5 \
      --machine-type=e2-standard-2
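
Optionally, you can confirm that all four clusters were provisioned and reached the RUNNING status before continuing:

gcloud container clusters list \
  --filter="name:($MDB_CENTRAL_CLUSTER $MDB_CLUSTER_1 $MDB_CLUSTER_2 $MDB_CLUSTER_3)"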
    

Obtain User Authentication Credentials for Central and Member clusters

Obtain user authentication credentials for the central and member Kubernetes clusters and save them. You use these credentials later to run kubectl commands against these clusters.

Run the following commands:

gcloud container clusters get-credentials $MDB_CENTRAL_CLUSTER \
  --zone=$MDB_CENTRAL_CLUSTER_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_1 \
  --zone=$MDB_CLUSTER_1_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_2 \
  --zone=$MDB_CLUSTER_2_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_3 \
  --zone=$MDB_CLUSTER_3_ZONE
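
These commands store the credentials as kubectl contexts in your kubeconfig file. As an optional check, list the contexts that gcloud created for this project:

kubectl config get-contexts -o name | grep "gke_${MDB_GKE_PROJECT}"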

Install Tools

Install the following tools:

  1. Install Istio in multi-primary mode on different networks, using the install_istio_separate_network script.

    To learn more, see the Install Multicluster Istio documentation.

  2. Install Go v1.17 or later.

  3. Install Helm.
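
As a quick check that the tools are installed and on your PATH, you can print their versions (exact output varies by environment):

go version
helm version
istioctl version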

Set the Deployment’s Scope

By default, the multi-cluster Kubernetes Operator is scoped to the namespace in which it is installed. The Kubernetes Operator reconciles the MongoDBMulti custom resource deployed in the same namespace as the Kubernetes Operator.

When you run the multi-cluster CLI as part of the multi-cluster quick start procedure and don’t modify its default settings, the multi-cluster CLI:

  • Creates a single mongodb namespace in the central cluster and each member cluster.
  • Creates Service Accounts, Roles, and RoleBindings in the central cluster and each member cluster.
  • Applies the correct permissions for service accounts.
  • Uses these settings to create your multi-Kubernetes-cluster deployment.

Once the multi-Kubernetes-cluster deployment is created, the Kubernetes Operator starts watching MongoDB Kubernetes resources in the mongodb namespace.

To configure the Kubernetes Operator with the correct permissions to deploy in multiple or all namespaces, run the following command and specify the namespaces that you would like the Kubernetes Operator to watch.

cd tools/multicluster
go run main.go setup \
  -central-cluster="e2e.operator.mongokubernetes.com" \
  -member-clusters="e2e.cluster1.mongokubernetes.com,e2e.cluster2.mongokubernetes.com,e2e.cluster3.mongokubernetes.com" \
  -member-cluster-namespace="mongodb2" \
  -central-cluster-namespace="mongodb2" \
  -cluster-scoped="true"
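
After the setup command completes, you can optionally confirm that the multi-cluster CLI created the expected RBAC resources. This check reuses the example central-cluster context and namespace from the command above:

kubectl --context="e2e.operator.mongokubernetes.com" \
  -n mongodb2 \
  get serviceaccounts,roles,rolebindings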

When you install the multi-Kubernetes-cluster deployment to multiple or all namespaces, you can configure the Kubernetes Operator to:

Watch Resources in Multiple Namespaces

If you scope the multi-Kubernetes-cluster deployment to multiple namespaces, you can configure the Kubernetes Operator to watch MongoDB Kubernetes resources in those namespaces.

  1. Use the mongodb-enterprise.yaml sample YAML file from the MongoDB Enterprise Kubernetes Operator GitHub repository.

  2. In mongodb-enterprise.yaml, set the WATCH_NAMESPACE environment variable, defined under spec.template.spec.containers[].env, to the comma-separated list of namespaces that you would like the Kubernetes Operator to watch:

    - name: WATCH_NAMESPACE
      value: "$namespace1,$namespace2,$namespace3"

Run the following command and replace the values in the last line with the namespaces that you would like the Kubernetes Operator to watch.

helm upgrade \
  --install \
  mongodb-enterprise-operator-multi-cluster \
  mongodb/enterprise-operator \
  --namespace mongodb \
  --set namespace=mongodb \
  --version <mongodb-kubernetes-operator-version> \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set "multiCluster.clusters=$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME" \
  --set operator.watchNamespace="$namespace1,$namespace2,$namespace3"
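
To confirm that the watch setting took effect, one optional check is to read the WATCH_NAMESPACE environment variable from the operator Deployment that the Helm release creates (the Deployment name below assumes the operator.name value set above):

# Prints the comma-separated namespaces the operator watches.
kubectl -n mongodb get deployment mongodb-enterprise-operator-multi-cluster \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="WATCH_NAMESPACE")].value}'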

Watch Resources in All Namespaces

If you set the scope for the multi-Kubernetes-cluster deployment to all namespaces instead of the default mongodb namespace, you can configure the Kubernetes Operator to watch MongoDB Kubernetes resources in all namespaces.

  1. Use the mongodb-enterprise.yaml sample YAML file from the MongoDB Enterprise Kubernetes Operator GitHub repository.

  2. In mongodb-enterprise.yaml, set the WATCH_NAMESPACE environment variable, defined under spec.template.spec.containers[].env, to "*". You must include the double quotation marks (") around the asterisk (*) in the YAML file.

    - name: WATCH_NAMESPACE
      value: "*"

Run the following command:

helm upgrade \
  --install \
  mongodb-enterprise-operator-multi-cluster \
  mongodb/enterprise-operator \
  --namespace mongodb \
  --set namespace=mongodb \
  --version <mongodb-kubernetes-operator-version> \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set "multiCluster.clusters=$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME" \
  --set operator.watchNamespace="*"

Check Connectivity Across Clusters

Follow the steps in this procedure to verify that service FQDNs are reachable across Kubernetes clusters.

In this example, you deploy a sample application defined in sample-service.yaml across two Kubernetes clusters.

  1. Create a namespace in each Kubernetes cluster in which to deploy sample-service.yaml.

    kubectl create --context="${CTX_CLUSTER_1}" namespace sample
    kubectl create --context="${CTX_CLUSTER_2}" namespace sample
    

    Note

    In certain service mesh solutions, you might need to annotate or label the namespace.
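
    For example, with Istio and automatic sidecar injection enabled, the namespace is typically labeled as follows (shown as an illustration; consult your service mesh's documentation for its exact requirement):

    kubectl label --context="${CTX_CLUSTER_1}" namespace sample istio-injection=enabled
    kubectl label --context="${CTX_CLUSTER_2}" namespace sample istio-injection=enabled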

  2. Deploy the sample-service.yaml in both Kubernetes clusters.

    kubectl apply --context="${CTX_CLUSTER_1}" \
       -f sample-service.yaml \
       -l service=helloworld1 \
       -n sample
    
    kubectl apply --context="${CTX_CLUSTER_2}" \
       -f sample-service.yaml \
       -l service=helloworld2 \
       -n sample
    
  3. Deploy the sample application on CLUSTER_1.

    kubectl apply --context="${CTX_CLUSTER_1}" \
      -f sample-service.yaml \
      -l version=v1 \
      -n sample
    
  4. Check that the Pod hosting the sample application in CLUSTER_1 is in the Running state.

    kubectl get pod --context="${CTX_CLUSTER_1}" \
      -n sample \
      -l app=helloworld
    
  5. Deploy the sample application on CLUSTER_2.

    kubectl apply --context="${CTX_CLUSTER_2}" \
      -f sample-service.yaml \
      -l version=v2 \
      -n sample
    
  6. Check that the Pod hosting the sample application in CLUSTER_2 is in the Running state.

    kubectl get pod --context="${CTX_CLUSTER_2}" \
      -n sample \
      -l app=helloworld
    
  7. Deploy a test Pod in CLUSTER_1 and check that you can reach the sample application in CLUSTER_2.

    kubectl run curl --context="${CTX_CLUSTER_1}" \
      -n sample \
      --image=radial/busyboxplus:curl \
      -i --tty \
      -- curl -sS helloworld2.sample:5000/hello
    

    You should see output similar to this example:

    Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
    
  8. Deploy a test Pod in CLUSTER_2 and check that you can reach the sample application in CLUSTER_1.

    kubectl run curl --context="${CTX_CLUSTER_2}" \
      -n sample \
      --image=radial/busyboxplus:curl \
      -i --tty \
      -- curl -sS helloworld1.sample:5000/hello
    

    You should see output similar to this example:

    Hello version: v1, instance: helloworld-v1-758dd55874-6x4t8
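
When you finish verifying connectivity, you can optionally remove the sample namespace from both clusters:

kubectl delete namespace sample --context="${CTX_CLUSTER_1}"
kubectl delete namespace sample --context="${CTX_CLUSTER_2}"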
    

Prepare for TLS-Encrypted Connections

If you plan to secure your multi-Kubernetes-cluster deployment using TLS encryption, complete the following tasks:

  • To enable internal cluster authentication, create certificates for member clusters in the multi-Kubernetes-cluster deployment.

  • Generate one TLS certificate covering the SANs of all the member clusters in the MongoDBMulti resource.

  • Add SANs to the certificate for each Kubernetes service that the Kubernetes Operator generates, one for each Pod in each member cluster. In your TLS certificate, the SAN for each Kubernetes service must use the following format:

    <metadata.name>-<member_cluster_index>-<n>-svc.<namespace>.svc.cluster.local
    

    where n ranges from 0 to clusterSpecList[member_cluster_index].members - 1 (see the example after this list).

  • Generate one TLS certificate for your project’s MongoDB Agents.

    • For the MongoDB Agent TLS certificate:
      • The Common Name in the TLS certificate must not be empty.
      • The combined Organization and Organizational Unit in each TLS certificate must differ from the Organization and Organizational Unit in the TLS certificate for your replica set members.
  • You must possess the CA certificate and the key that you used to sign your TLS certificates.
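
For example, for a hypothetical MongoDBMulti resource named multi-replica-set in the mongodb namespace, with three members in the member cluster at index 0, the TLS certificate would need to include SANs such as:

    multi-replica-set-0-0-svc.mongodb.svc.cluster.local
    multi-replica-set-0-1-svc.mongodb.svc.cluster.local
    multi-replica-set-0-2-svc.mongodb.svc.cluster.local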

Important

The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn’t support concatenated PEM files stored as Opaque secrets.
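
A minimal sketch of creating such a kubernetes.io/tls secret, assuming hypothetical certificate and key files named server.pem and server.key:

# Secret and file names are placeholders; substitute your own.
kubectl create secret tls mdb-tls-cert \
  --cert=server.pem \
  --key=server.key \
  -n mongodb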