Plan Your Ops Manager Resource

On this page

  • Architecture
  • Considerations
  • Encryption Key
  • Application Database
  • Streamlined Configuration
  • Backup
  • Configure Ops Manager to Run over HTTPS
  • Ops Manager Application Access
  • Deploying Ops Manager in Remote or Local Mode
  • Managing External MongoDB Deployments
  • Using Ops Manager with Multi-Kubernetes-Cluster Deployments
  • Prerequisites

MongoDB Ops Manager is an enterprise application that manages, backs up, and monitors MongoDB deployments. With Ops Manager, you can scale and upgrade MongoDB, optimize queries, perform point-in-time restores, receive performance alerts, and monitor your deployments. To manage and maintain Ops Manager and its underlying database, you can use the MongoDB Enterprise Kubernetes Operator to run Ops Manager as a resource deployed in a container on Kubernetes.

Before you deploy an Ops Manager resource, make sure you read the considerations and complete the prerequisites.

Architecture

For Ops Manager resource architecture details, see Ops Manager Architecture in Kubernetes.

Considerations

Encryption Key

The Kubernetes Operator generates an encryption key to protect sensitive information in the Ops Manager Application Database. The Kubernetes Operator saves this key in a secret in the same namespace as the Ops Manager resource and names the secret <om-resource-name>-gen-key.
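
If you need to inspect this key, it is an ordinary Kubernetes secret; the resource name and namespace below are placeholders:

kubectl get secret <om-resource-name>-gen-key \
  -n <metadata.namespace> -o yaml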

Note

To avoid storing secrets in Kubernetes, you can migrate all secrets to a secret storage tool.

If you remove the Ops Manager resource, the key remains stored in the secret on the Kubernetes cluster. If you stored the Application Database in a Persistent Volume and you create another Ops Manager resource with the same name, the Kubernetes Operator reuses the secret. If you create an Ops Manager resource with a different name, then the Kubernetes Operator creates a new secret and Application Database, and the old secret isn't reused.

Application Database

  • When you create an instance of Ops Manager through the Kubernetes Operator in a single Kubernetes cluster deployment of MongoDB, the Ops Manager Application Database is deployed as a replica set. You can't configure the Application Database as a standalone database or sharded cluster. If you have concerns about performance or size requirements for the Application Database, contact MongoDB Support.

  • When you create an instance of Ops Manager through the Kubernetes Operator in a multi-Kubernetes-cluster deployment, the Kubernetes Operator can configure the Ops Manager Application Database on multiple member clusters.

The Kubernetes Operator automatically configures Ops Manager to monitor the Application Database that backs the Ops Manager Application. The Kubernetes Operator creates a project named <ops-manager-deployment-name>-db for you to monitor the Application Database deployment.

Ops Manager monitors the Application Database deployment, but Ops Manager doesn't manage it. You can't change the Application Database's configuration in the Ops Manager Application.

Important

The Ops Manager UI might display warnings in the <ops-manager-deployment-name>-db project stating that the agents for the Application Database are out of date. You can safely ignore these warnings.

The Kubernetes Operator enforces SCRAM-SHA-256 authentication on the Application Database.

The Kubernetes Operator creates the database user that Ops Manager uses to connect to the Application Database. This database user has the following attributes:

  • Username: mongodb-ops-manager

  • Authentication Database: admin

  • Roles

You can't modify the Ops Manager database user's name or roles. To set the database user's password, create a secret; to update the password, edit that secret. If you don't create a secret, or if you delete an existing secret, the Kubernetes Operator generates a password and stores it.

To learn about other options for secret storage, see Configure Secret Storage.

The Kubernetes Operator requires that you specify the MongoDB Enterprise version for the Application Database image to enable any deployments of Ops Manager resources, including offline deployments.
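
For example, you can pin the version in the Ops Manager resource definition. The following is a minimal sketch; the version string is an illustrative assumption, so substitute the MongoDB Enterprise release that you actually deploy:

spec:
  applicationDatabase:
    members: 3
    version: "6.0.5-ent"   # MongoDB Enterprise build for the Application Database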

Streamlined Configuration

After you deploy Ops Manager, you need to configure it. The regular procedure involves setting up Ops Manager through the configuration wizard. However, if you set certain essential settings in your object specification before you deploy, you can bypass the configuration wizard.

In the spec.configuration block of your Ops Manager object specification, you need to set mms.ignoreInitialUiSetup to "true" and supply values for the required settings that the configuration wizard would otherwise collect.

Example

To disable the Ops Manager configuration wizard, configure the following settings in your spec.configuration block:

spec:
  configuration:
    mms.ignoreInitialUiSetup: "true"
    automation.versions.source: "remote"
    mms.adminEmailAddr: cloud-manager-support@mongodb.com
    mms.fromEmailAddr: cloud-manager-support@mongodb.com
    mms.mail.hostname: email-smtp.us-east-1.amazonaws.com
    mms.mail.port: "465"
    mms.mail.ssl: "true"
    mms.mail.transport: smtp
    mms.minimumTLSVersion: TLSv1.2
    mms.replyToEmailAddr: cloud-manager-support@mongodb.com

Replace the example values with the values you want your Ops Manager to use.

Backup

The Kubernetes Operator enables Backup by default. The Kubernetes Operator deploys a StatefulSet consisting of one Pod to host the Backup Daemon Service, and then creates a Persistent Volume Claim and Persistent Volume for the Backup Daemon's head database. The Kubernetes Operator uses the Ops Manager API to enable the Backup Daemon and configure the head database.
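
Because Backup is enabled by default, an explicit spec.backup block is only needed to tune or disable it. The following is a minimal sketch; the head database size is an illustrative assumption:

spec:
  backup:
    enabled: true
    headDB:
      storage: "30Gi"   # capacity of the Backup Daemon's head database volume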

Important

To configure Backup, you must create the MongoDB resources or MongoDBMultiCluster resources for one of each of the following:

  • An oplog store or an S3 oplog store. If you deploy both an oplog store and an S3 oplog store, Ops Manager chooses one to use for Backup at random.

  • An S3 snapshot store or a blockstore. If you deploy both an S3 snapshot store and a blockstore, Ops Manager chooses one to use for Backup at random.

The Ops Manager resource remains in a Pending state until you configure these Backup resources.

You can also encrypt backup jobs, but limitations apply to deployments where the same Kubernetes Operator instance is not managing both the MongoDBOpsManager and MongoDB custom resources.

Oplog Store

You must deploy a three-member replica set to store your oplog slices.

The oplog database supports only the SCRAM authentication mechanism. You cannot enable other authentication mechanisms.

If you enable SCRAM authentication on the oplog database, you must do the following (see the sketch after this list):

  • Create a MongoDB user resource to connect Ops Manager to the oplog database.

  • Specify the name of the user in the Ops Manager resource definition.
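
Putting these two steps together, the oplog store reference in the Ops Manager resource definition might look like the following sketch; the resource and user names (om-oplog-db, om-oplog-user) are illustrative assumptions:

spec:
  backup:
    enabled: true
    opLogStores:
      - name: oplog1
        mongodbResourceRef:
          name: om-oplog-db     # three-member replica set that stores oplog slices
        mongodbUserRef:
          name: om-oplog-user   # MongoDB user resource that Ops Manager connects with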

S3 Oplog Store

To configure an S3 oplog store, you must create an AWS S3 or S3-compatible bucket to store your database Backup oplog.

You can configure the oplog store for both MongoDB and MongoDBMultiCluster resources by using the spec.backup.s3OpLogStores.mongodbResourceRef.name setting in the Ops Manager resource definition.
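
A minimal S3 oplog store entry might look like the following sketch; the bucket endpoint, bucket name, secret name, and resource name are illustrative assumptions:

spec:
  backup:
    enabled: true
    s3OpLogStores:
      - name: s3-oplog1
        s3SecretRef:
          name: my-aws-s3-credentials    # secret that holds accessKey and secretKey
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        s3BucketName: my-om-oplog-bucket
        mongodbResourceRef:
          name: my-oplog-db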

Blockstore

To configure a blockstore, you must deploy a replica set to store snapshots.

S3 Snapshot Store

To configure an S3 snapshot store, you must create an AWS S3 or S3-compatible bucket to store your database Backup snapshots.

The default configuration stores snapshot metadata in the Application Database. You can also deploy a replica set to store snapshot metadata, and then configure it by using the spec.backup.s3Stores.mongodbResourceRef.name setting in the Ops Manager resource definition.

You can configure the S3 snapshot store for both MongoDB and MongoDBMultiCluster resources.
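
Similarly, a minimal S3 snapshot store sketch follows; the bucket endpoint, bucket name, and secret name are illustrative assumptions, and mongodbResourceRef is only needed if you store snapshot metadata outside the Application Database:

spec:
  backup:
    enabled: true
    s3Stores:
      - name: s3-store1
        s3SecretRef:
          name: my-aws-s3-credentials    # secret that holds accessKey and secretKey
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        s3BucketName: my-om-snapshot-bucket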

You can update any additional S3 configuration settings that the Kubernetes Operator doesn't manage through the Ops Manager Application.

Disable Backup

To disable Backup after you have enabled it:

  1. Set the Ops Manager Kubernetes object spec.backup.enabled setting to false.

  2. Disable backups in the Ops Manager Application.

  3. Delete the Backup Daemon Service StatefulSet:

    kubectl delete statefulset <metadata.name>-backup-daemon \
      -n <metadata.namespace>

Important

The Persistent Volume Claim and Persistent Volume for the Backup Daemon's head database are not deleted when you delete the Backup Daemon Service StatefulSet. You can retrieve stored data before you delete these Kubernetes resources.

To learn about reclaiming Persistent Volumes, see the Kubernetes documentation.

Configure KMIP Backup Encryption

For deployments where the same Kubernetes Operator instance is not managing both the MongoDBOpsManager and MongoDB custom resources, you must manually configure the KMIP backup encryption client settings in Ops Manager by using the following procedure. If the Kubernetes Operator manages both resources, see Configure KMIP Backup Encryption for Ops Manager instead.

  1. Mount the TLS secret to the MongoDBOpsManager custom resource. For example:

    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: ops-manager-pod-spec
    spec:
      < ... omitted ... >
      statefulSet:
        spec:
          template:
            spec:
              volumes:
                - name: kmip-client-test-prefix-mdb-latest-kmip-client
                  secret:
                    secretName: test-prefix-mdb-latest-kmip-client
              containers:
                - name: mongodb-ops-manager
                  volumeMounts:
                    - mountPath: /mongodb-ops-manager/kmip/client/test-prefix-mdb-latest-kmip-client
                      name: kmip-client-test-prefix-mdb-latest-kmip-client
                      readOnly: true
    ...
  2. Configure the KMIP settings for your project in Ops Manager following the procedure in Configure Your Project to Use KMIP.

Configure Ops Manager to Run over HTTPS

You can configure your Ops Manager instance created through the Kubernetes Operator to run over HTTPS instead of HTTP.

To configure your Ops Manager instance to run over HTTPS:

  1. Create a secret that contains the TLS certificate and private key.

  2. Add this secret to the Ops Manager configuration object.
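
The following sketch illustrates both steps. The secret name and certificate file paths are illustrative assumptions, and the exact location of the TLS secret reference under spec.security can vary by Kubernetes Operator version, so check the resource reference for your release:

kubectl create secret tls om-https-cert \
  --cert=om.crt --key=om.key \
  -n <metadata.namespace>

spec:
  security:
    tls:
      secretRef:
        name: om-https-cert   # secret that contains the TLS certificate and key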

For detailed instructions, see Deploy an Ops Manager Resource.

Important

If you have existing deployments, you must restart them manually after enabling HTTPS. To avoid restarting your deployments, configure HTTPS before deploying your managed resources.

To learn more, see HTTPS Enabled After Deployment.

Ops Manager Application Access

By default, the Kubernetes Operator doesn't create a Kubernetes service to route traffic originating from outside of the Kubernetes cluster to the Ops Manager application.

To access the Ops Manager application, you can:

  • Configure the Kubernetes Operator to create a Kubernetes service.

  • Create a Kubernetes service manually. MongoDB recommends using a LoadBalancer Kubernetes service if your cloud provider supports it.

  • If you're using OpenShift, use routes.

  • Use a third-party service, such as Istio.

The simplest method is to configure the Kubernetes Operator to create a Kubernetes service that routes external traffic to the Ops Manager application. The Ops Manager deployment procedure instructs you to add the spec.externalConnectivity settings to the object specification so that the Kubernetes Operator creates the service.
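
For example, the following sketch requests a LoadBalancer service; the port value is an illustrative assumption:

spec:
  externalConnectivity:
    type: LoadBalancer
    port: 8080   # port on which the service exposes the Ops Manager application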

Deploying Ops Manager in Remote or Local Mode

You can use the Kubernetes Operator to configure Ops Manager to operate in Local or Remote mode if your environment prevents granting hosts in your Kubernetes cluster access to the Internet. In these modes, the Backup Daemons and managed MongoDB resources download installation archives from Ops Manager instead of from the Internet.
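
The mode is selected through the automation.versions.source setting in spec.configuration, shown earlier in the streamlined configuration example. A minimal sketch:

spec:
  configuration:
    automation.versions.source: "local"   # Local mode: serve archives that you upload to Ops Manager; use "remote" for Remote mode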

Managing External MongoDB Deployments

When you deploy Ops Manager with the Kubernetes Operator, Ops Manager can manage MongoDB database resources deployed:

  • To the same Kubernetes cluster as Ops Manager.

  • Outside of Kubernetes clusters.

If Ops Manager manages MongoDB database resources deployed to different Kubernetes clusters than Ops Manager or outside of Kubernetes clusters, you must:

  1. Add the mms.centralUrl setting to spec.configuration in the Ops Manager resource specification.

    Set the value to the URL by which Ops Manager is exposed outside of the Kubernetes cluster:

    spec:
      configuration:
        mms.centralUrl: https://a9a8f8566e0094380b5c257746627b82-1037623671.us-east-1.elb.example.com:8080/
  2. Update the ConfigMaps referenced by all MongoDB database resources inside the Kubernetes cluster that you deployed with the Kubernetes Operator.

    Set data.baseUrl to the same value as the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.
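
    For example, assuming a project ConfigMap named my-project, the following patch aligns data.baseUrl with mms.centralUrl:

    kubectl patch configmap my-project -n <metadata.namespace> \
      --type merge \
      -p '{"data":{"baseUrl":"https://a9a8f8566e0094380b5c257746627b82-1037623671.us-east-1.elb.example.com:8080/"}}'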

Using Ops Manager with Multi-Kubernetes-Cluster Deployments

To deploy an Ops Manager instance in the central cluster and connect to it, use the standard Ops Manager deployment procedures.

These procedures are the same as the procedures for single clusters deployed with the Kubernetes Operator with the following exceptions:

  • Set the context and the namespace.

    If you are deploying an Ops Manager resource in a multi-Kubernetes-cluster deployment:

    • Set the context to the name of the central cluster, such as: kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".

    • Set the namespace to the same scope that you used for your multi-Kubernetes-cluster deployment, such as: kubectl config set-context --current --namespace "mongodb".

  • Configure external connectivity for Ops Manager.

    To connect member clusters to the Ops Manager resource's deployment in the central cluster in a multi-Kubernetes-cluster deployment, use one of the following methods:

    • Configure spec.externalConnectivity and specify the Ops Manager port in it. Use the ops-manager-external.yaml example script, modify it to your needs, and apply the configuration. For example, run:

      kubectl apply \
      --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
      --namespace "mongodb" \
      -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
    • Add the central cluster and all member clusters to the same service mesh. The service mesh establishes communication from the central cluster and all member clusters to the Ops Manager instance. To learn more, see the Multi-Kubernetes-Cluster Quick Start procedures and the step that references the istio-injection=enabled label for Istio. Also, see Automatic sidecar injection in the Istio documentation.

  • Deploy Ops Manager and the Application Database on the central cluster.

    You can choose to deploy Ops Manager and the Application Database only on the central cluster, using the same procedure as for single Kubernetes clusters. To learn more, see Deploy an Ops Manager instance on the central cluster with TLS encryption.

  • Deploy Ops Manager on the central cluster and the Application Database on selected member clusters.

    You can choose to deploy Ops Manager on the central cluster and the Application Database on a subset of selected member clusters to increase the Application Database's resilience and availability in Ops Manager. Configure the following settings in the Ops Manager CRD (see the sketch after this list):

    • Use topology to specify the MultiCluster value.

    • Specify the clusterSpecList and include in it the clusterName of each selected Kubernetes member cluster on which you want to deploy the Application Database, and the number of members (MongoDB nodes) in each Kubernetes member cluster.

    Note

    If you deploy the Application Database on selected member clusters in your multi-Kubernetes-cluster deployment, you must include the central cluster and member clusters in the same service mesh configuration. This enables bi-directional communication from Ops Manager to the Application Database.

    To learn more, see Deploy Ops Manager, review the multi-Kubernetes-cluster deployment example, and specify MultiCluster for topology.
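
    A minimal sketch of these settings in the Ops Manager CRD follows; the cluster names and member counts are illustrative assumptions:

    spec:
      applicationDatabase:
        topology: MultiCluster
        clusterSpecList:
          - clusterName: cluster-1.example.com   # member cluster hosting two Application Database nodes
            members: 2
          - clusterName: cluster-2.example.com   # member cluster hosting three Application Database nodes
            members: 3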

Prerequisites

  1. If you have not already done so, run the following command to run all kubectl commands in the namespace that you created:

    kubectl config set-context $(kubectl config current-context) \
      --namespace <metadata.namespace>

    Note

    If you are deploying an Ops Manager resource in a multi-Kubernetes-cluster deployment:

    • Set the context to the name of the central cluster, such as: kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".

    • Set the namespace to the same scope that you used for your multi-Kubernetes-cluster deployment, such as: kubectl config set-context --current --namespace "mongodb".

  2. Install the MongoDB Enterprise Kubernetes Operator.

  3. Ensure that the host on which you want to deploy Ops Manager has a minimum of five gigabytes of memory.

  1. Create a Kubernetes secret for an admin user in the same namespace as the Ops Manager resource. If you are deploying Ops Manager in a multi-Kubernetes-cluster deployment, use the same namespace that you set for your multi-Kubernetes-cluster deployment scope.

    If you're using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.

    To learn about your options for secret storage, see Configure Secret Storage.

    When you deploy the Ops Manager resource, Ops Manager creates a user with these credentials and grants it the Global Owner role. Use these credentials to log in to Ops Manager for the first time. Once you deploy Ops Manager, change the password or remove this secret.

    Note

    The admin user's password must adhere to the Ops Manager password complexity requirements.

    kubectl create secret generic <adminusercredentials> \
    --from-literal=Username="<username>" \
    --from-literal=Password="<password>" \
    --from-literal=FirstName="<firstname>" \
    --from-literal=LastName="<lastname>"
  1. (Optional) To set the password for the Ops Manager database user, create a secret in the same namespace as the Ops Manager resource.

    If you're using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.

    The Kubernetes Operator creates the database user that Ops Manager uses to connect to the Ops Manager Application Database. You can set the password for this database user by invoking the following command to create a secret:

    kubectl create secret generic <om-db-user-secret-name> \
    --from-literal=password="<om-db-user-password>"

    Note

    If you choose to create a secret for the Ops Manager database user, you must specify the secret's name in the Ops Manager resource definition. By default, the Kubernetes Operator looks for the password value in the password key. If you stored the password value in a different key, you must also specify that key name in the Ops Manager resource definition.

    If you don't create a secret, then the Kubernetes Operator automatically generates a password and stores it internally. To learn more, see Authentication.
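
    As a sketch, referencing the secret and a non-default key in the Ops Manager resource definition might look like the following; the placeholder names match the command above:

    spec:
      applicationDatabase:
        passwordSecretKeyRef:
          name: <om-db-user-secret-name>
          key: password   # change this if you stored the password under a different key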

  2. (Optional) To configure Backup to an S3 snapshot store, create a secret in the same namespace as the Ops Manager resource.

    If you're using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.

    This secret stores your S3 credentials so that the Kubernetes Operator can connect Ops Manager to your AWS S3 or S3-compatible bucket. The secret must contain the following key-value pairs:

    • accessKey: Unique identifier of the AWS user who owns the S3 or S3-compatible bucket.

    • secretKey: Secret key of the AWS user who owns the S3 or S3-compatible bucket.

    To create the secret, invoke the following command:

    kubectl create secret generic <my-aws-s3-credentials> \
    --from-literal=accessKey="<AKIAIOSFODNN7EXAMPLE>" \
    --from-literal=secretKey="<wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY>"

    To learn more about managing S3 snapshot storage, see the Prerequisites.
