Tutorial Part 2: Ops Manager in Kubernetes

Anton Lisovenko

Ops Manager

Let’s briefly explore how we got here from a technical perspective. As you may know, support for MongoDB on Kubernetes has already been implemented by the MongoDB Enterprise Kubernetes Operator, or Operator for short. This was done by creating Kubernetes Custom Resource Definitions for MongoDB which describe the configuration of the MongoDB cluster to be deployed.

In August 2019 the alpha version of the MongoDBOpsManager Custom Resource was released. It allowed the Operator to start a single instance of Ops Manager in Kubernetes, together with the application database that stores Ops Manager’s data.

In December 2019 the MongoDB Enterprise Kubernetes Operator 1.4.0 was released, at which point the MongoDBOpsManager Custom Resource was promoted to Beta with a rich feature set: high availability, backup, external access configuration, authentication for the application database, and OpenShift support.

This article will describe the architecture of Ops Manager in Kubernetes and provide step-by-step instructions detailing how to configure these additional Ops Manager features in Kubernetes. To follow along, it’s critical that you complete the steps in Part 1 of the tutorial.

Architecture

The Operator manages the MongoDBOpsManager Custom Resource and monitors for any updates to its specification. Each update triggers a reconciliation process in which the following actions are performed:

  1. The Operator creates/updates the application database StatefulSet running at least 3 MongoDB instances. A StatefulSet is a Kubernetes API object that manages stateful applications.

Note: The only allowed type is Replica Set, and SCRAM-SHA authentication is always enabled. Each database pod runs an instance of the MongoDB Agent, which is configured directly by the Operator.

  2. The Operator creates/updates the StatefulSet running the Ops Manager pods. Ops Manager instances connect to the application database created in the previous step.
  3. The Operator ensures the StatefulSet for the Backup Daemon is running unless backup is disabled. This StatefulSet consists of a single pod. The Backup Daemon connects to the same application database as the Ops Manager instances.
  4. The Operator registers the first user with the GLOBAL_OWNER role and saves a public API key to a secret for later use. This is done only once, during Ops Manager creation.
  5. The Operator configures the Backup Daemon via the Ops Manager public API according to the backup specification.
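
For reference, below is a minimal sketch of a MongoDBOpsManager resource that drives this reconciliation. The resource name, versions, and the “ops-manager-admin-secret” secret holding the first user’s credentials are illustrative assumptions; your actual resource from Part 1 of the tutorial may look different:

apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
spec:
  # number of Ops Manager instances (see “Configuring Ops Manager High Availability” below)
  replicas: 1
  version: 4.2.4
  # secret with the credentials for the first (GLOBAL_OWNER) user
  adminCredentials: ops-manager-admin-secret
  applicationDatabase:
    members: 3
    version: 4.2.0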

Kubernetes Diagram

Configuring Ops Manager for Backup

By default, the Operator creates a StatefulSet for the Backup Daemon and performs some default configuration for it. This is not yet sufficient to back up MongoDB clusters, as the Backup Daemon requires further configuration. To fully support backup, it’s necessary to configure Oplog Stores and at least one Snapshot Store. An Oplog Store is a MongoDB cluster used to keep the oplog for the backed-up database, and a Snapshot Store is storage for the snapshots regularly taken from the source database. The only Snapshot Store type supported in the Ops Manager Beta is the S3 Snapshot Store.

  1. Before configuring Ops Manager backup, you will need to create an S3 bucket in AWS or a custom S3 store. This bucket will be referenced from the MongoDBOpsManager resource.

  2. After you have created an S3 bucket, you will need to create a Kubernetes secret containing S3 credentials for an account that has read and write access to the bucket:

kubectl create secret generic s3-credentials  \
    --from-literal=accessKey="<AKIAIOSFODNN7EXAMPLE>"  \
    --from-literal=secretKey="<wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY>"  \
    -n mongodb
  3. Next, create the MongoDB replica sets for the internal databases that manage the backup data (the Oplog Store and the metadata database for the S3 Snapshot Store). For simplicity, we’ll use the same admin credentials that were used in steps 2 and 3 of the previous example, under the header “Create a MongoDB replica set”:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-mongodb-oplog
  namespace: mongodb
spec:
  members: 3
  version: 4.2.2
  type: ReplicaSet

  opsManager:
    configMapRef:
      name: ops-manager-connection
  credentials: om-jane-doe-credentials
---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-mongodb-s3
  namespace: mongodb
spec:
  members: 3
  version: 4.2.2
  type: ReplicaSet

  opsManager:
    configMapRef:
      name: ops-manager-connection
  credentials: om-jane-doe-credentials
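
Assuming you saved both resource definitions above in a single file named backup-databases.yaml (the file name is arbitrary), you can apply them and then watch their status; “mdb” is the short name the Operator registers for the MongoDB resource:

kubectl apply -f backup-databases.yaml
kubectl get mdb -n mongodb -w
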
  4. Wait until both MongoDB resources reach the “Running” state, then update the existing MongoDBOpsManager resource by adding the following backup configuration to “spec”, referencing the MongoDB objects that were just created:
backup:
  enabled: true
  oplogStores:
    - name: oplog1
      # the MongoDB resource that will act as an Oplog Store
      mongodbResourceRef:
        name: my-mongodb-oplog
  s3Stores:
    - name: s3store1
      # the MongoDB resource that will act as a storage for S3 Snapshot Store metadata
      mongodbResourceRef:
        name: my-mongodb-s3
      s3SecretRef:
        name: s3-credentials
      pathStyleAccessEnabled: true
      # change this to a s3 url you are using
      s3BucketEndpoint: s3.us-east-1.amazonaws.com
      s3BucketName: test-bucket
  5. Enable backup in the Ops Manager UI for the replica set “my-replica-set” that we created earlier. Only do this after the MongoDBOpsManager resource reaches the “Running” state, which indicates that backup has been configured (see the command below to check). To enable backup, click the “Continuous Backup” link and follow the steps. As a result, your MongoDB deployment will be continuously backed up.
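
To see when the MongoDBOpsManager resource reaches the “Running” state, you can watch it from the command line; “om” is the short name the Operator registers for the MongoDBOpsManager resource:

kubectl get om -n mongodb -w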

Continuous Backup

You can also configure SCRAM-SHA authentication for the backup databases. In this case, you need to specify a reference to the MongoDBUser object:

mongodbResourceRef:
  name: my-mongodb-s3
mongodbUserRef:
  name: s3-user
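
For completeness, here is a sketch of what such a MongoDBUser resource might look like. The user name, password secret, and role below are illustrative assumptions rather than values from this tutorial:

apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: s3-user
  namespace: mongodb
spec:
  username: s3-user
  # secret holding this user’s password (illustrative name)
  passwordSecretKeyRef:
    name: s3-user-password
    key: password
  db: admin
  # the backup database this user is created in
  mongodbResourceRef:
    name: my-mongodb-s3
  roles:
    - db: admin
      name: readWriteAnyDatabase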

Note: The current version (4.2.7) of Ops Manager uses an old Java driver which only works with SCRAM-SHA-1 authentication, so the backup database version must be lower than 4.0. In this case the Operator will enable SCRAM-SHA-1 authentication instead of SCRAM-SHA-256. This restriction is due to be fixed in future versions of Ops Manager.

Configuring Ops Manager High Availability

Having only one instance of the Ops Manager application means it won’t be available if the underlying node crashes. Also, any change to the MongoDBOpsManager configuration (and therefore any restart of the pod) will result in downtime for Ops Manager. To fix this, run multiple Ops Manager pods by increasing the “spec.replicas” value. This will trigger the start of new pods in the StatefulSet:

spec:
  replicas: 3

Note: In the Beta version of MongoDBOpsManager it’s not possible to configure memory for the pod, and the Operator sets the default to 5Gi. This means your Kubernetes cluster needs enough capacity to start “<spec.replicas>” pods for Ops Manager and one pod for the Backup Daemon (which also requires 5Gi). Having multiple Ops Manager pods allows for better fault tolerance and no downtime during upgrades.
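
Once the rollout finishes, you can check that all Ops Manager replicas are running and see which nodes they were scheduled on:

kubectl get pods -n mongodb -o wide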

Configuring Ops Manager Application Database

The application database shares all configuration options with an ordinary Operator-deployed MongoDB replica set, except for “spec.credentials” and “spec.opsManager”. The only supported topology type is ReplicaSet, but there is no need to specify “resourceType: ReplicaSet”, as the Operator assumes this by default.

The current release of Ops Manager configures the database to always run in SCRAM-SHA-1 authentication mode and generates a random user password for it. If you want to provide your own password, follow the steps below.

  1. Create a secret:
kubectl create secret generic app-db-admin-secret \
  --from-literal=password="<om-db-user-password>" \
  -n mongodb
  2. Change the application database config to reference it:
applicationDatabase:
  members: 3
  version: 4.2.0
  persistent: true
  podSpec:
    cpu: '0.25'
    memory: 3G
  passwordSecretKeyRef:
    name: app-db-admin-secret

The Operator will configure the application database to use the new password so that an administrator can log in directly.
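
As a sketch, an administrator could verify the new password by connecting to one of the application database pods. The pod name below assumes a MongoDBOpsManager resource named “ops-manager”, and <app-db-user> stands for the application database user name in your deployment:

# pod naming assumes the application database StatefulSet is named ops-manager-db
kubectl exec -it ops-manager-db-0 -n mongodb -- \
  mongo --username <app-db-user> \
        --password "<om-db-user-password>" \
        --authenticationDatabase admin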

Configuring Ops Manager to Manage External MongoDB Deployments

You can configure Ops Manager in Kubernetes to manage deployments not only in the same Kubernetes cluster but also outside of it. To do so, follow the instructions below:

  1. Change the Ops Manager configuration property “mms.centralUrl” to the external HTTP URL of Ops Manager. This can be done using the “spec.configuration” element. Any other Ops Manager properties can be configured this way as well:
# the Ops Manager configuration. All the values must be of type string
configuration:
  mms.fromEmailAddr: "admin@thecompany.com"
  # set this property to allow Ops Manager to manage deployments outside of
  # Kubernetes cluster
  mms.centralUrl: http://ac783a3d4b414-6999015327.us-east-2.elb.amazonaws.com:8080
  2. This change to the Ops Manager configuration will trigger a rolling upgrade of the Ops Manager StatefulSet. Wait until the resource reaches the “Running” state. All MongoDB Agents outside of the Kubernetes cluster will then be able to communicate with Ops Manager using the provided URL.

Note: If the “mms.centralUrl” property is set, then all deployments referencing this Ops Manager instance must use this URL, including those managed by the Operator inside the Kubernetes cluster.
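
To confirm that Ops Manager is actually reachable at the configured URL from outside the cluster, a plain HTTP check against it is enough (any page Ops Manager serves at that address will do):

curl -I http://ac783a3d4b414-6999015327.us-east-2.elb.amazonaws.com:8080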

Upgrading Ops Manager and Application Database

Upgrading and scaling Ops Manager and its application database has never been this easy.

  1. Change the relevant “spec.version” and “spec.replicas”/“spec.applicationDatabase.members” fields in the existing MongoDBOpsManager resource (see the sketch after this list).
  2. Wait for the resource to enter the “Running” state. The Operator will take care of the rest!
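
For example, a minimal sketch of such a change (the target versions below are illustrative):

spec:
  # scale Ops Manager out to three instances and upgrade it
  replicas: 3
  version: 4.2.8
  applicationDatabase:
    # scale the application database to five members and upgrade it
    members: 5
    version: 4.2.2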

What's Next

The current Beta release of Ops Manager already contains most of the features clients need. Its internal architecture was reworked and dramatically simplified since the alpha release. Although it’s already quite robust, stay tuned for additional features and fixes that will accompany Ops Manager in GA:

  • Pod template support
  • Memory configuration for both the pod and the Ops Manager application
  • TLS support for Application database and Ops Manager
  • Backup improvements
  • Better integration with MongoDB resources