This version of the documentation is archived and no longer supported. To learn how to upgrade your version of MongoDB Kubernetes Operator, refer to the upgrade documentation.

Deploy an Ops Manager Instance

Alpha Release of Ops Manager Resource

Don’t use the Ops Manager resource in production environments.

You can deploy Ops Manager in a container with the Kubernetes Operator.

Prerequisites

To deploy an Ops Manager resource you must:

  1. Install the MongoDB Enterprise Kubernetes Operator 1.3.0 or newer.

  2. Ensure that the host on which you want to deploy Ops Manager has a minimum of five gigabytes of memory.

  3. Create a Kubernetes secret for an admin user in the same namespace as the Ops Manager resource.

    When you deploy the Ops Manager resource, Ops Manager creates a user with these credentials and grants it the Global Owner role. Use these credentials to log in to Ops Manager for the first time. Once Ops Manager is deployed, you should change the password or remove this secret.

    kubectl create secret generic <adminusercredentials> \
      --from-literal=Username="<username>" \
      --from-literal=Password="<password>" \
      --from-literal=FirstName="<firstname>" \
      --from-literal=LastName="<lastname>" \
      -n <namespace>
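To confirm the secret exists before you deploy, you can list it with a standard kubectl command. This is a quick sanity check; the placeholders are the same ones used in the command above:

```shell
# Verify that the admin-user secret was created in the target namespace.
# Replace <adminusercredentials> and <namespace> with your own values.
kubectl get secret <adminusercredentials> -n <namespace>
```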
    

Considerations

Encryption Key

The Kubernetes Operator generates an encryption key to protect sensitive information in the Ops Manager Application Database. The Kubernetes Operator saves this key in a secret in the same namespace as the Ops Manager resource. The Kubernetes Operator names the secret <om-resource-name>-gen-key.

If you remove the Ops Manager resource, the key remains stored in the secret on the Kubernetes cluster. If you stored the Application Database in a Persistent Volume and you create another Ops Manager resource with the same name, the Kubernetes Operator reuses the secret. If you create an Ops Manager resource with a different name, the Kubernetes Operator creates a new secret and Application Database, and the old secret isn't reused.
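As a sketch of how you might verify this behavior, you can inspect the secret directly using the naming convention above (standard kubectl; replace the placeholders with your own values):

```shell
# The Kubernetes Operator stores the encryption key in a secret named
# <om-resource-name>-gen-key in the same namespace as the Ops Manager resource.
# This secret persists even after the Ops Manager resource is deleted.
kubectl get secret <om-resource-name>-gen-key -n <namespace> -o yaml
```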

Application Database Replica Set

When you create an instance of Ops Manager through the Kubernetes Operator, the Ops Manager Application Database is deployed as a replica set. You can’t configure the Application Database as a standalone database or sharded cluster. If you have concerns about performance or size requirements for the Application Database, contact MongoDB Support.

Procedure

1

Copy the following example Ops Manager Kubernetes object.

Change the highlighted settings to match your desired Ops Manager configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user
  applicationDatabase:
    members: 3
    version: <mongodbversion>
    persistent: true
...
2

Open your preferred text editor and paste the object specification into a new text file.

3

Configure the settings highlighted in the prior example.

metadata.name (string)

  Name for this Kubernetes Ops Manager object.

  Example: om

spec.replicas (number)

  Number of Ops Manager instances to run in parallel. The minimum valid value is 1.

  Highly Available Ops Manager Resources: For high availability, set this value to more than 1. Multiple Ops Manager instances can read from the same Application Database, ensuring failover if one instance is unavailable and enabling you to update the Ops Manager resource without downtime.

  Example: 1

spec.version (string)

  Version of Ops Manager to be installed. The format should be X.Y.Z. To view available Ops Manager versions, see the container registry.

  Example: 4.2.0

spec.adminCredentials (string)

  Name of the secret you created for the Ops Manager admin user.

  Note: Configure the secret to use the same namespace as the Ops Manager resource.

  Example: om-admin-secret

spec.applicationDatabase.members (integer)

  Number of members of the Ops Manager Application Database replica set.

  Example: 3

spec.applicationDatabase.version (string)

  Version of MongoDB that the Ops Manager Application Database should run. The format should be X.Y.Z for the Community edition and X.Y.Z-ent for the Enterprise edition. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual.

  Example: 4.0.7

spec.applicationDatabase.persistent (boolean)

  Optional. Flag indicating whether this MongoDB Kubernetes resource should use Persistent Volumes for storage. Persistent Volumes are not deleted when the MongoDB Kubernetes resource is stopped or restarted.

  If this value is true, then spec.applicationDatabase.podSpec.persistence.single is set to its default value of 16G.

  To change your Persistent Volume Claims configuration, configure the following collections to meet your deployment requirements:

  • If you want one Persistent Volume for each pod, configure the spec.applicationDatabase.podSpec.persistence.single collection.

  • If you want separate Persistent Volumes for data, journals, and logs for each pod, configure the settings under spec.applicationDatabase.podSpec.persistence.multiple for each of those volumes.

  Warning: Grant your containers permission to write to your Persistent Volume. The Kubernetes Operator sets fsGroup = 2000 in securityContext. This makes Kubernetes attempt to fix write permissions for the Persistent Volume. If redeploying the resource does not fix issues with your Persistent Volumes, contact MongoDB Support.

  Example: true
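To illustrate the single-volume case, here is a hedged sketch of what the corresponding applicationDatabase block might look like. The storage field name and the 20G size are assumptions for illustration, not documented defaults; check the resource specification for the exact settings your Kubernetes Operator version supports:

```yaml
applicationDatabase:
  members: 3
  version: <mongodbversion>
  persistent: true
  podSpec:
    persistence:
      single:
        storage: 20G   # example size; assumed field name, overrides the 16G default
```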
4

(Optional) Configure any additional settings for an Ops Manager deployment.

You can add optional settings to the object specification file for an Ops Manager deployment.

5

Save this file with a .yaml file extension.

6

Create your Ops Manager instance.

Invoke the following kubectl command on the filename of the Ops Manager resource definition:

kubectl apply -f <opsmgr-resource>.yaml
7

Track the status of your Ops Manager instance.

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -n <namespace> -o yaml -w

The command returns the following output under the status field while the resource deploys:

status:
 applicationDatabase:
  lastTransition: "2019-11-15T19:48:01Z"
  message: AppDB Statefulset is not ready yet
  phase: Reconciling
  type: ""
  version: ""
 opsManager:
  lastTransition: "2019-11-15T19:48:01Z"
  message: Ops Manager is still waiting to start
  phase: Reconciling
  version: ""

After the resource completes the Reconciling phase, the command returns the following output under the status field:

status:
 applicationDatabase:
  lastTransition: "2019-11-05T17:26:42Z"
  phase: Running
  type: ""
  version: 4.0.7
 opsManager:
  lastTransition: "2019-11-05T17:26:34Z"
  phase: Running
  replicas: 1
  url: http://om-test-svc.dev.svc.cluster.local:8080
  version: 4.2.0

The status.opsManager.url is the connection URL of the resource, which can be used to reach Ops Manager from inside the Kubernetes cluster.

If the deployment fails, see Troubleshooting the Kubernetes Operator.

8

Access your Ops Manager instance from a browser.

  1. After the resource deploys successfully, find the external port to your Ops Manager instance.

    Invoke the following kubectl command, replacing <metadata.name> with the name of your Ops Manager resource:

    kubectl get svc <metadata.name>-svc-external -n <namespace>
    

    The command returns the external port in the PORT(S) column. In the following example output, the external port is 30036:

    NAME                            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
    <metadata.name>-svc-external    NodePort   100.66.92.110    <none>        8080:30036/TCP    1d
    
  2. Set your firewall rules to allow access from the Internet to the external port on the host.

  3. Open a browser window and navigate to the Ops Manager application using the FQDN and port number.

    http://ops.example.com:30036
    
  4. Log in to Ops Manager using the admin user credentials.
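If you can't expose the NodePort externally, a common alternative is to port-forward the service to your workstation with standard kubectl. This assumes the internal service is named <metadata.name>-svc, as the status.opsManager.url in the earlier output suggests; verify the service name with kubectl get svc:

```shell
# Forward local port 8080 to port 8080 of the Ops Manager service.
# Replace <metadata.name> and <namespace> with your own values.
kubectl port-forward svc/<metadata.name>-svc -n <namespace> 8080:8080
# Then browse to http://localhost:8080 and log in with the admin credentials.
```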