
Modify Ops Manager or MongoDB Kubernetes Resource Containers

On this page

  • Define a Volume Mount for a MongoDB Kubernetes Resource
  • Tune MongoDB Kubernetes Resource Docker Images with an InitContainer
  • Build Custom Images with Dockerfile Templates

You can modify the containers in the Pods in which Ops Manager and MongoDB database resources run by using the template or podTemplate setting that applies to your deployment.

To review which fields you can add to a template or a podTemplate, see the Kubernetes documentation.

When you create containers with a template or podTemplate, the Kubernetes Operator handles container creation differently based on the name you provide for each container in the containers array:

  • If the name field matches the name of the applicable resource image, the Kubernetes Operator updates the Ops Manager or MongoDB database container in the Pod to which the template or podTemplate applies:

    • Ops Manager: mongodb-enterprise-ops-manager

    • Backup Daemon Service: mongodb-backup-daemon

    • MongoDB database: mongodb-enterprise-database

    • Application Database: mongodb-enterprise-appdb

  • If the name field does not match the name of the applicable resource image, the Kubernetes Operator creates a new container in each Pod to which the template or podTemplate applies.
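The two cases above can be combined in one podTemplate. The following sketch modifies the existing database container and adds a sidecar; the second container's name (my-sidecar), image, and command are illustrative assumptions, not MongoDB-provided values:

    podSpec:
      podTemplate:
        spec:
          containers:
            # Matches the database image name, so the Kubernetes Operator
            # modifies the existing database container in each Pod:
            - name: mongodb-enterprise-database
              resources:
                limits:
                  memory: 1Gi
            # Does not match any resource image name, so the Kubernetes
            # Operator adds it to each Pod as a new container:
            - name: my-sidecar
              image: busybox:latest
              command: ["sleep", "infinity"]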

Define a Volume Mount for a MongoDB Kubernetes Resource

On-disk files in containers in Pods don't survive container crashes or restarts. Using the spec.podSpec.podTemplate setting, you can add a volume mount to persist data in a MongoDB database resource for the life of the Pod.

To create a volume mount for a MongoDB database resource:

  1. Update the MongoDB database resource definition to include a volume mount for containers in the database pods that the Kubernetes Operator creates.

    Example

    Use spec.podSpec.podTemplate to define a volume mount:

    podSpec:
      podTemplate:
        spec:
          containers:
            - name: mongodb-enterprise-database
              volumeMounts:
                - mountPath: </new/mount/path>
                  name: survives-restart
          volumes:
            - name: survives-restart
              emptyDir: {}
  2. Apply the updated resource definition:

    kubectl apply -f <database-resource-conf>.yaml -n <metadata.namespace>
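An emptyDir volume persists data across container restarts but is lost when the Pod is deleted. If the data must survive Pod deletion as well, one option is to back the same mount with a PersistentVolumeClaim instead. In this sketch, the claim name my-extra-claim is a hypothetical, pre-existing claim:

    podSpec:
      podTemplate:
        spec:
          containers:
            - name: mongodb-enterprise-database
              volumeMounts:
                - mountPath: </new/mount/path>
                  name: survives-restart
          volumes:
            - name: survives-restart
              persistentVolumeClaim:
                claimName: my-extra-claim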

Tune MongoDB Kubernetes Resource Docker Images with an InitContainer

MongoDB resource Docker images run on RHEL and use RHEL's default system configuration. To tune the underlying RHEL system configuration in the MongoDB resource containers, add a privileged init container using the template or podTemplate setting that applies to your deployment.

Example

MongoDB database resource Docker images use the RHEL default keepalive time of 7200. MongoDB recommends a shorter keepalive time of 120 for database deployments.

You can tune the keepalive time in the database resource Docker images if you experience network timeouts or socket errors in communication between clients and the database resources.


To tune Docker images for a MongoDB database resource container:

  1. Update the MongoDB database resource definition to append a privileged InitContainer to the database pods that the Kubernetes Operator creates.

    Example

    Use spec.podSpec.podTemplate to change the keepalive value to the recommended value of 120:

    spec:
      podSpec:
        podTemplate:
          spec:
            initContainers:
              - name: "adjust-tcp-keepalive"
                image: "busybox:latest"
                securityContext:
                  privileged: true
                command: ["sysctl", "-w", "net.ipv4.tcp_keepalive_time=120"]
  2. Apply the updated resource definition:

    kubectl apply -f <database-resource-conf>.yaml -n <metadata.namespace>

Kubernetes adds a privileged InitContainer to each Pod that the Kubernetes Operator creates using the MongoDB resource definition.

Open a shell session to a running container in your database resource Pod and verify your changes.

Example

To follow the previous keepalive example, invoke the following command to get the current keepalive value:

> kubectl exec -n <metadata.namespace> -it <pod-name> -- cat /proc/sys/net/ipv4/tcp_keepalive_time
120

Tip

See also:

Operating System Configuration in the MongoDB Manual.

Build Custom Images with Dockerfile Templates

You can modify MongoDB Dockerfile templates to create custom Kubernetes Operator images that suit your use case. To build a custom image, you need:

  • Your custom Dockerfile, modified from a MongoDB template.

  • The MongoDB-provided context image for your template.

The Dockerfiles used to build container images are publicly available from the MongoDB Enterprise Kubernetes GitHub repository.

The Dockerfile directory is organized by resource name, version, and distribution:

├── <resource name>
│   └── <image version>
│       └── <base distribution>
│           └── Dockerfile template

Copy the template you want to use to your own Dockerfile and modify as desired.
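The exact contents of each template vary, but because the build consumes the context image through the imagebase build argument, a minimal modified Dockerfile can be sketched as follows. The RUN step is an illustrative assumption, not part of the MongoDB templates:

    # Sketch: consume the MongoDB-provided context image as the base,
    # assuming the template declares a build argument named "imagebase".
    ARG imagebase
    FROM ${imagebase}

    # Illustrative customization; replace with your own build steps.
    RUN echo "custom build step"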

To build an image from any MongoDB Dockerfile template, you must supply its context image.

Each Dockerfile template has one associated context image, retrievable from the same Quay.io registry as the original images. Context images are always tagged in the format quay.io/mongodb/<resource-name>:<image-version>-context.

To supply a context image to docker build, include the --build-arg option with the imagebase variable set to a Quay.io tag, where <resource-name> and <image-version> match your Dockerfile template.

Example

If you want to build the mongodb-enterprise-database version 2.0.0 image for any distribution, include:

--build-arg imagebase=quay.io/mongodb/mongodb-enterprise-database:2.0.0-context

The Ubuntu distribution for mongodb-enterprise-operator version 1.9.1 is based on ubuntu:16.04 by default. In this example, that base Dockerfile template is modified to use ubuntu:18.04 and saved as myDockerfile.

The following command builds the custom image and gives it the tag 1.9.1-ubuntu-1804:

cat myDockerfile | docker build --build-arg imagebase=quay.io/mongodb/mongodb-enterprise-operator:1.9.1-context \
--tag mongodb-enterprise-operator:1.9.1-ubuntu-1804 -

Note

Include a hyphen (-) at the end of docker build to read the output of cat myDockerfile instead of providing a local directory as build context.

Tip

See also:

To learn more about docker build, see the Docker documentation.
