Known Issues in the MongoDB Enterprise Kubernetes Operator

mongos Instances Fail to Reach Ready State After Disabling Authentication

Note

This issue applies only to sharded clusters that meet the following criteria:

  • Deployed using the Kubernetes Operator 1.13.0
  • Use X.509 authentication
  • Use kubernetes.io/tls secrets for TLS certificates for the MongoDB Agent

If you disable authentication by setting spec.security.auth.enabled to false, the mongos Pods never reach a ready state.
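
For reference, the setting appears in the resource specification as follows. The resource name and surrounding fields are illustrative; only spec.security.auth.enabled is the setting described above.

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-sharded-cluster      # example name
spec:
  type: ShardedCluster
  security:
    auth:
      enabled: false            # disabling authentication triggers this issue in 1.13.0
```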

As a workaround, delete each mongos Pod in your deployment.

Run the following command to list all of your Pods:

kubectl get pods

For each Pod with a name that contains mongos, delete it with the following command:

kubectl delete pod <podname>
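
If your deployment runs several mongos Pods, you can collapse the steps above into a single command. This is a sketch; <namespace> is a placeholder for your deployment's namespace:

```shell
kubectl get pods -n <namespace> -o name | grep mongos | xargs -r kubectl delete -n <namespace>
```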

When you delete a Pod, Kubernetes recreates it. Each Pod that Kubernetes recreates receives the updated configuration and can reach a READY state. To confirm that all of your mongos Pods are READY, run the following command:

kubectl get pods -n <metadata.namespace>

A response like the following indicates that all of your mongos Pods are READY:

NAME                                           READY   STATUS    RESTARTS   AGE
mongodb-enterprise-operator-6495bdd947-ttwqf   1/1     Running   0          50m
my-sharded-cluster-0-0                         1/1     Running   0          12m
my-sharded-cluster-1-0                         1/1     Running   0          12m
my-sharded-cluster-config-0                    1/1     Running   0          12m
my-sharded-cluster-config-1                    1/1     Running   0          12m
my-sharded-cluster-mongos-0                    1/1     Running   0          11m
my-sharded-cluster-mongos-1                    1/1     Running   0          11m
om-0                                           1/1     Running   0          42m
om-db-0                                        2/2     Running   0          44m
om-db-1                                        2/2     Running   0          43m
om-db-2                                        2/2     Running   0          43m

Update TLS Secret for the Application Database

Note

This issue applies only to Ops Manager resources deployed using the Kubernetes Operator 1.13.0.

The Kubernetes Operator doesn’t reconcile resources when you modify the secret that contains the Application Database’s TLS certificate. To force the Kubernetes Operator to reconcile resources, scale the operator down to zero replicas, then scale it up to one.

Note

This is a safe operation. Scaling the mongodb-enterprise-operator deployment does not affect the availability of your deployed Ops Manager and database resources.

Run the following command to scale down:

kubectl scale deployment mongodb-enterprise-operator --replicas=0 -n <metadata.namespace>

Run the following command to scale up:

kubectl scale deployment mongodb-enterprise-operator --replicas=1 -n <metadata.namespace>

Update Google Firewall Rules to Fix WebHook Issues

When you deploy the Kubernetes Operator to GKE (Google Kubernetes Engine) private clusters, creation of MongoDB Kubernetes resources or the MongoDBOpsManager resource can time out. The following message might appear in the logs:

Error setting state to reconciling: Timeout: request did not complete within requested timeout 30s

Google configures its firewalls to restrict access to your Kubernetes Pods. To use the webhook service, add a new firewall rule to grant GKE (Google Kubernetes Engine) control plane access to your webhook service.

The Kubernetes Operator webhook service runs on port 443.
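
As a sketch, you can add such a rule with the gcloud CLI. The rule name is an example; replace the network and control plane CIDR placeholders with your cluster's values:

```shell
gcloud compute firewall-rules create allow-control-plane-to-webhook \
    --network=<cluster-network> \
    --source-ranges=<control-plane-ipv4-cidr> \
    --allow=tcp:443
```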

Configure Persistent Storage Correctly

If no persistent volumes are available when you create a resource, the resulting Pod stays in a transient state, and the Kubernetes Operator fails after 20 retries with the following error:

Failed to update Ops Manager automation config: Some agents failed to register

To prevent this error, ensure that persistent volumes are available before you create the resource.

For testing only, you may instead set persistent: false. This must not be used in production, as data is not preserved between restarts.
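
For example, a test-only resource specification with persistent storage disabled might look like the following; the resource name, type, and version are illustrative:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-test-replica-set     # example name
spec:
  type: ReplicaSet
  members: 3
  version: 4.4.11-ent
  persistent: false             # testing only: data is lost when Pods restart
```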

Remove Resources before Removing Kubernetes

Ops Manager state can diverge from Kubernetes, most often when Kubernetes resources are removed manually. For example, Ops Manager can continue to display an Automation Agent that has already been shut down.

To remove a MongoDB deployment from Kubernetes, delete the resources through their resource specifications first so that no dead Automation Agents remain.

Create Separate Namespaces for Kubernetes Operator and MongoDB Resources

Create the Kubernetes Operator and its resources in separate namespaces so that the following operations work correctly:

kubectl delete pods --all

or

kubectl delete namespace mongodb

If the Kubernetes Operator and its resources share the same mongodb namespace, these operations also remove the Operator. The Operator is then unable to clean up the resource configurations, and you must remove them manually in the Ops Manager Application.
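
As a minimal sketch, assuming example namespace names:

```shell
# Keep the Operator and MongoDB resources apart so that deleting one
# namespace cannot remove the Operator.
kubectl create namespace mongodb-operator
kubectl create namespace mongodb
```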

HTTPS Enabled After Deployment

We recommend that you enable HTTPS before deploying your Ops Manager resources. However, if you enable HTTPS after deployment, your managed resources can no longer communicate with Ops Manager and the Kubernetes Operator reports your resources’ status as Failed.

To resolve this issue, you must delete your Pods by running the following command for each Pod:

kubectl delete pod <replicaset-pod-name>

After deletion, Kubernetes automatically recreates the deleted Pods. Until the new Pods are ready, the resource is unreachable and incurs downtime.

Unable to Update the MongoDB Agent on Application Database Pods

You can’t use Ops Manager to upgrade the MongoDB Agents that run on the Application Database Pods. The MongoDB Agent version that runs on these Pods is embedded in the Application Database Docker image.

You can use the Kubernetes Operator to upgrade the MongoDB Agent version on Application Database Pods as MongoDB publishes new images.

Unable to Pull Enterprise Kubernetes Operator Images from IBM Cloud Paks

If you pull the Kubernetes Operator images from a container registry hosted in IBM Cloud Paks, IBM Cloud Paks renames the images by appending a digest SHA to the official image names. This results in error messages from the Kubernetes Operator similar to the following:

Failed to apply default image tag "cp.icr.io/cp/cpd/ibm-cpd-mongodb-agent@
sha256:10.14.24.6505-1": couldn't parse image reference "cp.icr.io/cp/cpd/
ibm-cpd-mongodb-agent@sha256:10.14.24.6505-1": invalid reference format

As a workaround, update the Ops Manager Application Database resource definition in spec.applicationDatabase.podSpec.podTemplate to specify the new names for the Kubernetes Operator images that contain the digest SHAs, similar to the following example.

applicationDatabase:
  # The version specified must match the one in the image provided in the `mongod` field
  version: 4.4.11-ent
  members: 3
  podSpec:
    podTemplate:
      spec:
        containers:
          - name: mongodb-agent
            image: 'cp.icr.io/cp/cpd/ibm-cpd-mongodb-agent@sha256:689df23cc35a435f5147d9cd8a697474f8451ad67a1e8a8c803d95f12fea0b59'

Machine Memory vs. Container Memory

MongoDB versions older than 3.6.13, 4.0.9, and 4.1.9 report the host machine's total RAM rather than the memory allocated to the container.