Considerations¶
MANAGED_SECURITY_CONTEXT for Kubernetes Operator OpenShift Deployments¶
When you deploy the Kubernetes Operator to OpenShift, you must set the MANAGED_SECURITY_CONTEXT flag to true. This value is set for you in the mongodb-enterprise-openshift.yaml and values-openshift.yaml files included in the MongoDB Enterprise Kubernetes Operator repository.
For more information on modifying this value, see the instructions for the installation method you want to use.
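As an illustration, the flag is an environment variable on the operator container. A minimal sketch of the relevant fragment, following the standard Kubernetes Deployment schema (the container name is illustrative and the surrounding fields are abbreviated, not copied from the shipped file):

```yaml
# Sketch of the MANAGED_SECURITY_CONTEXT setting inside the operator
# Deployment (abbreviated; the container name here is an assumption).
spec:
  template:
    spec:
      containers:
        - name: mongodb-enterprise-operator
          env:
            - name: MANAGED_SECURITY_CONTEXT
              value: "true"
```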
Docker Container Details¶
MongoDB builds the container images from the latest builds of the following operating systems:
| If you get your Kubernetes Operator from… | …the container uses |
|---|---|
| quay.io or GitHub | Ubuntu 16.04 |
| OpenShift | Red Hat Enterprise Linux 7 |
MongoDB, Inc. updates all packages on these images before each release; releases occur every three weeks.
Validation Webhook¶
The Kubernetes Operator uses a validation webhook to prevent users from applying invalid resource definitions. The webhook rejects the invalid request, and the Kubernetes Operator doesn't create or update the resource.
The ClusterRole and ClusterRoleBinding for the webhook are included in the default configuration files that you apply during installation. To create the role and binding, you must have cluster-admin privileges.
If you apply an invalid resource definition, the webhook returns a message that describes the error to the shell:

```
Error from server (shardPodSpec field is not configurable for application databases as it is for sharded clusters and appdbs are replica sets): error when creating "my-ops-manager.yaml": admission webhook "ompolicy.mongodb.com" denied the request: shardPodSpec field is not configurable for application databases as it is for sharded clusters and appdbs are replica sets
```
The Kubernetes Operator doesn't require the validation webhook to create or update resources. If you omit the validation webhook, remove its role and binding from the default configuration, or have insufficient privileges to run it, the Kubernetes Operator performs the same validations when it reconciles each resource. The Kubernetes Operator marks resources as Failed if validation encounters a critical error. For non-critical errors, the Kubernetes Operator issues warnings.
GKE deployments
GKE has a known issue with the webhook when deploying to private clusters. To learn more, see Update Google Firewall Rules to Fix WebHook Issues
Kubernetes Operator Deployment Scopes¶
You can deploy the Kubernetes Operator with different scopes based on where you want to deploy Ops Manager and MongoDB Kubernetes resources:
- Operator in Same Namespace as Resources (Default)
- Operator in Different Namespace Than Resources
- Cluster-Wide Scope
Operator in Same Namespace as Resources¶
You scope the Kubernetes Operator to a namespace. The Kubernetes Operator watches Ops Manager and MongoDB Kubernetes resources in that same namespace.
This is the default scope when you install the Kubernetes Operator using the installation instructions.
Operator in Different Namespace Than Resources¶
You scope the Kubernetes Operator to a namespace. The Kubernetes Operator watches Ops Manager and MongoDB Kubernetes resources in the namespace you specify.
You must use helm to install the Kubernetes Operator with this scope. Follow the relevant helm installation instructions, but use the following command to set the namespace for the Kubernetes Operator to watch:
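A sketch of the helm invocation, assuming the chart exposes the watch namespace through an `operator.watchNamespace` value (the value name, release name, and chart path here are illustrative; confirm them against your chart's values file):

```shell
# Hypothetical invocation: install the operator and point it at a
# different namespace to watch. The operator.watchNamespace value name
# and the helm_chart/ path are assumptions, not taken from this page.
helm install mongodb-enterprise-operator \
    --set operator.watchNamespace="mongodb" \
    helm_chart/
```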
Setting the namespace ensures that:
- The namespace you want the Kubernetes Operator to watch has the correct roles and role bindings.
- The Kubernetes Operator can watch and create resources in the namespace.
Cluster-Wide Scope¶
You scope the Kubernetes Operator to a cluster. The Kubernetes Operator watches Ops Manager and MongoDB Kubernetes resources in all namespaces in the Kubernetes cluster.
Important
You can deploy only one Operator with a cluster-wide scope per Kubernetes cluster.
You must use helm to install the Kubernetes Operator with this scope. Follow the relevant helm installation instructions, but make the following adjustments:
To set the Kubernetes Operator to watch all namespaces, invoke the following command:
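As a rough sketch of that command, assuming the same `operator.watchNamespace` chart value as above (the value name, release name, and chart path are illustrative):

```shell
# Hypothetical invocation: setting the watch namespace to "*" makes the
# operator watch all namespaces. The value name is an assumption.
helm install mongodb-enterprise-operator \
    --set operator.watchNamespace="*" \
    helm_chart/
```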
Create the required service accounts for each namespace where you want to deploy Ops Manager and MongoDB Kubernetes resources:
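A sketch of this step, assuming the chart ships a roles manifest you can apply per namespace (the file path and namespace here are illustrative; use the file shipped with your copy of the chart):

```shell
# Hypothetical: apply the service accounts and roles in each namespace
# where Ops Manager or MongoDB resources will run. The file path
# helm_chart/templates/database-roles.yaml is an assumption.
kubectl apply -f helm_chart/templates/database-roles.yaml \
    --namespace mongodb
```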
If you install a cluster-wide Kubernetes Operator without helm:

- Ensure that spec.template.spec.containers.name.env.name: WATCH_NAMESPACE is set to * in mongodb-enterprise.yaml.
- If you deploy the Kubernetes Operator to OpenShift, ensure that you create all required local Kubernetes service accounts and secrets. Use oc or the OpenShift Container Platform UI to apply the following YAML file before you deploy the Kubernetes Operator.

Note

In the sample YAML file, replace <namespace> with the namespace that you want to deploy the Kubernetes Operator to.
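The sample file itself is not reproduced on this page; as a rough sketch, the kind of per-namespace object such a file defines looks like the following (the service account name is illustrative, and the shipped file defines more objects, including secrets):

```yaml
# Sketch only: one of the per-namespace objects the sample file creates.
# The name is an assumption; keep <namespace> as a placeholder to replace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongodb-enterprise-operator
  namespace: <namespace>
```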
Customize the CustomResourceDefinitions that the Kubernetes Operator Watches¶
Earlier versions of the Kubernetes Operator would crash on startup if any one of the MongoDB CustomResourceDefinitions was not present in the cluster. For instance, you had to install the CustomResourceDefinition for Ops Manager even if you did not plan to deploy it with the Kubernetes Operator.
You can now specify which custom resources you want the Kubernetes Operator to watch. This allows you to install the CustomResourceDefinition for only the resources that you want the Kubernetes Operator to manage.
You must use helm to configure the Kubernetes Operator to watch only the custom resources you specify. Follow the relevant helm installation instructions, but make the following adjustments:
Decide which CustomResourceDefinitions you want to install. You can install any number of the following:

| Value | Description |
|---|---|
| mongodb | Install the CustomResourceDefinitions for the database resources and also watch those resources. |
| mongodbusers | Install the CustomResourceDefinitions for the MongoDB user resources and also watch those resources. |
| opsmanagers | Install the CustomResourceDefinitions for the Ops Manager resources and also watch those resources. |

Install each CustomResourceDefinition that you want the Kubernetes Operator to manage from the helm_chart/crds directory:
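A sketch of this step (the individual file names under helm_chart/crds are assumptions; list the directory to confirm what your copy of the chart ships):

```shell
# Hypothetical: install only the CRDs you need from the chart's crds
# directory. The file names below are assumptions.
kubectl apply -f helm_chart/crds/mongodb.mongodb.com.yaml
kubectl apply -f helm_chart/crds/mongodbusers.mongodb.com.yaml
```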
Install the Helm Chart and specify which CustomResourceDefinitions you want the Kubernetes Operator to watch.
Separate each custom resource with a comma:
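A sketch of that invocation, assuming the chart exposes the list through an `operator.watchedResources` value (the value name, release name, and chart path are illustrative; confirm them against your chart's values file):

```shell
# Hypothetical: limit the operator to the listed resource types,
# comma-separated inside braces. The value name is an assumption.
helm install mongodb-enterprise-operator \
    --set operator.watchedResources="{mongodb,mongodbusers}" \
    helm_chart/
```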