
Multi-Cluster MongoDB Ops Manager Without a Service Mesh

MongoDB Ops Manager facilitates workloads such as backing up data and monitoring database performance. To make a multi-cluster MongoDB Ops Manager and Application Database deployment resilient to the failure of an entire data center or zone, deploy the Ops Manager application and the Application Database on multiple Kubernetes clusters.

Before you begin the following procedure, perform the following actions:

  • Install kubectl.

  • Complete the GKE Clusters procedure or the equivalent.

  • Complete the TLS Certificates procedure or the equivalent.

  • Complete the external DNS procedure or the equivalent.

  • Complete the Deploy the MongoDB Operator procedure.

  • Set the required environment variables as follows:

# This script builds on top of the environment configured in the setup guides.
# It relies on the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${OM_NAMESPACE}
export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store
# If you use your own S3 storage, set the values accordingly.
# By default we install MinIO to handle S3 storage; the default credentials are set here.
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"
export OPS_MANAGER_VERSION="8.0.5"
export APPDB_VERSION="8.0.5-ent"

You can find all of the included source code in the MongoDB Kubernetes Operator repository.

1

Create cert-manager Certificate resources for the MongoDB Ops Manager application and the Application Database.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-cert
spec:
  dnsNames:
    - ${OPS_MANAGER_EXTERNAL_DOMAIN}
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-db-cert
spec:
  dnsNames:
    - "*.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-db-cert
  usages:
    - server auth
    - client auth
EOF
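
Optionally, before moving on, you can check that cert-manager has issued both certificates. This is a minimal sketch that assumes the standard Ready condition reported by cert-manager's Certificate resources:

# Optional check: wait until cert-manager marks both Certificates as Ready.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=condition=Ready \
  certificate/om-cert certificate/om-db-cert --timeout=120s
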
2

Extract the Ops Manager TLS certificate and key from the generated secret and create a Google Cloud self-managed SSL certificate from them.

mkdir -p certs
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.crt']}" | base64 --decode > certs/tls.crt
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.key']}" | base64 --decode > certs/tls.key
gcloud compute ssl-certificates create om-certificate --certificate=certs/tls.crt --private-key=certs/tls.key
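
To verify that Google Cloud accepted the certificate, you can optionally describe it. This is a sketch; the output fields vary by gcloud version:

# Optional check: confirm the self-managed SSL certificate exists in Google Cloud.
gcloud compute ssl-certificates describe om-certificate
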
3

This load balancer distributes traffic across all of the MongoDB Ops Manager replicas running in your Kubernetes clusters.

gcloud compute firewall-rules create fw-ops-manager-hc \
--action=allow \
--direction=ingress \
--target-tags=mongodb \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--rules=tcp:8443
gcloud compute health-checks create https om-healthcheck \
--use-serving-port \
--request-path=/monitor/health
gcloud compute backend-services create om-backend-service \
--protocol HTTPS \
--health-checks om-healthcheck \
--global
gcloud compute url-maps create om-url-map \
--default-service om-backend-service
gcloud compute target-https-proxies create om-lb-proxy \
--url-map om-url-map \
--ssl-certificates=om-certificate
gcloud compute forwarding-rules create om-forwarding-rule \
--global \
--target-https-proxy=om-lb-proxy \
--ports=443
NAME               NETWORK  DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
fw-ops-manager-hc  default  INGRESS    1000      tcp:8443        False
NAME            PROTOCOL
om-healthcheck  HTTPS
NAME                BACKENDS  PROTOCOL
om-backend-service            HTTPS
NAME        DEFAULT_SERVICE
om-url-map  backendServices/om-backend-service
NAME         SSL_CERTIFICATES  URL_MAP     REGION  CERTIFICATE_MAP
om-lb-proxy  om-certificate    om-url-map
4

Create a DNS A record that points the Ops Manager external domain at the load balancer's IP address.

ip_address=$(gcloud compute forwarding-rules describe om-forwarding-rule --global --format="get(IPAddress)")
gcloud dns record-sets create "${OPS_MANAGER_EXTERNAL_DOMAIN}" --zone="${DNS_ZONE}" --type="A" --ttl="300" --rrdatas="${ip_address}"
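
Optionally, you can confirm that the record exists in Cloud DNS and resolves to the load balancer's IP address. This is a sketch; propagation to your local resolver may take a few minutes:

# Optional check: list the A record and resolve the Ops Manager domain.
gcloud dns record-sets list --zone="${DNS_ZONE}" | grep "${OPS_MANAGER_EXTERNAL_DOMAIN}"
nslookup "${OPS_MANAGER_EXTERNAL_DOMAIN}"
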
5

Create a secret containing the credentials for the Ops Manager admin user.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${OM_NAMESPACE}" create secret generic om-admin-user-credentials \
--from-literal=Username="admin" \
--from-literal=Password="Passw0rd@" \
--from-literal=FirstName="Jane" \
--from-literal=LastName="Doe"
6

Deploy the MongoDBOpsManager resource, with the Ops Manager application and the Application Database spread across the member clusters.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: false
EOF
7

Wait for the Kubernetes Operator to start deploying the Application Database and MongoDB Ops Manager; both first report the Pending phase.

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s
echo "Waiting for Ops Manager to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Pending opsmanager/om --timeout=600s
Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
8

Add the network endpoint groups (NEGs) created for the Ops Manager services in each cluster as backends of the load balancer's backend service.

svcneg0=$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')
gcloud compute backend-services add-backend om-backend-service \
--global \
--network-endpoint-group="${svcneg0}" \
--network-endpoint-group-zone="${K8S_CLUSTER_0_ZONE}" \
--balancing-mode RATE --max-rate-per-endpoint 5
svcneg1=$(kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')
gcloud compute backend-services add-backend om-backend-service \
--global \
--network-endpoint-group="${svcneg1}" \
--network-endpoint-group-zone="${K8S_CLUSTER_1_ZONE}" \
--balancing-mode RATE --max-rate-per-endpoint 5
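
Optionally, you can check whether the load balancer considers the newly attached NEG endpoints healthy. This is a sketch; endpoints report healthy only after the Ops Manager Pods respond on /monitor/health:

# Optional check: show backend health for the Ops Manager backend service.
gcloud compute backend-services get-health om-backend-service --global
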
9

Wait for the Application Database and MongoDB Ops Manager to reach the Running phase, then verify the deployment.

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Disabled        15m
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         12m
om-db-0-0  3/3    Running  0         3m50s
om-db-0-1  3/3    Running  0         4m38s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         12m
om-1-1     1/1    Running  0         8m46s
om-db-1-0  3/3    Running  0         2m2s
om-db-1-1  3/3    Running  0         2m54s
10

Install the MinIO operator, deploy an S3-compatible tenant, and add the oplog and snapshot buckets to it.

kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -
kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -
# add two buckets to the tenant config
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
--type='json' \
-p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
11

Create secrets containing the S3 credentials and the CA certificate that Ops Manager uses to connect to the S3-compatible storage.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-access-secret \
--from-literal=accessKey="${S3_ACCESS_KEY}" \
--from-literal=secretKey="${S3_SECRET_KEY}"
# minio TLS secrets are signed with the default k8s root CA
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-ca-cert \
--from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
12

Update the MongoDBOpsManager resource to enable backup with the S3 oplog and snapshot stores.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
      members: 0
      backup:
        members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: true
    s3Stores:
      - name: my-s3-block-store
        s3SecretRef:
          name: "s3-access-secret"
        pathStyleAccessEnabled: true
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    s3OpLogStores:
      - name: my-s3-oplog-store
        s3SecretRef:
          name: "s3-access-secret"
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
        pathStyleAccessEnabled: true
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
EOF
13

Wait for Backup, the Application Database, and MongoDB Ops Manager to reach the Running phase, then verify the deployment.

echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Running         14m
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         11m
om-db-0-0  3/3    Running  0         5m35s
om-db-0-1  3/3    Running  0         6m20s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         11m
om-1-1     1/1    Running  0         8m28s
om-db-1-0  3/3    Running  0         3m52s
om-db-1-1  3/3    Running  0         4m48s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2-lucian
NAME                  READY  STATUS   RESTARTS  AGE
om-2-backup-daemon-0  1/1    Running  0         2m
om-db-2-0             3/3    Running  0         2m55s
14

To configure credentials, you must create a MongoDB Ops Manager organization, generate a programmatic API key in the MongoDB Ops Manager UI, and create a secret using your load balancer IP. See Create Credentials for the Kubernetes Operator to learn more.
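
For example, once you have generated the organization's programmatic API key, you could store it for the Kubernetes Operator roughly as follows. This is a sketch, not part of the procedure above: the secret and ConfigMap names, the placeholder key values, the organization ID, and the project name are illustrative and must be replaced with values from your MongoDB Ops Manager organization.

# Hypothetical example - replace the placeholders with the API key pair and
# organization ID generated in the Ops Manager UI.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic om-org-credentials \
  --from-literal=user="<publicApiKey>" \
  --from-literal=publicApiKey="<privateApiKey>"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create configmap om-org-project \
  --from-literal=baseUrl="https://${OPS_MANAGER_EXTERNAL_DOMAIN}" \
  --from-literal=orgId="<orgId>" \
  --from-literal=projectName="om-project"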
