Ops Manager is responsible for enabling workloads such as backing up data, monitoring database performance, and more. To make your multi-cluster Ops Manager and Application Database deployment resilient to failures of entire data centers or zones, deploy the Ops Manager Application and the Application Database on multiple Kubernetes clusters.
Prerequisites
Before you begin the following procedure, perform the following actions:
Install kubectl.
Complete the GKE Clusters procedure or the equivalent.
Complete the TLS Certificates procedure or the equivalent.
Complete the ExternalDNS procedure or the equivalent.
Complete the Deploy the MongoDB Operator procedure.
Set the required environment variables as follows:
# This script builds on top of the environment configured in the setup guides.
# It depends on (uses) the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, then define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${OM_NAMESPACE}

export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store

# If you use your own S3 storage, set the values accordingly.
# By default we install MinIO to handle S3 storage, and these are its default credentials.
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"

export OPS_MANAGER_VERSION="8.0.5"
export APPDB_VERSION="8.0.5-ent"
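Before running the remaining steps, it can help to fail fast if any variable from the setup guides is missing. This helper is illustrative and not part of the official guide; the function name `require_vars` is an assumption introduced here:

```shell
#!/usr/bin/env bash
# Illustrative sanity check: verify that the environment variables
# inherited from the setup guides are defined before proceeding.
require_vars() {
  local missing=0 var
  for var in "$@"; do
    # printenv exits non-zero when the variable is not set
    if ! printenv "${var}" > /dev/null; then
      echo "Missing required environment variable: ${var}" >&2
      missing=1
    fi
  done
  return "${missing}"
}

# Example usage with the variables this guide depends on:
if require_vars K8S_CLUSTER_0_CONTEXT_NAME K8S_CLUSTER_1_CONTEXT_NAME \
                K8S_CLUSTER_2_CONTEXT_NAME OM_NAMESPACE 2>/dev/null; then
  echo "environment ok"
else
  echo "environment incomplete" >&2
fi
```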
Source Code
You can find all of the included source code in the MongoDB Kubernetes Operator repository.
Procedure
Generate TLS certificates.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-cert
spec:
  dnsNames:
    - ${OPS_MANAGER_EXTERNAL_DOMAIN}
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-db-cert
spec:
  dnsNames:
    - "*.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-db-cert
  usages:
    - server auth
    - client auth
EOF
Add the TLS certificate to GCP.
mkdir -p certs

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.crt']}" | base64 --decode > certs/tls.crt
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.key']}" | base64 --decode > certs/tls.key

gcloud compute ssl-certificates create om-certificate --certificate=certs/tls.crt --private-key=certs/tls.key
Create the Kubernetes components required for a load balancer.
This load balancer distributes traffic across all Ops Manager replicas in all three clusters.
gcloud compute firewall-rules create fw-ops-manager-hc \
  --action=allow \
  --direction=ingress \
  --target-tags=mongodb \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --rules=tcp:8443

gcloud compute health-checks create https om-healthcheck \
  --use-serving-port \
  --request-path=/monitor/health

gcloud compute backend-services create om-backend-service \
  --protocol HTTPS \
  --health-checks om-healthcheck \
  --global

gcloud compute url-maps create om-url-map \
  --default-service om-backend-service

gcloud compute target-https-proxies create om-lb-proxy \
  --url-map om-url-map \
  --ssl-certificates=om-certificate

gcloud compute forwarding-rules create om-forwarding-rule \
  --global \
  --target-https-proxy=om-lb-proxy \
  --ports=443
NAME               NETWORK  DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
fw-ops-manager-hc  default  INGRESS    1000      tcp:8443        False
NAME            PROTOCOL
om-healthcheck  HTTPS
NAME                BACKENDS  PROTOCOL
om-backend-service            HTTPS
NAME        DEFAULT_SERVICE
om-url-map  backendServices/om-backend-service
NAME         SSL_CERTIFICATES  URL_MAP     REGION  CERTIFICATE_MAP
om-lb-proxy  om-certificate    om-url-map
Add an "A" record for your external domain to your DNS zone.
ip_address=$(gcloud compute forwarding-rules describe om-forwarding-rule --global --format="get(IPAddress)")

gcloud dns record-sets create "${OPS_MANAGER_EXTERNAL_DOMAIN}" --zone="${DNS_ZONE}" --type="A" --ttl="300" --rrdatas="${ip_address}"
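Once the record exists, an optional way to confirm that it resolves to the load balancer address (illustrative; requires the `dig` utility and propagated DNS, so the output may lag the record creation by up to the TTL):

```shell
# Should print the forwarding rule's IP address once DNS has propagated.
dig +short "${OPS_MANAGER_EXTERNAL_DOMAIN}"
```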
Create credentials for the Ops Manager admin user.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${OM_NAMESPACE}" create secret generic om-admin-user-credentials \
  --from-literal=Username="admin" \
  --from-literal=Password="Passw0rd@" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe"
Deploy Ops Manager.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: false
EOF
Wait for the Kubernetes Operator to pick up the work and for the resource to enter the Pending phase.
The Application Database and Ops Manager deployments then proceed toward completion.
echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s

echo "Waiting for Ops Manager to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Pending opsmanager/om --timeout=600s
Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
Configure the load balancer backend services.
svcneg0=$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')

gcloud compute backend-services add-backend om-backend-service \
  --global \
  --network-endpoint-group="${svcneg0}" \
  --network-endpoint-group-zone="${K8S_CLUSTER_0_ZONE}" \
  --balancing-mode RATE --max-rate-per-endpoint 5
svcneg1=$(kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')

gcloud compute backend-services add-backend om-backend-service \
  --global \
  --network-endpoint-group="${svcneg1}" \
  --network-endpoint-group-zone="${K8S_CLUSTER_1_ZONE}" \
  --balancing-mode RATE --max-rate-per-endpoint 5
Wait for Ops Manager to reach a Running state.
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s

echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s

echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om

echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods

echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Disabled        15m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         12m
om-db-0-0  3/3    Running  0         3m50s
om-db-0-1  3/3    Running  0         4m38s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         12m
om-1-1     1/1    Running  0         8m46s
om-db-1-0  3/3    Running  0         2m2s
om-db-1-1  3/3    Running  0         2m54s
Install MinIO.
kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

# add two buckets to the tenant config
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
  --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
Configure Kubernetes secrets for Ops Manager backups.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-access-secret \
  --from-literal=accessKey="${S3_ACCESS_KEY}" \
  --from-literal=secretKey="${S3_SECRET_KEY}"

# MinIO TLS secrets are signed with the default Kubernetes root CA
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-ca-cert \
  --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
Enable S3 (MinIO) backups in Ops Manager.
kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
      members: 0
      backup:
        members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: true
    s3Stores:
      - name: my-s3-block-store
        s3SecretRef:
          name: "s3-access-secret"
        pathStyleAccessEnabled: true
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    s3OpLogStores:
      - name: my-s3-oplog-store
        s3SecretRef:
          name: "s3-access-secret"
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
        pathStyleAccessEnabled: true
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
EOF
Wait for Ops Manager to reach a Running state.
echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s

echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s

echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om

echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods

echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods

echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met

MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.4    Running             Running        Running         14m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         11m
om-db-0-0  3/3    Running  0         5m35s
om-db-0-1  3/3    Running  0         6m20s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         11m
om-1-1     1/1    Running  0         8m28s
om-db-1-0  3/3    Running  0         3m52s
om-db-1-1  3/3    Running  0         4m48s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2-lucian
NAME                  READY  STATUS   RESTARTS  AGE
om-2-backup-daemon-0  1/1    Running  0         2m
om-db-2-0             3/3    Running  0         2m55s
Create a MongoDB organization and obtain credentials.
To configure credentials, you must create an Ops Manager organization, generate programmatic API keys in the Ops Manager UI, and create a secret with your load balancer's IP address. See "Create Credentials for the Kubernetes Operator" to learn more.
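Once you have generated the programmatic API key in the Ops Manager UI, the secret creation step can be sketched as follows. This is a minimal sketch, not the authoritative procedure: the secret name `organization-secret` and the `publicKey`/`privateKey` field names follow the Kubernetes Operator's documented credentials format, and the placeholder values must be replaced with the key pair you generated. Consult "Create Credentials for the Kubernetes Operator" for the exact steps:

```shell
# Hypothetical sketch: store the programmatic API key as a secret that the
# Kubernetes Operator can use to talk to your new Ops Manager organization.
# Replace the placeholders with the keys generated in the Ops Manager UI.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic organization-secret \
  --from-literal=publicKey="<programmatic-public-key>" \
  --from-literal=privateKey="<programmatic-private-key>"
```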