Multi-Cluster Sharded Cluster

You can distribute MongoDB sharded clusters across multiple Kubernetes clusters. With multi-cluster functionality, you can:

  • Improve the resilience of your deployment by distributing it across multiple Kubernetes clusters, each in a different geographic region.

  • Configure your deployment for geo-sharding by deploying the primary nodes of specific shards in different Kubernetes clusters located closer to the application or the clients that depend on that data, which reduces latency.

  • Tune your deployment for performance. For example, you can deploy read-only analytics nodes for all or some shards in different Kubernetes clusters, or with custom resource allocations, as sketched after this list.
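
For the analytics-node case, a minimal sketch of the pattern, using only fields that also appear in the full manifest in step 2: each shard keeps three voting members in the main cluster and gains one non-voting, priority-0 (effectively read-only) member in a second cluster.

shard:
  clusterSpecList:
    - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
      members: 3 # voting, data-bearing members in the main cluster
    - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
      members: 1 # read-only member for analytics-style workloads
      memberConfig:
        - votes: 0
          priority: "0"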

Before you begin the following procedure, perform the following actions:

  • Install kubectl.

  • Install mongosh.

  • Complete the GKE Clusters procedure or the equivalent.

  • Complete the TLS Certificates procedure or the equivalent.

  • Complete the Istio service mesh procedure or the equivalent.

  • Complete the Deploy the MongoDB Operator procedure.

  • Complete the Multi-Cluster Ops Manager procedure. You can skip this step if you use Cloud Manager instead of Ops Manager.

  • Set the required environment variables as follows:

# This script builds on top of the environment configured in the setup guides.
# It depends on the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${MDB_NAMESPACE}
export RESOURCE_NAME=mdb
export MONGODB_VERSION=8.0.5
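
If you didn't follow the setup guides, define the cluster contexts and the namespace yourself before continuing. A minimal sketch, where the GKE context names and the namespace are hypothetical placeholders:

export K8S_CLUSTER_0_CONTEXT_NAME=gke_my-project_us-east1-b_cluster-0   # hypothetical
export K8S_CLUSTER_1_CONTEXT_NAME=gke_my-project_us-central1-a_cluster-1 # hypothetical
export K8S_CLUSTER_2_CONTEXT_NAME=gke_my-project_us-west1-a_cluster-2   # hypothetical
export MDB_NAMESPACE=mongodb                                             # hypothetical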

You can find all of the included source code in the MongoDB Kubernetes Operator repository.

1. Run the following command to generate the required TLS certificates for each shard, the mongos, and the config servers.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-0-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-0-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-1-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-1-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-2-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-2-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-config-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-config-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-mongos-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-mongos-cert
  usages:
    - server auth
    - client auth
EOF
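
Optionally, you can confirm that cert-manager has issued all six certificates before continuing. A minimal sketch using cert-manager's standard Ready condition (the certificate names come from the manifest above):

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" wait --for=condition=Ready certificate mdb-sh-cert mdb-sh-0-cert mdb-sh-1-cert mdb-sh-2-cert mdb-sh-config-cert mdb-sh-mongos-cert --timeout=300s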
2. Run the following command to deploy the MongoDB custom resource.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: ${RESOURCE_NAME}
spec:
  shardCount: 3
  # we don't specify mongodsPerShardCount, mongosCount and configServerCount as they don't make sense for multi-cluster
  topology: MultiCluster
  type: ShardedCluster
  version: ${MONGODB_VERSION}
  opsManager:
    configMapRef:
      name: mdb-org-project-config
  credentials: mdb-org-owner-credentials
  persistent: true
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
    authentication:
      enabled: true
      modes: ["SCRAM"]
  mongos:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 2
  configSrv:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 3 # config server will have 3 members in main cluster
      - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
        members: 1 # config server will have additional non-voting, read-only member in this cluster
        memberConfig:
          - votes: 0
            priority: "0"
  shard:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 3 # each shard will have 3 members in this cluster
      - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
        members: 1 # each shard will have additional non-voting, read-only member in this cluster
        memberConfig:
          - votes: 0
            priority: "0"
EOF
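
The deployment can take several minutes while the operator reconciles stateful sets across the member clusters. If you want to follow progress interactively before running the wait command in the next step, a simple sketch (mdb is the same short name the wait command below uses):

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get "mdb/${RESOURCE_NAME}" -w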
3. Run the following command to confirm that all resources are up and running.

echo; echo "Waiting for MongoDB to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" wait --for=jsonpath='{.status.phase}'=Running "mdb/${RESOURCE_NAME}" --timeout=900s
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
4. Run the following command to enable external access to your MongoDB custom resource. It re-applies the manifest from step 2 with the externalAccess field added, which creates load balancers with externally accessible IP addresses for each MongoDB instance.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: ${RESOURCE_NAME}
spec:
  shardCount: 3
  # we don't specify mongodsPerShardCount, mongosCount and configServerCount as they don't make sense for multi-cluster
  topology: MultiCluster
  type: ShardedCluster
  version: ${MONGODB_VERSION}
  opsManager:
    configMapRef:
      name: mdb-org-project-config
  credentials: mdb-org-owner-credentials
  persistent: true
  externalAccess: {}
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
    authentication:
      enabled: true
      modes: ["SCRAM"]
  mongos:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 2
  configSrv:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 3 # config server will have 3 members in main cluster
      - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
        members: 1 # config server will have additional non-voting, read-only member in this cluster
        memberConfig:
          - votes: 0
            priority: "0"
  shard:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 3 # each shard will have 3 members in this cluster
      - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
        members: 1 # each shard will have additional non-voting, read-only member in this cluster
        memberConfig:
          - votes: 0
            priority: "0"
EOF
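
Provisioning the load balancers can take a few minutes. To inspect the external services and the addresses assigned to them, a minimal sketch (the -svc-external suffix matches the service queried in step 6):

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get svc | grep external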
5. Run the following command to create a user and credentials in your sharded cluster.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: sc-user-password
type: Opaque
stringData:
  password: password
---
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: sc-user
spec:
  passwordSecretKeyRef:
    name: sc-user-password
    key: password
  username: "sc-user"
  db: "admin"
  mongodbResourceRef:
    name: ${RESOURCE_NAME}
  roles:
    - db: "admin"
      name: "root"
EOF
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" wait --for=jsonpath='{.status.phase}'=Updated -n "${MDB_NAMESPACE}" mdbu/sc-user
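
The manifest above embeds the password in plain text for simplicity. In a real deployment you may prefer to create the secret separately from a literal (or a file) rather than applying it in a manifest; an equivalent sketch with standard kubectl:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" create secret generic sc-user-password --from-literal=password='password'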
6. Run the following command to verify that the MongoDB resource in your sharded cluster is accessible.

# Load balancers sometimes take longer to get an IP assigned, so we need to retry.
while [ -z "$(kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" svc "${RESOURCE_NAME}-mongos-0-0-svc-external" -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")" ]
do
sleep 5
done
external_ip="$(kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" svc "${RESOURCE_NAME}-mongos-0-0-svc-external" -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")"
mkdir -p certs
kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" cm/ca-issuer -o=jsonpath='{.data.ca-pem}' > certs/ca.crt
mongosh --host "${external_ip}" --username sc-user --password password --tls --tlsCAFile certs/ca.crt --tlsAllowInvalidHostnames --eval "db.runCommand({connectionStatus : 1})"
The command returns output similar to the following:

{
  authInfo: {
    authenticatedUsers: [ { user: 'sc-user', db: 'admin' } ],
    authenticatedUserRoles: [ { role: 'root', db: 'admin' } ]
  },
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1741702735, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('kVqqNDHTI1zxYrPsU0QaYqyksJA=', 0),
      keyId: Long('7480555706358169606')
    }
  },
  operationTime: Timestamp({ t: 1741702735, i: 1 })
}
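
You can also connect with a standard connection string instead of separate host and credential flags; a sketch equivalent to the mongosh invocation above, assuming the default mongos port 27017:

mongosh "mongodb://sc-user:password@${external_ip}:27017/admin" --tls --tlsCAFile certs/ca.crt --tlsAllowInvalidHostnames --eval "db.runCommand({connectionStatus: 1})"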
