
Multi-Cluster ReplicaSet

A multi-Kubernetes-cluster MongoDB deployment lets you place MongoDB instances in a global cluster that spans multiple geographic regions, improving data availability and global distribution.

Before you begin the following procedure, perform the following actions:

  • Install kubectl.

  • Install mongosh.

  • Complete the GKE Clusters procedure or an equivalent procedure.

  • Complete the TLS Certificates procedure or an equivalent procedure.

  • Complete the Istio Service Mesh procedure or an equivalent procedure.

  • Complete the Deploy the MongoDB Operator procedure.

  • Complete the Multi-Cluster Ops Manager procedure. You can skip this step if you use Cloud Manager instead of Ops Manager.

  • Set the required environment variables as follows:

# This script builds on top of the environment configured in the setup guides.
# It depends on the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${MDB_NAMESPACE}
export RESOURCE_NAME=mdb
export MONGODB_VERSION=8.0.5

You can find all included source code in the MongoDB Kubernetes Operator repository.

1

Run the following script to create the required TLS certificates from the certificate issuer.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-cert
  usages:
    - server auth
    - client auth
EOF
2

Set spec.credentials and spec.opsManager.configMapRef.name to the values that you defined in the Multi-Cluster Ops Manager procedure, define your security settings, and deploy the MongoDBMultiCluster resource. In the following code sample, duplicateServiceObjects is set to false to enable DNS proxying in Istio.

Note

To enable cross-cluster DNS resolution through the Istio service mesh, this tutorial creates service objects with a single ClusterIP address per Kubernetes Pod.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBMultiCluster
metadata:
  name: ${RESOURCE_NAME}
spec:
  type: ReplicaSet
  version: ${MONGODB_VERSION}
  opsManager:
    configMapRef:
      name: mdb-org-project-config
  credentials: mdb-org-owner-credentials
  duplicateServiceObjects: false
  persistent: true
  externalAccess: {}
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
    authentication:
      enabled: true
      modes: ["SCRAM"]
  clusterSpecList:
    - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
      members: 2
    - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
      members: 1
    - clusterName: ${K8S_CLUSTER_2_CONTEXT_NAME}
      members: 2
EOF
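As a quick sanity check of the note above, you can verify that the operator created one ClusterIP service per Pod. The snippet below is a sketch, not part of the official guide: it assumes the operator's `<resource>-<clusterIndex>-<podIndex>-svc` service-naming pattern (the same pattern behind `${RESOURCE_NAME}-0-0-svc-external` used later in this guide) and the 2/1/2 member counts from the clusterSpecList above.

```shell
# Sketch: enumerate the per-pod service names implied by clusterSpecList (2, 1, 2 members),
# assuming the operator's "<resource>-<clusterIndex>-<podIndex>-svc" naming pattern.
RESOURCE_NAME="${RESOURCE_NAME:-mdb}"
cluster_idx=0
for members in 2 1 2; do
  pod_idx=0
  while [ "${pod_idx}" -lt "${members}" ]; do
    svc="${RESOURCE_NAME}-${cluster_idx}-${pod_idx}-svc"
    echo "expecting service: ${svc}"
    # Against a live cluster you would confirm each one exists, e.g.:
    # kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" svc "${svc}"
    pod_idx=$((pod_idx + 1))
  done
  cluster_idx=$((cluster_idx + 1))
done
```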
3

Run the following commands to confirm that the MongoDBMultiCluster resource is running.

echo; echo "Waiting for MongoDB to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" wait --for=jsonpath='{.status.phase}'=Running "mdbmc/${RESOURCE_NAME}" --timeout=900s
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
4

Run the following commands to create a MongoDB user and password. Use a strong password in your own deployment.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: rs-user-password
type: Opaque
stringData:
  password: password
---
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: rs-user
spec:
  passwordSecretKeyRef:
    name: rs-user-password
    key: password
  username: "rs-user"
  db: "admin"
  mongodbResourceRef:
    name: ${RESOURCE_NAME}
  roles:
    - db: "admin"
      name: "root"
EOF

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" wait --for=jsonpath='{.status.phase}'=Updated -n "${MDB_NAMESPACE}" mdbu/rs-user
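The manifest above stores the literal string `password`, which is fine for a tutorial but not for a real deployment. As a sketch of the "use a strong password" advice, you could generate a random credential and feed it into the same `rs-user-password` secret; only the secret name below comes from the manifest above, the rest is illustrative.

```shell
# Generate a 24-character alphanumeric password from the kernel's CSPRNG.
# LC_ALL=C keeps tr byte-oriented so multibyte locales don't break the filter.
STRONG_PASSWORD="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)"
echo "generated a ${#STRONG_PASSWORD}-character password"

# Recreate the secret with the generated value (requires cluster access):
# kubectl create secret generic rs-user-password \
#   --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" \
#   --from-literal=password="${STRONG_PASSWORD}" \
#   --dry-run=client -o yaml | \
#   kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f -
```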
5

Run the following commands to verify, with mongosh, that you can access the running MongoDB deployment.

# Load balancers sometimes take longer to get an IP assigned, so retry until one is available.
while [ -z "$(kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" svc "${RESOURCE_NAME}-0-0-svc-external" -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")" ]
do
  sleep 5
done

external_ip="$(kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" svc "${RESOURCE_NAME}-0-0-svc-external" -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")"

mkdir -p certs
kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" cm/ca-issuer -o=jsonpath='{.data.ca-pem}' > certs/ca.crt

mongosh --host "${external_ip}" --username rs-user --password password --tls --tlsCAFile certs/ca.crt --tlsAllowInvalidHostnames --eval "db.runCommand({connectionStatus : 1})"
{
  authInfo: {
    authenticatedUsers: [ { user: 'rs-user', db: 'admin' } ],
    authenticatedUserRoles: [ { role: 'root', db: 'admin' } ]
  },
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1741701953, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('uhYReuUiWNWP6m1lZ5umgDVgO48=', 0),
      keyId: Long('7480552820140146693')
    }
  },
  operationTime: Timestamp({ t: 1741701953, i: 1 })
}
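For applications, the same check is usually expressed as a single connection string rather than separate mongosh flags. The sketch below assembles one under assumptions drawn from this guide (a replica set named after ${RESOURCE_NAME}, default port 27017, and the SCRAM user from step 4); the host IP is a documentation placeholder, so substitute the external IP you looked up above.

```shell
# Illustrative only: compose a TLS + SCRAM connection string from this guide's values.
RESOURCE_NAME="${RESOURCE_NAME:-mdb}"
MDB_USER="rs-user"
MDB_PASSWORD="password"       # the tutorial password; use your real secret value
HOSTS="203.0.113.10:27017"    # placeholder; substitute ${external_ip} from the step above

CONN_STRING="mongodb://${MDB_USER}:${MDB_PASSWORD}@${HOSTS}/?replicaSet=${RESOURCE_NAME}&tls=true&tlsCAFile=certs/ca.crt"
echo "${CONN_STRING}"
```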
