Migrate Self-Managed MongoDB from EC2 to EKS

I have installed MongoDB Community Edition v7.0.1 on an EC2 instance with a 64 GiB EBS volume for data and logs. We are now moving to a microservice architecture and need to migrate the MongoDB instance from EC2 to AWS EKS with a Persistent Volume setup. I’ve been looking at options online, but I’m not sure how to migrate the existing data on the EBS volume to a PV/PVC in EKS with no downtime.

Can anyone please guide me?

I’m not aware of an easy way to do that with Community.

You can export the data and then import it into the new deployment within EKS, but there’s no way (with Community edition) to keep the data synchronised until you cut over.
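
For illustration, a plain dump-and-restore looks something like this (hosts, credentials, and paths here are placeholders):

```bash
# Dump everything from the EC2 instance.
mongodump --host ec2-mongo.internal --port 27017 \
  --username admin --password '<password>' --authenticationDatabase admin \
  --out /backup/dump

# Restore into the EKS deployment, via whatever endpoint exposes it
# (NodePort, LoadBalancer, or a kubectl port-forward).
mongorestore --host <eks-endpoint> --port 27017 \
  --username admin --password '<password>' --authenticationDatabase admin \
  /backup/dump
```

Writes that land on the source after the dump starts won’t be captured, so you’d need to freeze writes (i.e. take downtime) around the cutover.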

@Dan_Mckean Thanks a lot. I am looking at the replica set option to sync the DB between the EC2 MongoDB and the EKS MongoDB, exposed via either a NodePort or a LoadBalancer. I changed the EC2 MongoDB deployment to use a keyfile, which I assume is essential for the replica set members to authenticate with each other. However, I am not sure how to use the keyfile in the EKS MongoDB deployment (I am using the MongoDB Kubernetes Operator). Also, the README in the above repo does not have any information on keyfile usage. Kindly advise. Thanks!
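
For context, the keyfile setup on the EC2 side is the standard one; a sketch of what I ran (the path and replica set name are just examples):

```bash
# Generate a shared keyfile for internal authentication and lock down permissions.
openssl rand -base64 756 > /etc/mongodb/keyfile
chmod 400 /etc/mongodb/keyfile
chown mongodb:mongodb /etc/mongodb/keyfile

# Reference it in /etc/mongod.conf, then restart:
#   security:
#     keyFile: /etc/mongodb/keyfile
#   replication:
#     replSetName: rs0
sudo systemctl restart mongod
```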

That’s a nice way to achieve it!

But I’m sorry to say that right now the Operator has no official support for that kind of hybrid cluster.

I’ll ask in our team Slack channel if there’s a way to make it work!

Hey,

Though an “expand and contract” migration like the one you’re describing is a common tactic here, it’s more commonly used on-prem and not so much for straddling VMs and K8s.

While there’s nothing stopping it from working in theory, the suggestion was that mongomirror would be a viable, and likely simpler, alternative to trying to create a replica set that spans K8s and VMs, managed by the Operator. Cluster-to-Cluster Sync (mongosync) is the equivalent if you’re on 6.0+.

Worth noting that neither is an official option here. mongomirror is not officially supported for a self-hosted to self-hosted migration, but it does work. And there’s no official support for Community Edition when it comes to Cluster-to-Cluster Sync: https://www.mongodb.com/docs/cluster-to-cluster-sync/current/reference/limitations/#mongodb-community-edition.
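
For illustration, a mongosync run looks roughly like this (connection strings are placeholders, and mongosync has to be able to reach both clusters from wherever it runs):

```bash
# Point mongosync at the source (cluster0) and destination (cluster1).
mongosync \
  --cluster0 "mongodb://admin:<password>@ec2-mongo.internal:27017/?replicaSet=rs0" \
  --cluster1 "mongodb://admin:<password>@<eks-endpoint>:27017/?replicaSet=rs0"

# From another shell, start the sync via mongosync's HTTP API.
curl -X POST http://localhost:27182/api/v1/start \
  -H 'Content-Type: application/json' \
  -d '{"source": "cluster0", "destination": "cluster1"}'

# At cutover: stop writes on the source, then commit.
curl -X POST http://localhost:27182/api/v1/commit \
  -H 'Content-Type: application/json' -d '{}'
```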

Thanks @Dan_Mckean, appreciate your help. I tried using the keyFile from the K8s deployment on the EC2 MongoDB instance and aligned the authentication setup as well. With this approach the authentication is successful, but I’m getting a replica set ID mismatch:
`lastHeartbeatMessage: "replica set IDs do not match, ours: 65cdafd3d2xxxxxxxx; remote node's: 65cda23ea90xxxxxxxxxxxxx"`
I also tried the mongosync approach, running mongosync from the EC2 instance; however, mongosync is unable to connect to the EKS MongoDB, which is exposed via a NodePort. I guess that’s because the replica set members are advertised with internal K8s DNS names, i.e. <pod>.<service>.<namespace>.svc.cluster.local:27017. Would a rs.reconfig() work here?
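
For reference, this is how I’m comparing the two sides (hosts and credentials redacted):

```bash
# The replica set ID is generated by rs.initiate(), so two independently
# initiated replica sets will never have matching IDs.
mongosh "mongodb://admin:<password>@<host>:27017/?authSource=admin" \
  --eval 'rs.conf().settings.replicaSetId'

# The hostnames a replica set advertises are what remote clients (including
# mongosync) redial after the initial connection; on EKS these come back
# as the internal cluster DNS names.
mongosh "mongodb://admin:<password>@<host>:27017/?authSource=admin" \
  --eval 'rs.conf().members.map(m => m.host)'
```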

Does MongoDB Enterprise support hybrid replication between K8s and VM-based deployments? Kindly advise. Thanks!

Sorry for the late reply. I was on leave.

I imagine you’d need to set up an external service to be able to access the EKS deployment from the EC2 instance. I’m far from an expert on EKS so I won’t try to advise there. But I would say that if you can connect to the deployment manually, then mongosync running from the same place should have what it needs too.
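
As a sketch only (all names here are hypothetical, and it assumes the operator created a StatefulSet called mongodb in namespace mongodb), a per-pod LoadBalancer service would look something like:

```bash
# Expose one member individually so it's reachable from outside the cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mongodb-0-external
  namespace: mongodb
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: mongodb-0
  ports:
    - port: 27017
      targetPort: 27017
EOF

# Verify the manual connection from the EC2 instance first:
mongosh "mongodb://admin:<password>@<lb-hostname>:27017/?authSource=admin&directConnection=true" \
  --eval 'db.runCommand({ ping: 1 })'
```

Bear in mind the replica set will still advertise its internal cluster DNS names, so anything that redials members (like mongosync) needs those names to resolve externally, or needs the advertised hostnames remapped.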

Regarding the Enterprise Operator… no, we don’t have a hybrid replication mechanism. For now we don’t support spanning a deployment across VMs and K8s, and it’s never actually been requested. So for most migrations, Cluster-to-Cluster Sync is used. That works well.