Installing Ops Manager to move backups to S3

Dear Mongo community,
I want to try MongoDB Ops Manager, initially as a temporary proof of concept on our prod cluster, by installing a simple test Ops Manager setup.
My goal is to move Atlas cluster backup snapshots from Atlas to AWS S3 buckets in an automated way, following the daily, weekly, and monthly frequency I have configured for cloud backups on my cluster. According to the architecture this should be possible, right?

Reviewing the installation checklist for a test installation, I want to ask about this con:

If you lose the server, you lose everything: users and projects, metadata, backups, automation configurations, stored monitoring metrics, etc.

I am new to the Ops Manager architecture and the way it works.
The docs say that if I lose the server, I lose everything, including backups, so my question is:
As long as the snapshots are stored in my AWS S3 bucket, if I lose the server (the Ops Manager host), will I lose the snapshots present either on my Atlas cluster (the ones I see on the dashboard) or in my AWS S3 bucket?

I want to dig deeper into this failure behavior, both for the proof of concept and to justify going for a production setup with replicas.

Can someone with experience backing databases up via MongoDB Ops Manager tell me how the backup process works, specifically with respect to this host loss?

Hi @Bernardo_Garcia

Just to clarify, MongoDB Atlas and MongoDB Ops Manager are two separate products.

In your first question I see mention only of MongoDB Atlas; however, the image pasted below is from Ops Manager:

My goal is to move Atlas cluster backup snapshots from Atlas to AWS S3 buckets in an automated way, following the daily, weekly, and monthly frequency I have configured for cloud backups on my cluster. According to the architecture this should be possible, right?

Specific to Atlas, there currently isn’t a way to directly export Atlas cluster backups to your own S3 bucket. However, there are currently some feedback posts for Atlas under review which may be useful to read.
If you are hoping to link Ops Manager to Atlas for S3 backups (from the Ops Manager deployment), that isn’t possible.

Reviewing the Installation checklist for a test installation I want to ask about this con:

If you lose the server, you lose everything: users and projects, metadata, backups, automation configurations, stored monitoring metrics, etc.

The con you have stated here is in specific reference to the “Test Install” which consists only of a single server where everything is installed. Production environments should use highly available deployments.

Hope this helps.
Jason


Dear @Jason_Tran, thanks for the update.
When I saw the architecture picture for Ops Manager, I thought an Ops Manager deployment was able to interact with external MongoDB clusters/deployments like the ones we have in the Atlas MongoDB service: something like intercommunication with existing clusters to maintain them, monitor them, and back them up, regardless of whether those clusters existed before the Ops Manager deployment or are outside its scope. I was wrong, then.

With your information, and looking at the architecture picture, it seems that MongoDB Ops Manager manages its own MongoDB deployments, with MongoDB Agents beside them that communicate with Ops Manager to provide its features, including the backup daemon.

So if we want an automated backup solution for mongo clusters (that is, mongo clusters intended as MongoDB deployments), do those clusters have to be created within the MongoDB Ops Manager deployment context?

If so, then those mongo deployments will have nothing to do with the Atlas service; they will instead be part of the Ops Manager deployment.

I ask this because I am interacting with the Atlas API right now to create restore jobs that allow me to download snapshots.
One solution I had in mind is to download these snapshots via the API and store them somewhere, then upload them to external storage such as AWS S3 or Azure storage accounts.
This is an approach I would need to script to automate, and from a cloud-native perspective I would have to think about doing it within a VM instance that stores the snapshots and uploads them afterwards. Or perhaps a couple of containers that download them, store them in a PVC inside k8s, and restore them if needed.
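The download-then-upload flow I have in mind would look roughly like this. This is only a minimal sketch: the endpoint path and field names are my reading of the Atlas Admin API v1.0 "download" restore-job flow and should be checked against the current docs, and all IDs, bucket names, and key names are hypothetical placeholders.

```python
# Sketch: create an Atlas "download" restore job for a snapshot, then
# stream the resulting archive into S3. Endpoint and fields assumed from
# the Atlas Admin API v1.0 docs; verify before using.
import requests
from requests.auth import HTTPDigestAuth

ATLAS_BASE = "https://cloud.mongodb.com/api/atlas/v1.0"


def restore_jobs_url(group_id: str, cluster_name: str) -> str:
    """Endpoint for cloud-backup restore jobs on a cluster."""
    return f"{ATLAS_BASE}/groups/{group_id}/clusters/{cluster_name}/backup/restoreJobs"


def create_download_job(auth: HTTPDigestAuth, group_id: str,
                        cluster_name: str, snapshot_id: str) -> dict:
    """Ask Atlas to package a snapshot for HTTP download."""
    body = {"deliveryType": "download", "snapshotId": snapshot_id}
    resp = requests.post(restore_jobs_url(group_id, cluster_name),
                         json=body, auth=auth)
    resp.raise_for_status()
    return resp.json()  # poll the returned job until it exposes a delivery URL


def download_and_upload(delivery_url: str, bucket: str, key: str) -> None:
    """Stream the snapshot archive from Atlas straight into an S3 object."""
    import boto3  # third-party; pip install boto3
    with requests.get(delivery_url, stream=True) as resp:
        resp.raise_for_status()
        boto3.client("s3").upload_fileobj(resp.raw, bucket, key)


# Usage (hypothetical IDs and env vars -- substitute your own):
# auth = HTTPDigestAuth(os.environ["ATLAS_PUBLIC_KEY"],
#                       os.environ["ATLAS_PRIVATE_KEY"])
# job = create_download_job(auth, "<group-id>", "Cluster0", "<snapshot-id>")
```

This could run as a scheduled container or Lambda-style job, so no VM has to hold the snapshot on disk: the archive is streamed from the delivery URL into S3.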

The thing is, I really need to do this via the API and not with a mongodump / mongorestore approach, since my data on Atlas is growing, and I have heard we can experience performance problems because all data dumped via mongodump has to be read into memory by the MongoDB server, and it backs up only the data and index definitions.

When connected to a MongoDB instance, mongodump can adversely affect mongod performance. If your data is larger than system memory, the queries will push the working set out of memory, causing page faults.

I found this in the MongoDB docs.

Another thing: since my Atlas cluster has three replica-set nodes (one primary and two secondaries), perhaps a mongodump / mongorestore approach against the secondary nodes could be sustainable? I am not sure, since in the short term the data will be GBs for every snapshot, and the RAM is just 2 GB. I would have to scale the cluster (currently on the M10 plan).

So to sum up, Ops Manager is only useful when we create the MongoDB deployments through Ops Manager from the beginning, right?

Hi @Bernardo_Garcia,

So if we want an automated backup solution for mongo clusters (that is, mongo clusters intended as MongoDB deployments), do those clusters have to be created within the MongoDB Ops Manager deployment context?

If so, then those mongo deployments will have nothing to do with the Atlas service; they will instead be part of the Ops Manager deployment.

If you have concerns or requirements around backup retention for your use case, it would be worth reviewing the Atlas - Snapshot Scheduling and Retention Policy documentation.

MongoDB Atlas has an integrated Cloud Backup feature for dedicated clusters (M10+). It sounds like your goal is to get Atlas backup snapshots regularly saved to storage in your own S3 buckets (per your earlier discussion on Moving existing atlas mongo snapshots to external storage).

There currently isn’t a feature to directly export cloud snapshots from Atlas to S3. I expect you can work out your own custom solution using the Atlas API, but I also recommend sharing your use case as a feature suggestion on the MongoDB Feedback Engine so others can upvote, comment, and follow any updates.

Another thing: since my Atlas cluster has three replica-set nodes (one primary and two secondaries), perhaps a mongodump / mongorestore approach against the secondary nodes could be sustainable? I am not sure, since in the short term the data will be GBs for every snapshot, and the RAM is just 2 GB. I would have to scale the cluster (currently on the M10 plan).

You could possibly perform the mongodump with the --host and --port options, where --host is a secondary node in your cluster. Of course, this does not avoid the fact that mongodump can still adversely affect mongod performance; it would, by specifying a secondary node in --host, confine the possible adverse effects to that particular node.
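To make the idea concrete, here is a small sketch that assembles such a mongodump invocation targeting one secondary member. The hostname, username, and output directory are hypothetical placeholders; the password would be supplied interactively or via --password, and the flag set should be checked against your mongodump version.

```python
# Sketch: build the argv for a mongodump aimed at one specific secondary,
# so the extra read load stays on that node. All concrete values below are
# hypothetical placeholders.
import shlex


def mongodump_against_secondary(secondary_host: str, user: str,
                                out_dir: str, port: int = 27017) -> list:
    """Return the command line for a mongodump targeting a single member."""
    return [
        "mongodump",
        "--host", secondary_host,            # one secondary, not the replica-set URI
        "--port", str(port),
        "--ssl",                             # Atlas requires TLS connections
        "--username", user,                  # password prompted or via --password
        "--authenticationDatabase", "admin",
        "--out", out_dir,                    # dump directory on local disk
    ]


cmd = mongodump_against_secondary(
    "cluster0-shard-00-02.example.mongodb.net", "backupUser", "/backups/latest"
)
print(" ".join(shlex.quote(part) for part in cmd))
```

Building the argv as a list (rather than a single string) also makes it safe to hand straight to subprocess.run later, without shell-quoting surprises.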

So to sum up, Ops Manager is only useful when we create the MongoDB deployments through Ops Manager from the beginning, right?

Ops Manager can be useful, but the important thing in the context of this discussion is that it is meant for self-hosted MongoDB deployments and cannot interact with MongoDB Atlas clusters / servers.

Hope this helps!
Jason


Hi @Jason_Tran, thanks for your clarification and ideas. Indeed, it will have to be a custom solution on my side.
