Atlas Auto Scaling for Small Businesses

Is there a way to set up MongoDB Atlas to auto-scale up to a specific point, then automatically build an entirely new cluster (kept under a set tier limit), transfer the configs over, and have it take over the load from the overloaded cluster?

Basically, make another cluster set that's mirrored/cloned up to a point. I know you can build Functions to query data from one cluster to the other, but is there a more streamlined way of doing this, so you can have multiple clusters of, say, M10 or M20 for general performance instead of one giant M30 or M40? And do this automatically, so that as compaction or deletion of older unneeded data occurs and demand declines, the "new" cluster is decommissioned.
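To sketch the kind of controller logic I mean (the thresholds, the metric of average active connections per cluster, and the cap are all hypothetical placeholders, not Atlas limits):

```python
# Sketch of the scale-out/scale-in decision described above.
# All numbers here are hypothetical placeholders.

SCALE_OUT_AT = 400   # add a cluster when the fleet averages this many connections
SCALE_IN_AT = 100    # retire a cluster when the average drops this low
MAX_CLUSTERS = 4     # hard cap on cluster count, instead of upgrading the tier

def desired_cluster_count(current: int, avg_connections: float) -> int:
    """Return how many same-tier clusters the fleet should have."""
    if avg_connections > SCALE_OUT_AT and current < MAX_CLUSTERS:
        return current + 1
    if avg_connections < SCALE_IN_AT and current > 1:
        return current - 1
    return current
```

The point being that the trigger is something like connection count, not CPU or RAM pressure, so the response is "one more small cluster" rather than "one bigger tier".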

What I'm looking for is a way, purely through automation, to build and destroy clusters of a set tier instead of upgrading tiers, since the pressure comes from things like connection counts rather than any need for better CPU and RAM.
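The build/destroy part itself can be scripted against the Atlas Admin API (v1.0 clusters endpoint, HTTP digest auth with programmatic API keys). A minimal sketch, where the group ID, keys, and region are placeholders:

```python
# Sketch: build/destroy fixed-tier clusters via the Atlas Admin API.
# Group ID, API keys, and region are placeholders for illustration.
import json
import urllib.request

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

def cluster_payload(name: str, tier: str = "M10") -> dict:
    """Request body for a fixed-tier cluster (no tier auto-scaling)."""
    return {
        "name": name,
        "providerSettings": {
            "providerName": "AWS",
            "instanceSizeName": tier,
            "regionName": "US_EAST_1",
        },
    }

def atlas_opener(public_key: str, private_key: str):
    """Opener that answers Atlas's digest-auth challenge with an API key pair."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, BASE, public_key, private_key)
    return urllib.request.build_opener(urllib.request.HTTPDigestAuthHandler(mgr))

def create_cluster(opener, group_id: str, name: str, tier: str = "M10"):
    req = urllib.request.Request(
        f"{BASE}/groups/{group_id}/clusters",
        data=json.dumps(cluster_payload(name, tier)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return opener.open(req)

def destroy_cluster(opener, group_id: str, name: str):
    req = urllib.request.Request(
        f"{BASE}/groups/{group_id}/clusters/{name}", method="DELETE"
    )
    return opener.open(req)
```

What I'm missing is the orchestration around calls like these, not the calls themselves.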

Plus, for performance: talking with Technical Services, I was advised that in the grand scheme, more smaller clusters is more ideal than one big cluster, but they weren't quite sure how to implement this in an automated way. On AWS I would typically just run several other services that would do this automatically, but sticking with Atlas specifically for data storage, what are my options?

And yes, I do understand there is serverless, but the costs of that versus doing it this other way are very different. This approach also gives the ability to scale by leaps instead of the smaller sprints serverless uses.

And it opens the ability to use Realm and other things that serverless doesn't support. I can build all of this in AWS, but again, I'm looking at keeping it all in Atlas to simplify things, as there are other services besides AWS that I'm using.

And a contract bid I have in the pipeline will need the ability to cross between Azure and AWS. Cost-wise, it would save almost 23% to have Atlas as the centralized DB location and pipe data over to both Azure and AWS, rather than hosting on AWS and piping over to Azure, and Azure to AWS. That would make this approach a lot more beneficial, at least for my use case.

Probably the largest and most dramatic feature needed: automation, and the ability to automate, as friendly as possible. For example, ways to implement Atlas into Terraform, or some other means to remotely automate the features above: specifying the tier and the configs, populating the desired indexes and pipelines, deploying the desired Atlas Functions and Triggers with the appropriate parameters, and then building the endpoints for various application outputs to funnel into Atlas.
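On the Terraform side, the kind of thing I'm picturing is a config fragment like this, using the HashiCorp `mongodb/mongodbatlas` provider (the project ID is a placeholder variable, and I haven't verified how far this provider goes toward the Functions/Triggers side):

```hcl
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
}

variable "atlas_project_id" {}

# A fixed M10 tier, matching the build/destroy model instead of
# tier auto-scaling. Region and name are placeholders.
resource "mongodbatlas_cluster" "overflow" {
  project_id                  = var.atlas_project_id
  name                        = "overflow-1"
  provider_name               = "AWS"
  provider_instance_size_name = "M10"
  provider_region_name        = "US_EAST_1"
}
```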

Daisy-chaining Atlas clusters and querying data from multiple clusters is a trivial thing, both within a project and across multiple outside projects, via Atlas Functions using HTTPS etc. It's just the automation of building and destroying clusters on a whim/automatically that I'm having problems orchestrating. Otherwise I'd just use Kubernetes and Docker all over.