Atlas Cluster Automation Using Scheduled Triggers

Brian Leonard11 min read • Published Jan 08, 2020 • Updated Jun 25, 2024
Every action you can take in the Atlas user interface is backed by a corresponding Administration API, which allows you to easily bring automation to your Atlas deployments. Some of the more common forms of Atlas automation occur on a schedule, such as pausing a cluster that’s only used for testing in the evening and resuming the cluster again in the morning.
Having an API to automate Atlas actions is great, but you’re still on the hook for writing the script that calls the API, finding a place to host the script, and setting up the job to call the script on your desired schedule. This is where Atlas Scheduled Triggers come to the rescue.
In this article, I will show you how a scheduled trigger can be used to easily incorporate automation into your environment. In addition to pausing and resuming a cluster, I'll similarly show how cluster scale-up and scale-down events could also be placed on a schedule. Both of these activities allow you to save on costs when you either don't need the cluster (paused) or don't need it to support peak workloads (scaled down).

Architecture

Three example scheduled triggers are provided in this solution. Each trigger has an associated trigger function. The bulk of the work is handled by the modifyCluster function, which, as the name implies, is a generic function for making modifications to a cluster. It's a wrapper around the Atlas Modify One Cluster from One Project Admin API.
Architecture

Preparation

Generate an API key

In order to call the Atlas Administration APIs, you'll first need an API key with Project Owner privileges for the projects in which you wish to schedule cluster changes.
API Keys are created in the Access Manager. Select Access Manager from the menu on the top navigation bar and select Project Access:
Then select the API Keys tab.
Create a new key, give it a good description, and assign the key Project Owner permissions.
API Key
Click Next and make a note of your Private Key:
Save API Key
Let's limit who can use our API key by adding an access list. In our case, the API key is going to be used by a trigger, which is a component of Atlas App Services. You will find the list of IP addresses used by App Services in the documentation under Firewall Configuration. Note, each IP address must be added individually. Here's an idea you can vote for to get this addressed: Ability to provide IP addresses as a list for Network Access
Add Access List Entry
API Access List
Click Done.

Deployment

Create a project for automation

Since this solution works across your entire Atlas organization, I like to host it in its own dedicated Atlas Project.
Create a Project

Create an application

We will host our trigger in an Atlas App Services Application. To begin, just click the App Services tab:
App Services
You'll see that App Services offers a bunch of templates to get you started. For this use case, just select the first option to Build your own App:
Welcome to App Services
You'll then be presented with options to link a data source, name your application, and choose a deployment model. The current iteration of this utility doesn't use a data source, so you can ignore that step (a free cluster is created for you regardless). You can also leave the deployment model at its default (Global), unless you want to limit the application to a specific region.
I've named the application Automation App:
Welcome to App Services
Click Create App Service. If you're presented with a set of guides, click Close Guides as today I am your guide.
From here, you have the option to simply import the App Services application and adjust any of the functions to fit your needs. If you prefer to build the application from scratch, skip to the next section.

Import option

Step 1: Store the API secret key

The exported application depends on the API private key, so the import will fail if the secret is not configured beforehand.
Use the Values menu on the left to Create a Secret named AtlasPrivateKeySecret containing your private key (the secret is not in quotes):
Create Secret

Step 2: Install the App services CLI

The App Services CLI is available on npm. To install the App Services CLI on your system, ensure that you have Node.js installed and then run the following command in your shell:
npm install -g atlas-app-services-cli

Step 3: Extract the application archive

Download and extract the AutomationApp.zip.

Step 4: Log into Atlas

To configure your app with App Services CLI, you must log in to Atlas using your API keys:
appservices login --api-key="<Public API Key>" --private-api-key="<Private API Key>"

Successfully logged in

Step 5: Get the Application ID

Select the App Settings menu and copy your Application ID:
App ID

Step 6: Import the application

Run the following appservices push command from the directory where you extracted the export:
appservices push --remote="<Your App ID>"

...
A summary of changes
...

? Please confirm the changes shown above Yes

Creating draft
Pushing changes
Deploying draft
Deployment complete
Successfully pushed app up:
After the import, replace the AtlasPublicKey with your API public key value.
Atlas Public Key

Review the imported application

The imported application includes 3 self-explanatory sample scheduled triggers:
Triggers
Each of the 3 triggers has an associated function. The pauseClustersTrigger and resumeClustersTrigger functions supply a set of projects and clusters to pause, so these need to be adjusted to fit your needs:
// Supply projectIDs and clusterNames...
const projectIDs = [
  {
    id: '5c5db514c56c983b7e4a8701',
    names: [
      'Demo',
      'Demo2'
    ]
  },
  {
    id: '62d05595f08bd53924fa3634',
    names: [
      'ShardedMultiRegion'
    ]
  }
];
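To make the shape of this structure concrete, here's a quick sketch (plain JavaScript, runnable anywhere, not part of the imported app) that flattens it into the (project ID, cluster name) pairs each trigger ultimately feeds to modifyCluster, one call per pair:

```javascript
const projectIDs = [
  { id: '5c5db514c56c983b7e4a8701', names: ['Demo', 'Demo2'] },
  { id: '62d05595f08bd53924fa3634', names: ['ShardedMultiRegion'] }
];

// Flatten into [projectID, clusterName] pairs - the exact arguments
// the trigger functions pass to modifyCluster.
const pairs = projectIDs.flatMap(p => p.names.map(name => [p.id, name]));
console.log(pairs.length); // 3 clusters across 2 projects
```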
All three trigger functions call the modifyCluster function, where the bulk of the work is done.
In addition, you'll find two utility functions, getProjectClusters and getProjects. These functions are not utilized in this solution but are provided for reference in case you want to further automate these processes (that is, removing the hard-coded project IDs and cluster names in the trigger functions):
Functions
Now that you have reviewed the draft, as a final step go ahead and deploy the App Services application.
Review Draft & Deploy

Build-it-yourself option

To understand what's included in the application, here are the steps to build it yourself from scratch.

Step 1: Store the API keys

The functions we need to create will call the Atlas Administration APIs, so we need to store our API Public and Private Keys, which we will do using Values & Secrets. The sample code I provide references these values as AtlasPublicKey and AtlasPrivateKey, so use those same names unless you want to change the code where they’re referenced.
You'll find Values under the BUILD menu:
Values
First, create a Value for your public key (note, the key is in quotes):
Atlas Public Key
Create a Secret containing your private key (the secret is not in quotes):
Create Secret
The Secret cannot be accessed directly, so create a second Value that links to the secret:
Link to Secret

Step 2: Note the project ID(s)

We need to note the IDs of the projects that have clusters we want to automate. Click the 3 dots in the upper left corner of the UI to open the Project Settings:
Project Settings
Under which you’ll find your Project ID:
Project ID

Step 3: Create the functions

I will create two functions: a generic function to modify a cluster, and a trigger function to iterate over the clusters to be paused.
You'll find functions under the BUILD menu:
Functions

modifyCluster

I’m only demonstrating a couple of things you can do with cluster automation, but the sky is really the limit. The following modifyCluster function is a generic wrapper around the Modify One Multi-Cloud Cluster from One Project API for calling the API from App Services (or Node.js, for that matter).
Create a New Function named modifyCluster. Set the function to Private as it will only be called by our trigger. The other default settings are fine:
Modify Cluster Function
Switch to the Function Editor tab and paste the following code:
/*
 * Modifies the cluster as defined by the body parameter.
 * See https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Clusters/operation/updateCluster
 */
exports = async function(username, password, projectID, clusterName, body) {

  // Easy testing from the console
  if (username == "Hello world!") {
    username = await context.values.get("AtlasPublicKey");
    password = await context.values.get("AtlasPrivateKey");
    projectID = "5c5db514c56c983b7e4a8701";
    clusterName = "Demo";
    body = {paused: false};
  }

  const arg = {
    scheme: 'https',
    host: 'cloud.mongodb.com',
    path: 'api/atlas/v2/groups/' + projectID + '/clusters/' + clusterName,
    username: username,
    password: password,
    headers: {'Accept': ['application/vnd.atlas.2023-11-15+json'], 'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
    digestAuth: true,
    body: JSON.stringify(body)
  };

  // The response body is a BSON.Binary object. Parse it and return.
  const response = await context.http.patch(arg);

  return EJSON.parse(response.body.text());
};
To test this function, you need to supply an API key, an API secret, a project ID, an associated cluster name to modify, and a payload containing the modifications you'd like to make. In our case, it's simply setting the paused property.
Note: By default, the Console supplies 'Hello world!' when test running a function, so my function code tests for that input and provides some default values for easy testing.
Console
// Easy testing from the console
if (username == "Hello world!") {
  username = await context.values.get("AtlasPublicKey");
  password = await context.values.get("AtlasPrivateKey");
  projectID = "5c5db514c56c983b7e4a8701";
  clusterName = "Demo";
  body = {paused: false};
}
Press the Run button to see the results, which will appear in the Result window:
Run
And you should find your cluster being resumed (or paused):
Cluster

pauseClustersTrigger

This function will be called by a trigger. As it's not possible to pass parameters to a scheduled trigger, it uses a hard-coded list of project IDs and associated cluster names to pause. Ideally, these values would be stored in a collection with a nice UI to manage all of this, but that's a job for another day :-).
In the appendix of this article, I provide functions that will get all projects and clusters in the organization. That would create a truly dynamic operation that would pause all clusters. You could then alternatively refactor the code to use an exclude list instead of an allow list.
/*
 * Iterates over the provided projects and clusters, pausing those clusters
 */
exports = async function() {

  // Supply projectIDs and clusterNames...
  const projectIDs = [{id: '5c5db514c56c983b7e4a8701', names: ['Demo', 'Demo2']}, {id: '62d05595f08bd53924fa3634', names: ['ShardedMultiRegion']}];

  // Get stored credentials...
  const username = context.values.get("AtlasPublicKey");
  const password = context.values.get("AtlasPrivateKey");

  // Set desired state...
  const body = {paused: true};

  // Use for...of (rather than forEach) so each await completes
  // before the function returns
  for (const project of projectIDs) {
    for (const cluster of project.names) {
      const result = await context.functions.execute('modifyCluster', username, password, project.id, cluster, body);
      console.log("Cluster " + cluster + ": " + EJSON.stringify(result));
    }
  }

  return "Clusters Paused";
};
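The exclude-list idea mentioned above boils down to a small selection step. Here's a sketch of a pure helper (hypothetical, not part of the imported app) that, given a project's clusters as returned by the appendix's getProjectClusters function, picks the ones that still need pausing:

```javascript
// Given cluster documents (each with at least a name and a paused flag),
// return the names of clusters to pause, skipping any on the exclude
// list and any that are already paused.
function clustersToPause(clusters, excludeNames) {
  return clusters
    .filter(c => !c.paused && !excludeNames.includes(c.name))
    .map(c => c.name);
}

// Two running clusters and one already paused; keep 'Production' running:
const clusters = [
  { name: 'Demo', paused: false },
  { name: 'Production', paused: false },
  { name: 'Archive', paused: true }
];
console.log(clustersToPause(clusters, ['Production'])); // [ 'Demo' ]
```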

Step 4: Create Trigger - pauseClusters

The ability to pause and resume a cluster is supported by the Modify One Cluster from One Project API. To begin, select Triggers from the menu on the left:
Triggers Menu
And add a Trigger.
Set the Trigger Type to Scheduled and the name to pauseClusters:
Add Trigger
As for the schedule, you have the full power of CRON Expressions at your fingertips. For this exercise, let’s assume we want to pause the cluster every evening at 6pm. Select Advanced and set the CRON schedule to 0 22 * * *.
Note, the time is in GMT, so adjust accordingly for your timezone. As this cluster is running in US East, I’m going to add 4 hours:
Schedule Type
Check the Next Events window to validate the job will run when you desire.
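If you'd rather compute the GMT hour than work it out by hand, a tiny helper like the following does the arithmetic (purely illustrative — the function name and fixed-offset assumption are mine, and it ignores daylight saving transitions):

```javascript
// Convert a local-time hour to a GMT cron expression.
// utcOffsetHours is your zone's offset from GMT (US East in summer is -4).
function cronForLocalHour(localHour, utcOffsetHours, daysOfWeek = '*') {
  const gmtHour = ((localHour - utcOffsetHours) % 24 + 24) % 24;
  return `0 ${gmtHour} * * ${daysOfWeek}`;
}

console.log(cronForLocalHour(18, -4));       // 6pm US East -> "0 22 * * *"
console.log(cronForLocalHour(8, -4, '1-5')); // 8am weekdays -> "0 12 * * 1-5"
```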
The final step is to select the function for the trigger to execute. Select the pauseClustersTrigger function.
Trigger Function
And Save the trigger.
The final step is to REVIEW DRAFT & DEPLOY.
Review Draft & Deploy

Resume the cluster

You could opt to manually resume the cluster(s) as it’s needed. But for completeness, let’s assume we want the cluster(s) to automatically resume at 8am US East every weekday morning.
Duplicate the pauseClustersTrigger function to a new function named resumeClustersTrigger.
Duplicate Function
At a minimum, edit the function code setting paused to false. You could also adjust the projectIDs and clusterNames to a subset of projects to resume:
/*
 * Iterates over the provided projects and clusters, resuming those clusters
 */
exports = async function() {

  // Supply projectIDs and clusterNames...
  const projectIDs = [{id: '5c5db514c56c983b7e4a8701', names: ['Demo', 'Demo2']}, {id: '62d05595f08bd53924fa3634', names: ['ShardedMultiRegion']}];

  // Get stored credentials...
  const username = context.values.get("AtlasPublicKey");
  const password = context.values.get("AtlasPrivateKey");

  // Set desired state...
  const body = {paused: false};

  // Use for...of (rather than forEach) so each await completes
  // before the function returns
  for (const project of projectIDs) {
    for (const cluster of project.names) {
      const result = await context.functions.execute('modifyCluster', username, password, project.id, cluster, body);
      console.log("Cluster " + cluster + ": " + EJSON.stringify(result));
    }
  }

  return "Clusters Resumed";
};
Then add a new scheduled trigger named resumeClusters. Set the CRON schedule to: 0 12 * * 1-5. The Next Events validates for us this is exactly what we want:
Schedule Type Resume

Create Trigger: Scaling up and down

It’s not uncommon to have workloads that are more demanding during certain hours of the day or days of the week. Rather than running your cluster to support peak capacity, you can use this same approach to schedule your cluster to scale up and down as your workload requires it.
NOTE: Atlas Clusters already support Auto-Scaling, which may very well suit your needs. The approach described here will let you definitively control when your cluster scales up and down.
Let’s say we want to scale up our cluster every day at 9am before our store opens for business.
Add a new function named scaleClusterUpTrigger. Here’s the function code. It’s very similar to before, except the body’s been changed to alter the provider settings:
NOTE: This example represents a single-region topology. If you have multiple regions and/or asymmetric clusters using read-only and/or analytics nodes, just check the Modify One Cluster from One Project API documentation for the payload details.
exports = async function() {

  // Supply projectID and clusterName...
  const projectID = '<Project ID>';
  const clusterName = '<Cluster Name>';

  // Get stored credentials...
  const username = context.values.get("AtlasPublicKey");
  const password = context.values.get("AtlasPrivateKey");

  // Set the desired instance size...
  const body = {
    "replicationSpecs": [
      {
        "regionConfigs": [
          {
            "electableSpecs": {
              "instanceSize": "M10",
              "nodeCount": 3
            },
            "priority": 7,
            "providerName": "AZURE",
            "regionName": "US_EAST_2"
          }
        ]
      }
    ]
  };

  const result = await context.functions.execute('modifyCluster', username, password, projectID, clusterName, body);
  console.log(EJSON.stringify(result));

  if (result.error) {
    return result;
  }

  return clusterName + " scaled up";
};
Then add a scheduled trigger named scaleClusterUp. Set the CRON schedule to: 0 13 * * *.
Scaling a cluster back down would simply be another trigger, scheduled to run when you want, using the same code above, setting the instanceSize to whatever you desire.
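As a sketch, the scale-down payload is identical in shape to the scale-up body; only the instance size changes (the tier, provider, and region below are placeholders — substitute your own off-peak configuration):

```javascript
// Hypothetical scale-down body for a single-region Azure cluster.
// Swap "M10" for whatever smaller tier fits your off-peak workload.
const scaleDownBody = {
  replicationSpecs: [
    {
      regionConfigs: [
        {
          electableSpecs: { instanceSize: "M10", nodeCount: 3 },
          priority: 7,
          providerName: "AZURE",
          regionName: "US_EAST_2"
        }
      ]
    }
  ]
};
console.log(scaleDownBody.replicationSpecs[0].regionConfigs[0].electableSpecs.instanceSize);
```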
And that’s it. I hope you find this beneficial. You should be able to use the techniques described here to easily call any MongoDB Atlas Admin API endpoint from Atlas App Services.

Appendix

getProjects

This standalone function can be test run from the App Services console to see the list of all the projects in your organization. You could also call it from other functions to get a list of projects:
/*
 * Returns an array of the projects in the organization
 * See https://docs.atlas.mongodb.com/reference/api/project-get-all/
 *
 * Returns an array of objects, e.g.
 *
 * {
 *   "clusterCount": {
 *     "$numberInt": "1"
 *   },
 *   "created": "2021-05-11T18:24:48Z",
 *   "id": "609acbef1b76b53fcd37c8e1",
 *   "links": [
 *     {
 *       "href": "https://cloud.mongodb.com/api/atlas/v1.0/groups/609acbef1b76b53fcd37c8e1",
 *       "rel": "self"
 *     }
 *   ],
 *   "name": "mg-training-sample",
 *   "orgId": "5b4e2d803b34b965050f1835"
 * }
 */
exports = async function() {

  // Get stored credentials...
  const username = await context.values.get("AtlasPublicKey");
  const password = await context.values.get("AtlasPrivateKey");

  const arg = {
    scheme: 'https',
    host: 'cloud.mongodb.com',
    path: 'api/atlas/v1.0/groups',
    username: username,
    password: password,
    headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
    digestAuth: true
  };

  // The response body is a BSON.Binary object. Parse it and return.
  const response = await context.http.get(arg);

  return EJSON.parse(response.body.text()).results;
};

getProjectClusters

Another example function that will return the cluster details for a provided project.
Note, to test this function, you need to supply a projectId. By default, the Console supplies ‘Hello world!’, so I test for that input and provide some default values for easy testing.
/*
 * Returns an array of the clusters for the supplied project ID.
 * See https://docs.atlas.mongodb.com/reference/api/clusters-get-all/
 *
 * Returns an array of objects. See the API documentation for details.
 */
exports = async function(project_id) {

  // Easy testing from the console
  if (project_id == "Hello world!") {
    project_id = "5e8f8268d896f55ac04969a1";
  }

  // Get stored credentials...
  const username = await context.values.get("AtlasPublicKey");
  const password = await context.values.get("AtlasPrivateKey");

  const arg = {
    scheme: 'https',
    host: 'cloud.mongodb.com',
    path: `api/atlas/v1.0/groups/${project_id}/clusters`,
    username: username,
    password: password,
    headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
    digestAuth: true
  };

  // The response body is a BSON.Binary object. Parse it and return.
  const response = await context.http.get(arg);

  return EJSON.parse(response.body.text()).results;
};
Questions? Comments? Let's continue the conversation. Join us in the MongoDB Developer Community to keep it going.
