In this tutorial, you will use Atlas scheduled Triggers to automate cluster management tasks by programmatically calling the Atlas Administration API.
This tutorial includes the following procedures:
Initial Setup: Create a service account with Project Owner permissions to your existing Atlas project, store the service account credentials as Values and Secrets, then create reusable Functions that use these credentials to call the Update One Cluster in One Project endpoint.
Note
If you prefer to use API keys instead of service accounts to authenticate to the Atlas Administration API with Project Owner permissions, you can save API public and private keys as Values and Secrets to use in the Functions in this tutorial.
Pause and Resume Clusters on a Schedule: Create scheduled Triggers to automatically pause clusters every evening and resume them every weekday morning.
Scale Clusters on a Schedule: Create scheduled Triggers to automatically scale a cluster up during peak hours and down afterwards.
Required Permissions
To complete this tutorial, you need a user with Project Owner access to a MongoDB Atlas project.
Initial Setup
This initial setup only needs to be completed once, and allows you to create the scheduled Triggers on this page to automate cluster management tasks. Before performing this tutorial, ensure you have a MongoDB Atlas project with at least one cluster. This procedure performs the following setup tasks:
Creates and saves credentials for an Atlas service account that Triggers will use to call the Atlas Administration API with Project Owner permissions to your existing Atlas project.
Creates a reusable Function called getAuthHeaders that generates an access token using the service account credentials and returns the appropriate authentication headers for calling the Atlas Administration API.
Creates a reusable Function called modifyCluster that wraps the Update One Cluster in One Project endpoint.
Create a Service Account.
To create a service account that your Triggers can use to call the
Atlas Administration API with Project Owner permissions to
your existing Atlas project:
In Atlas, go to the Users page.
If it's not already displayed, select your desired organization from the Organizations menu in the navigation bar.
Click All Projects in the sidebar under the Identity & Access section, and select your desired project.
Click Project Identity & Access in the sidebar under the Security section.
The Users page displays.
Click Create Application Service Account.
Enter the service account information.
Name: A name for your service account (e.g., TriggersServiceAccount).
Description: (Optional) A description for your service account (e.g., Service account for Atlas Functions to call the Atlas Administration API).
Service Account Permissions:
Project Owner
Click Create.
This creates the service account and automatically adds it to the project's parent organization with the Organization Member permission.
Configure the API Access List.
Add IP addresses to the API Access List if you want to restrict which IP addresses can call the Atlas Administration API with this service account.
Note
If Require IP Access List for the Atlas Administration API is enabled for your organization, or if you added any IP addresses to your service account's API Access List, then every Atlas Administration API request must pass an IP access-list check.
Atlas Triggers and Functions send outgoing HTTP requests from a specific set of outbound IP addresses. To enable your scheduled Triggers to call the Atlas Administration API and other external services, you must add these IP addresses to your service account's API Access List.
For the full list of outbound IP addresses used by Atlas Functions, see Function Security Outbound IP Access. You must add each IP address individually.
Store Service Account credentials as Values and Secrets.
Create the following Values and Secrets to store your service account credentials:
AtlasClientId: a Value that contains your service account client ID.
AtlasClientSecret: a Secret that contains your service account client secret.
AtlasClientSecret: a Value that links to the Secret. This enables you to access the client secret value in your Functions while still keeping it stored securely as a Secret.
In Atlas, go to the Triggers page.
If it's not already displayed, select the organization that contains your project from the Organizations menu in the navigation bar.
If it's not already displayed, select your project from the Projects menu in the navigation bar.
In the sidebar, click Triggers under the Streaming Data heading.
The Triggers page displays.
Navigate to the Values Page.
Click the Linked App Service: Triggers link.
In the sidebar, click Values under the Build heading.
Store the client ID in a Value.
Click Create a Value.
Enter AtlasClientId as the Value Name.
Select the Value type.
Select the Custom Content option and enter the client ID.
Note
You must enter the client ID as a string value with quotes ("<clientId>").
Click Save.
Store the client secret in a Secret and link it to a Value.
Note
Secret values cannot be accessed directly, so you must create a second Value that links to the Secret.
Click Create a Value.
Enter AtlasClientSecret as the Value Name.
Select the Value type.
Select the Link to Secret option.
Enter AtlasClientSecret and click Create "AtlasClientSecret" to name the Secret value.
Paste the client secret into the Client Secret field that appears below the Secret name.
Click Save to create both the Secret and the Value.
Create the getAuthHeaders Function.
To create a reusable Function that retrieves an access token using the service account credentials and returns the appropriate authentication headers for calling the Atlas Administration API:
Navigate to the Functions Page.
In the sidebar, click Functions under the Build heading.
Click Create a Function.
The Settings tab displays by default.
Enter getAuthHeaders as the Name for the Function.
Set Private to true. This Function will only be called by other Functions in this tutorial.
Leave the other configuration options in the Settings tab at their default values.
Define the Function code.
Click the Function Editor tab and paste the following code to define the Function:
```javascript
/*
 * Generate API request headers with a new Service Account Access Token.
 */
exports = async function getAuthHeaders() {

  // Get stored credentials
  const clientId = context.values.get("AtlasClientId");
  const clientSecret = context.values.get("AtlasClientSecret");

  // Throw an error if credentials are missing
  if (!clientId || !clientSecret) {
    throw new Error("Authentication credentials not found. Set AtlasClientId/AtlasClientSecret (service account auth credentials).");
  }

  // Define the argument for the HTTP request to get the access token
  const tokenUrl = "https://cloud.mongodb.com/api/oauth/token";
  const credentials = Buffer.from(`${clientId}:${clientSecret}`).toString("base64");

  const arg = {
    url: tokenUrl,
    headers: {
      "Authorization": [ `Basic ${credentials}` ],
      "Content-Type": [ "application/x-www-form-urlencoded" ]
    },
    body: "grant_type=client_credentials"
  };

  // The response body is a BSON.Binary object; parse it to extract the access token
  const response = await context.http.post(arg);
  const tokenData = JSON.parse(response.body.text());
  const accessToken = tokenData.access_token;

  // Define the Accept header with the resource version from env var or default to latest stable
  const resourceVersion = context.environment.ATLAS_API_VERSION || "2025-03-12";
  const acceptHeader = `application/vnd.atlas.${resourceVersion}+json`;

  // Return the access token as headers for future API calls
  return {
    headers: {
      "Authorization": [ `Bearer ${accessToken}` ],
      "Accept": [ acceptHeader ],
      "Accept-Encoding": [ "bzip, deflate" ],
      "Content-Type": [ "application/json" ]
    }
  };
};
```
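The `Basic` credentials string in this Function is standard HTTP Basic authentication: the base64 encoding of `clientId:clientSecret`. The following standalone Node.js sketch illustrates that encoding step in isolation; the sample credential strings are placeholders, not real values.

```javascript
// Standalone illustration of the Basic auth header the Function builds.
// The sample credentials below are placeholders, not real values.
const clientId = "sampleClientId";
const clientSecret = "sampleClientSecret";

// Base64-encode "<clientId>:<clientSecret>" for the Authorization header
const credentials = Buffer.from(`${clientId}:${clientSecret}`).toString("base64");
const authHeader = `Basic ${credentials}`;

console.log(authHeader);
```

Decoding the base64 string recovers the original `clientId:clientSecret` pair, which is why these credentials must be stored as a Secret rather than in plain text.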
Create the modifyCluster Function.
To create a reusable Function that wraps the Update One Cluster in One Project endpoint:
From the Functions Page, click Create a Function.
The Settings tab displays by default.
Enter modifyCluster as the Name for the Function.
Set Private to true. This Function will only be called by other Functions in this tutorial.
Leave the other configuration options in the Settings tab at their default values.
Define the Function code.
Click the Function Editor tab and paste the following code to define the Function:
```javascript
/*
 * Modifies the cluster as defined by the `body` parameter.
 * See https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Clusters/operation/updateCluster
 */
exports = async function(projectID, clusterName, body) {

  // Easy testing from the console
  if (projectID === "Hello world!") {
    projectID = "<projectId>";
    clusterName = "<clusterName>";
    body = { paused: false };
  }

  // Retrieve headers to authenticate with a new access token, and define the request URL for the Atlas API endpoint
  const authHeaders = await context.functions.execute("getAuthHeaders");
  const requestUrl = `https://cloud.mongodb.com/api/atlas/v2/groups/${projectID}/clusters/${clusterName}`;

  // Build the argument for the HTTP request to the Atlas API to modify the cluster
  const arg = {
    url: requestUrl,
    headers: authHeaders.headers,
    body: JSON.stringify(body)
  };

  // The response body is a BSON.Binary object; parse it and return the modified cluster description
  const response = await context.http.patch(arg);
  if (response.body) {
    return EJSON.parse(response.body.text());
  } else {
    throw new Error(`No response body returned from Atlas API. Status code: ${response.status}`);
  }
};
```

Note
Test code in the Function Editor.
The Function Editor automatically provides "Hello world!" as the first argument when you run a Function in the Testing Console. This code tests for that input and provides values to the parameters when "Hello world!" is received.
To test the Function with your own input, replace the following placeholder values with your own information:
<projectId>
<clusterName>
In the body parameter, provide a payload containing the modifications you'd like to make to the cluster. The example code includes a payload that pauses a cluster.
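The `body` parameter accepts any fields from the Update One Cluster in One Project request schema; `modifyCluster` serializes it with `JSON.stringify` before sending the PATCH request. As a small sketch, here are the two payloads this tutorial uses and the JSON they produce (check the API reference for other available fields):

```javascript
// The two body payloads used in this tutorial: pause and resume a cluster.
const pauseBody = { paused: true };
const resumeBody = { paused: false };

// modifyCluster serializes the payload before sending the PATCH request:
const pauseJson = JSON.stringify(pauseBody);
const resumeJson = JSON.stringify(resumeBody);

console.log(pauseJson);  // {"paused":true}
console.log(resumeJson); // {"paused":false}
```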
Pause and Resume Clusters on a Schedule
This procedure creates scheduled Triggers to automatically pause clusters every evening and resume them every weekday morning. This is useful for non-production clusters that don't need to run outside of business hours, or for any clusters that you want to automatically pause and resume on a schedule.
Create the pauseClusters Function.
In Atlas, go to the Triggers page.
If it's not already displayed, select the organization that contains your project from the Organizations menu in the navigation bar.
If it's not already displayed, select your project from the Projects menu in the navigation bar.
In the sidebar, click Triggers under the Streaming Data heading.
The Triggers page displays.
Navigate to the Functions Page
Click the Linked App Service: Triggers link.
In the sidebar, click Functions under the Build heading.
Click Create a Function.
The Settings tab displays by default.
Enter pauseClusters as the Name for the Function.
Set Private to true. This Function will only be called by the pauseClusters Trigger in this tutorial.
Leave the other configuration options in the Settings tab at their default values.
Define the Function code.
Click the Function Editor tab and paste the following code to define the Function:
```javascript
/*
 * Iterates over the provided projects and clusters, pausing those clusters.
 */
exports = async function () {

  // Supply project IDs and cluster names to pause
  const projectIDs = [
    {
      id: "<projectIdA>",
      names: [ "<clusterNameA>", "<clusterNameB>" ]
    },
    {
      id: "<projectIdB>",
      names: [ "<clusterNameC>" ]
    }
  ];

  // Set desired state
  const body = { paused: true };

  // Pause each cluster and log the response
  for (const project of projectIDs) {
    for (const cluster of project.names) {
      const result = await context.functions.execute(
        "modifyCluster",
        project.id,
        cluster,
        body,
      );
      console.log("Cluster " + cluster + ": " + EJSON.stringify(result));
    }
  }

  return "Clusters Paused";
};
```

Replace the projectIDs array with your own project IDs and cluster names.
Note
To avoid hardcoding project and cluster names, you can use the helper Functions at the end of this tutorial to retrieve lists of projects and clusters from the Atlas Administration API and programmatically determine which clusters to pause and resume on a schedule.
Create the pauseClusters scheduled Trigger.
From the Functions page, navigate to the Triggers page by clicking Triggers in the sidebar under the Build heading.
Click Create a Trigger to open the Trigger configuration page.
If you have an existing Trigger, click Add a Trigger instead.
Configure Trigger settings.
In Trigger Details, set the following configuration:
Trigger Type: Scheduled
Schedule Type: Advanced. This allows you to specify a CRON expression for the schedule.
To run this every weekday evening at 6 PM US Eastern (which is 22:00 UTC), use the following CRON expression:
0 22 * * 1-5
Skip Events On Re-enable: On. This prevents the Trigger from executing on schedules that were queued while the Trigger was disabled.
Event Type: Function. Select the pauseClusters Function from the dropdown.
Trigger Name: pauseClusters
Click Save to create the Trigger.
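The CRON expressions in this tutorial use five fields: minute, hour (in UTC), day of month, month, and day of week. As an illustrative sketch (the easternToUtcHour helper and the -4 offset are assumptions for Eastern Daylight Time, not part of the tutorial), you can derive the UTC hour from a local Eastern time like this:

```javascript
// Illustrative helper (not part of the tutorial): convert a US Eastern hour
// to the UTC hour used in a Trigger's CRON expression.
// Assumes Eastern Daylight Time (UTC-4); use -5 during standard time.
function easternToUtcHour(localHour, utcOffsetHours = -4) {
  return ((localHour - utcOffsetHours) % 24 + 24) % 24;
}

// 6 PM Eastern (18:00) maps to 22:00 UTC, giving the weekday expression:
const cron = `0 ${easternToUtcHour(18)} * * 1-5`;
console.log(cron); // 0 22 * * 1-5
```

Note that Atlas evaluates CRON schedules in UTC, so a fixed expression drifts by an hour when daylight saving time starts or ends.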
The clusters you specified will now automatically pause every weekday evening at 6 PM US Eastern.
Create the resumeClusters Function.
Duplicate the pauseClusters Function into a new Function named resumeClusters.
In the Function Editor tab, update the paused state to false in the Function code:

```javascript
/*
 * Iterates over the provided projects and clusters, resuming those clusters.
 */
exports = async function () {

  // Supply project IDs and cluster names to resume
  const projectIDs = [
    {
      id: "<projectIdA>",
      names: [ "<clusterNameA>", "<clusterNameB>" ]
    },
    {
      id: "<projectIdB>",
      names: [ "<clusterNameC>" ]
    }
  ];

  // Set desired state
  const body = { paused: false };

  // Resume each cluster and log the response
  for (const project of projectIDs) {
    for (const cluster of project.names) {
      const result = await context.functions.execute(
        "modifyCluster",
        project.id,
        cluster,
        body,
      );
      console.log("Cluster " + cluster + ": " + EJSON.stringify(result));
    }
  }

  return "Clusters Resumed";
};
```
Create the resumeClusters scheduled Trigger.
From the Functions page, navigate to the Triggers page by clicking Triggers in the sidebar under the Build heading.
Configure Trigger settings.
In Trigger Details, set the following configuration:
Trigger Type: Scheduled
Schedule Type: Advanced. This allows you to specify a CRON expression for the schedule.
To run this every weekday morning at 8 AM US Eastern (which is 12:00 UTC), use the following CRON expression:
0 12 * * 1-5
Skip Events On Re-enable: On. This prevents the Trigger from executing on schedules that were queued while the Trigger was disabled.
Event Type: Function. Select the resumeClusters Function from the dropdown.
Trigger Name: resumeClusters
Click Save to create the Trigger.
The clusters you specified will now pause every weekday evening and resume every weekday morning automatically.
Scale Clusters on a Schedule
This procedure creates scheduled Triggers to automatically scale a cluster up during peak hours and down afterwards. This is useful for clusters that have predictable usage patterns where you want to proactively scale before the workload increases, and scale down afterwards to save costs.
Note
Atlas supports Cluster Auto-Scaling to automatically increase your cluster tier or storage capacity based on usage or predicted usage. However, if you have predictable peak usage windows, you can use scheduled Triggers to proactively scale your cluster before your workload increases.
Create the scaleClusterUp Function.
In Atlas, go to the Triggers page.
If it's not already displayed, select the organization that contains your project from the Organizations menu in the navigation bar.
If it's not already displayed, select your project from the Projects menu in the navigation bar.
In the sidebar, click Triggers under the Streaming Data heading.
The Triggers page displays.
Navigate to the Functions Page
Click the Linked App Service: Triggers link.
In the sidebar, click Functions under the Build heading.
Click Create a Function.
The Settings tab displays by default.
Enter scaleClusterUp as the Name for the Function.
Set Private to true. This Function will only be called by the scaleClusterUp Trigger in this tutorial.
Leave the other configuration options in the Settings tab at their default values.
Define the Function code.
From the Create Function page, click the Function Editor tab and paste the following code to define your Function:
```javascript
/*
 * Scales a single cluster up to a larger instance size.
 * This example scales an AWS cluster up to M30 in region US_EAST_1.
 */
exports = async function() {

  // Supply the project ID and cluster name
  const projectID = "<projectId>";
  const clusterName = "<clusterName>";

  // Set the desired instance size and topology
  const body = {
    replicationSpecs: [
      {
        regionConfigs: [
          {
            electableSpecs: {
              instanceSize: "M30", // for example, larger tier
              nodeCount: 3
            },
            priority: 7,
            providerName: "AWS",
            regionName: "US_EAST_1"
          }
        ]
      }
    ]
  };

  // Scale up the cluster and log the response
  const result = await context.functions.execute(
    "modifyCluster",
    projectID,
    clusterName,
    body
  );
  console.log(EJSON.stringify(result));

  return clusterName + " scaled up";
};
```

Replace the <projectId> and <clusterName> placeholders with your own project ID and cluster name, and adjust the regionConfigs array for your own topology.
See the Update One Cluster in One Project endpoint documentation for more details on the available fields you can include in the request body to modify your cluster's configuration.
Create the scaleClusterUp scheduled Trigger.
From the Functions page, navigate to the Triggers page by clicking Triggers in the sidebar under the Build heading.
Click Create a Trigger to open the Trigger configuration page.
If you have an existing Trigger, click Add a Trigger instead.
Configure Trigger settings.
In Trigger Details, set the following configuration:
Trigger Type: Scheduled
Schedule Type: Advanced. This allows you to specify a CRON expression for the schedule.
To run this every morning at 8 AM US Eastern (which is 13:00 UTC), use the following CRON expression:
0 13 * * *
Skip Events On Re-enable: On. This prevents the Trigger from executing on schedules that were queued while the Trigger was disabled.
Event Type: Function. Select the scaleClusterUp Function from the dropdown.
Trigger Name: scaleClusterUp
Click Save to create the Trigger.
Your cluster will now automatically scale up every morning at 8 AM US Eastern.
Create the scaleClusterDown Function.
Duplicate the scaleClusterUp Function into a new Function named scaleClusterDown.
In the Function Editor tab, paste and adjust the following code to scale your cluster down to the specified configuration:

```javascript
/*
 * Scales a single cluster down to a smaller instance size.
 * This example scales an AWS cluster down to M10 in region US_EAST_1.
 */
exports = async function() {
  const projectID = "<projectId>";
  const clusterName = "<clusterName>";

  const body = {
    replicationSpecs: [
      {
        regionConfigs: [
          {
            electableSpecs: {
              instanceSize: "M10", // for example, smaller tier
              nodeCount: 3
            },
            priority: 7,
            providerName: "AWS",
            regionName: "US_EAST_1"
          }
        ]
      }
    ]
  };

  // Scale down the cluster and log the response
  const result = await context.functions.execute(
    "modifyCluster",
    projectID,
    clusterName,
    body
  );
  console.log(EJSON.stringify(result));

  return clusterName + " scaled down";
};
```

Replace the <projectId> and <clusterName> placeholders with your own project ID and cluster name, and adjust the regionConfigs array for your own topology.
See the Update One Cluster in One Project endpoint documentation for more details on the available fields you can include in the request body to modify your cluster's configuration.
Create the scaleClusterDown scheduled Trigger.
From the Functions page, navigate to the Triggers page by clicking Triggers in the sidebar under the Build heading.
Configure Trigger settings.
In Trigger Details, set the following configuration:
Trigger Type: Scheduled
Schedule Type: Advanced. This allows you to specify a CRON expression for the schedule.
To run this every evening at 6 PM US Eastern (which is 22:00 UTC), use the following CRON expression:
0 22 * * *
Skip Events On Re-enable: On. This prevents the Trigger from executing on schedules that were queued while the Trigger was disabled.
Event Type: Function. Select the scaleClusterDown Function from the dropdown.
Trigger Name: scaleClusterDown
Click Save to create the Trigger.
Together, these two Triggers ensure the cluster runs at higher capacity during busy hours and scales down afterwards.
Optional Helper Functions
The following helper Functions can be run from the Triggers Function Editor to list the projects and clusters in your organization, so you can specify which clusters to target in the Functions in this tutorial. You can also call these Functions from other Functions to retrieve this information programmatically.
getProjects() calls the Return All Projects endpoint to return all projects in your organization.
```javascript
/*
 * Returns an array of the projects in the organization.
 * See https://docs.atlas.mongodb.com/reference/api/project-get-all/
 *
 * Returns an array of objects, e.g.
 *
 * {
 *   "clusterCount": {
 *     "$numberInt": "1"
 *   },
 *   "created": "2021-05-11T18:24:48Z",
 *   "id": "609acbef1b76b53fcd37c8e1",
 *   "links": [
 *     {
 *       "href": "https://cloud.mongodb.com/api/atlas/v1.0/groups/609acbef1b76b53fcd37c8e1",
 *       "rel": "self"
 *     }
 *   ],
 *   "name": "mg-training-sample",
 *   "orgId": "5b4e2d803b34b965050f1835"
 * }
 */
exports = async function() {

  // Retrieve headers to authenticate with a new access token, and define the request URL for the Atlas API endpoint
  const authHeaders = await context.functions.execute("getAuthHeaders");
  const requestUrl = "https://cloud.mongodb.com/api/atlas/v2/groups";

  // Build the argument for the HTTP request to the Atlas API to get all projects in the organization
  const arg = {
    url: requestUrl,
    headers: authHeaders.headers
  };

  // The response body is a BSON.Binary object; parse it and return the `results` array, which contains the list of projects for the organization
  const response = await context.http.get(arg);
  return EJSON.parse(response.body.text()).results;
};
```
getProjectClusters(<projectId>) calls the
Return All Clusters in One Project
endpoint to return all clusters in the project with the specified
project ID.
```javascript
/*
 * Returns an array of the clusters for the supplied project ID.
 * See https://docs.atlas.mongodb.com/reference/api/clusters-get-all/
 *
 * Returns an array of objects. See the API documentation for details.
 */
exports = async function(projectId) {

  // Easy testing from the console
  if (projectId === "Hello world!") {
    projectId = "<projectId>";
  }

  // Retrieve headers to authenticate with a new access token, and define the request URL for the Atlas API endpoint
  const authHeaders = await context.functions.execute("getAuthHeaders");
  const requestUrl = `https://cloud.mongodb.com/api/atlas/v2/groups/${projectId}/clusters`;

  // Build the argument for the HTTP request to the Atlas API to get all clusters in the project
  const arg = {
    url: requestUrl,
    headers: authHeaders.headers
  };

  // The response body is a BSON.Binary object; parse it and return the `results` array, which contains the list of clusters for the project
  const response = await context.http.get(arg);
  return EJSON.parse(response.body.text()).results;
};
```
Replace the placeholder <projectId> with your own project ID.
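If you use these helpers to drive pauseClusters and resumeClusters, you need to convert their results into the projectIDs shape those Functions expect. The toProjectIDs helper below is a hypothetical sketch (not part of this tutorial or the Atlas API) showing that conversion on sample data shaped like the API responses:

```javascript
// Hypothetical helper: build the projectIDs array used by pauseClusters and
// resumeClusters from the `results` arrays returned by getProjects and
// getProjectClusters.
function toProjectIDs(projects, clustersByProjectId) {
  return projects.map((project) => ({
    id: project.id,
    names: (clustersByProjectId[project.id] || []).map((cluster) => cluster.name),
  }));
}

// Sample data shaped like the API responses:
const projects = [{ id: "609acbef1b76b53fcd37c8e1", name: "mg-training-sample" }];
const clusters = { "609acbef1b76b53fcd37c8e1": [{ name: "Cluster0" }] };

console.log(JSON.stringify(toProjectIDs(projects, clusters)));
// [{"id":"609acbef1b76b53fcd37c8e1","names":["Cluster0"]}]
```

In an Atlas Function you would populate the inputs by executing getProjects and getProjectClusters with context.functions.execute, then pass the resulting array in place of the hardcoded projectIDs.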