
Tutorial: Automate Clusters with Scheduled Triggers

In this tutorial, you will use Atlas scheduled Triggers to automate cluster management tasks by programmatically calling the Atlas Administration API.

This tutorial includes the following procedures:

  • Initial Setup: Create a service account with Project Owner permissions to your existing Atlas project, store the service account credentials as Values and Secrets, then create reusable Functions that use these credentials to call the Update One Cluster in One Project endpoint.

    Note

    If you prefer to use API keys instead of service accounts to authenticate to the Atlas Administration API with Project Owner permissions, you can save API public and private keys as Values and Secrets to use in the Functions in this tutorial.

  • Pause and Resume Clusters on a Schedule: Create scheduled Triggers to automatically pause clusters every evening and resume them every weekday morning.

  • Scale Clusters on a Schedule: Create scheduled Triggers to automatically scale a cluster up during peak hours and down afterwards.

To complete this tutorial, you need a user with Project Owner access to a MongoDB Atlas project.

This initial setup only needs to be completed once and allows you to create the scheduled Triggers on this page to automate cluster management tasks. Before starting this tutorial, ensure you have a MongoDB Atlas project with at least one cluster. This procedure performs the following setup tasks:

  • Creates and saves credentials for an Atlas service account that Triggers will use to call the Atlas Administration API with Project Owner permissions to your existing Atlas project.

  • Creates a reusable Function called getAuthHeaders that generates an access token using the service account credentials and returns the appropriate authentication headers for calling the Atlas Administration API.

  • Creates a reusable Function called modifyCluster that wraps the Update One Cluster in One Project API.

1

To create a service account that your Triggers can use to call the Atlas Administration API with Project Owner permissions to your existing Atlas project:

  1. In Atlas, go to the Users page.

    1. If it's not already displayed, select your desired organization from the Organizations menu in the navigation bar.

    2. Click All Projects in the sidebar under the Identity & Access section, and select your desired project.

    3. Click Project Identity & Access in the sidebar under the Security section.

    The Users page displays.

  2. Click Create Application Service Account.

  3. Enter the service account information.

    • Name: Name for your service account. (e.g., TriggersServiceAccount)

    • Description: (optional) Description for your service account. (e.g., Service account for Atlas Functions to call Atlas Administration API.)

    • Service Account Permissions: Project Owner

  4. Click Create.

    This creates the service account and automatically adds it to the project's parent organization with the permission Organization Member.

  5. Configure the API Access List.

    Add IP addresses to the API Access List if you want to restrict which IP addresses can call the Atlas Administration API with this service account.

    Note

    If Require IP Access List for the Atlas Administration API is enabled for your organization, or if you added any IP addresses to your service account's API Access List, then every Atlas Administration API request must pass an IP access-list check.

    Atlas Triggers and Functions send outgoing HTTP requests from a specific set of outbound IP addresses. To enable your scheduled Triggers to call the Atlas Administration API and other external services, you must add these IP addresses to your service account's API Access List.

    For the full list of outbound IP addresses used by Atlas Functions, see Function Security Outbound IP Access. You must add each IP address individually.

2

Create the following Values and Secrets to store your service account credentials:

  • AtlasClientId Value that contains your service account client ID.

  • AtlasClientSecret Secret that contains your service account client secret.

  • AtlasClientSecret Value that links to the Secret. This enables you to access the client secret value in your Functions, while still keeping it stored securely as a Secret.

  1. In Atlas, go to the Triggers page.

    1. If it's not already displayed, select the organization that contains your project from the Organizations menu in the navigation bar.

    2. If it's not already displayed, select your project from the Projects menu in the navigation bar.

    3. In the sidebar, click Triggers under the Streaming Data heading.

    The Triggers page displays.

  2. Navigate to the Values Page.

    1. Click the Linked App Service: Triggers link.

    2. In the sidebar, click Values under the Build heading.

  3. Store the client ID in a Value.

    1. Click Create a Value.

    2. Enter AtlasClientId as the Value Name.

    3. Select the Value type.

    4. Select the Custom Content option and enter the client ID.

      Note

      You must enter the client ID as a string value with quotes ("<clientId>").

    5. Click Save.

  4. Store the client secret in a Secret and link it to a Value.

    Note

    Secret values cannot be accessed directly, so you must create a second Value that links to the Secret.

    1. Click Create a Value.

    2. Enter AtlasClientSecret as the Value Name.

    3. Select the Value type.

    4. Select the Link to Secret option.

    5. Enter AtlasClientSecret and click Create "AtlasClientSecret" to name the Secret value.

      Paste the client secret into the Client Secret field that appears below the Secret name.

    6. Click Save to create both the Secret and the Value.

3

To create a reusable Function that retrieves an access token using the service account credentials and returns the appropriate authentication headers for calling the Atlas Administration API:

  1. Navigate to the Functions Page.

    1. In the sidebar, click Functions under the Build heading.

    2. Click Create a Function.

      The Settings tab displays by default.

  2. Enter getAuthHeaders as the Name for the Function.

  3. Set Private to true. This Function will only be called by other Functions in this tutorial.

    Leave the other configuration options in the Settings tab at their default values.

  4. Define the Function code.

    Click the Function Editor tab and paste the following code to define the Function:

    /*
     * Generate API request headers with a new Service Account Access Token.
     */
    exports = async function getAuthHeaders() {

      // Get stored credentials
      const clientId = context.values.get("AtlasClientId");
      const clientSecret = context.values.get("AtlasClientSecret");

      // Throw an error if credentials are missing
      if (!clientId || !clientSecret) {
        throw new Error("Authentication credentials not found. Set AtlasClientId/AtlasClientSecret (service account auth credentials).");
      }

      // Define the argument for the HTTP request to get the access token
      const tokenUrl = "https://cloud.mongodb.com/api/oauth/token";
      const credentials = Buffer.from(`${clientId}:${clientSecret}`).toString("base64");

      const arg = {
        url: tokenUrl,
        headers: {
          "Authorization": [ `Basic ${credentials}` ],
          "Content-Type": [ "application/x-www-form-urlencoded" ]
        },
        body: "grant_type=client_credentials"
      };

      // The response body is a BSON.Binary object; parse it to extract the access token
      const response = await context.http.post(arg);
      const tokenData = JSON.parse(response.body.text());
      const accessToken = tokenData.access_token;

      // Define the Accept header with the resource version from env var or default to latest stable
      const resourceVersion = context.environment.ATLAS_API_VERSION || "2025-03-12";
      const acceptHeader = `application/vnd.atlas.${resourceVersion}+json`;

      // Return the access token as headers for future API calls
      return {
        headers: {
          "Authorization": [ `Bearer ${accessToken}` ],
          "Accept": [ acceptHeader ],
          "Accept-Encoding": [ "bzip, deflate" ],
          "Content-Type": [ "application/json" ]
        }
      };
    };
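The token request above authenticates with HTTP Basic authentication: the Authorization value is simply the base64 encoding of `clientId:clientSecret`. The following standalone sketch (placeholder credentials, no network calls) shows how that value is built:

```javascript
// Illustration only: these are placeholder values, not real Atlas
// service account credentials.
const clientId = "exampleClientId";
const clientSecret = "exampleClientSecret";

// Basic auth value: base64 of "<clientId>:<clientSecret>"
const credentials = Buffer.from(`${clientId}:${clientSecret}`).toString("base64");
const authorization = `Basic ${credentials}`;

console.log(authorization);
```

If the resulting header is rejected with a 401, check that the client ID Value was saved as a quoted string and that the Secret contains no stray whitespace.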
4

To create a reusable Function that wraps the Update One Cluster in One Project endpoint:

  1. From the Functions Page, click Create a Function.

    The Settings tab displays by default.

  2. Enter modifyCluster as the Name for the Function.

  3. Set Private to true. This Function will only be called by other Functions in this tutorial.

    Leave the other configuration options in the Settings tab at their default values.

  4. Define the Function code.

    Click the Function Editor tab and paste the following code to define the Function:

    /*
     * Modifies the cluster as defined by the `body` parameter.
     * See https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Clusters/operation/updateCluster
     */
    exports = async function(projectID, clusterName, body) {

      // Easy testing from the console
      if (projectID === "Hello world!") {
        projectID = "<projectId>";
        clusterName = "<clusterName>";
        body = { paused: false };
      }

      // Retrieve headers to authenticate with a new access token, and define the request URL for the Atlas API endpoint
      const authHeaders = await context.functions.execute("getAuthHeaders");
      const requestUrl = `https://cloud.mongodb.com/api/atlas/v2/groups/${projectID}/clusters/${clusterName}`;

      // Build the argument for the HTTP request to the Atlas API to modify the cluster
      const arg = {
        url: requestUrl,
        headers: authHeaders.headers,
        body: JSON.stringify(body)
      };

      // The response body is a BSON.Binary object; parse it and return the modified cluster description
      const response = await context.http.patch(arg);
      if (response.body) {
        return EJSON.parse(response.body.text());
      } else {
        throw new Error(`No response body returned from Atlas API. Status code: ${response.status}`);
      }
    };

    Note

    Test code in the Function Editor.

    The Function Editor automatically provides "Hello world!" as the first argument when you run a Function in the Testing Console. This code tests for that input and provides values to the parameters when "Hello world!" is received.

    To test the Function with your own input, replace the following placeholder values with your own information:

    • <projectId>

    • <clusterName>

    • In the body parameter, provide a payload containing the modifications you'd like to make to the cluster. The example code includes a payload that pauses a cluster.
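To see what modifyCluster actually sends, here is a small standalone sketch (placeholder identifiers, no Atlas calls) of how the request URL and PATCH body are assembled:

```javascript
// Placeholder identifiers for illustration; substitute your own
// project ID and cluster name.
const projectID = "abc123projectid";
const clusterName = "Cluster0";

// URL of the Update One Cluster in One Project endpoint
const requestUrl = `https://cloud.mongodb.com/api/atlas/v2/groups/${projectID}/clusters/${clusterName}`;

// A body that pauses the cluster, serialized for the PATCH request
const body = { paused: true };
const serialized = JSON.stringify(body);

console.log(requestUrl);
console.log(serialized);
```

Because the endpoint accepts a partial cluster description, the body only needs to contain the fields you want to change.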

This procedure creates scheduled Triggers to automatically pause clusters every evening and resume them every weekday morning. This is useful for non-production clusters that don't need to run outside of business hours, or for any clusters that you want to automatically pause and resume on a schedule.

1
  1. In Atlas, go to the Triggers page.

    1. If it's not already displayed, select the organization that contains your project from the Organizations menu in the navigation bar.

    2. If it's not already displayed, select your project from the Projects menu in the navigation bar.

    3. In the sidebar, click Triggers under the Streaming Data heading.

    The Triggers page displays.

  2. Navigate to the Functions Page.

    1. Click the Linked App Service: Triggers link.

    2. In the sidebar, click Functions under the Build heading.

    3. Click Create a Function.

      The Settings tab displays by default.

  3. Enter pauseClusters as the Name for the Function.

  4. Set Private to true. This Function will only be called by the pauseClusters Trigger in this tutorial.

    Leave the other configuration options in the Settings tab at their default values.

  5. Define the Function code.

    Click the Function Editor tab and paste the following code to define the Function:

    /*
     * Iterates over the provided projects and clusters, pausing those clusters.
     */
    exports = async function () {

      // Supply project IDs and cluster names to pause
      const projectIDs = [
        {
          id: "<projectIdA>",
          names: [ "<clusterNameA>", "<clusterNameB>" ]
        },
        {
          id: "<projectIdB>",
          names: [ "<clusterNameC>" ]
        }
      ];

      // Set desired state
      const body = { paused: true };

      // Pause each cluster and log the response
      for (const project of projectIDs) {
        for (const cluster of project.names) {
          const result = await context.functions.execute(
            "modifyCluster",
            project.id,
            cluster,
            body
          );
          console.log("Cluster " + cluster + ": " + EJSON.stringify(result));
        }
      }

      return "Clusters Paused";
    };

    Replace the projectIDs array with your own project and cluster names.

    Note

    To avoid hardcoding project and cluster names, you can use the helper Functions at the end of this tutorial to retrieve lists of projects and clusters from the Atlas Administration API and programmatically determine which clusters to pause and resume on a schedule.
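The nested loop in pauseClusters issues one modifyCluster call per (project, cluster) pair. This standalone sketch (placeholder IDs, no Atlas calls) shows the iteration order those calls follow:

```javascript
// Placeholder project IDs and cluster names for illustration.
const projectIDs = [
  { id: "projectA", names: [ "Cluster0", "Cluster1" ] },
  { id: "projectB", names: [ "Analytics" ] }
];

// Flatten into the (projectId, clusterName) pairs the loop
// would pass to modifyCluster, in order.
const calls = [];
for (const project of projectIDs) {
  for (const cluster of project.names) {
    calls.push([ project.id, cluster ]);
  }
}

console.log(calls.length); // 3 calls in this example
```

Because the calls are awaited sequentially, clusters are paused one at a time; a failure on one cluster stops the remaining calls.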

2
  1. From the Functions page, navigate to the Triggers page by clicking Triggers in the sidebar under the Build heading.

  2. Click Create a Trigger to open the Trigger configuration page.

    If you have an existing Trigger, click Add a Trigger instead.

  3. Configure Trigger settings.

    In Trigger Details, set the following configuration:

    • Trigger Type: Scheduled

    • Schedule Type: Advanced. This allows you to specify a CRON expression for the schedule.

      To run this every weekday evening at 6 PM US Eastern (which is 22:00 UTC), use the following CRON expression:

      0 22 * * 1-5

    • Skip Events On Re-enable: On. This prevents the Trigger from executing on schedules that were queued while the Trigger was disabled.

    • Event Type: Function. Select the pauseClusters Function from the dropdown.

    • Trigger Name: pauseClusters

  4. Click Save to create the Trigger.

    Your test clusters will now automatically pause every evening at 6 PM US Eastern.
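CRON expressions list five fields: minute, hour, day of month, month, and day of week. A quick standalone way to sanity-check the expression used above:

```javascript
// Split the CRON expression into its five fields.
const [minute, hour, dayOfMonth, month, dayOfWeek] = "0 22 * * 1-5".split(" ");

// Minute 0 of hour 22 (UTC), Monday (1) through Friday (5)
console.log(hour, dayOfWeek);
```

Note that Atlas evaluates the schedule in UTC, so the expression does not shift with US daylight saving time.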

3
  1. Duplicate the pauseClusters Function into a new Function named resumeClusters.

  2. In the Function Editor tab, update the paused state to false in the Function code:

    /*
     * Iterates over the provided projects and clusters, resuming those clusters.
     */
    exports = async function () {

      // Supply project IDs and cluster names to resume
      const projectIDs = [
        {
          id: "<projectIdA>",
          names: [ "<clusterNameA>", "<clusterNameB>" ]
        },
        {
          id: "<projectIdB>",
          names: [ "<clusterNameC>" ]
        }
      ];

      // Set desired state
      const body = { paused: false };

      // Resume each cluster and log the response
      for (const project of projectIDs) {
        for (const cluster of project.names) {
          const result = await context.functions.execute(
            "modifyCluster",
            project.id,
            cluster,
            body
          );
          console.log("Cluster " + cluster + ": " + EJSON.stringify(result));
        }
      }

      return "Clusters Resumed";
    };
4
  1. From the Functions page, navigate to the Triggers page by clicking Triggers in the sidebar under the Build heading.

  2. Configure Trigger settings.

    In Trigger Details, set the following configuration:

    • Trigger Type: Scheduled

    • Schedule Type: Advanced. This allows you to specify a CRON expression for the schedule.

      To run this every weekday morning at 8 AM US Eastern (which is 12:00 UTC), use the following CRON expression:

      0 12 * * 1-5

    • Skip Events On Re-enable: On. This prevents the Trigger from executing on schedules that were queued while the Trigger was disabled.

    • Event Type: Function. Select the resumeClusters Function from the dropdown.

    • Trigger Name: resumeClusters

  3. Click Save to create the Trigger.

Your test clusters will now pause every evening and resume every weekday morning automatically.

This procedure creates scheduled Triggers to automatically scale a cluster up during peak hours and down afterwards. This is useful for clusters that have predictable usage patterns where you want to proactively scale before the workload increases, and scale down afterwards to save costs.

Note

Atlas supports Cluster Auto-Scaling to automatically increase your cluster tier or storage capacity based on usage or predicted usage. However, if you have predictable peak usage windows, you can use scheduled Triggers to proactively scale your cluster before your workload increases.

1
  1. In Atlas, go to the Triggers page.

    1. If it's not already displayed, select the organization that contains your project from the Organizations menu in the navigation bar.

    2. If it's not already displayed, select your project from the Projects menu in the navigation bar.

    3. In the sidebar, click Triggers under the Streaming Data heading.

    The Triggers page displays.

  2. Navigate to the Functions Page.

    1. Click the Linked App Service: Triggers link.

    2. In the sidebar, click Functions under the Build heading.

    3. Click Create a Function.

      The Settings tab displays by default.

  3. Enter scaleClusterUp as the Name for the Function.

  4. Set Private to true. This Function will only be called by the scaleClusterUp Trigger in this tutorial.

    Leave the other configuration options in the Settings tab at their default values.

  5. Define the Function code.

    From the Create Function page, click the Function Editor tab and paste the following code to define your Function:

    /*
     * Scales a single cluster up to a larger instance size.
     * This example scales an AWS cluster up to M30 in region US_EAST_1.
     */
    exports = async function() {
      // Supply project ID and cluster name...
      const projectID = "<projectId>";
      const clusterName = "<clusterName>";

      // Set the desired instance size and topology...
      const body = {
        replicationSpecs: [
          {
            regionConfigs: [
              {
                electableSpecs: {
                  instanceSize: "M30", // for example, larger tier
                  nodeCount: 3
                },
                priority: 7,
                providerName: "AWS",
                regionName: "US_EAST_1"
              }
            ]
          }
        ]
      };

      // Scale up the cluster and log the response
      const result = await context.functions.execute(
        "modifyCluster",
        projectID,
        clusterName,
        body
      );
      console.log(EJSON.stringify(result));

      return clusterName + " scaled up";
    };

    Replace the <projectId> and <clusterName> placeholders with your own project ID and cluster name, and adjust the regionConfigs array for your own topology.
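If your cluster spans multiple regions, each region gets its own entry in the regionConfigs array. The following is a hedged sketch of what a two-region spec might look like; the region names, priorities, and node counts are examples only and must match your actual topology:

```javascript
// Example only: a two-region replication spec. Region names, priorities,
// and node counts are illustrative, not a recommended configuration.
const replicationSpecs = [
  {
    regionConfigs: [
      {
        electableSpecs: { instanceSize: "M30", nodeCount: 2 },
        priority: 7,              // preferred region for the primary
        providerName: "AWS",
        regionName: "US_EAST_1"
      },
      {
        electableSpecs: { instanceSize: "M30", nodeCount: 1 },
        priority: 6,              // lower priority fallback region
        providerName: "AWS",
        regionName: "US_WEST_2"
      }
    ]
  }
];

// Total electable nodes across regions
const totalNodes = replicationSpecs[0].regionConfigs
  .reduce((sum, rc) => sum + rc.electableSpecs.nodeCount, 0);
console.log(totalNodes); // 3
```

Keeping an odd total number of electable nodes preserves the replica set's ability to elect a primary.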

    See the Update One Cluster in One Project endpoint documentation for more details on the available fields you can include in the request body to modify your cluster's configuration.

2
  1. From the Functions page, navigate to the Triggers page by clicking Triggers in the sidebar under the Build heading.

  2. Click Create a Trigger to open the Trigger configuration page.

    If you have an existing Trigger, click Add a Trigger instead.

  3. Configure Trigger settings.

    In Trigger Details, set the following configuration:

    • Trigger Type: Scheduled

    • Schedule Type: Advanced. This allows you to specify a CRON expression for the schedule.

      To run this every morning at 8 AM US Eastern (which is 13:00 UTC), use the following CRON expression:

      0 13 * * *

    • Skip Events On Re-enable: On. This prevents the Trigger from executing on schedules that were queued while the Trigger was disabled.

    • Event Type: Function. Select the scaleClusterUp Function from the dropdown.

    • Trigger Name: scaleClusterUp

  4. Click Save to create the Trigger.

    Your cluster will now automatically scale up every morning at 8 AM US Eastern.

3
  1. Duplicate the scaleClusterUp Function into a new Function named scaleClusterDown.

  2. In the Function Editor tab, paste and adjust the following code to scale your cluster down to the specified configuration:

    /*
     * Scales a single cluster down to a smaller instance size.
     * This example scales an AWS cluster down to M10 in region US_EAST_1.
     */
    exports = async function() {
      const projectID = "<projectId>";
      const clusterName = "<clusterName>";

      const body = {
        replicationSpecs: [
          {
            regionConfigs: [
              {
                electableSpecs: {
                  instanceSize: "M10", // for example, smaller tier
                  nodeCount: 3
                },
                priority: 7,
                providerName: "AWS",
                regionName: "US_EAST_1"
              }
            ]
          }
        ]
      };

      // Scale down the cluster and log the response
      const result = await context.functions.execute(
        "modifyCluster",
        projectID,
        clusterName,
        body
      );
      console.log(EJSON.stringify(result));

      return clusterName + " scaled down";
    };

    Replace the <projectId> and <clusterName> placeholders with your own project ID and cluster name, and adjust the regionConfigs array for your own topology.

    See the Update One Cluster in One Project endpoint documentation for more details on the available fields you can include in the request body to modify your cluster's configuration.

4
  1. From the Functions page, navigate to the Triggers page by clicking Triggers in the sidebar under the Build heading.

  2. Configure Trigger settings.

    In Trigger Details, set the following configuration:

    • Trigger Type: Scheduled

    • Schedule Type: Advanced. This allows you to specify a CRON expression for the schedule.

      To run this every evening at 6 PM US Eastern (which is 22:00 UTC), use the following CRON expression:

      0 22 * * *

    • Skip Events On Re-enable: On. This prevents the Trigger from executing on schedules that were queued while the Trigger was disabled.

    • Event Type: Function. Select the scaleClusterDown Function from the dropdown.

    • Trigger Name: scaleClusterDown

  3. Click Save to create the Trigger.

Together, these two Triggers ensure the cluster runs at higher capacity during busy hours and scales down afterwards.

You can test run the following helper Functions from the Triggers Function Editor to list the projects and clusters in your organization and identify which clusters to target in the Functions in this tutorial. You can also call these Functions from other Functions to retrieve this information programmatically.
