This version of the documentation is archived and no longer supported. To learn how to upgrade your version of MongoDB Ops Manager, refer to the upgrade documentation.

Deploy a Cluster through the API


This tutorial manipulates the Public API’s automation configuration to deploy a sharded cluster that is owned by another user. The tutorial first creates a new group, then a new user as owner of the group, and then a sharded cluster owned by the new user. You can create a script to automate these procedures for use in routine operations.

To perform these steps, you must have access to Ops Manager as a user with the Global Owner role.

The procedures install a cluster with two shards. Each shard comprises a three-member replica set. The tutorial installs one mongos and three config servers. Each component of the cluster resides on its own server, requiring a total of 10 servers.

The tutorial installs the Automation Agent on each server.


Ops Manager must have an existing user with Global Owner role. The first user you create has this role. Global owners can perform any Ops Manager action, both through the Ops Manager interface and through the API.

You must have the URL of the Ops Manager Web Server, as set in the mmsBaseUrl setting of the Monitoring Agent configuration file.

Provision ten servers to host the components of the sharded cluster. For server requirements, see the Production Notes in the MongoDB manual.

Each server must provide its Automation Agent with full networking access to the hostnames and ports of the Automation Agents on all the other servers. Each agent runs the command hostname -f to self-identify its hostname and port and report them to Ops Manager.


To ensure agents can reach each other, provision the servers using Automation. This installs the Automation Agents with correct network access. Then use this tutorial to reinstall the Automation Agents on those machines.


As you work with the API, you can view examples on the following GitHub page:


Retrieve API Key

This procedure displays the full API key just once. You must record the API key when it is displayed.

Note that this API key for the Public API is different from the API key for a group, which is always visible in Ops Manager through the Group Settings tab.


Log in as a Global Owner.

Log into the Ops Manager web interface as a user with the Global Owner role.


Select the Administration tab and then API Keys & Whitelists.


Generate a new API key.

In the API Keys section, click Generate. Then enter a description, such as “API Testing,” and click Generate.

If prompted for a two-factor verification code, enter the code and click Verify. Then click Generate again.


Copy and record the key.

Copy the key immediately when it is generated. Ops Manager displays the full key one time only. You will not be able to view the full key again.

Record the key in a secure place. After you have successfully recorded the key, click Close.

Create the Group and the User through the API


Use the API to create a group.

Use the Public API to send a groups document to create the new group. Issue the following command, replacing <> with the credentials of the global owner, <api_key> with your API key, <> with the Ops Manager URL, and <group_name> with the name of the new group:

curl -u "<>:<api_key>" -H "Content-Type: application/json" "http://<>/api/public/v1.0/groups" --digest -i -X POST --data '
{
   "name": "<group_name>"
}'

The API returns a document that includes the group’s agentApiKey and id. The API automatically sets the publicApiEnabled field to true to allow subsequent API-based configuration.


Record the values of agentApiKey and id in the returned document.

Record these values for use in this procedure and in other procedures in this tutorial.
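Extracting these values from the response can be scripted. The following is a minimal Python sketch; the field names (id, agentApiKey) come from the response document described above, while the sample values are placeholders:

```python
import json

def extract_group_credentials(response_body):
    """Pull the id and agentApiKey fields out of the group-creation response."""
    doc = json.loads(response_body)
    return doc["id"], doc["agentApiKey"]

# Illustrative response body; real values come from the POST to /groups above.
body = '{"id": "<group_id>", "name": "<group_name>", "agentApiKey": "<agent_api_key>", "publicApiEnabled": true}'
group_id, agent_api_key = extract_group_credentials(body)
```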


Use the API to create a user in the new group.

Use the /users endpoint to add a user to the new group.

The body of the request should contain a users document with the user's information. Set the user's roles.roleName to GROUP_OWNER, and set the user's roles.groupId to the new group's id.

curl -u "<>:<api_key>" -H "Content-Type: application/json" "http://<>/api/public/v1.0/users" --digest -i -X POST --data '
{
   "username": "<>",
   "emailAddress": "<>",
   "firstName": "<First>",
   "lastName": "<Last>",
   "password": "<password>",
   "roles": [{
     "groupId": "<group_id>",
     "roleName": "GROUP_OWNER"
   }]
}'
Remove global owner from the group. (Optional)

The global owner that you used to create the group is also automatically added to the group. You can remove the global owner from the group without losing the ability to make changes to the group in the future. As long as you have the group’s agentApiKey and id, you have full access to the group when logged in as the global owner.

GET the global owner's ID. Issue the following command to request the group's users, replacing the credentials, API key, URL, and group ID with the relevant values:

curl -u "<>:<api_key>" "http://<>/api/public/v1.0/groups/<group_id>/users" --digest -i

The API returns a document that lists all the group’s users. Locate the user with roles.roleName set to GLOBAL_OWNER. Copy the user’s id value, and issue the following to remove the user from the group, replacing <user_id> with the user’s id value:

curl -u "<>:<api_key>" "http://<>/api/public/v1.0/groups/<group_id>/users/<user_id>" --digest -i -X DELETE

Upon successful removal of the user, the API returns the HTTP 200 OK status code to indicate the request has succeeded.
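Locating the global owner's id in the users response can also be scripted. A minimal sketch, assuming the list endpoint wraps the users in a results array (the per-user fields match the roles documents shown earlier); the sample ids are illustrative:

```python
import json

def find_global_owner_id(users_response_body):
    """Return the id of the first user holding the GLOBAL_OWNER role, or None."""
    doc = json.loads(users_response_body)
    for user in doc.get("results", []):
        if any(r.get("roleName") == "GLOBAL_OWNER" for r in user.get("roles", [])):
            return user["id"]
    return None

# Illustrative response body with two group members.
sample = '''{"results": [
    {"id": "u1", "roles": [{"groupId": "g1", "roleName": "GROUP_OWNER"}]},
    {"id": "u2", "roles": [{"groupId": "g1", "roleName": "GLOBAL_OWNER"}]}
]}'''
```

The returned id is what you substitute for <user_id> in the DELETE request.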

Install the Automation Agent on each Provisioned Server

Your servers must have the networking access described in the Prerequisites.


Create the Automation Agent configuration file to be used on the servers.

Create the following configuration file and enter values as shown below. The file uses your agentApiKey, group id, and the Ops Manager URL.

Save this file as automation-agent.config. You will distribute this file to each of your provisioned servers.

# Enter your Group ID - It can be found at /settings
mmsGroupId=<group_id>

# Enter your API key - It can be found at /settings
mmsApiKey=<agent_api_key>

# Base url of the MMS web server.
mmsBaseUrl=<Ops Manager URL>

# Path to log file
logFile=/var/log/mongodb-mms-automation/automation-agent.log

# Path to backup automation configuration
mmsConfigBackup=/var/lib/mongodb-mms-automation/mms-cluster-config-backup.json

# Lowest log level to log.  Can be (in order): DEBUG, ROUTINE, INFO, WARN, ERROR, DOOM
logLevel=INFO

# Maximum number of rotated log files
maxLogFiles=10

# Maximum size in bytes of a log file (before rotating)
maxLogFileSize=268435456

# URL to proxy all HTTP requests through
#httpProxy=
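Because the same file is distributed to every server, generating it from your recorded values can be scripted. A minimal sketch, assuming the standard Automation Agent setting names mmsGroupId, mmsApiKey, and mmsBaseUrl; the log and backup paths are the defaults referenced later in this tutorial:

```python
def render_agent_config(group_id, agent_api_key, base_url):
    """Render automation-agent.config contents from the recorded group values."""
    lines = [
        "mmsGroupId=" + group_id,
        "mmsApiKey=" + agent_api_key,
        "mmsBaseUrl=" + base_url,
        "logFile=/var/log/mongodb-mms-automation/automation-agent.log",
        "mmsConfigBackup=/var/lib/mongodb-mms-automation/mms-cluster-config-backup.json",
        "logLevel=INFO",
        "maxLogFiles=10",
        "maxLogFileSize=268435456",
    ]
    return "\n".join(lines) + "\n"

# Placeholders stand in for the values recorded when you created the group.
config_text = render_agent_config("<group_id>", "<agent_api_key>", "http://<>")
```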

Retrieve the command strings used to download and install the Automation Agent.

In the Ops Manager web interface, select the Administration tab and then select the Agents page. Under Automation at the bottom of the page, select your operating system to display the install instructions. Copy and save the following strings from these instructions:

  • The curl string used to download the agent.
  • The rpm or dpkg string to install the agent. For operating systems that use tar to unpackage the agent, no install string is listed.
  • The nohup string used to run the agent.

Download, configure, and run the Automation Agent on each server.

Do the following on each of the provisioned servers. You can create a script to use as a turn-key operation for these steps:

Use the curl string to download the Automation Agent.

Use rpm, dpkg, or tar to install the agent. Make the agent controllable by the new user you added to the group in the previous procedure.

Replace the contents of the config file with the file you created in the first step. The config file is one of the following, depending on the operating system:

  • /etc/mongodb-mms/automation-agent.config
  • <install_directory>/local.config

Check that the following directories exist and are accessible to the Automation Agent. If they do not, create them. The first two are created automatically on RHEL, CentOS, SUSE, Amazon Linux, and Ubuntu:

  • /var/lib/mongodb-mms-automation
  • /var/log/mongodb-mms-automation
  • /data

Use the nohup string to run the Automation Agent.


Confirm the initial state of the automation configuration.

When the Automation Agent first runs, it downloads the mms-cluster-config-backup.json file, which describes the desired state of the automation configuration.

On one of the servers, navigate to /var/lib/mongodb-mms-automation/ and open mms-cluster-config-backup.json. Confirm that the file’s version field is set to 1. Ops Manager automatically increments this field as changes occur.

Deploy the New Cluster

To add or update a deployment, retrieve the configuration document, make changes as needed, and send the updated configuration through the API to Ops Manager.


You can learn more about the configuration file by viewing it in Ops Manager. Select the Deployment tab and then the Raw AutomationConfig page. Note that the raw configuration contains fields you should not update with the configuration document.

The following procedure deploys an updated automation configuration through the Public API:


Retrieve the automation configuration from Ops Manager.

Use the automationConfig resource to retrieve the configuration. Issue the following command, replacing <> with the credentials of the global owner, <api_key> with the previously retrieved API key, <> with the URL of Ops Manager, and <group_id> with the previously retrieved group ID:

curl -u "<>:<api_key>" "http://<>/api/public/v1.0/groups/<group_id>/automationConfig" --digest -i

Confirm that the version field of the retrieved automation configuration matches the version field in the mms-cluster-config-backup.json file.


Create the top level of the new configuration document.

Create a document with the following fields. As you build the configuration document, refer to the description of an automation configuration for detailed explanations of the settings. For examples, refer to the following page on GitHub:

    "options": {
        "downloadBase": "/var/lib/mongodb-mms-automation"
    "mongoDbVersions": [],
    "monitoringVersions": [],
    "backupVersions": [],
    "processes": [],
    "replicaSets": [],
    "sharding": []

Add MongoDB versions to the configuration document.

In the mongoDbVersions array, add the versions of MongoDB to make available to the deployment. Add only those versions you will use. For this tutorial, the following array includes just one version, 2.4.12, but you can specify multiple versions. Using 2.4.12 allows this deployment to later upgrade to 2.6, as described in Update the MongoDB Version of a Deployment.

"mongoDbVersions": [
    { "name": "2.4.12" }

Add the Monitoring Agent to the configuration document.

In the monitoringVersions.hostname field, enter the hostname of the server where Ops Manager should install the Monitoring Agent. Use the fully qualified domain name that running hostname -f on the server returns, as in the following:

"monitoringVersions": [
        "hostname": "<>",
        "logPath": "/var/log/mongodb-mms-automation/monitoring-agent.log",
        "logRotate": {
            "sizeThresholdMB": 1000,
            "timeThresholdHrs": 24

This configuration example also includes the logPath field, which specifies the log location, and logRotate, which specifies the log thresholds.


Add the servers to the configuration document.

This sharded cluster has 10 MongoDB instances, as described in the Overview, each running on its own server. Thus, the automation configuration’s processes array will have 10 documents, one for each MongoDB instance.

The following example adds the first document to the processes array. Replace <process_name_1> with any name you choose, and replace <> with the FQDN of the server. Add nine more documents, one for each remaining MongoDB instance in your sharded cluster.

The example uses the args2_4 syntax for the processes.<args> field. For MongoDB versions 2.6 and later, use the args2_6 syntax. See Supported MongoDB Options for Automation for more information.

"processes": [
        "version": "2.4.12",
        "name": "<process_name_1>",
        "hostname": "<>",
        "logRotate": {
            "sizeThresholdMB": 1000,
            "timeThresholdHrs": 24
        "authSchemaVersion": 1,
        "processType": "mongod",
        "args2_4": {
            "port": 27017,
            "replSet": "rs1",
            "dbpath": "/data/",
            "logpath": "/data/mongodb.log"

Add the sharded cluster topology to the configuration document.

Add two replica set documents to the replicaSets array. Add three members to each document. The following example shows one replica set member added in the first replica set document:

"replicaSets": [
        "_id": "rs1",
        "members": [
                "_id": 0,
                "host": "<process_name_1>",
                "priority": 1,
                "votes": 1,
                "slaveDelay": 0,
                "hidden": false,
                "arbiterOnly": false

In the sharding array, add the replica sets to the shards, and add the three config servers, as in the following:

"sharding": [
        "shards": [
                "tags": [],
                "_id": "shard1",
                "rs": "rs1"
                "tags": [],
                "_id": "shard2",
                "rs": "rs2"
        "name": "sharded_cluster_via_api",
        "configServer": [
        "collections": []

Send the configuration document.

Use the groups/<group_id>/automationConfig endpoint to send the automation configuration document to Ops Manager, as in the following. Replace <> with the credentials of the global owner, <api_key> with the previously retrieved API key, <> with the Ops Manager URL, and <group_id> with the previously retrieved group id.

Replace <configuration_document> with the configuration document you have created in the previous steps.

curl -u "<>:<api_key>" -H "Content-Type: application/json" "http://<>/api/public/v1.0/groups/<group_id>/automationConfig" --digest -i -X PUT --data '<configuration_document>'

Upon successful update of the configuration, the API returns the HTTP/1.1 200 OK status code to indicate the request has succeeded.


Confirm successful update of the automation configuration.

Retrieve the automation configuration to compare it against the document you sent. In particular, confirm that the version field equals 2.

Issue a command similar to the following. Replace the credentials, API key, URL, and group id as in previous steps.

curl -u "<>:<api_key>" "http://<>/api/public/v1.0/groups/<group_id>/automationConfig" --digest -i

Check the deployment status to ensure goal state is reached.

Use the automationStatus resource to retrieve the deployment status. Issue the following command, replacing the credentials, API key, URL, and group id as in previous steps.

curl -u "<>:<api_key>" "http://<>/api/public/v1.0/groups/<group_id>/automationStatus" --digest -i

The command returns the processes array and the goalVersion field. The processes array contains a document for each server that is to run a MongoDB instance, similar to the following:

  "plan": [],
  "lastGoalVersionAchieved": 2,
  "name": "<process_name_1>",
  "hostname": "<>",

If any document has a lastGoalVersionAchieved field equal to 1, the configuration is in the process of deploying. The document’s plan field displays the remaining work. Wait several seconds and issue the curl command again.

When all lastGoalVersionAchieved fields equal the value specified in the goalVersion field, the new configuration has successfully deployed.
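A polling script can apply this check directly. A minimal sketch of the goal-state test, using the processes and goalVersion fields described above with illustrative values:

```python
def deployment_reached_goal(status_doc):
    """True when every process has achieved the goal configuration version."""
    goal = status_doc["goalVersion"]
    return all(p.get("lastGoalVersionAchieved") == goal
               for p in status_doc["processes"])

# Illustrative automationStatus response: one process is still at version 1,
# so the deployment has not yet reached goal state.
status = {
    "goalVersion": 2,
    "processes": [
        {"name": "<process_name_1>", "lastGoalVersionAchieved": 2, "plan": []},
        {"name": "<process_name_2>", "lastGoalVersionAchieved": 1, "plan": ["Download"]},
    ],
}
```

In a real script you would sleep a few seconds and re-fetch the automationStatus resource until this check passes.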

To view the new configuration in the Ops Manager web interface, select the Deployment tab and then the Deployment page.

Next Steps

To make an additional version of MongoDB available in the cluster, follow the steps in Update the MongoDB Version of a Deployment.