
Deploy a Cluster through the API

Overview

This tutorial uses the Public API to modify the automation configuration and deploy a sharded cluster owned by another user. The tutorial first creates a new project, then a new user as owner of the project, and then a sharded cluster owned by the new user. You can create a script to automate these procedures for use in routine operations.

To perform these steps, you must have sufficient access to Ops Manager. A user with the Global Owner or Project Owner role has sufficient access.

The procedures install a cluster with two shards. Each shard comprises a three-member replica set. The tutorial installs one mongos and three config servers. Each component of the cluster resides on its own server, requiring a total of 10 servers.

The tutorial installs the Automation Agent on each server.

Prerequisites

Ops Manager must have an existing user. If you are deploying the sharded cluster on a fresh install of Ops Manager, you must register the first user.

You must have the URL of the Ops Manager Web Server, as set in the mmsBaseUrl setting of the Monitoring Agent configuration file.

Provision ten servers to host the components of the sharded cluster. For server requirements, see the Production Notes in the MongoDB manual.

Each server must provide its Automation Agent with full networking access to the hostnames and ports of the Automation Agents on all the other servers. Each agent runs hostname -f to determine its fully qualified domain name and reports that hostname to Ops Manager.

Tip

To ensure agents can reach each other, provision the servers using Automation. This installs the Automation Agents with correct network access. Then use this tutorial to reinstall the Automation Agents on those machines.

Examples

As you work with the API, you can view examples on the following GitHub page: https://github.com/10gen-labs/mms-api-examples/tree/master/automation/.

Procedures

Generate a Public API Key

This procedure displays the full API key just once. You must record the API key when it is displayed.

Note

A Public API key is different from an agent API key. A Public API key is associated with a user; an agent API key is associated with a project.

1

Log in as a Global Owner or Project Owner.

Log into the Ops Manager web interface as a user with the Global Owner role or the Project Owner role.

2

Go to the Public API Access view.

Click on your user name in the upper-right hand corner and select Account. Then click Public API Access.

3

Generate a new Public API key.

In the API Keys section, click Generate. Then enter a description, such as “API Testing,” and click Generate.

If prompted for a two-factor verification code, enter the code and click Verify. Then click Generate again.

4

Copy and record the key.

Copy the key immediately when it is generated. Ops Manager displays the full key one time only. You will not be able to view the full key again.

Record the key in a secure place. After you have successfully recorded the key, click Close.

Create the Group and the User through the API

1

Use the API to create a group.

Use the Public API to send a groups document to create the new group. Issue the following command, replacing <user@example.net> with your user credentials, <public_api_key> with your Public API key, <app-example.net> with the Ops Manager URL, and <group_name> with the name of the new group:

curl -u "<user@example.net>:<public_api_key>" -H "Content-Type: application/json" "http://<app-example.net>/api/public/v1.0/groups" --digest -i -X POST --data '
{
   "name": "<group_name>"
}'

The API returns a document that includes the group’s agentApiKey and id.
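The response resembles the following sketch. The placeholders stand in for the generated values, and the actual document includes additional fields:

{
   "id": "<group_id>",
   "name": "<group_name>",
   "agentApiKey": "<agent_api_key>",
   ...
}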

2

Record the values of agentApiKey and id in the returned document.

Record these values for use in this procedure and in other procedures in this tutorial.
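For example, you might capture the values in shell variables so that you can reuse them in later commands (a minimal sketch; the variable names are arbitrary):

export GROUP_ID="<group_id>"
export AGENT_API_KEY="<agentApiKey>"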

3

Use the API to create a user in the new group.

Use the /users endpoint to add a user to the new group.

The body of the request should contain a users document with the user’s information. Set the user’s roles.roleName to GROUP_OWNER and the user’s roles.groupId to the new group’s id.

curl -u "<user@example.net>:<public_api_key>" -H "Content-Type: application/json" "http://<app-example.net>/api/public/v1.0/users" --digest -i -X POST --data '
{
   "username": "<new_user@example.com>",
   "emailAddress": "<new_user@example.com>",
   "firstName": "<First>",
   "lastName": "<Last>",
   "password": "<password>",
   "roles": [{
     "groupId": "<group_id>",
     "roleName": "GROUP_OWNER"
   }]
}'
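On success, the API returns the new user’s document, which includes the user’s id. A sketch of the response, with illustrative placeholders:

{
   "id": "<user_id>",
   "username": "<new_user@example.com>",
   "emailAddress": "<new_user@example.com>",
   "roles": [{
     "groupId": "<group_id>",
     "roleName": "GROUP_OWNER"
   }],
   ...
}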
4

(Optional) If you used a global owner user to create the group, remove that user from the group.

The user you use to create the group is automatically added to the group. If you used a global owner user, you can remove the user from the group without losing the ability to make changes to the group in the future. As long as you have the group’s agentApiKey and id, you have full access to the group when logged in as the global owner.

GET the global owner’s ID. Issue the following command to request the group’s users, replacing the credentials, Public API key, URL, and group ID with the relevant values:

curl -u "<user@example.net>:<public_api_key>" "http://<app-example.net>/api/public/v1.0/groups/<group_id>/users" --digest -i

The API returns a document that lists all the group’s users. Locate the user with roles.roleName set to GLOBAL_OWNER. Copy the user’s id value, and issue the following to remove the user from the group, replacing <user_id> with the user’s id value:

curl -u "<user@example.net>:<public_api_key>" "http://<app-example.net>/api/public/v1.0/groups/<group_id>/users/<user_id>" --digest -i -X DELETE

Upon successful removal of the user, the API returns the HTTP 200 OK status code to indicate the request has succeeded.
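If you script this step, you can extract the global owner’s id from the response body with a filter such as the following. This is a sketch that assumes jq is installed and that the endpoint returns its list of users in a results array; the -i option is omitted so that curl outputs only the response body:

curl -s -u "<user@example.net>:<public_api_key>" --digest "http://<app-example.net>/api/public/v1.0/groups/<group_id>/users" | jq -r '.results[] | select(any(.roles[]; .roleName == "GLOBAL_OWNER")) | .id'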

Install the Automation Agent on each Provisioned Server

Your servers must have the networking access described in the Prerequisites.

1

Create the Automation Agent configuration file to be used on the servers.

Create the following configuration file and enter values as shown below. The file uses your agent API key (agentApiKey), group id, and the Ops Manager URL.

Save this file as automation-agent.config. You will distribute this file to each of your provisioned servers; one way to do so is sketched after the file contents.

# REQUIRED
# Enter your Group ID - It can be found at /settings
#
mmsGroupId=<Project ID>

# REQUIRED
# Enter your API key - It can be found at /settings
#
mmsApiKey=<Agent API key>

# Base url of the MMS web server.
#
mmsBaseUrl=<Base URL of Ops Manager>

# Path to log file
#
logFile=/var/log/mongodb-mms-automation/automation-agent.log

# Path to backup automation configuration
#
mmsConfigBackup=/var/lib/mongodb-mms-automation/mms-cluster-config-backup.json

# Lowest log level to log.  Can be (in order): DEBUG, ROUTINE, INFO, WARN, ERROR, DOOM
#
logLevel=INFO

# Maximum number of rotated log files
#
maxLogFiles=10

# Maximum size in bytes of a log file (before rotating)
#
maxLogFileSize=268435456

# URL to proxy all HTTP requests through
#
#httpProxy=
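One way to distribute the saved file to the provisioned servers is with scp, as in the following sketch. The login user, host names, and destination path are placeholders; adjust them for your environment:

for host in <server1.example.net> <server2.example.net>; do
   scp automation-agent.config <admin_user>@$host:/tmp/automation-agent.config
done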
2

Retrieve the command strings used to download and install the Automation Agent.

  1. In the Ops Manager web interface, click Deployment, then the Agents tab.
  2. Click Downloads & Settings.
  3. In the Automation column, click the link for your operating system to display the install instructions.
  4. Copy and save the following strings from the instructions:
    • The curl string used to download the agent.
    • The rpm or dpkg string to install the agent. For operating systems that use tar to unpackage the agent, no install string is listed.
    • The nohup string used to run the agent.
3

Download, configure, and run the Automation Agent on each server.

Do the following on each of the provisioned servers. You can create a script to run these steps as a turn-key operation; a sketch follows this list:

Use the curl string to download the Automation Agent.

Use rpm, dpkg, or tar to install the agent. Make the agent controllable by the new user you added to the group in the previous procedure.

Replace the contents of the config file with the file you created in the first step. The config file is one of the following, depending on the operating system:

  • /etc/mongodb-mms/automation-agent.config
  • <install_directory>/local.config

Check that the following directories exist and are accessible to the Automation Agent. If they do not, create them. The first two are created automatically on RHEL, CentOS, SUSE, Amazon Linux, and Ubuntu:

  • /var/lib/mongodb-mms-automation
  • /var/log/mongodb-mms-automation
  • /data

Use the nohup string to run the Automation Agent.
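The following is a minimal sketch of such a turn-key script for an RPM-based server. The download URL, package name, agent user, and start command are placeholders; substitute the exact curl, install, and nohup strings you copied from the Ops Manager install instructions:

# Download the Automation Agent (placeholder URL; use the copied curl string).
curl -OL "http://<app-example.net>/download/agent/automation/<automation-agent-package>.rpm"

# Install the agent (use the copied rpm or dpkg string).
sudo rpm -U <automation-agent-package>.rpm

# Replace the packaged configuration with the file created in the first step.
sudo cp /tmp/automation-agent.config /etc/mongodb-mms/automation-agent.config

# Ensure the required directories exist and are accessible to the agent user.
sudo mkdir -p /var/lib/mongodb-mms-automation /var/log/mongodb-mms-automation /data
sudo chown <agent_user>:<agent_user> /var/lib/mongodb-mms-automation /var/log/mongodb-mms-automation /data

# Run the agent, using the copied nohup string or the package's service command.
sudo service mongodb-mms-automation-agent start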

4

Confirm the initial state of the automation configuration.

When the Automation Agent first runs, it downloads the mms-cluster-config-backup.json file, which describes the desired state of the automation configuration.

On one of the servers, navigate to /var/lib/mongodb-mms-automation/ and open mms-cluster-config-backup.json. Confirm that the file’s version field is set to 1. Ops Manager automatically increments this field as changes occur.
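For example, assuming jq is installed, you can check the field directly:

jq .version /var/lib/mongodb-mms-automation/mms-cluster-config-backup.json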

Deploy the New Cluster

To add or update a deployment, retrieve the configuration document, make changes as needed, and send the updated configuration through the API to Ops Manager.

The following procedure deploys an updated automation configuration through the Public API:

1

Retrieve the automation configuration from Ops Manager.

Use the automationConfig resource to retrieve the configuration. Issue the following command, replacing:

  • <user@example.net> with your user credentials,
  • <public_api_key> with the previously retrieved Public API key,
  • <app-example.net> with the URL of Ops Manager, and
  • <group_id> with the previously retrieved project ID:
curl -u "<user@example.net>:<public_api_key>" "http://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig" --digest -i

Confirm that the version field of the retrieved automation configuration matches the version field in the mms-cluster-config-backup.json file, which is found on any server running the Automation Agent.

2

Create the top level of the new automation configuration.

Create a document with the following fields. As you build the configuration document, refer to the description of an automation configuration for detailed explanations of the settings. For examples, refer to the following page on GitHub: https://github.com/10gen-labs/mms-api-examples/tree/master/automation/.

{
    "options": {
        "downloadBase": "/var/lib/mongodb-mms-automation",
        "downloadBaseWindows": "C:\\MMSAutomation\\versions"
    },
    "mongoDbVersions": [],
    "monitoringVersions": [],
    "backupVersions": [],
    "processes": [],
    "replicaSets": [],
    "sharding": []
}
3

Add MongoDB versions to the automation configuration.

In the mongoDbVersions array, add the versions of MongoDB that you want available to the deployment. Add only those versions you will use. For this tutorial, the following array includes just one version, 3.2.12, but you can specify multiple versions. Using 3.2.12 allows this deployment to later upgrade to 3.4, as described in Update the MongoDB Version of a Deployment.

"mongoDbVersions": [
    { "name": "3.2.12" }
]
4

Add the Monitoring Agent to the automation configuration.

In the monitoringVersions.hostname field, enter the hostname of the server where Ops Manager should install the Monitoring Agent. Use the fully qualified domain name that running hostname -f on the server returns, as in the following:

"monitoringVersions": [
    {
        "hostname": "<server_x.example.net>",
        "logPath": "/var/log/mongodb-mms-automation/monitoring-agent.log",
        "logRotate": {
            "sizeThresholdMB": 1000,
            "timeThresholdHrs": 24
        }
    }
]

This configuration example also includes the logPath field, which specifies the log location, and logRotate, which specifies the log thresholds.

5

Add the servers to the automation configuration.

This sharded cluster has 10 MongoDB instances, as described in the Overview, each running on its own server. Thus, the automation configuration’s processes array will have 10 documents, one for each MongoDB instance.

The following example adds the first document to the processes array. Replace <process_name_1> with any name you choose, and replace <server1.example.net> with the FQDN of the server. Then add nine more documents, one for each remaining MongoDB instance in your sharded cluster.

Specify MongoDB options for each process using the args2_6 syntax. See Supported MongoDB Options for Automation for more information.

"processes": [
    {
        "version": "3.2.12",
        "name": "<process_name_1>",
        "hostname": "<server1.example.net>",
        "logRotate": {
            "sizeThresholdMB": 1000,
            "timeThresholdHrs": 24
        },
        "authSchemaVersion": 5,
        "processType": "mongod",
        "args2_6": {
            "net": {
                "port": 27017
            },
            "storage": {
                "dbPath": "/data/"
            },
            "systemLog": {
                "path": "/data/mongodb.log",
                "destination": "file"
            },
            "replication": {
                "replSetName": "rs1"
            }
        }
    },
    ...
]
6

Add the sharded cluster topology to the automation configuration.

Add two replica set documents to the replicaSets array. Add three members to each document. The following example shows one replica set member added in the first replica set document:

"replicaSets": [
    {
        "_id": "rs1",
        "members": [
            {
                "_id": 0,
                "host": "<process_name_1>",
                "priority": 1,
                "votes": 1,
                "slaveDelay": 0,
                "hidden": false,
                "arbiterOnly": false
            },
            ...
        ]
    },
    ...
]

In the sharding array, add the replica sets to the shards, and add the three config servers, as in the following:

"sharding": [
    {
        "shards": [
            {
                "tags": [],
                "_id": "shard1",
                "rs": "rs1"
            },
            {
                "tags": [],
                "_id": "shard2",
                "rs": "rs2"
            }
        ],
        "name": "sharded_cluster_via_api",
        "configServer": [
            "<process_name_7>",
            "<process_name_8>",
            "<process_name_9>"
        ],
        "collections": []
    }
]
7

Send the updated automation configuration.

Use the groups/<group_id>/automationConfig endpoint to send the automation configuration document to Ops Manager. Replace <configuration_document> with the configuration document you created in the previous steps. Replace the credentials, Public API key, URL, and group ID as in previous steps.

curl -u "<user@example.net>:<public_api_key>" -H "Content-Type: application/json" "http://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig" --digest -i -X PUT --data '
<configuration_document>
'

Upon successful update of the configuration, the API returns the HTTP/1.1 200 OK status code to indicate the request has succeeded.
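If you saved the configuration document to a file, curl can read the request body from that file with the @ syntax, which avoids pasting the full document on the command line. The filename here is illustrative:

curl -u "<user@example.net>:<public_api_key>" -H "Content-Type: application/json" "http://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig" --digest -i -X PUT --data @automation-config.json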

8

Confirm successful update of the automation configuration.

Retrieve the automation configuration and confirm it contains the changes.

To retrieve the automation configuration, issue a command similar to the following. Replace the credentials, Public API key, URL, and group ID as in previous steps.

curl -u "<user@example.net>:<public_api_key>" "http://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig" --digest -i
9

Verify that the configuration update is deployed.

Use the automationStatus resource to verify the configuration update is fully deployed. Issue the following command, replacing the credentials, Public API key, URL, and project ID as in previous steps.

curl -u "<user@example.net>:<public_api_key>" "http://<app-example.net>/api/public/v1.0/groups/<group_id>/automationStatus" --digest -i

The curl command returns a JSON object containing the processes array and the goalVersion key and value. The processes array contains a document for each server that hosts a MongoDB instance. The new configuration is successfully deployed when all lastGoalVersionAchieved fields in the processes array equal the value specified for goalVersion.

In the example response, processes[2].lastGoalVersionAchieved is behind goalVersion, indicating that the MongoDB instance at server3.example.net is running one version behind the goalVersion. Wait several seconds and issue the curl command again, or poll in a loop as sketched after the example response.

{
  "goalVersion": 2,
  "processes": [{
    "hostname": "server1.example.net",
    "lastGoalVersionAchieved": 2,
    "name": "ReplSet_0",
    "plan": []
  }, {
    "hostname": "server2.example.net",
    "lastGoalVersionAchieved": 2,
    "name": "ReplSet_1",
    "plan": []
  }, {
     "hostname": "server3.example.net",
     "lastGoalVersionAchieved": 1,
     "name": "ReplSet_2",
     "plan":[...]
  }]
}
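In a script, you can poll the automationStatus resource until every process reaches the goal version. The following sketch assumes jq is installed; replace the credentials, Public API key, URL, and group ID as in previous steps:

until curl -s --digest -u "<user@example.net>:<public_api_key>" "http://<app-example.net>/api/public/v1.0/groups/<group_id>/automationStatus" | jq -e '.goalVersion as $goal | all(.processes[]; .lastGoalVersionAchieved == $goal)' > /dev/null
do
   echo "Waiting for agents to reach the goal version..."
   sleep 5
done
echo "Configuration deployed."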

To view the new configuration in the Ops Manager web interface, click Deployment.

Next Steps

To make an additional version of MongoDB available in the cluster, follow the steps in Update the MongoDB Version of a Deployment.