Deploy a Cluster through the API
Overview
This tutorial uses the Public API's automation configuration to deploy a sharded cluster that is owned by another user. The tutorial first creates a new group, then a new user as owner of the group, and then a sharded cluster owned by the new user. You can create a script to automate these procedures for use in routine operations.
To perform these steps, you must have access to Ops Manager as a user with the Global Owner role.
The procedures install a cluster with two shards. Each shard comprises a three-member replica set. The tutorial installs one mongos and three config servers. Each component of the cluster resides on its own server, requiring a total of 10 servers.
The tutorial installs the Automation Agent on each server.
Prerequisites
Ops Manager must have an existing user with the Global Owner role. The first user you create has this role. Global owners can perform any Ops Manager action, both through the Ops Manager interface and through the API.
You must have the URL of the Ops Manager Web Server, as set in the mmsBaseUrl setting of the Monitoring Agent configuration file.
Provision ten servers to host the components of the sharded cluster. For server requirements, see the Production Notes in the MongoDB manual.
Each server must provide its Automation Agent with full networking access to the hostnames and ports of the Automation Agents on all the other servers. Each agent runs the command hostname -f to self-identify its hostname and port and report them to Ops Manager.
Tip
To ensure agents can reach each other, provision the servers using Automation. This installs the Automation Agents with correct network access. Then use this tutorial to reinstall the Automation Agents on those machines.
Examples
As you work with the API, you can view examples in the following GitHub repository: https://github.com/10gen-labs/mms-api-examples/tree/master/automation/.
Procedures
Retrieve API Key
This procedure displays the full API key just once. You must record the API key when it is displayed.
Note that this API key for the Public API is different from the API key for a group, which is always visible in Ops Manager through the Group Settings tab.
Log in as a Global Owner.
Log in to the Ops Manager web interface as a user with the Global Owner role.
Select the Administration tab and then API Keys & Whitelists.
Generate a new API key.
In the API Keys section, click Generate. Then enter a description, such as "API Testing," and click Generate.
If prompted for a two-factor verification code, enter the code and click Verify. Then click Generate again.
Copy and record the key.
Copy the key immediately when it is generated. Ops Manager displays the full key only once; you will not be able to view it again.
Record the key in a secure place. After you have successfully recorded the key, click Close.
Create the Group and the User through the API
Use the API to create a group.
Use the Public API to send a groups document to create the new group. Issue the following command, replacing <user@example.net> with the credentials of the global owner, <api_key> with your API key, <app-example.net> with the Ops Manager URL, and <group_name> with the name of the new group:
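A minimal sketch of the request, assuming HTTP digest authentication and the v1.0 groups endpoint; the angle-bracket placeholders are as described above:

```shell
# Sketch only: assumes digest authentication and the v1.0 groups endpoint.
curl -u "<user@example.net>:<api_key>" --digest \
  -H "Content-Type: application/json" \
  -X POST "https://<app-example.net>/api/public/v1.0/groups" \
  -d '{ "name": "<group_name>" }'
```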
The API returns a document that includes the group's agentApiKey and id. The API automatically sets the publicApiEnabled field to true to allow subsequent API-based configuration.
Record the values of agentApiKey and id in the returned document.
Record these values for use in this procedure and in other procedures in this tutorial.
Remove the global owner from the group. (Optional)
The global owner that you used to create the group is also automatically added to the group. You can remove the global owner from the group without losing the ability to make changes to the group in the future. As long as you have the group's agentApiKey and id, you have full access to the group when logged in as the global owner.
GET the global owner's ID. Issue the following command to request the group's users, replacing the credentials, API key, URL, and group ID with the relevant values:
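A minimal sketch of the request, assuming digest authentication and the v1.0 users endpoint:

```shell
# Sketch only: lists the group's users.
curl -u "<user@example.net>:<api_key>" --digest \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/users"
```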
The API returns a document that lists all the group's users. Locate the user with roles.roleName set to GLOBAL_OWNER.
Copy the user's id value, and issue the following command to remove the user from the group, replacing <user_id> with the user's id value:
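A minimal sketch of the request, assuming a DELETE against the v1.0 users endpoint:

```shell
# Sketch only: removes the user from the group.
curl -u "<user@example.net>:<api_key>" --digest \
  -X DELETE "https://<app-example.net>/api/public/v1.0/groups/<group_id>/users/<user_id>"
```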
Upon successful removal of the user, the API returns the HTTP 200 OK status code to indicate the request has succeeded.
Install the Automation Agent on each Provisioned Server
Your servers must have the networking access described in the Prerequisites.
Create the Automation Agent configuration file to be used on the servers.
Create the following configuration file and enter values as shown below. The file uses your agentApiKey, group id, and the Ops Manager URL. Save this file as automation-agent.config. You will distribute this file to each of your provisioned servers.
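A sketch of the file's contents, assuming the standard Automation Agent settings (mmsGroupId, mmsApiKey, mmsBaseUrl); substitute the group id, agentApiKey, and Ops Manager URL recorded earlier:

```shell
# Write the config file. The <...> values are placeholders; use the values
# recorded when you created the group.
cat > automation-agent.config <<'EOF'
mmsGroupId=<group_id>
mmsApiKey=<agent_api_key>
mmsBaseUrl=https://<app-example.net>
EOF
```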
Retrieve the command strings used to download and install the Automation Agent.
In the Ops Manager web interface, select the Administration tab and then select the Agents page. Under Automation at the bottom of the page, select your operating system to display the install instructions. Copy and save the following strings from these instructions:
- The curl string used to download the agent.
- The rpm or dpkg string used to install the agent. For operating systems that use tar to unpackage the agent, no install string is listed.
- The nohup string used to run the agent.
Download, configure, and run the Automation Agent on each server.
Do the following on each of the provisioned servers. You can create a script to use as a turn-key operation for these steps:
- Use the curl string to download the Automation Agent.
- Use rpm, dpkg, or tar to install the agent. Make the agent controllable by the new user you added to the group in the previous procedure.
- Replace the contents of the config file with the file you created in the first step. The config file is one of the following, depending on the operating system:
  /etc/mongodb-mms/automation-agent.config
  <install_directory>/local.config
- Check that the following directories exist and are accessible to the Automation Agent. If they do not, create them. The first two are created automatically on RHEL, CentOS, SUSE, Amazon Linux, and Ubuntu:
  /var/lib/mongodb-mms-automation
  /var/log/mongodb-mms-automation
  /data
- Use the nohup string to run the Automation Agent.
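The steps above can be sketched as a single per-server script, assuming an RPM-based system. The download URL, package name, agent binary path, and agent user (mongod) are placeholders or assumptions; use the exact curl, rpm, and nohup strings copied from the Agents page:

```shell
# Sketch only. Replace the <...> values with the strings from the Ops
# Manager Agents page for your operating system.
curl -OL "https://<app-example.net>/download/agent/automation/<automation-agent.rpm>"
sudo rpm -U "<automation-agent.rpm>"

# Install the config file prepared in the first step.
sudo cp automation-agent.config /etc/mongodb-mms/automation-agent.config

# Ensure the required directories exist and are writable by the agent's
# user (mongod here is an assumption; match your installation).
sudo mkdir -p /var/lib/mongodb-mms-automation /var/log/mongodb-mms-automation /data
sudo chown mongod:mongod /var/lib/mongodb-mms-automation /var/log/mongodb-mms-automation /data

# Start the agent with the nohup string from the Agents page.
nohup <automation_agent_binary> -f /etc/mongodb-mms/automation-agent.config \
  >> /var/log/mongodb-mms-automation/automation-agent-fatal.log 2>&1 &
```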
Confirm the initial state of the automation configuration.
When the Automation Agent first runs, it downloads the mms-cluster-config-backup.json file, which describes the desired state of the automation configuration.
On one of the servers, navigate to /var/lib/mongodb-mms-automation/ and open mms-cluster-config-backup.json. Confirm that the file's version field is set to 1. Ops Manager automatically increments this field as changes occur.
Deploy the New Cluster
To add or update a deployment, retrieve the configuration document, make changes as needed, and send the updated configuration through the API to Ops Manager.
Tip
You can learn more about the configuration file by viewing it in Ops Manager. Select the Deployment tab and then the Raw AutomationConfig page. Note that the raw configuration contains fields you should not update with the configuration document.
The following procedure deploys an updated automation configuration through the Public API:
Retrieve the automation configuration from Ops Manager.
Use the automationConfig resource to retrieve the configuration. Issue the following command, replacing <user@example.net> with the credentials of the global owner, <api_key> with the previously retrieved API key, <app-example.net> with the URL of Ops Manager, and <group_id> with the previously retrieved group ID:
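A minimal sketch of the request, assuming the v1.0 automationConfig endpoint:

```shell
# Sketch only: retrieves the current automation configuration.
curl -u "<user@example.net>:<api_key>" --digest \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig"
```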
Confirm that the version field of the retrieved automation configuration matches the version field in the mms-cluster-config-backup.json file.
Create the top level of the new configuration document.
Create a document with the following fields. As you build the configuration document, refer to the description of an automation configuration for detailed explanations of the settings. For examples, refer to the following page on GitHub: https://github.com/10gen-labs/mms-api-examples/tree/master/automation/.
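A sketch of the top-level structure, assuming the configuration's standard top-level fields; the arrays are filled in by the steps that follow, and the downloadBase path is illustrative:

```json
{
  "options": {
    "downloadBase": "/var/lib/mongodb-mms-automation"
  },
  "mongoDbVersions": [],
  "monitoringVersions": [],
  "processes": [],
  "replicaSets": [],
  "sharding": []
}
```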
Add MongoDB versions to the configuration document.
In the mongoDbVersions array, add the versions of MongoDB to have available to the deployment. Add only those versions you will use. For this tutorial, the following array includes just one version, 2.4.12, but you can specify multiple versions. Using 2.4.12 allows this deployment to later upgrade to 2.6, as described in Update the MongoDB Version of a Deployment.
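A sketch of the array, assuming each entry needs only a name field:

```json
"mongoDbVersions": [
  { "name": "2.4.12" }
]
```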
Add the Monitoring Agent to the configuration document.
In the monitoringVersions.hostname field, enter the hostname of the server where Ops Manager should install the Monitoring Agent. Use the fully qualified domain name that running hostname -f on the server returns, as in the following:
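A sketch of the array, assuming one Monitoring Agent entry; the log path and rotation thresholds are illustrative values:

```json
"monitoringVersions": [
  {
    "hostname": "<server_1.example.net>",
    "logPath": "/var/log/mongodb-mms-automation/monitoring-agent.log",
    "logRotate": {
      "sizeThresholdMB": 1000,
      "timeThresholdHrs": 24
    }
  }
]
```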
This configuration example also includes the logPath field, which specifies the log location, and logRotate, which specifies the log thresholds.
Add the servers to the configuration document.
This sharded cluster has 10 MongoDB instances, as described in the Overview, each running on its own server. Thus, the automation configuration's processes array will have 10 documents, one for each MongoDB instance.
The following example adds the first document to the processes array. Replace <process_name_1> with any name you choose, and replace <server_1.example.net> with the FQDN of the server. You will then need to add nine more documents, one for each remaining MongoDB instance in your sharded cluster.
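A sketch of the first process document, assuming 2.4-style (args2_4) option names; the port, dbpath, logpath, and replica set name (rs1) are illustrative values:

```json
"processes": [
  {
    "name": "<process_name_1>",
    "processType": "mongod",
    "version": "2.4.12",
    "hostname": "<server_1.example.net>",
    "logRotate": {
      "sizeThresholdMB": 1000,
      "timeThresholdHrs": 24
    },
    "authSchemaVersion": 1,
    "args2_4": {
      "port": 27017,
      "replSet": "rs1",
      "dbpath": "/data/<process_name_1>",
      "logpath": "/data/<process_name_1>/mongodb.log"
    }
  }
]
```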
The example uses the args2_4 syntax for the processes.<args> field. For MongoDB versions 2.6 and later, use the args2_6 syntax. See Supported MongoDB Options for Automation for more information.
Add the sharded cluster topology to the configuration document.
Add two replica set documents to the replicaSets array. Add three members to each document. The following example shows one replica set member added in the first replica set document:
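A sketch, assuming the replica set _id rs1 and assuming members reference processes by the process name chosen above:

```json
"replicaSets": [
  {
    "_id": "rs1",
    "members": [
      {
        "_id": 0,
        "host": "<process_name_1>"
      }
    ]
  }
]
```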
In the sharding array, add the replica sets to the shards, and add the three config servers, as in the following:
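A sketch, assuming shards reference replica sets by _id and assuming configServer lists the config server process names; the cluster and shard names are illustrative:

```json
"sharding": [
  {
    "name": "<cluster_name>",
    "shards": [
      { "_id": "shard_1", "rs": "rs1" },
      { "_id": "shard_2", "rs": "rs2" }
    ],
    "configServer": [
      "<config_process_1>",
      "<config_process_2>",
      "<config_process_3>"
    ],
    "collections": []
  }
]
```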
Send the configuration document.
Use the groups/<group_id>/automationConfig endpoint to send the automation configuration document to Ops Manager, as in the following. Replace <user@example.net> with the credentials of the global owner, <api_key> with the previously retrieved API key, <app-example.net> with the Ops Manager URL, and <group_id> with the previously retrieved group id. Replace <configuration_document> with the configuration document you have created in the previous steps.
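A minimal sketch of the request, assuming a PUT to the v1.0 automationConfig endpoint with the configuration document stored in a file:

```shell
# Sketch only: sends the new automation configuration.
curl -u "<user@example.net>:<api_key>" --digest \
  -H "Content-Type: application/json" \
  -X PUT "https://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig" \
  --data @<configuration_document>
```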
Upon successful update of the configuration, the API returns the HTTP/1.1 200 OK status code to indicate the request has succeeded.
Confirm successful update of the automation configuration.
Retrieve the automation configuration to compare it against the document you sent. In particular, confirm that the version field equals 2.
Issue a command similar to the following, replacing the credentials, API key, <app-example.net> URL, and group id as in previous steps.
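A minimal sketch of the request, assuming the same v1.0 automationConfig endpoint used earlier:

```shell
# Sketch only: retrieve the configuration and check its version field.
curl -u "<user@example.net>:<api_key>" --digest \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig"
```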
Check the deployment status to ensure the goal state is reached.
Use the automationStatus resource to retrieve the deployment status. Issue the following command, replacing the credentials, API key, URL, and group id as in previous steps.
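A minimal sketch of the request, assuming the v1.0 automationStatus endpoint:

```shell
# Sketch only: retrieves the deployment status.
curl -u "<user@example.net>:<api_key>" --digest \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/automationStatus"
```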
The command returns the processes array and the goalVersion field. The processes array contains a document for each server that is to run a MongoDB instance, similar to the following:
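A sketch of the response shape, assuming goalVersion and per-process lastGoalVersionAchieved and plan fields; the plan step names are illustrative:

```json
{
  "goalVersion": 2,
  "processes": [
    {
      "hostname": "<server_1.example.net>",
      "name": "<process_name_1>",
      "lastGoalVersionAchieved": 1,
      "plan": [ "<remaining_step_1>", "<remaining_step_2>" ]
    }
  ]
}
```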
If any document has a lastGoalVersionAchieved field equal to 1, the configuration is in the process of deploying. The document's plan field displays the remaining work. Wait several seconds and issue the curl command again.
When all lastGoalVersionAchieved fields equal the value specified in the goalVersion field, the new configuration has successfully deployed.
To view the new configuration in the Ops Manager web interface, select the Deployment tab and then the Deployment page.
Next Steps
To make an additional version of MongoDB available in the cluster, follow the steps in Update the MongoDB Version of a Deployment.