Deploy a Cluster through the API
Overview
This tutorial manipulates the Public API’s automation configuration to deploy a sharded cluster that is owned by another user. The tutorial first creates a new project, then a new user as owner of the project, and then a sharded cluster owned by the new user. You can create a script to automate these procedures for use in routine operations.
To perform these steps, you must have sufficient access to Ops Manager. A user with the Global Owner or Project Owner role has sufficient access.
The procedures install a cluster with two shards. Each shard comprises a three-member replica set. The tutorial installs one mongos and three config servers. Each component of the cluster resides on its own server, requiring a total of 10 servers.
The tutorial installs the Automation Agent on each server.
Prerequisites
- Ops Manager must have an existing user. If you are deploying the sharded cluster on a fresh install of Ops Manager, you must register the first user.
- You must have the URL of the Ops Manager Web Server, as set in the mmsBaseUrl setting of the Monitoring Agent configuration file.
- Provision ten servers to host the components of the sharded cluster. For server requirements, see the Production Notes in the MongoDB manual.
- Each server must provide its Automation Agent with full networking access to the hostnames and ports of the Automation Agents on all the other servers. Each agent runs the command hostname -f to self-identify its hostname and port and report them to Ops Manager.
Tip
To ensure agents can reach each other, provision the servers using Automation. This installs the Automation Agents with correct network access. Then use this tutorial to reinstall the Automation Agents on those machines.
Examples
As you work with the API, you can view examples on the following GitHub page: https://github.com/10gen-labs/mms-api-examples/tree/master/automation/.
Procedures
Generate a Public API Key
This procedure displays the full API key just once. You must record the API key when it is displayed.
Note
A Public API key is different from an agent API key. A Public API key is associated with a user; an agent API key is associated with a project.
Log in as a Global Owner or Project Owner.
Log into the Ops Manager web interface as a user with the Global Owner role or Project Owner role.
Go to the Public API Access view.
Click your user name in the upper-right corner and select Account. Then click Public API Access.
Generate a new Public API key.
In the API Keys section, click Generate. Then enter a description, such as “API Testing,” and click Generate.
If prompted for a two-factor verification code, enter the code and click Verify. Then click Generate again.
Copy and record the key.
Copy the key immediately when it is generated. Ops Manager displays the full key one time only. You will not be able to view the full key again.
Record the key in a secure place. After you have successfully recorded the key, click Close.
Create the Group and the User through the API
Use the API to create a group.
Use the Public API to send a groups document to create the new group. Issue the following command, replacing <user@example.net> with your user credentials, <public_api_key> with your Public API key, <app-example.net> with the Ops Manager URL, and <group_name> with the name of the new group:
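A sketch of the request, assuming the Public API v1.0 endpoint path and HTTP digest authentication that the Ops Manager Public API uses:

```shell
# Sketch: create a new group. The placeholders are the values described above.
curl -u "<user@example.net>:<public_api_key>" --digest \
  -H "Content-Type: application/json" \
  -X POST "https://<app-example.net>/api/public/v1.0/groups" \
  -d '{ "name": "<group_name>" }'
```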
The API returns a document that includes the group's agentApiKey and id.
Record the values of agentApiKey and id in the returned document.
Record these values for use in this procedure and in other procedures in this tutorial.
If you used a global owner user to create the group, you can remove that user from the group. (Optional)
The user you use to create the group is automatically added to the group. If you used a global owner user, you can remove the user from the group without losing the ability to make changes to the group in the future. As long as you have the group's agentApiKey and id, you have full access to the group when logged in as the global owner.
GET the global owner's ID. Issue the following command to request the group's users. Replace the credentials, Public API key, URL, and group ID with the relevant values:
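A sketch of the request, assuming the group's users are exposed under the v1.0 groups resource:

```shell
# Sketch: list the users in the group.
curl -u "<user@example.net>:<public_api_key>" --digest \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/users"
```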
The API returns a document that lists all the group’s users.
Locate the user with roles.roleName set to GLOBAL_OWNER. Copy the user's id value, and issue the following to remove the user from the group, replacing <user_id> with the user's id value:
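A sketch of the removal request, assuming a DELETE on the group's users resource:

```shell
# Sketch: remove the global owner user from the group.
curl -u "<user@example.net>:<public_api_key>" --digest \
  -X DELETE \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/users/<user_id>"
```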
Upon successful removal of the user, the API returns the HTTP 200 OK status code to indicate the request has succeeded.
Install the Automation Agent on each Provisioned Server
Your servers must have the networking access described in the Prerequisites.
Create the Automation Agent configuration file to be used on the servers.
Create the following configuration file and enter values as shown below. The file uses your agent API key (agentApiKey), group id, and the Ops Manager URL. Save this file as automation-agent.config. You will distribute this file to each of your provisioned servers.
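A sketch of the file's contents, assuming the standard Automation Agent setting names mmsGroupId, mmsApiKey, and mmsBaseUrl; substitute the id, agentApiKey, and URL you recorded earlier for the placeholder values:

```shell
# Sketch: write automation-agent.config with the values recorded earlier.
# The angle-bracket values are placeholders, not literal settings.
cat > automation-agent.config <<'EOF'
mmsGroupId=<group_id>
mmsApiKey=<agent_api_key>
mmsBaseUrl=https://<app-example.net>
EOF
```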
Retrieve the command strings used to download and install the Automation Agent.
- In the Ops Manager web interface, click Deployment, then the Agents tab.
- Click Downloads & Settings.
- In the Automation column, click the link for your operating system to display the install instructions.
- Copy and save the following strings from the instructions:
  - The curl string used to download the agent.
  - The rpm or dpkg string to install the agent. For operating systems that use tar to unpackage the agent, no install string is listed.
  - The nohup string used to run the agent.
Download, configure, and run the Automation Agent on each server.
Do the following on each of the provisioned servers. You can create a script to use as a turn-key operation for these steps:
- Use the curl string to download the Automation Agent.
- Use rpm, dpkg, or tar to install the agent. Make the agent controllable by the new user you added to the group in the previous procedure.
- Replace the contents of the config file with the file you created in the first step. The config file is one of the following, depending on the operating system:
  - /etc/mongodb-mms/automation-agent.config
  - <install_directory>/local.config
- Check that the following directories exist and are accessible to the Automation Agent. If they do not, create them. The first two are created automatically on RHEL, CentOS, SUSE, Amazon Linux, and Ubuntu:
  - /var/lib/mongodb-mms-automation
  - /var/log/mongodb-mms-automation
  - /data
- Use the nohup string to run the Automation Agent.
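The steps above can be sketched as a script. The download URL, package name, binary path, and agent user below are placeholders; substitute the exact curl, rpm/dpkg, and nohup strings copied from Downloads & Settings:

```shell
# Sketch only: <agent_package>, <agent_user>, and the paths shown are
# placeholders, not the literal names Ops Manager uses.
curl -OL "https://<app-example.net>/download/agent/automation/<agent_package>.rpm"
sudo rpm -U "<agent_package>.rpm"

# Install the config file created in the first step.
sudo cp automation-agent.config /etc/mongodb-mms/automation-agent.config

# Ensure the required directories exist and are writable by the agent's user.
sudo mkdir -p /var/lib/mongodb-mms-automation /var/log/mongodb-mms-automation /data
sudo chown <agent_user> /var/lib/mongodb-mms-automation /var/log/mongodb-mms-automation /data

# Run the agent in the background, per the nohup string from the UI.
nohup sudo -u <agent_user> <install_directory>/mongodb-mms-automation-agent \
  --config=/etc/mongodb-mms/automation-agent.config >/dev/null 2>&1 &
```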
Confirm the initial state of the automation configuration.
When the Automation Agent first runs, it downloads the mms-cluster-config-backup.json file, which describes the desired state of the automation configuration.
On one of the servers, navigate to /var/lib/mongodb-mms-automation/ and open mms-cluster-config-backup.json. Confirm that the file's version field is set to 1. Ops Manager automatically increments this field as changes occur.
Deploy the New Cluster
To add or update a deployment, retrieve the configuration document, make changes as needed, and send the updated configuration through the API to Ops Manager.
The following procedure deploys an updated automation configuration through the Public API:
Retrieve the automation configuration from Ops Manager.
Use the automationConfig resource to retrieve the configuration. Issue the following command, replacing <user@example.net> with your user credentials, <public_api_key> with the previously retrieved Public API key, <app-example.net> with the URL of Ops Manager, and <group_id> with the previously retrieved project ID:
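A sketch of the request against the automationConfig resource:

```shell
# Sketch: retrieve the current automation configuration.
curl -u "<user@example.net>:<public_api_key>" --digest \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig"
```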
Confirm that the version field of the retrieved automation configuration matches the version field in the mms-cluster-config-backup.json file, which is found on any server running the Automation Agent.
Create the top level of the new automation configuration.
Create a document with the following fields. As you build the configuration document, refer to the description of an automation configuration for detailed explanations of the settings. For examples, refer to the following page on GitHub: https://github.com/10gen-labs/mms-api-examples/tree/master/automation/.
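A minimal sketch of the top-level document; the field names follow the automation configuration schema, and the empty arrays are filled in by the steps that follow:

```json
{
  "options": {
    "downloadBase": "/var/lib/mongodb-mms-automation"
  },
  "mongoDbVersions": [],
  "monitoringVersions": [],
  "processes": [],
  "replicaSets": [],
  "sharding": []
}
```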
Add MongoDB versions to the automation configuration.
In the mongoDbVersions array, add the versions of MongoDB to make available to the deployment. Add only those versions you will use. For this tutorial, the following array includes just one version, 3.2.12, but you can specify multiple versions. Using 3.2.12 allows this deployment to later upgrade to 3.4, as described in Update the MongoDB Version of a Deployment.
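For example, a sketch of the array with the single version used in this tutorial:

```json
"mongoDbVersions": [
  { "name": "3.2.12" }
]
```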
Add the Monitoring Agent to the automation configuration.
In the monitoringVersions.hostname field, enter the hostname of the server where Ops Manager should install the Monitoring Agent. Use the fully qualified domain name that running hostname -f on the server returns, as in the following:
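A sketch of the array; the log location and thresholds shown are illustrative values:

```json
"monitoringVersions": [
  {
    "hostname": "server1.example.net",
    "logPath": "/var/log/mongodb-mms-automation/monitoring-agent.log",
    "logRotate": {
      "sizeThresholdMB": 1000,
      "timeThresholdHrs": 24
    }
  }
]
```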
This configuration example also includes the logPath field, which specifies the log location, and logRotate, which specifies the log thresholds.
Add the servers to the automation configuration.
This sharded cluster has 10 MongoDB instances, as described in the Overview, each running on its own server. Thus, the automation configuration's processes array will have 10 documents, one for each MongoDB instance.
The following example adds the first document to the processes array. Replace <process_name_1> with any name you choose, and replace <server1.example.net> with the FQDN of the server. You will then need to add nine more documents, one for each remaining MongoDB instance in your sharded cluster.
Specify the args2_6 syntax for the processes.<args> field. See Supported MongoDB Options for Automation for more information.
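A sketch of the first document; the port, dbPath, and log settings are illustrative, and the replSetName assumes the first shard's replica set will be named rs1 in the next step:

```json
"processes": [
  {
    "name": "<process_name_1>",
    "processType": "mongod",
    "version": "3.2.12",
    "hostname": "<server1.example.net>",
    "authSchemaVersion": 5,
    "logRotate": {
      "sizeThresholdMB": 1000,
      "timeThresholdHrs": 24
    },
    "args2_6": {
      "net": { "port": 27017 },
      "storage": { "dbPath": "/data" },
      "systemLog": {
        "destination": "file",
        "path": "/data/mongodb.log"
      },
      "replication": { "replSetName": "rs1" }
    }
  }
]
```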
Add the sharded cluster topology to the automation configuration.
Add two replica set documents to the replicaSets array. Add three members to each document. The following example shows one replica set member added in the first replica set document:
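A sketch of the array, assuming the replica set name rs1 and the process name used in the previous step; host references a process by its name field:

```json
"replicaSets": [
  {
    "_id": "rs1",
    "members": [
      {
        "_id": 0,
        "host": "<process_name_1>",
        "priority": 1,
        "votes": 1,
        "arbiterOnly": false,
        "hidden": false
      }
    ]
  }
]
```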
In the sharding array, add the replica sets to the shards, and add the three config servers, as in the following:
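A sketch, assuming shard and cluster names of your choosing; configServer lists the process names of the three config server instances:

```json
"sharding": [
  {
    "name": "<cluster_name>",
    "shards": [
      { "_id": "shard1", "rs": "rs1" },
      { "_id": "shard2", "rs": "rs2" }
    ],
    "configServer": [
      "<config_process_name_1>",
      "<config_process_name_2>",
      "<config_process_name_3>"
    ]
  }
]
```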
Send the updated automation configuration.
Use the groups/<group_id>/automationConfig endpoint to send the automation configuration document to Ops Manager. Replace <configuration_document> with the configuration document you created in the previous steps. Replace the credentials, Public API key, URL, and group ID as in previous steps.
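A sketch of the update request, assuming a PUT against the automationConfig resource:

```shell
# Sketch: send the full automation configuration document to Ops Manager.
curl -u "<user@example.net>:<public_api_key>" --digest \
  -H "Content-Type: application/json" \
  -X PUT "https://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig" \
  -d '<configuration_document>'
```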
Upon successful update of the configuration, the API returns the HTTP/1.1 200 OK status code to indicate the request has succeeded.
Confirm successful update of the automation configuration.
Retrieve the automation configuration and confirm it contains the changes.
To retrieve the automation configuration, issue a command similar to the following. Replace the credentials, URL, and group ID as in previous steps.
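A sketch of the request, using the same automationConfig resource as before:

```shell
# Sketch: re-fetch the configuration and inspect it for your changes.
curl -u "<user@example.net>:<public_api_key>" --digest \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/automationConfig"
```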
Verify that the configuration update is deployed.
Use the automationStatus resource to verify the configuration update is fully deployed. Issue the following command, replacing the credentials, Public API key, URL, and project ID, as in previous steps.
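A sketch of the request against the automationStatus resource:

```shell
# Sketch: check deployment status of the automation configuration.
curl -u "<user@example.net>:<public_api_key>" --digest \
  "https://<app-example.net>/api/public/v1.0/groups/<group_id>/automationStatus"
```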
The curl command returns a JSON object containing the processes array and the goalVersion key and value. The processes array contains a document for each server that hosts a MongoDB instance. The new configuration is successfully deployed when all lastGoalVersionAchieved fields in the processes array equal the value specified for goalVersion.
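An abbreviated, illustrative response of that shape (the hostnames, process names, version numbers, and plan steps are invented for this example), in which the third process lags the goal version:

```json
{
  "goalVersion": 2,
  "processes": [
    {
      "hostname": "server1.example.net",
      "name": "<process_name_1>",
      "lastGoalVersionAchieved": 2,
      "plan": []
    },
    {
      "hostname": "server2.example.net",
      "name": "<process_name_2>",
      "lastGoalVersionAchieved": 2,
      "plan": []
    },
    {
      "hostname": "server3.example.net",
      "name": "<process_name_3>",
      "lastGoalVersionAchieved": 1,
      "plan": ["Download", "Start"]
    }
  ]
}
```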
In the example response, processes[2].lastGoalVersionAchieved is behind goalVersion. This indicates that the MongoDB instance at server3.example.net is running one version behind the goalVersion. Wait several seconds and issue the curl command again.
To view the new configuration in the Ops Manager web interface, click Deployment.
Next Steps
To make an additional version of MongoDB available in the cluster, follow the steps in Update the MongoDB Version of a Deployment.