Deploy a Sharded Cluster

The topics on this page present an ordered sequence of the tasks required to set up a sharded cluster. Before deploying a sharded cluster for the first time, consider the Sharded Cluster Overview and Sharded Cluster Architectures documents.

To set up a sharded cluster, complete the following sequence of tasks in the order defined below:

  1. Start the Config Server Database Instances
  2. Start the mongos Instances
  3. Add Shards to the Cluster
  4. Enable Sharding for a Database
  5. Enable Sharding for a Collection

Warning

Sharding and “localhost” Addresses

If you use either “localhost” or 127.0.0.1 as the hostname portion of any host identifier, for example as the host argument to addShard or the value to the --configdb runtime option, then you must use “localhost” or 127.0.0.1 for all host settings for any MongoDB instances in the cluster. If you mix localhost addresses and remote host addresses, MongoDB will produce an error.

Start the Config Server Database Instances

The config server processes are mongod instances that store the cluster’s metadata. You designate a mongod as a config server using the --configsvr option. Each config server stores a complete copy of the cluster’s metadata.

In production deployments, you must deploy exactly three config server instances, each running on a different server, to ensure good uptime and data safety. In test environments, you can run all three instances on a single server.

Config server instances receive relatively little traffic and demand only a small portion of system resources. Therefore, you can run an instance on a system that runs other cluster components.

  1. Create data directories for each of the three config server instances. By default, a config server stores its data files in the /data/configdb directory. You can choose a different location. To create a data directory, issue a command similar to the following for each instance:

    mkdir /data/configdb
    
  2. Start the three config server instances. Start each by issuing a command using the following syntax:

    mongod --configsvr --dbpath <path> --port <port>
    

    The default port for config servers is 27019. You can specify a different port. The following example starts a config server using the default port and default data directory:

    mongod --configsvr --dbpath /data/configdb --port 27019
    

    For additional command options, see mongod or Configuration File Options. A configuration file equivalent of this command appears after this procedure.

    Note

    All config servers must be running and available when you first initiate a sharded cluster.
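
As an alternative to passing these options on the command line, you can place the same settings in a configuration file and start the instance with the --config option. The following is a minimal sketch using the ini-style configuration file format of this release; the file path is an assumption chosen for illustration:

# /etc/mongod-configsvr.conf -- hypothetical path; adjust for your environment
configsvr = true
dbpath = /data/configdb
port = 27019

mongod --config /etc/mongod-configsvr.conf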

Start the mongos Instances

The mongos instances are lightweight and do not require data directories. You can run a mongos instance on a system that runs other cluster components, such as on an application server or a server running a mongod process. By default, a mongos instance runs on port 27017.

When you start the mongos instance, specify the hostnames of the three config servers, either in the configuration file or as command-line parameters. For operational flexibility, use DNS names for the config servers rather than explicit IP addresses. If you do not use resolvable hostnames, you cannot change the config server names or IP addresses without restarting every mongos and mongod instance.

To start a mongos instance, issue a command using the following syntax:

mongos --configdb <config server hostnames>

For example, to start a mongos that connects to the config server instances running on the following hosts, each on the default port:

  • cfg0.example.net
  • cfg1.example.net
  • cfg2.example.net

You would issue the following command:

mongos --configdb cfg0.example.net:27019,cfg1.example.net:27019,cfg2.example.net:27019

Each mongos in a sharded cluster must use the same configdb string, with identical host names listed in identical order.

If you start a mongos instance with a string that does not exactly match the string used by the other mongos instances in the cluster, the mongos instance returns a config database string error and refuses to start.
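
You can also specify the config servers in the configuration file rather than on the command line, using the configdb setting. The following is a minimal sketch using the ini-style configuration file format of this release and the hostnames from the example above; the file path is an assumption chosen for illustration:

# /etc/mongos.conf -- hypothetical path; adjust for your environment
port = 27017
configdb = cfg0.example.net:27019,cfg1.example.net:27019,cfg2.example.net:27019

mongos --config /etc/mongos.conf

The requirement that every mongos use an identical configdb string applies equally to this form.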

Add Shards to the Cluster

A shard can be a standalone mongod or a replica set. In a production environment, each shard should be a replica set.

  1. From a mongo shell, connect to the mongos instance. Issue a command using the following syntax:

    mongo --host <hostname of machine running mongos> --port <port mongos listens on>
    

    For example, if a mongos is accessible at mongos0.example.net on port 27017, issue the following command:

    mongo --host mongos0.example.net --port 27017
    
  2. Add each shard to the cluster using the sh.addShard() method, as shown in the examples below. Issue sh.addShard() separately for each shard. If the shard is a replica set, specify the name of the replica set and specify a member of the set. In production deployments, all shards should be replica sets.

    Optional

    You can instead use the addShard database command, which lets you specify a name and maximum size for the shard. If you do not specify these, MongoDB automatically assigns a name and maximum size. To use the database command, see addShard. A sketch of the database command form appears after this procedure.

    The following are examples of adding a shard with sh.addShard():

    • To add a shard for a replica set named rs1 with a member running on port 27017 on mongodb0.example.net, issue the following command:

      sh.addShard( "rs1/mongodb0.example.net:27017" )
      

      Changed in version 2.0.3.

      For MongoDB versions prior to 2.0.3, you must specify all members of the replica set. For example:

      sh.addShard( "rs1/mongodb0.example.net:27017,mongodb1.example.net:27017,mongodb2.example.net:27017" )
      
    • To add a shard for a standalone mongod on port 27017 of mongodb0.example.net, issue the following command:

      sh.addShard( "mongodb0.example.net:27017" )
      

    Note

    It might take some time for chunks to migrate to the new shard.
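
As noted above, the addShard database command lets you assign a shard name and a maximum size when you add a shard. The following is a minimal sketch, run against the admin database from a mongos; the name and maxSize values are illustrative assumptions:

db.getSiblingDB("admin").runCommand( {
    addShard: "rs1/mongodb0.example.net:27017",
    name: "shard0001",   // optional; MongoDB assigns a name if omitted
    maxSize: 10240       // optional; maximum size for the shard in megabytes
} )

After adding shards, you can run sh.status() from the mongos to confirm that each shard appears in the cluster.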

Enable Sharding for a Database

Before you can shard a collection, you must enable sharding for the collection’s database. Enabling sharding for a database does not redistribute data but makes it possible to shard the collections in that database.

Once you enable sharding for a database, MongoDB assigns a primary shard for that database where MongoDB stores all data before sharding begins.

  1. From a mongo shell, connect to the mongos instance. Issue a command using the following syntax:

    mongo --host <hostname of machine running mongos> --port <port mongos listens on>
    
  2. Issue the sh.enableSharding() method, specifying the name of the database for which to enable sharding. Use the following syntax:

    sh.enableSharding("<database>")
    

Optionally, you can enable sharding for a database using the enableSharding command, which uses the following syntax:

db.runCommand( { enableSharding: <database> } )
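
For example, to enable sharding for the records database used in the collection examples below, you could connect to a mongos and issue:

sh.enableSharding("records")

You can then run sh.status() to confirm the setting and to see the database’s primary shard.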

Enable Sharding for a Collection

You enable sharding on a per-collection basis.

  1. Determine what you will use for the shard key. Your selection of the shard key affects the efficiency of sharding. See the selection considerations listed in Shard Key Selection.

  2. Enable sharding for a collection by issuing the sh.shardCollection() method in the mongo shell. The method uses the following syntax:

    sh.shardCollection("<database>.<collection>", shard-key-pattern)
    

    Replace the <database>.<collection> string with the full namespace of the collection, which consists of the name of the database, a dot (.), and the name of the collection. The shard-key-pattern represents your shard key, which you specify in the same form as you would an index key pattern.

Example

The following sequence of commands shards four collections:

sh.shardCollection("records.people", { "zipcode": 1, "name": 1 } )
sh.shardCollection("people.addresses", { "state": 1, "_id": 1 } )
sh.shardCollection("assets.chairs", { "type": 1, "_id": 1 } )
sh.shardCollection("events.alerts", { "hashed_id": 1 } )

In order, these operations shard:

  1. The people collection in the records database using the shard key { "zipcode": 1, "name": 1 }.

    This shard key distributes documents by the value of the zipcode field. If a number of documents have the same value for this field, then that chunk will be splittable by the values of the name field.

  2. The addresses collection in the people database using the shard key { "state": 1, "_id": 1 }.

    This shard key distributes documents by the value of the state field. If a number of documents have the same value for this field, then that chunk will be splittable by the values of the _id field.

  3. The chairs collection in the assets database using the shard key { "type": 1, "_id": 1 }.

    This shard key distributes documents by the value of the type field. If a number of documents have the same value for this field, then that chunk will be splittable by the values of the _id field.

  4. The alerts collection in the events database using the shard key { "hashed_id": 1 }.

    This shard key distributes documents by the value of the hashed_id field. Presumably this is a computed value that holds the hash of some value in your documents and is able to evenly distribute documents throughout your cluster.
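
One way to produce such a computed value is to hash an existing field at insert time with the mongo shell’s built-in hex_md5() helper. The following is a sketch under that assumption; the source value and the other fields are illustrative:

// hex_md5() is built into the mongo shell; the other fields are examples only.
var sourceValue = "user4096";
db.getSiblingDB("events").alerts.insert( {
    hashed_id: hex_md5(sourceValue),
    message: "disk space low",
    created: new Date()
} )

Because the hash output is effectively uniform, documents inserted this way tend to distribute evenly across the chunks of the { "hashed_id": 1 } shard key.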