MongoDB Atlas not showing multiple shards

Hi all,

I have two collections, and I made "account_id" the shard key for both. I created the collection range_singleshardkey with "account_id" as a ranged shard key, and another collection demo with "account_id" as a hashed shard key. When I run sh.printShardingStatus(), I see only one shard. I created 1 KB documents with 500 different tenant_ids and 4 documents per tenant. I am using an M30 cluster. Does MongoDB decide whether a real shard is required based on the amount of data?
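For context, the setup was roughly the following (a reconstruction from the output below, not the exact commands I ran):

```javascript
// mongosh -- sketch of the setup described above
sh.enableSharding("test")

// Ranged shard key on account_id
sh.shardCollection("test.range_singleshardkey", { account_id: 1 })

// Hashed shard key on account_id
sh.shardCollection("test.demo", { account_id: "hashed" })

// Then inspect the distribution
sh.printShardingStatus()
```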


ShardedDataDistribution

[
  {
    ns: 'test.range_singleshardkey',
    shards: [
      {
        shardName: 'config',
        numOrphanedDocs: 0,
        numOwnedDocuments: 301500,
        ownedSizeBytes: 19899000,
        orphanedSizeBytes: 0
      }
    ]
  },
  {
    ns: 'test.demo',
    shards: [
      {
        shardName: 'config',
        numOrphanedDocs: 0,
        numOwnedDocuments: 1504,
        ownedSizeBytes: 99264,
        orphanedSizeBytes: 0
      }
    ]
  },
  {
    ns: 'config.system.sessions',
    shards: [
      {
        shardName: 'config',
        numOrphanedDocs: 0,
        numOwnedDocuments: 64,
        ownedSizeBytes: 9024,
        orphanedSizeBytes: 0
      }
    ]
  }
]


[
  {
    database: {
      _id: 'test',
      primary: 'config',
      version: {
        uuid: UUID('b5d27584-7101-47f5-ab1c-d09647a27017'),
        timestamp: Timestamp({ t: 1750908545, i: 3 }),
        lastMod: 1
      }
    },
    collections: {
      'test.demo': {
        shardKey: {
          account_id: 'hashed'
        },
        unique: false,
        balancing: true,
        chunkMetadata: [
          {
            shard: 'config',
            nChunks: 1
          }
        ],
        chunks: [
          {
            min: {
              account_id: MinKey()
            },
            max: {
              account_id: MaxKey()
            },
            'on shard': 'config',
            'last modified': Timestamp({ t: 1, i: 0 })
          }
        ],
        tags: []
      },
      'test.range_singleshardkey': {
        shardKey: {
          account_id: 1
        },
        unique: false,
        balancing: true,
        chunkMetadata: [
          {
            shard: 'config',
            nChunks: 1
          }
        ],
        chunks: [
          {
            min: {
              account_id: MinKey()
            },
            max: {
              account_id: MaxKey()
            },
            'on shard': 'config',
            'last modified': Timestamp({ t: 1, i: 0 })
          }
        ],
        tags: []
      }
    }
  },
]

The analyzeShardKey output is here:

db.range_singleshardkey.analyzeShardKey({"account_id": 1})

{
  keyCharacteristics: {
    numDocsTotal: Long('301500'),
    numOrphanDocs: Long('0'),
    avgDocSizeBytes: Long('66'),
    numDocsSampled: Long('301500'),
    isUnique: false,
    numDistinctValues: Long('1000'),
    mostCommonValues: [ [Object], [Object], [Object], [Object], [Object] ],
    monotonicity: {
      recordIdCorrelationCoefficient: 0.0145763473,
      type: 'not monotonic'
    },
    note: 'Due to performance reasons, the analyzeShardKey command does not filter out orphan documents when calculating metrics about the characteristics of the shard key. Therefore, if "numOrphanDocs" is large relative to "numDocsTotal", you may want to rerun the command at some other time to get more accurate "numDistinctValues" and "mostCommonValues" metrics.'
  },
  readDistribution: {
    sampleSize: {
      total: Long('0'),
      find: Long('0'),
      aggregate: Long('0'),
      count: Long('0'),
      distinct: Long('0')
    }
  },
  writeDistribution: {
    sampleSize: {
      total: Long('0'),
      update: Long('0'),
      delete: Long('0'),
      findAndModify: Long('0')
    }
  },
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1750929919, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('mZx4vSbeb43QjKhkYHsN8K6FJ5A=', 0),
      keyId: Long('7517901593753157655')
    }
  },
  operationTime: Timestamp({ t: 1750929919, i: 1 })
}

Thanks
Guru

Sharding distributes data by chunks. The default/initial chunk size is 128 MiB. The total size of your data is well below 128 MiB, so everything fits in a single chunk and all data stays on one shard.
Add more data so you have at least 128 MiB; then chunk splitting and balancing should start.
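If you want multiple chunks to exist immediately, even with very little data, a hashed shard key can be pre-split at creation time (a sketch; the numInitialChunks option only applies to hashed shard keys):

```javascript
// Create the hashed-key collection with 4 initial chunks, so the
// balancer can spread them across shards right away instead of
// waiting for the data to outgrow the chunk size.
sh.shardCollection("test.demo", { account_id: "hashed" }, false, { numInitialChunks: 4 })

// The cluster-wide chunk size (in MiB) can be checked here; the
// document only exists if the default has been overridden.
db.getSiblingDB("config").settings.findOne({ _id: "chunksize" })
```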

Thanks for your response. I had created the cluster with a single shard; that was the root of the problem. Once I created a cluster with two shards, it worked.
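For anyone hitting the same issue, the number of shards actually in the cluster can be confirmed directly from mongosh (either of these works):

```javascript
// Lists every shard registered in the cluster; a single entry means
// all chunks necessarily live on that one shard.
db.adminCommand({ listShards: 1 })

// Or count the registered shards via the config database:
db.getSiblingDB("config").shards.countDocuments({})
```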