Why is the time series bucket count different on 2 independent environments with the same data?

Hello Team,
We have been experimenting with time series collections on Community version 7.0.2.
We have approximately 120M socket documents per day, as we open one collection per day (granularity: minutes for the time series collection). Even though both environments run the same MongoDB version and the data in the collections is identical, one environment's bucket count is much higher than the other's, which I did not expect. This causes excessive space usage, and larger dump files as well.
Our environments differ only in RAM and CPU.
The first environment has 128 GB RAM and 12 CPUs, with 3 hosts in a replica set.
The second environment has 16 GB RAM and 8 CPUs, with 3 hosts in a replica set.
For the first environment I expected relatively better bucketing than the second, since the second has lower RAM and CPU specs. However, what happened was just the opposite:

On First environment:

db.system.buckets.DeviceLocations20231201.stats();

{
  ok: 1,
  capped: false,
  wiredTiger: {
    metadata: { formatVersion: 1 },
 ...
  ...
    cache_walk: {
      'Average difference between current eviction generation when the page was last considered': 0,
      'Average on-disk page image size seen': 0,
      'Average time in cache for pages that have been visited by the eviction server': 0,
      'Average time in cache for pages that have not been visited by the eviction server': 0,
      'Clean pages currently in cache': 0,
      'Current eviction generation': 0,
      'Dirty pages currently in cache': 0,
      'Entries in the root page': 0,
      'Internal pages currently in cache': 0,
      'Leaf pages currently in cache': 0,
      'Maximum difference between current eviction generation when the page was last considered': 0,
      'Maximum page size seen': 0,
      'Minimum on-disk page image size seen': 0,
      'Number of pages never visited by eviction server': 0,
      'On-disk page image sizes smaller than a single allocation unit': 0,
      'Pages created in memory and never written': 0,
      'Pages currently queued for eviction': 0,
      'Pages that could not be queued for eviction': 0,
      'Refs skipped during cache traversal': 0,
      'Size of the root page': 0,
      'Total number of pages currently in cache': 0
    },
    'checkpoint-cleanup': {
      'pages added for eviction': 0,
      'pages removed': 0,
      'pages skipped during tree walk': 36442922,
      'pages visited': 107703003
    },
    compression: {
      'compressed page maximum internal page size prior to compression': 4096,
      'compressed page maximum leaf page size prior to compression ': 131072,
      'compressed pages read': 1560302,
      'compressed pages written': 1678387,
      'number of blocks with compress ratio greater than 64': 0,
      'number of blocks with compress ratio smaller than 16': 27528,
      'number of blocks with compress ratio smaller than 2': 1,
      'number of blocks with compress ratio smaller than 32': 4,
      'number of blocks with compress ratio smaller than 4': 598026,
      'number of blocks with compress ratio smaller than 64': 0,
      'number of blocks with compress ratio smaller than 8': 934743,
      'page written failed to compress': 0,
      'page written was too small to compress': 480818
    },
    cursor: {
      'Total number of entries skipped by cursor next calls': 0,
      'Total number of entries skipped by cursor prev calls': 0,
      'Total number of entries skipped to position the history store cursor': 0,
      'Total number of times a search near has exited due to prefix config': 0,
      'Total number of times cursor fails to temporarily release pinned page to encourage eviction of hot or large page': 0,
      'Total number of times cursor temporarily releases pinned page to encourage eviction of hot or large page': 0,
      'bulk loaded cursor insert calls': 0,
      'cache cursors reuse count': 16977881,
      'close calls that result in cache': 16977917,
      'create calls': 410,
      'cursor bound calls that return an error': 0,
      'cursor bounds cleared from reset': 0,
      'cursor bounds comparisons performed': 0,
      'cursor bounds next called on an unpositioned cursor': 0,
      'cursor bounds next early exit': 0,
      'cursor bounds prev called on an unpositioned cursor': 0,
      'cursor bounds prev early exit': 0,
      'cursor bounds search early exit': 0,
      'cursor bounds search near call repositioned cursor': 0,
      'cursor cache calls that return an error': 0,
      'cursor close calls that return an error': 0,
      'cursor compare calls that return an error': 0,
      'cursor equals calls that return an error': 0,
      'cursor get key calls that return an error': 0,
      'cursor get value calls that return an error': 0,
      'cursor insert calls that return an error': 0,
      'cursor insert check calls that return an error': 0,
      'cursor largest key calls that return an error': 0,
      'cursor modify calls that return an error': 0,
      'cursor next calls that return an error': 0,
      'cursor next calls that skip due to a globally visible history store tombstone': 0,
      'cursor next calls that skip greater than 1 and fewer than 100 entries': 0,
      'cursor next calls that skip greater than or equal to 100 entries': 0,
      'cursor next random calls that return an error': 0,
      'cursor prev calls that return an error': 0,
      'cursor prev calls that skip due to a globally visible history store tombstone': 0,
      'cursor prev calls that skip greater than or equal to 100 entries': 0,
      'cursor prev calls that skip less than 100 entries': 0,
      'cursor reconfigure calls that return an error': 0,
      'cursor remove calls that return an error': 0,
      'cursor reopen calls that return an error': 0,
      'cursor reserve calls that return an error': 0,
      'cursor reset calls that return an error': 0,
      'cursor search calls that return an error': 0,
      'cursor search near calls that return an error': 0,
      'cursor update calls that return an error': 0,
      'insert calls': 24232041,
      'insert key and value bytes': Long("187075859619"),
      modify: 0,
      'modify key and value bytes affected': 0,
      'modify value bytes modified': 0,
      'next calls': 37316547,
      'open cursor count': 0,
      'operation restarted': 478671,
      'prev calls': 0,
      'remove calls': 0,
      'remove key bytes removed': 0,
      'reserve calls': 0,
      'reset calls': 55550057,
      'search calls': 12080108,
      'search history store calls': 0,
      'search near calls': 24266901,
      'truncate calls': 0,
      'update calls': 0,
      'update key and value bytes': 0,
      'update value size change': 0
    },
    reconciliation: {
      'VLCS pages explicitly reconciled as empty': 0,
      'approximate byte size of timestamps in pages written': 227856000,
      'approximate byte size of transaction IDs in pages written': 113928000,
      'dictionary matches': 0,
      'fast-path pages deleted': 0,
      'internal page key bytes discarded using suffix compression': 1824328,
      'internal page multi-block writes': 24744,
      'leaf page key bytes discarded using prefix compression': 0,
      'leaf page multi-block writes': 28030,
      'leaf-page overflow keys': 0,
      'maximum blocks required for a page': 53,
      'overflow values written': 0,
      'page reconciliation calls': 69292,
      'page reconciliation calls for eviction': 29022,
      'pages deleted': 11,
      'pages written including an aggregated newest start durable timestamp ': 456912,
      'pages written including an aggregated newest stop durable timestamp ': 260,
      'pages written including an aggregated newest stop timestamp ': 0,
      'pages written including an aggregated newest stop transaction ID': 0,
      'pages written including an aggregated newest transaction ID ': 456912,
      'pages written including an aggregated oldest start timestamp ': 456255,
      'pages written including an aggregated prepare': 0,
      'pages written including at least one prepare': 0,
      'pages written including at least one start durable timestamp': 1667584,
      'pages written including at least one start timestamp': 1667584,
      'pages written including at least one start transaction ID': 1667584,
      'pages written including at least one stop durable timestamp': 0,
      'pages written including at least one stop timestamp': 0,
      'pages written including at least one stop transaction ID': 0,
      'records written including a prepare': 0,
      'records written including a start durable timestamp': 14241000,
      'records written including a start timestamp': 14241000,
      'records written including a start transaction ID': 14241000,
      'records written including a stop durable timestamp': 0,
      'records written including a stop timestamp': 0,
      'records written including a stop transaction ID': 0
    },
    session: { 'object compaction': 0 },
    transaction: {
      'a reader raced with a prepared transaction commit and skipped an update or updates': 0,
      'checkpoint has acquired a snapshot for its transaction': 0,
      'number of times overflow removed value is read': 0,
      'race to read prepared update retry': 0,
      'rollback to stable history store keys that would have been swept in non-dryrun mode': 0,
      'rollback to stable history store records with stop timestamps older than newer records': 0,
      'rollback to stable inconsistent checkpoint': 0,
      'rollback to stable keys removed': 0,
      'rollback to stable keys restored': 0,
      'rollback to stable keys that would have been removed in non-dryrun mode': 0,
      'rollback to stable keys that would have been restored in non-dryrun mode': 0,
      'rollback to stable restored tombstones from history store': 0,
      'rollback to stable restored updates from history store': 0,
      'rollback to stable skipping delete rle': 0,
      'rollback to stable skipping stable rle': 0,
      'rollback to stable sweeping history store keys': 0,
      'rollback to stable tombstones from history store that would have been restored in non-dryrun mode': 0,
      'rollback to stable updates from history store that would have been restored in non-dryrun mode': 0,
      'rollback to stable updates removed from history store': 0,
      'rollback to stable updates that would have been removed from history store in non-dryrun mode': 0,
      'transaction checkpoints due to obsolete pages': 0,
      'update conflicts': 0
    }
  },
  sharded: false,
  size: 58930645847,
  numOrphanDocs: 0,
  storageSize: 14064058368,
  totalIndexSize: 599035904,
  totalSize: 14663094272,
  timeseries: {
    bucketCount: 12151933,
    numBucketInserts: 0,
    numBucketUpdates: 0,
    numBucketsOpenedDueToMetadata: 0,
    numBucketsClosedDueToCount: 0,
    numBucketsClosedDueToSchemaChange: 0,
    numBucketsClosedDueToSize: 0,
    numBucketsClosedDueToTimeForward: 0,
    numBucketsClosedDueToTimeBackward: 0,
    numBucketsClosedDueToMemoryThreshold: 0,
    numCommits: 0,
    numWaits: 0,
    numMeasurementsCommitted: 0,
    numBucketsClosedDueToReopening: 0,
    numBucketsArchivedDueToMemoryThreshold: 0,
    numBucketsArchivedDueToTimeBackward: 0,
    numBucketsReopened: 0,
    numBucketsKeptOpenDueToLargeMeasurements: 0,
    numBucketsClosedDueToCachePressure: 0,
    numBucketsFetched: 0,
    numBucketsQueried: 0,
    numBucketFetchesFailed: 0,
    numBucketQueriesFailed: 0,
    numBucketReopeningsFailed: 0,
    numDuplicateBucketsReopened: 0,
    numBytesUncompressed: 0,
    numBytesCompressed: 0,
    numSubObjCompressionRestart: 0,
    numCompressedBuckets: 0,
    numUncompressedBuckets: 0,
    numFailedDecompressBuckets: 0,
    avgBucketSize: 4849,
    bucketsNs: 'Sirius.system.buckets.DeviceLocations20231201'
  },
  indexSizes: { deviceId_1_dataDate_1: 599035904 },
  avgObjSize: 0,
  ns: 'Sirius.system.buckets.DeviceLocations20231201',
  nindexes: 1,
  scaleFactor: 1
}

Second environment, same command:

{
  ok: 1,
  capped: false,
  wiredTiger: {
   ...
  ...
    cache_walk: {
      'Average difference between current eviction generation when the page was last considered': 0,
      'Average on-disk page image size seen': 0,
      'Average time in cache for pages that have been visited by the eviction server': 0,
      'Average time in cache for pages that have not been visited by the eviction server': 0,
      'Clean pages currently in cache': 0,
      'Current eviction generation': 0,
      'Dirty pages currently in cache': 0,
      'Entries in the root page': 0,
      'Internal pages currently in cache': 0,
      'Leaf pages currently in cache': 0,
      'Maximum difference between current eviction generation when the page was last considered': 0,
      'Maximum page size seen': 0,
      'Minimum on-disk page image size seen': 0,
      'Number of pages never visited by eviction server': 0,
      'On-disk page image sizes smaller than a single allocation unit': 0,
      'Pages created in memory and never written': 0,
      'Pages currently queued for eviction': 0,
      'Pages that could not be queued for eviction': 0,
      'Refs skipped during cache traversal': 0,
      'Size of the root page': 0,
      'Total number of pages currently in cache': 0
    },
    'checkpoint-cleanup': {
      'pages added for eviction': 0,
      'pages removed': 0,
      'pages skipped during tree walk': 17551505,
      'pages visited': 17650045
    },
    compression: {
      'compressed page maximum internal page size prior to compression': 4096,
      'compressed page maximum leaf page size prior to compression ': 131072,
      'compressed pages read': 2789104,
      'compressed pages written': 494260,
      'number of blocks with compress ratio greater than 64': 0,
      'number of blocks with compress ratio smaller than 16': 10829,
      'number of blocks with compress ratio smaller than 2': 707050,
      'number of blocks with compress ratio smaller than 32': 0,
      'number of blocks with compress ratio smaller than 4': 1880722,
      'number of blocks with compress ratio smaller than 64': 0,
      'number of blocks with compress ratio smaller than 8': 190503,
      'page written failed to compress': 3,
      'page written was too small to compress': 217624
    },
    cursor: {
      'Total number of entries skipped by cursor next calls': 0,
      'Total number of entries skipped by cursor prev calls': 0,
      'Total number of entries skipped to position the history store cursor': 0,
      'Total number of times a search near has exited due to prefix config': 0,
      'Total number of times cursor fails to temporarily release pinned page to encourage eviction of hot or large page': 0,
      'Total number of times cursor temporarily releases pinned page to encourage eviction of hot or large page': 0,
      'bulk loaded cursor insert calls': 0,
      'cache cursors reuse count': 751792,
      'close calls that result in cache': 751792,
      'create calls': 77,
      'cursor bound calls that return an error': 0,
      'cursor bounds cleared from reset': 0,
      'cursor bounds comparisons performed': 0,
      'cursor bounds next called on an unpositioned cursor': 0,
      'cursor bounds next early exit': 0,
      'cursor bounds prev called on an unpositioned cursor': 0,
      'cursor bounds prev early exit': 0,
      'cursor bounds search early exit': 0,
      'cursor bounds search near call repositioned cursor': 0,
      'cursor cache calls that return an error': 0,
      'cursor close calls that return an error': 0,
      'cursor compare calls that return an error': 0,
      'cursor equals calls that return an error': 0,
      'cursor get key calls that return an error': 0,
      'cursor get value calls that return an error': 0,
      'cursor insert calls that return an error': 0,
      'cursor insert check calls that return an error': 0,
      'cursor largest key calls that return an error': 0,
      'cursor modify calls that return an error': 0,
      'cursor next calls that return an error': 0,
      'cursor next calls that skip due to a globally visible history store tombstone': 0,
      'cursor next calls that skip greater than 1 and fewer than 100 entries': 0,
      'cursor next calls that skip greater than or equal to 100 entries': 0,
      'cursor next random calls that return an error': 0,
      'cursor prev calls that return an error': 0,
      'cursor prev calls that skip due to a globally visible history store tombstone': 0,
      'cursor prev calls that skip greater than or equal to 100 entries': 0,
      'cursor prev calls that skip less than 100 entries': 0,
      'cursor reconfigure calls that return an error': 0,
      'cursor remove calls that return an error': 0,
      'cursor reopen calls that return an error': 0,
      'cursor reserve calls that return an error': 0,
      'cursor reset calls that return an error': 0,
      'cursor search calls that return an error': 0,
      'cursor search near calls that return an error': 0,
      'cursor update calls that return an error': 0,
      'insert calls': 2938772,
      'insert key and value bytes': Long("119370645878"),
      modify: 0,
      'modify key and value bytes affected': 0,
      'modify value bytes modified': 0,
      'next calls': 10598121,
      'open cursor count': 0,
      'operation restarted': 6985,
      'prev calls': 3,
      'remove calls': 0,
      'remove key bytes removed': 0,
      'reserve calls': 0,
      'reset calls': 5505443,
      'search calls': 4448668,
      'search history store calls': 0,
      'search near calls': 2978299,
      'truncate calls': 0,
      'update calls': 0,
      'update key and value bytes': 0,
      'update value size change': 0
    },
    reconciliation: {
      'VLCS pages explicitly reconciled as empty': 0,
      'approximate byte size of timestamps in pages written': 25044624,
      'approximate byte size of transaction IDs in pages written': 12522312,
      'dictionary matches': 0,
      'fast-path pages deleted': 0,
      'internal page key bytes discarded using suffix compression': 641369,
      'internal page multi-block writes': 7982,
      'leaf page key bytes discarded using prefix compression': 0,
      'leaf page multi-block writes': 71472,
      'leaf-page overflow keys': 0,
      'maximum blocks required for a page': 3,
      'overflow values written': 0,
      'page reconciliation calls': 89669,
      'page reconciliation calls for eviction': 78529,
      'pages deleted': 0,
      'pages written including an aggregated newest start durable timestamp ': 209496,
      'pages written including an aggregated newest stop durable timestamp ': 0,
      'pages written including an aggregated newest stop timestamp ': 0,
      'pages written including an aggregated newest stop transaction ID': 0,
      'pages written including an aggregated newest transaction ID ': 209496,
      'pages written including an aggregated oldest start timestamp ': 209494,
      'pages written including an aggregated prepare': 0,
      'pages written including at least one prepare': 0,
      'pages written including at least one start durable timestamp': 453372,
      'pages written including at least one start timestamp': 453372,
      'pages written including at least one start transaction ID': 453372,
      'pages written including at least one stop durable timestamp': 0,
      'pages written including at least one stop timestamp': 0,
      'pages written including at least one stop transaction ID': 0,
      'records written including a prepare': 0,
      'records written including a start durable timestamp': 1565289,
      'records written including a start timestamp': 1565289,
      'records written including a start transaction ID': 1565289,
      'records written including a stop durable timestamp': 0,
      'records written including a stop timestamp': 0,
      'records written including a stop transaction ID': 0
    },
    session: { 'object compaction': 0 },
    transaction: {
      'a reader raced with a prepared transaction commit and skipped an update or updates': 0,
      'checkpoint has acquired a snapshot for its transaction': 0,
      'number of times overflow removed value is read': 0,
      'race to read prepared update retry': 0,
      'rollback to stable history store keys that would have been swept in non-dryrun mode': 0,
      'rollback to stable history store records with stop timestamps older than newer records': 0,
      'rollback to stable inconsistent checkpoint': 0,
      'rollback to stable keys removed': 0,
      'rollback to stable keys restored': 0,
      'rollback to stable keys that would have been removed in non-dryrun mode': 0,
      'rollback to stable keys that would have been restored in non-dryrun mode': 0,
      'rollback to stable restored tombstones from history store': 0,
      'rollback to stable restored updates from history store': 0,
      'rollback to stable skipping delete rle': 0,
      'rollback to stable skipping stable rle': 0,
      'rollback to stable sweeping history store keys': 0,
      'rollback to stable tombstones from history store that would have been restored in non-dryrun mode': 0,
      'rollback to stable updates from history store that would have been restored in non-dryrun mode': 0,
      'rollback to stable updates removed from history store': 0,
      'rollback to stable updates that would have been removed from history store in non-dryrun mode': 0,
      'transaction checkpoints due to obsolete pages': 0,
      'update conflicts': 0
    }
  },
  sharded: false,
  size: 20256427025,
  numOrphanDocs: 0,
  storageSize: 8362278912,
  totalIndexSize: 78336000,
  totalSize: 8440614912,
  timeseries: {
    bucketCount: 1508453,
    numBucketInserts: 1254611,
    numBucketUpdates: 0,
    numBucketsOpenedDueToMetadata: 56163,
    numBucketsClosedDueToCount: 0,
    numBucketsClosedDueToSchemaChange: 0,
    numBucketsClosedDueToSize: 148105,
    numBucketsClosedDueToTimeForward: 0,
    numBucketsClosedDueToTimeBackward: 0,
    numBucketsClosedDueToMemoryThreshold: 0,
    numCommits: 1254611,
    numWaits: 0,
    numMeasurementsCommitted: 87461045,
    avgNumMeasurementsPerCommit: 69,
    numBucketsClosedDueToReopening: 0,
    numBucketsArchivedDueToMemoryThreshold: 3860,
    numBucketsArchivedDueToTimeBackward: 0,
    numBucketsReopened: 0,
    numBucketsKeptOpenDueToLargeMeasurements: 0,
    numBucketsClosedDueToCachePressure: 1050343,
    numBucketsFetched: 0,
    numBucketsQueried: 0,
    numBucketFetchesFailed: 0,
    numBucketQueriesFailed: 56163,
    numBucketReopeningsFailed: 0,
    numDuplicateBucketsReopened: 0,
    numBytesUncompressed: 71960016212,
    numBytesCompressed: 12773718027,
    numSubObjCompressionRestart: 489056,
    numCompressedBuckets: 1198209,
    numUncompressedBuckets: 239,
    numFailedDecompressBuckets: 0,
    avgBucketSize: 13428,
    bucketsNs: 'CloneService.system.buckets.DeviceLocations20231201'
  },
  indexSizes: { deviceId_1_dataDate_1: 78336000 },
  avgObjSize: 0,
  ns: 'CloneService.system.buckets.DeviceLocations20231201',
  nindexes: 1,
  scaleFactor: 1
}

Both environments are 3-member replica sets (PSS), and both receive data from a C# application at the same rate (around 1k documents per second).

Actual data allocation metrics for both environments are below:

  • 14.85 GB actual storage on SSD (58.93 GB uncompressed) for the first environment (128 GB RAM)
  • 8.32 GB actual storage on SSD (20.26 GB uncompressed) for the second environment (16 GB RAM)
    There is also an almost 8x difference in index size, which costs more on the first environment.

The bucket counts differ by roughly 8x between the environments, even though the granularity is the same.

Both use deviceId as the metaField.

I really don't understand what causes this. Any help or recommendations are welcome.


Hi @cagatay_erem,

Welcome to the MongoDB Community forums.

Could you please share the output of the following for both collections?

> db.<collection_name>.countDocuments()
> db.system.buckets.<collection_name>.countDocuments()

If data ingestion is consistent in both cases, the number of documents should be identical.

Bucket creation and the bucketing pattern depend on the timestamp and metaField, so they are sensitive to the data ingestion pattern.
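In the meantime, one way to see how measurements are spread across buckets per metaField value is an aggregation over the buckets collection. The following is a sketch against the collection name from your stats output; adjust the namespace to yours:

```js
// Count buckets and measurements per metaField value, then surface the
// most fragmented series (many buckets, few measurements per bucket).
db.system.buckets.DeviceLocations20231201.aggregate([
  { $group: {
      _id: "$meta",
      buckets: { $sum: 1 },
      measurements: { $sum: "$control.count" }
  } },
  { $addFields: {
      avgPerBucket: { $divide: ["$measurements", "$buckets"] }
  } },
  { $sort: { buckets: -1 } },
  { $limit: 10 }
]);
```

If the same metaField values show many more buckets on one environment, that points at buckets being closed early rather than at an ingestion difference.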

Also, looking at the bucket counts from the shared stats:

    bucketCount: 12151933, // First Environment 
    bucketCount: 1508453, // Second Environment

The bucketCount is about eight times greater in the first environment. Additionally, could you please share the command you used to create the time series collection in both environments?

Look forward to hearing from you.

Best regards,
Kushagra

Hello @Kushagra_Kesav
Thank you for your time. Here is the information you asked for.

First Environment:

db.DeviceLocations20231201.countDocuments()
120458045

Second Environment:

db.DeviceLocations20231201.countDocuments()
120451210

There may be a slight difference because some devices can send data late, and it may not yet be reflected on the other environment. That's quite normal for our domain.

Another important aspect I wanted to point out is exactly what you mentioned:

When I check the buckets for an example metaField value, this is what the bucketing looks like.
On the first environment:

db.system.buckets.DeviceLocations20231201.findOne({meta: 12100055});
{
  _id: ObjectId("656922000ef36a506272aa12"),
  control: {
    version: 2,
    min: {
      createdDate: 2023-12-01T00:09:54.109Z,
      dataDate: 2023-12-01T00:00:00.000Z,
      dataTypes: [
        1
      ],
      ignition: 0,
      plate: 'REDACTED',
      addressInfo: {
        address: 'REDACTED',
        country: 'REDACTED',
        province: 'REDACTED',
        district: 'REDACTED',
        postalCode: ''
      },
      areaId: 0,
      gpsInfo: REDACTED,
      batteryInfo: {
        batteryStatus: 5,
        batteryLevel: 5
      },
      signalQuality: 3,
      temperatureInfo: {
        t1: 255,
        t2: 255,
        t3: 255,
        t4: 255
      }
    },
	
    max: {
      createdDate: 2023-12-01T03:18:46.140Z,
      dataDate: 2023-12-01T03:16:43.000Z,
      dataTypes: [
        1
      ],
      ignition: 0,
      plate: 'REDACTED',
      addressInfo: {
        address: 'REDACTED',
        country: 'REDACTED',
        province: 'REDACTED',
        district: 'REDACTED',
        postalCode: ''
      },
      areaId: 0,
      gpsInfo: REDACTED,
      batteryInfo: {
        batteryStatus: 5,
        batteryLevel: 5
      },
      signalQuality: 3,
      temperatureInfo: {
        t1: 255,
        t2: 255,
        t3: 255,
        t4: 255
      }
    },
    count: 15
  },
  meta: 12100055,
  data: {
    REDACTED}
}

On the second environment:

db.system.buckets.DeviceLocations20231201.findOne({meta: 12100055});
{
  _id: ObjectId("65692200b2af0bbff55dad5a"),
  control: {
    version: 2,
    min: {
      createdDate: 2023-12-01T00:09:54.109Z,
      dataDate: 2023-12-01T00:00:00.000Z,
      dataTypes: [
        1,
        2,
        72
      ],
      ignition: 0,
      plate: 'REDACTED',
      addressInfo: {
        address: 'REDACTED',
        country: 'REDACTED',
        province: 'REDACTED',
        district: 'REDACTED',
        postalCode: ''
      },
      areaId: 0,
      gpsInfo: REDACTED,
      batteryInfo: {
        batteryStatus: 5,
        batteryLevel: 5
      },
      signalQuality: 0,
      temperatureInfo: {
        t1: 255,
        t2: 255,
        t3: 255,
        t4: 255
      },
      informationEvents: [
        {
          type: 1,
          key: '1',
          value: '{"Duration":47809,"DurationLimit":60,"Stopped":1}'
        }
      ]
    },
	
    max: {
      createdDate: 2023-12-01T06:09:34.039Z,
      dataDate: 2023-12-01T06:07:58.000Z,
      dataTypes: [
        5,
        2,
        72
      ],
      ignition: 1,
      plate: 'REDACTED',
      addressInfo: {
        address: 'REDACTED',
        country: 'REDACTED',
        province: 'REDACTED',
        district: 'REDACTED',
        postalCode: ''
      },
      areaId: 0,
      gpsInfo: REDACTED,
      batteryInfo: {
        batteryStatus: 5,
        batteryLevel: 5
      },
      signalQuality: 5,
      temperatureInfo: {
        t1: 255,
        t2: 255,
        t3: 255,
        t4: 255
      },
      informationEvents: [
        {
          type: 40,
          key: '7',
          value: '{"LastDistrict":"REDACTED"}'
        }
      ]
    },
    count: 199
  },
  meta: 12100055,
  data: {
  REDACTED
  }
}

I had to redact and delete some important schema fields due to regulations.

When comparing bucket averages:
On the first environment:

db.system.buckets.DeviceLocations20231201.aggregate([
  { $group: { _id: null, averageCount: { $avg: "$control.count" } } }
]);
{
  _id: null,
  averageCount: 9.934568221590789
}

Second Environment:

db.system.buckets.DeviceLocations20231201.aggregate([
  { $group: { _id: null, averageCount: { $avg: "$control.count" } } }
]);
{
  _id: null,
  averageCount: 80.96513435114824
}

Here is the creation command:

db.createCollection(
  "DeviceLocations20231201",
  {
    timeseries: {
      timeField: "dataDate",
      metaField: "deviceId",
      granularity: "minutes"
    }
  }
);
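For completeness, the effective options actually stored on each environment can be double-checked with `getCollectionInfos` (a mongosh sketch; the index `[0]` assumes exactly one matching collection):

```js
// Show the stored time series options (timeField, metaField, granularity)
// so both environments can be compared field by field.
db.getCollectionInfos({ name: "DeviceLocations20231201" })[0].options;
```

On both of our environments this returns the same granularity, so the difference is not in the collection options.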

So I would expect the buckets on the first environment to hold more data than 15 measurements, as the second environment's buckets do, as shown above. The small buckets cause more space usage on my SSDs, and larger indexes as well.
Thank you for your help.

Hello @Kushagra_Kesav
I have dug further into the issue and found that rotating daily collections, which was our architectural decision, is what causes it. My issue is very similar to this topic.

While trying to understand the issue, I also shared it with the Stack Exchange community:

wiredtiger - MongoDb Timeseries Data at high Load Causes Cache Pressure in Time - Database Administrators Stack Exchange.

I believe I have identified the issue; my proposal to fix it is on the MongoDB feedback portal.

I really like the time series feature in MongoDB, and it would be great to enhance it to support any architectural design.

That way, we would not need to restart, or try to force WiredTiger to flush to disk the buckets left behind, when we rotate collections daily.
Thank you.
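For anyone hitting the same symptom: the second environment's stats above show numBucketsClosedDueToCachePressure: 1050343, i.e. most buckets were closed early under cache pressure rather than being filled. A quick way to check these counters on any time series collection, without the full stats dump, is a $collStats aggregation (a mongosh sketch; adjust the collection name, and note the exact field set varies by server version):

```js
// Surface the time series bucket-close counters for one collection.
db.DeviceLocations20231201.aggregate([
  { $collStats: { storageStats: {} } },
  { $project: {
      bucketCount: "$storageStats.timeseries.bucketCount",
      closedDueToCachePressure:
        "$storageStats.timeseries.numBucketsClosedDueToCachePressure",
      closedDueToSize: "$storageStats.timeseries.numBucketsClosedDueToSize"
  } }
]);
```

A high closedDueToCachePressure relative to bucketCount suggests too many collections (and therefore open buckets) are competing for the WiredTiger cache at once.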