We operate a sharded MongoDB environment consisting of a config server replica set (primary and secondary nodes), mongos routers, and multiple shards, each of which is a replica set. We recently added several new shards to the cluster, but rebalancing is not occurring as expected: the data remains heavily imbalanced, and the original shards are running out of storage. Here is the chunk distribution across the shards:
{ "_id" : "node-data1", "count" : 372246 } { "_id" : "node-data2", "count" : 372236 } { "_id" : "node-data3", "count" : 372239 } { "_id" : "node-data4", "count" : 372229 } { "_id" : "node-data5", "count" : 109849 } { "_id" : "node-data6", "count" : 109693 } { "_id" : "node-data7", "count" : 46619 } { "_id" : "node-data8", "count" : 46535 }
I am seeing many jumbo chunks on one of the largest collections, and balancing is proceeding very slowly. I have confirmed that the autosplitter is running on the shard nodes; note that the same chunk ranges keep coming back for split-point lookups:

2024-10-17T11:25:24.248+0000 I SHARDING [ChunkSplitter-1488] request split points lookup for chunk proddb.metrics { : -3216651796548520950 } -->> { : -3216609153408564802 }
2024-10-17T11:27:43.926+0000 I SHARDING [ChunkSplitter-1489] request split points lookup for chunk proddb.metrics { : -2441014098372422508 } -->> { : -2440993494113865685 }
2024-10-17T11:29:45.360+0000 I SHARDING [ChunkSplitter-1490] request split points lookup for chunk proddb.metrics { : 4074468535445309800 } -->> { : 4074496847277228083 }
2024-10-17T11:32:50.063+0000 I SHARDING [ChunkSplitter-1491] request split points lookup for chunk proddb.metrics { : -2441014098372422508 } -->> { : -2440993494113865685 }
2024-10-17T11:33:33.803+0000 I SHARDING [ChunkSplitter-1492] request split points lookup for chunk proddb.metrics { : -3216651796548520950 } -->> { : -3216609153408564802 }
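To size the jumbo-chunk problem, the flagged chunks can be counted directly in the config database; a minimal sketch, assuming a version where config.chunks documents are keyed by the ns field (pre-5.0):

use config
// Count chunks flagged as jumbo for the affected collection
db.chunks.countDocuments({ ns: "proddb.metrics", jumbo: true })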
The chunk size setting comes back null (so the default applies), and the balancer is enabled and not stopped:

db.settings.findOne()
null
use config
switched to db config
db.settings.findOne()
{ "_id" : "balancer", "mode" : "full", "stopped" : false }
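For completeness, the balancer state can also be confirmed from a mongos with the standard shell helpers (stock commands, shown here as a sketch):

sh.getBalancerState()                    // true if the balancer is enabled
sh.isBalancerRunning()                   // true if a balancing round is currently in progress
db.adminCommand({ balancerStatus: 1 })   // reports mode and whether a round is in progress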
Also, per the logs, chunks are being split into 2 parts at the default maximum chunk size of approximately 64 MB (maxChunkSizeBytes 67108864):
2024-10-17T11:23:34.302+0000 W SHARDING [ChunkSplitter-1487] Finding the auto split vector for prodnam.file_reference completed over { file: 1, gcid: 1 } - numSplits: 1 - duration: 2031ms
2024-10-17T11:23:34.331+0000 I SHARDING [ChunkSplitter-1487] autosplitted prodnam.reference chunk: shard: site-data1, lastmod: 4420|2||6089abb02e729dbed8945b52, [{ file: "nLmAstBHRFC00HJsTW5o95/tOyr27JBSBtztGUuU8IY=", gcid: "PjAoXrMCOxaXYXlJzoRRUEABAPasUVYBS55hXBJUcyM=" }, { file: "nM6QCJ84zHEb742BjWpSVHCCzRAvzZrBX2ohw8xO+6c=", gcid: "s8A57ngkqsEeaAA0esVy1Vx94gIvpDt5vP0XzxLq9i4=" }) into 2 parts (maxChunkSizeBytes 67108864)
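In case it helps with diagnosis, recent balancer rounds and chunk-migration outcomes (including errors) are recorded in the config database and can be inspected along these lines (a sketch):

use config
// Most recent balancer rounds, with any errors they hit
db.actionlog.find({ what: "balancer.round" }).sort({ time: -1 }).limit(5).pretty()
// Most recent chunk migration events (start/commit/error)
db.changelog.find({ what: /moveChunk/ }).sort({ time: -1 }).limit(10).pretty()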
I would greatly appreciate your suggestions.