Unable to Reclaim Disk Space after Dropping Databases on Sharded Cluster (WiredTiger, v4.0.0)

Re-adding the "Approaches Tried in Dev" section, as it was not formatted properly in my initial request.

**Approaches tried (description & outcome):**

- **`db.dropDatabase()`**: Dropping the inactive databases in both dev and prod successfully removes the collections and indexes from MongoDB, but the data files remain large on disk. There is no immediate OS-level space reclamation.
- **`compact` command**: `compact` works only at the collection level, but we dropped the databases directly, so there are no collections left to compact.
- **`repair` command**: The operation rebuilds all collections and indexes into new `.wt` files, but the final directory size remains roughly the same.
- **Manually deleting `.wt` files in `dbPath`**: WiredTiger stores the data in per-collection files (`Collection1.wt`, `Collection2.wt`, ...) and per-index files (`index1.wt`, `index2.wt`, ...). Deleting these files by hand while the WiredTiger metadata still references them risks corrupting the store.
- **Adding a brand-new secondary and allowing resync**: Add a clean secondary to each shard's replica set and let it replicate only the remaining (non-dropped) databases/collections. Once the initial sync is complete, step down the old primary, promote the new node, and rebuild or delete the old data files. Drawbacks: extra hardware/network cost; the data size is very large, so the resync takes an unacceptably long time; risk of data divergence if not all collections are strictly identical.
- **`mongodump` + drop data files + `mongorestore`**: `mongodump` the entire production data set (all shards), stop all `mongod` processes, delete all files under `dbPath`, then `mongorestore` only the active databases. Drawbacks: creating dumps of hundreds of GBs sometimes produces corrupted dump files (especially for large collections); requires significant downtime (all shards must be offline); risk of missing oplog entries or replication lag; restoring hundreds of GB takes days, which we cannot afford.
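Since several of the approaches above hinge on whether the on-disk footprint actually shrinks, a small helper that totals the size of the WiredTiger data files under `dbPath` can make a before/after comparison concrete. This is a minimal sketch, not part of any MongoDB tooling; the `wt_disk_usage` name and the `/var/lib/mongodb` path are assumptions, so substitute your actual `dbPath`.

```python
import os

def wt_disk_usage(db_path):
    """Sum the on-disk bytes of WiredTiger data files (*.wt) under db_path.

    Hypothetical helper: run it before and after db.dropDatabase() (or after
    an initial sync) to see whether the OS-level footprint actually shrank.
    Returns (total_bytes, {file_path: size_in_bytes}).
    """
    total = 0
    per_file = {}
    for root, _dirs, files in os.walk(db_path):
        for name in files:
            if name.endswith(".wt"):
                path = os.path.join(root, name)
                size = os.path.getsize(path)
                per_file[path] = size
                total += size
    return total, per_file

if __name__ == "__main__":
    db_path = "/var/lib/mongodb"  # assumption: replace with your dbPath
    if os.path.isdir(db_path):
        total, per_file = wt_disk_usage(db_path)
        print(f"{len(per_file)} .wt files, {total / 1024 ** 3:.2f} GiB")
```

Comparing the totals immediately after a drop and again after a full resync would show how much space each step actually returns to the OS.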