Sharded collection: chunk migration has stopped

Hello MongoDB gurus out there:

I need some help troubleshooting a migration issue on a sharded collection. We found that one shard group is using much more disk space than the other, and that is when we noticed that chunk migration has stopped on the collection fs.chunks.

autoinputserver.fs.chunks
        shard key: { "files_id" : 1, "n" : 1 }
        unique: false
        balancing: false
        chunks:
                sf_autoinput_group1     25401
                sf_autoinput_group2     3019
        too many chunks to print, use verbose if you want to force print

We are on MongoDB 3.4.14. I have already checked with sh.status(true) and found no jumbo chunks (the exact checks I ran are in the snippet after the log excerpt below). Every so often I can see sharding events like the following in mongod.log, but no actual migration takes place.

2020-03-28T15:13:02.947Z I SHARDING [conn101473] request split points lookup for chunk autoinputserver.fs.chunks { : ObjectId('5e7f41b7a800af7dfc9ba4d1'), : 514 } -->> { : MaxKey, : MaxKey }
2020-03-28T15:13:02.958Z I SHARDING [conn101473] received splitChunk request: { splitChunk: "autoinputserver.fs.chunks", configdb: "sf-autoinput/mongo-configsvr-vm1.fra1.framework:27034,mongo-configsvr-vm2.fra1.framework:27034,mongo-configsvr-vm3.fra1.framework:27034", from: "sf_autoinput_group1", keyPattern: { files_id: 1.0, n: 1.0 }, shardVersion: [ Timestamp 2578000|28555, ObjectId('5b582192ed63e64322a1275e') ], min: { files_id: ObjectId('5e7f41b7a800af7dfc9ba4d1'), n: 514 }, max: { files_id: MaxKey, n: MaxKey }, splitKeys: [ { files_id: ObjectId('5e7f41b7a800af7dfc9ba4d1'), n: 838 }, { files_id: ObjectId('5e7f697efcf0ca0f05670b4f'), n: 8 } ] }
2020-03-28T15:13:02.967Z I SHARDING [conn101473] distributed lock 'autoinputserver.fs.chunks' acquired for 'splitting chunk [{ files_id: ObjectId('5e7f41b7a800af7dfc9ba4d1'), n: 514 }, { files_id: MaxKey, n: MaxKey }) in autoinputserver.fs.chunks', ts : 5e7f697e3815f68730cdd934
2020-03-28T15:13:02.968Z I SHARDING [conn101473] Refreshing chunks for collection autoinputserver.fs.chunks based on version 2578|28555||5b582192ed63e64322a1275e
2020-03-28T15:13:03.032Z I SHARDING [CatalogCacheLoader-113] Refresh for collection autoinputserver.fs.chunks took 64 ms and found version 2578|28555||5b582192ed63e64322a1275e
2020-03-28T15:13:03.053Z I SHARDING [conn101473] Refreshing chunks for collection autoinputserver.fs.chunks based on version 2578|28555||5b582192ed63e64322a1275e
2020-03-28T15:13:03.084Z I SHARDING [CatalogCacheLoader-113] Refresh for collection autoinputserver.fs.chunks took 31 ms and found version 2578|28558||5b582192ed63e64322a1275e
2020-03-28T15:13:03.227Z I SHARDING [conn101473] Refreshing metadata for collection autoinputserver.fs.chunks from collection version: 2578|28555||5b582192ed63e64322a1275e, shard version: 2578|28555||5b582192ed63e64322a1275e to collection version: 2578|28558||5b582192ed63e64322a1275e, shard version: 2578|28558||5b582192ed63e64322a1275e
2020-03-28T15:13:03.330Z I SHARDING [conn101473] distributed lock with ts: 5e7f697e3815f68730cdd934' unlocked.
2020-03-28T15:13:03.578Z I SHARDING [conn101473] request split points lookup for chunk autoinputserver.fs.chunks { : ObjectId('5e7f697efcf0ca0f05670b4f'), : 8 } -->> { : MaxKey, : MaxKey }
2020-03-28T15:13:03.876Z I SHARDING [conn101473] request split points lookup for chunk autoinputserver.fs.chunks { : ObjectId('5e7f697efcf0ca0f05670b4f'), : 8 } -->> { : MaxKey, : MaxKey }
2020-03-28T15:13:04.216Z I SHARDING [conn101473] request split points lookup for chunk autoinputserver.fs.chunks { : ObjectId('5e7f697efcf0ca0f05670b4f'), : 8 } -->> { : MaxKey, : MaxKey }
2020-03-28T15:13:04.527Z I SHARDING [conn101473] request split points lookup for chunk autoinputserver.fs.chunks { : ObjectId('5e7f697efcf0ca0f05670b4f'), : 8 } -->> { : MaxKey, : MaxKey }
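
For reference, here is roughly what I have checked so far from a mongo shell connected to a mongos, plus a couple of config-metadata queries I am assuming might be relevant but have not dug into yet (the collection name is the one shown above):

sh.getBalancerState()       // is the balancer enabled cluster-wide?
sh.isBalancerRunning()      // is a balancing round in progress right now?
sh.status(true)             // full chunk listing; this is where I looked for chunks flagged "jumbo"

// additional checks I am assuming could help, not run yet:
// chunks flagged as jumbo in the config metadata
db.getSiblingDB("config").chunks.count({ ns: "autoinputserver.fs.chunks", jumbo: true })

// per-collection balancing flag (sh.status prints this as "balancing" in the output above)
db.getSiblingDB("config").collections.findOne({ _id: "autoinputserver.fs.chunks" })

// recent moveChunk entries, to confirm whether any migrations actually ran for this collection
db.getSiblingDB("config").changelog.find(
    { ns: "autoinputserver.fs.chunks", what: /moveChunk/ }
).sort({ time: -1 }).limit(5)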

Can anyone give me some pointers on where to look to troubleshoot this chunk migration issue?

Thanks,
Eric