Attached below is my output from sh.status():
shards
[
  {
    _id: 'replicaset1',
    host: 'replicaset1/:27018,:27018',
    state: 1,
    topologyTime: Timestamp({ t: 1687678667, i: 1 })
  },
  {
    _id: 'replicaset2',
    host: 'replicaset2/:27018,:27018',
    state: 1,
    topologyTime: Timestamp({ t: 1687678683, i: 1 }),
    draining: true
  }
]
Did you run the removeShard command?
Was it successful?
What does the status show?
Yes, I did run the removeShard command, and the shard I ran it against is the one shown above in the draining: true state. It stayed there for a few days even though the data inside it was very small.
The cluster was also not working properly the whole time: when I tried running simple commands such as show dbs and a few other operations, it showed an error that one of my shard replica sets does not have a preferedPrimary as its read preference.
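For reference, re-running removeShard against the draining shard reports how much work is left. A minimal sketch, assuming you are connected to the mongos with a user that has sufficient cluster-administration privileges (the shard name replicaset2 is taken from the sh.status() output above):

use admin
// While a shard is draining, removeShard reports progress instead of
// starting a new removal:
db.adminCommand({ removeShard: "replicaset2" })
// Typical shape of the response while draining:
//   { msg: "draining ongoing", state: "ongoing",
//     remaining: { chunks: <n>, dbs: <n>, jumboChunks: <n> },
//     dbsToMove: [ ... ], ok: 1 }
// If remaining.chunks does not go down for days, check that the balancer
// is enabled and actually running:
sh.getBalancerState()
sh.isBalancerRunning()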
For small data, draining should not take that long.
Did you check whether any database still exists on the shard you are dropping?
You have to move it to another shard (movePrimary) and then issue removeShard again.
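A minimal sketch of that sequence, assuming replicaset1 is the shard that will remain and someDB is a hypothetical placeholder for whatever databases dbsToMove reports:

use admin
// List databases whose primary shard is still the draining shard
db.adminCommand({ removeShard: "replicaset2" }).dbsToMove
// Move each such database to a remaining shard
// ("someDB" is a placeholder; substitute the names reported above)
db.adminCommand({ movePrimary: "someDB", to: "replicaset1" })
// Re-issue removeShard; once nothing is left to drain, the response
// returns state: "completed" and the shard disappears from sh.status()
db.adminCommand({ removeShard: "replicaset2" })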
What read preference did you use?
Is it set from the connection string?
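For example, a read preference set from the connection string would look something like this (hostnames are placeholders; the valid modes are primary, primaryPreferred, secondary, secondaryPreferred, and nearest):

mongosh "mongodb://mongos1.example.net:27017,mongos2.example.net:27017/?readPreference=primaryPreferred"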