Weird stats spikes after deploying new MongoDB stack and migration

Hello Everyone,

I was deploying a MongoDB stack on Docker Swarm using this repo as a guide:

The stack worked perfectly for 18 months, but now I have to upgrade it. Using the same stack file, only increasing the number of shards and machines, I managed to deploy a new stack with a working connection string that consists of two routers, just as the repo describes.

The initial stack consisted of 2 machines, 2 configsvrs, 2 shards (a+b), and 1 router.
The new stack consists of 3 machines, 3 configsvrs, 3 shards (a+b+c), and 2 routers.
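
For reference, the connection string of the new stack lists both mongos routers. A sketch of what it looks like, with placeholder hostnames (not the real service names):

export MONGO_URI="mongodb://router01:27017,router02:27017/admin"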

I then migrated the data from the old DB stack using this script:

#!/bin/bash
# Fail a pipeline if any command in it fails, so tee does not mask mongodump/mongorestore errors
set -o pipefail

LOGFILE="/mongodb_migration.log"
SOURCE_MONGO_URI=""
TARGET_MONGO_URI=""
DUMP_PATH="/mongo-dump"
LOCKFILE="/tmp/mongo_migration.lock"

# Check for concurrent execution
if [ -f "$LOCKFILE" ]; then
    echo "Migration is already running." | tee -a "$LOGFILE"
    exit 1
fi

# Create a lockfile
touch "$LOCKFILE"

# Ensure dump path exists
mkdir -p "$DUMP_PATH"

# List of databases to duplicate - make sure there are no trailing commas or spaces
DATABASES=("db1" "db2" "db3" "dbtest")

# Starting the migration
echo "Starting MongoDB migration..." | tee -a "$LOGFILE"
date | tee -a "$LOGFILE"

# Dump and Restore
for db in "${DATABASES[@]}"; do
    echo "--------------------------------" | tee -a "$LOGFILE"
    echo "Processing $db" | tee -a "$LOGFILE"
    
    # Dumping (mongodump writes its progress to stderr, so redirect it into the log as well)
    echo "Dumping $db from source MongoDB..." | tee -a "$LOGFILE"
    mongodump --uri="$SOURCE_MONGO_URI" --db="$db" --out="$DUMP_PATH/$db" --verbose 2>&1 | tee -a "$LOGFILE"

    # Restoring (same stderr redirection so the verbose output lands in the log)
    echo "Restoring $db to target MongoDB..." | tee -a "$LOGFILE"
    mongorestore --uri="$TARGET_MONGO_URI" --nsInclude="$db.*" --dir="$DUMP_PATH/$db" --verbose 2>&1 | tee -a "$LOGFILE"
done

echo "MongoDB migration completed." | tee -a "$LOGFILE"
date | tee -a "$LOGFILE"

# Remove the lockfile
rm "$LOCKFILE"
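
In case it helps anyone reproducing this: a rough way to sanity-check the copy afterwards is to compare object counts per database on both sides. A minimal sketch, assuming mongosh is available and the two URIs above are filled in:

# Compare object counts between source and target for each migrated database
for db in db1 db2 db3 dbtest; do
    src=$(mongosh "$SOURCE_MONGO_URI" --quiet --eval "db.getSiblingDB('$db').stats().objects")
    dst=$(mongosh "$TARGET_MONGO_URI" --quiet --eval "db.getSiblingDB('$db').stats().objects")
    echo "$db: source=$src target=$dst"
done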

The “Performance” window in Compass, connected to the new stack, was behaving normally (before any application connections were made), but after running the migration script above (still with no external connections) I noticed weird spikes in the OPERATIONS and NETWORK stats, as you can see in the picture (6 days after the migration script):

When the spikes started, I assumed they were part of the shard sync process, but it is now 7 days after the migration and not only are the spikes still there, they are also increasing in NETWORK and OPERATIONS size. Here is how it looks (7 days after the migration script):

Has anyone encountered such behavior? Can anyone think of a reason for it?

Any help would be much appreciated.

If I connect through a connection string that points to only one of the routers, the spikes are gone.
It happens only when there is more than one IP in the connection string. Any ideas?
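
To make the comparison concrete, the only difference between the two cases is the host list (placeholder hostnames again), plus a way to inspect what is actually running during a spike:

# Single router in the connection string: no spikes
mongosh "mongodb://router01:27017/admin"

# Both routers in the connection string: spikes appear
mongosh "mongodb://router01:27017,router02:27017/admin"

# List the active operations through one router while a spike is happening
mongosh "mongodb://router01:27017/admin" --quiet --eval 'printjson(db.currentOp({ active: true }))'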