Sync between two clusters throwing "context cancelled" errors

I’ve deployed 2 replica sets, with each node on its own t2.micro (1 vCPU, 1 GB RAM) — 6 EC2 instances in total, all running RHEL 8.

Both replica sets report a healthy status, and every member is bound to its public IP.

I’ve also installed mongosync on a separate EC2 instance running RHEL 8 with 16GB RAM (since mongosync recommends at least 10GB of memory).

On both replica sets, I’ve created a user (with a password) that has the ‘root’ role on the ‘admin’ db.
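For reference, the user on each replica set was created roughly like this in mongosh, connected to that set’s primary (the username and password here are placeholders, not my real values):

```javascript
// Run in mongosh against the primary of each replica set.
use admin
db.createUser({
  user: "username",   // placeholder
  pwd:  "password",   // placeholder
  roles: [ { role: "root", db: "admin" } ]
})
```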

I run the following command:

mongosync \
      --cluster0 "mongodb://username:password@primarypublicIP:primaryPort,secondaryIP:secondaryPort,secondaryIP:secondaryPort/?authMechanism=SCRAM-SHA-256" \
      --cluster1 "mongodb://username:password@primarypublicIP:primaryPort,secondaryIP:secondaryPort,secondaryIP:secondaryPort/?authMechanism=SCRAM-SHA-256"

I have also added inbound rules for port 27017 on all the EC2 instances.
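To rule out basic networking, I also checked that each member’s port is reachable from the mongosync host. A minimal sketch of that check (the host/port values are placeholders for my actual member addresses):

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Placeholders: replace with the actual replica set member addresses.
    members = [("primarypublicIP", 27017), ("secondaryIP", 27017)]
    for host, port in members:
        print(f"{host}:{port} reachable: {port_open(host, port)}")
```

All members came back reachable, so plain TCP connectivity doesn’t seem to be the problem.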

Running the command returns the following:

"message":"Server heartbeat failed: &{DurationNanos:498974199 Failure:connection(xxxxxxx[-14]) incomplete read of message header: context canceled ConnectionID:xxxxxxxxx[-14] Awaited:true}"}
{"time":"2023-10-05T09:07:27.799540Z","level":"debug","serverID":"d893f6f3","mongosyncID":"coordinator","clusterType":"dst","driver logging":1,"message":"Server closed: &{Address:xxxxxxxxx TopologyID:ObjectID(\"651e7ccfb3a7aa6a8208100e\")}"}
{"time":"2023-10-05T09:07:27.799693Z","level":"debug","serverID":"d893f6f3","mongosyncID":"coordinator","clusterType":"dst","driver logging":1,"message":"Server heartbeat failed: &{DurationNanos:499386383 Failure:connection(13.212.118.248:27017[-13]) incomplete read of message header: context canceled ConnectionID:13.212.118.248:27017[-13] Awaited:true}"}
{"time":"2023-10-05T09:07:27.799743Z","level":"debug","serverID":"d893f6f3","mongosyncID":"coordinator","clusterType":"dst","driver logging":1,"message":"Server closed: &{Address:xxxxxxxxx TopologyID:ObjectID(\"651e7ccfb3a7aa6a8208100e\")}"}
{"time":"2023-10-05T09:07:27.799817Z","level":"debug","serverID":"d893f6f3","mongosyncID":"coordinator","clusterType":"dst","driver logging":1,"message":"Server heartbeat failed: &{DurationNanos:438125753 Failure:connection(xxxxxxxxx[-11]) incomplete read of message header: context canceled ConnectionID:xxxxxxxxx[-11] Awaited:true}"}
{"time":"2023-10-05T09:07:27.799866Z","level":"debug","serverID":"d893f6f3","mongosyncID":"coordinator","clusterType":"dst","driver logging":1,"address":"xxxxxxxxx","reason":"poolClosed","message":"Pool connection closed."}
{"time":"2023-10-05T09:07:27.799904Z","level":"debug","serverID":"d893f6f3","mongosyncID":"coordinator","clusterType":"dst","driver logging":1,"message":"Server closed: &{Address:xxxxxxxxx TopologyID:ObjectID(\"651e7ccfb3a7aa6a8208100e\")}"}
{"time":"2023-10-05T09:07:27.799952Z","level":"debug","serverID":"d893f6f3","mongosyncID":"coordinator","clusterType":"dst","driver logging":1,"message":"Topology closed: &{TopologyID:ObjectID(\"651e7ccfb3a7aa6a8208100e\")}"}
{"time":"2023-10-05T09:07:27.819803Z","level":"debug","serverID":"d893f6f3","mongosyncID":"coordinator","clusterType":"dst","driver logging":1,"command name":"endSessions","connection ID":"xxxxxxxxx[-8]","duration nanos":19647598,"request ID":28,"reply":"{\"ok\": {\"$numberDouble\":\"1.0\"},\"$clusterTime\": {\"clusterTime\": {\"$timestamp\":{\"t\":1696496847,\"i\":1}},\"signature\": {\"hash\": {\"$binary\":{\"base64\":\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\"subType\":\"00\"}},\"keyId\": {\"$numberLong\":\"0\"}}},\"operationTime\": {\"$timestamp\":{\"t\":1696496847,\"i\":1}}}","server connection ID":1654997,"message":"Command succeeded."}

I’m not sure what the issue is or whether I’m missing a step; the documentation doesn’t mention this error.