Cannot reconnect to cluster during autoscaling

We just faced the same issue, and I believe the documentation should be adjusted so it does not mislead customers. Unlike Kubernetes, the system appears to restart the nodes one by one. For a cluster set up with 3 nodes (2 readers), applications configured to prefer secondaries may therefore shift all read load onto a single node, taking the whole system down with an overloaded database (timeouts) until the cluster is fully scaled and the queues have normalized.
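
For context, here is a minimal sketch of the kind of client configuration that hits this behavior. It assumes a MongoDB-style replica set and the PyMongo driver; the hostnames, database, and collection names are placeholders, not taken from our actual setup. The point is that with a secondary-preferred read preference, every read that would have gone to the restarting reader is routed to the one remaining reader.

```python
# Minimal sketch (assumed MongoDB replica set + PyMongo; names are placeholders).
# With readPreference=secondaryPreferred, reads go to secondaries as long as any
# are up, so a one-by-one restart concentrates the full read load on the
# single remaining secondary.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://node1.example.com,node2.example.com,node3.example.com"
    "/?replicaSet=rs0&readPreference=secondaryPreferred"
)

db = client["appdb"]

# During the rolling restart, these reads all land on the one remaining
# secondary; once it is overloaded, they start timing out.
doc = db["orders"].find_one({"status": "open"})
```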
It would be very helpful if the documentation described these limitations instead of promising zero downtime, and even more helpful if 100% of the nodes remained available during the scaling process, i.e. all new nodes are fully started before the switchover happens.