Problem when adding a new node to a replica set

I had a replica set cluster with the following configuration: 2 data-bearing nodes + 1 arbiter, all on version 4.4.6. The hardware was a bit outdated for version 5.0, so we decided to swap these servers out for newer ones (and to use 3 data-bearing nodes) so we can use the new version.

The problem appeared when I tried to add the 1st new node to the cluster. As soon as the new server entered the STARTUP2 state, my clients started to experience problems: sometimes they got an answer from the DB, but sometimes they timed out. The new node was on version 5.0.6 and was added via the mongo shell with the `rs.add("host:port")` command (no other options like `priority: 0` and `hidden: true`). After we detected the problem I tried to remove the new node, but it wouldn't let me.

I ran a tcpdump on the new node and saw that all my clients were trying to connect to it while it was in the STARTUP2 state (my clients use the secondaryPreferred read preference). After the initial sync was over, everything went back to normal. The other 2 nodes I added without problems, using the `rs.add({ host: "host:port", priority: 0, hidden: true })` command.

Is this normal behavior? I ask because I have another cluster that needs this same procedure.
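For reference, here is a sketch of the approach that worked for the later nodes (add the member as hidden with priority 0, then reconfigure once initial sync finishes). The hostname and the assumption that the new member is last in the `members` array are placeholders; adjust for the actual cluster:

```javascript
// Add the new member as hidden with priority 0 so drivers using
// secondaryPreferred will not route reads to it during initial sync.
rs.add({ host: "newhost:27017", priority: 0, hidden: true })

// Wait until rs.status() shows the new member in SECONDARY state
// (initial sync complete), then promote it to a normal secondary.
cfg = rs.conf()
// Assumed here: the new member is the last entry in cfg.members.
cfg.members[cfg.members.length - 1].priority = 1
cfg.members[cfg.members.length - 1].hidden = false
rs.reconfig(cfg)
```

Note that hidden members must have `priority: 0`, so both fields are changed together when unhiding the node.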