Why do you bother with sharding if you only have one shard?
It is strongly recommended not to run 2 arbiters in the same replica set.
You are wasting resources while adding latency and complexity. Running multiple instances on the same physical hardware will most likely reduce performance, as the different instances compete for shared resources.
As a temporary fix for your unavailability problem, you may use the following NOT RECOMMENDED configuration.
Connect to the server-3 config server and remove the server-1 and server-2 members. Do the same with your shard replica set. Start 3 new instances on server-3. Add the first to the config server replica set as a data-bearing node. Add the 2nd new instance to the config server replica set as an arbiter. Finally, add the third new instance as a data-bearing node to your shard replica set.
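A rough sketch of the member additions in mongosh, once the three new mongod processes are running on server-3 (the ports 27021-27023 are hypothetical; adjust to your deployment):

```javascript
// NOT RECOMMENDED long-term: all of these members live on the single server-3.

// Connected to the config server replica set:
rs.add({ host: "server-3:27021" });  // new data-bearing member
rs.addArb("server-3:27022");         // arbiter, giving a PSA topology

// Connected to the shard replica set:
rs.add({ host: "server-3:27023" });  // new data-bearing member
```

These commands must be run against whichever surviving member is primary after the unreachable members have been removed from each replica set's configuration.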
If you are a little experienced with file system operations, you may seed the 2 new data-bearing instances with a file system snapshot taken from the current data-bearing nodes.
This will give you:
- 1 mongos
- 1 PSA configuration for the config server
- 1 PSA configuration for your shard
Depending on the amount of data and the capacity of server-3, this configuration will struggle, but it might be functional.
But if you can, just run a single normal PSA replica set if you only have 1 physical server.
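For comparison, a plain PSA replica set on a single server is just one `rs.initiate()` call against three running mongod processes (replica set name and ports here are hypothetical):

```javascript
// A minimal PSA replica set: primary, secondary, arbiter, all on server-3.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server-3:27017" },                     // primary candidate
    { _id: 1, host: "server-3:27018" },                     // secondary
    { _id: 2, host: "server-3:27019", arbiterOnly: true }   // arbiter
  ]
});
```

No config servers or mongos are needed in this topology, which is why it is the simpler option on a single machine.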
@steevej: There is no performance benefit to setting up a single shard as a sharded cluster, but one motivation for a single shard deployment is when you anticipate adding further shards for scaling in the reasonably near future.
A sharded cluster involves extra configuration and resources (config servers, mongos) that are not required in a replica set deployment, but if you start with this configuration you will have fewer operational changes to make when you start adding shards (for example, no change to your connection string or backup procedures).
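For example, when a second shard is added later, the change is purely administrative; a sketch in mongosh (the replica set name and host names below are hypothetical):

```javascript
// Applications keep connecting through mongos; the connection string
// (e.g. mongodb://mongos-host:27017/mydb) does not change.

// Run against mongos to register the new shard's replica set:
sh.addShard("shard2RS/server-4:27018,server-5:27018,server-6:27018");

// Verify the new shard appears in the cluster:
sh.status();
```

This is the operational saving mentioned above: the application-facing configuration is untouched.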
@Kim_Hakseon: Since you have lost a majority of members for a shard replica set, you can force reconfigure the replica set using one of the surviving members. I would emphasise the docs' warning that this is only intended as an emergency procedure:
> The force option forces a new configuration onto the member. Use this procedure only to recover from catastrophic interruptions. Do not use force every time you reconfigure. Also, do not use the force option in any automatic scripts and do not use force when there is still a primary.
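A minimal sketch of that emergency procedure in mongosh, run on a surviving member (the host name is hypothetical; per the warning above, never use `force` while a primary is still available):

```javascript
// Emergency only: rebuild the config from a surviving member.
cfg = rs.conf();

// Keep only the reachable member(s); drop the lost ones.
cfg.members = cfg.members.filter(m => m.host === "server-3:27018");

// Force the trimmed configuration onto this member.
rs.reconfig(cfg, { force: true });

// The surviving member should eventually elect itself primary.
rs.status();
```

Once the replica set has a primary again, new members can be added normally with `rs.add()`.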
As far as arbiters go: my recommendation would be to never have more than one, and ideally avoid using arbiters altogether in modern versions of MongoDB. For more background, please see my response on Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie_X.