Can't add replica set to PRIMARY node

If you are using a docker compose file, I think the setup is an automated process,
so why are you adding members manually from the shell?
The error may be due to the bindIp parameter.
Node connectivity means that, assuming the members are on separate machines, you should be able to connect from each node to the others using their address/port.
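
For example, a quick way to check both points, with hypothetical host names (node1, node2, node3) and the default port, is something like:

    # In /etc/mongod.conf on each member, bindIp must include an interface
    # the other nodes can reach, e.g.:
    #   net:
    #     bindIp: 0.0.0.0        # or a comma-separated list of addresses
    # Then, from every node, try to reach every other node:
    for host in node1 node2 node3; do
      mongosh --host "$host" --port 27017 --eval 'db.runCommand({ ping: 1 })'
    done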

Yeah, I think I will try to use a bash script to automate the process.
I expect I will run into more and more bugs with MongoDB, so do you have Skype or any other social media where I can connect with you?
Thank you very much.

I am not docker savvy but I think that you cannot use localhost for your replica set.

I think that each docker instance has a separate address space and that localhost on one docker instance refers to itself, not to the host's localhost. You would need to set up the replica set using the IP address of each docker instance.

You may access localhost:27017 from the main host because the port 27017 is redirected to a given docker instance.
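
As a small, hypothetical illustration of that port redirection (container name and image tag are just examples):

    docker run -d --name mongo1 -p 27017:27017 mongo:6.0
    # On the host, localhost:27017 only works because of the -p mapping:
    mongosh --host localhost --port 27017 --eval 'db.runCommand({ ping: 1 })'
    # The container itself has its own address on the Docker network:
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mongo1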

Whether you use docker or not, if the members are not running on the same machine, you cannot use localhost on any of them.

There are 3 places you need to check: /etc/hosts, to give each member's host machine a name on the other members' host machines; the config file (usually /etc/mongod.conf), where net.bindIp must allow connections from the other members; and rs.add(), to add members with their respective IP addresses or DNS names (which can be set in the hosts file).
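
To make that concrete, here is a minimal sketch with made-up host names and addresses (mongo-a/b/c, 10.0.0.11-13); adjust everything to your own machines:

    # 1. /etc/hosts on EVERY member's host machine:
    #      10.0.0.11  mongo-a
    #      10.0.0.12  mongo-b
    #      10.0.0.13  mongo-c
    # 2. /etc/mongod.conf on each member:
    #      net:
    #        port: 27017
    #        bindIp: 127.0.0.1,10.0.0.11   # this host's own address
    #      replication:
    #        replSetName: rs0
    # 3. On the member where rs.initiate() was run:
    mongosh --host mongo-a --eval 'rs.add("mongo-b:27017"); rs.add("mongo-c:27017")'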

This is not required if you start all the instances on a single host on different ports.

Actually, all members run on one server, so I think using localhost is OK in this case.

I start all instances on a single host.

These two statements contradict each other.

It is a contradiction because each container is an isolated *virtual* host. (Italicized because it is a little bit different from a real virtual machine.)

It is clearly not OK, because you get the error you reported.

So despite the fact that I am not a docker user, I am pretty sure one container cannot access another container using localhost. And each member of a replica set has to be able to connect to every other member of the replica set.

I think a little bit of reading about docker and networking might help you:

I also think that you should stick with running the non-docker version on your system.

You start all the docker instances from the same host, but each docker instance is a separate entity.

Running all the instances of a replica set on the same physical host using 3 docker containers or 3 VMs is foolish and useless, because when you lose your physical host you lose your data.

Thanks, I see the problem here. I will try to set it up on 3 physical servers.

I also think that you should stick with running the non-docker version on your system.

Why? I think deploying MongoDB in a docker container is a great way to bring it into production.

You misunderstood the concept of how docker and compose work. Check those recommended links @steevej gave above, but in simple terms it goes like this:

  • “localhost” in a container is completely isolated from your host machine. A container is a completely new virtual machine (just not a traditional VPC) with its own IP address and network, separated from your host’s network, just like any other PC in your network.
  • If you start 3 “mongod” instances in a single container, you use different ports and “localhost” when adding them to the replica set.
    • This is just like when you start 3 instances without docker.
  • If you start 3 “containers” within a compose file, you get 3 different machines, each with its own IP. You cannot use “localhost” to connect to the others (see the sketch right after this list).
    • You don’t need different ports, but you need to know each container’s IP address.
    • I think you can set a static IP inside the compose file, but I haven’t tried.
  • If you start a single container using a compose file but scale it to 3 in Kubernetes, you will not have control over their IP addresses.
    • You definitely have to know the IP address of each one after they fully start.
    • It will not be easy to maintain the replica set, but it should not be impossible.
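
As a rough sketch of the 3-container case, assuming Docker Compose v2 and the official mongo image (service names, ports, and the replica set name are just examples); one common variant is to rely on compose's service names instead of fixed IPs, since all three services share one compose network:

    cat > docker-compose.yml <<'EOF'
    services:
      mongo1:
        image: mongo:6.0
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        ports: ["27017:27017"]
      mongo2:
        image: mongo:6.0
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        ports: ["27018:27017"]
      mongo3:
        image: mongo:6.0
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        ports: ["27019:27017"]
    EOF
    docker compose up -d
    # The three services resolve each other by service name
    # (mongo1, mongo2, mongo3), not by localhost:
    docker compose exec mongo1 mongosh --eval '
      rs.initiate({
        _id: "rs0",
        members: [
          { _id: 0, host: "mongo1:27017" },
          { _id: 1, host: "mongo2:27017" },
          { _id: 2, host: "mongo3:27017" }
        ]
      })'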

You can still run 3 “mongod” instances on your host machine without polluting anything:

  • Have a “mongod_X.conf” for each member. Edit the files so that:
    • each has a different data folder, port, and log path
    • the data and log folders exist
    • net.bindIp allows the others to connect (actually not needed on localhost :slight_smile: )
    • you run each with mongod --config mongod_X.conf (see the sketch after this list)
    • you stop the instances and remove the data folders and log files when you complete your “study” of them.
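
A minimal sketch of that setup, using hypothetical paths under /tmp/rs and ports 27017-27019 (adjust to taste):

    mkdir -p /tmp/rs/data1 /tmp/rs/data2 /tmp/rs/data3 /tmp/rs/log
    cat > /tmp/rs/mongod_1.conf <<'EOF'
    storage:
      dbPath: /tmp/rs/data1
    systemLog:
      destination: file
      path: /tmp/rs/log/mongod_1.log
    net:
      port: 27017
      bindIp: 127.0.0.1
    replication:
      replSetName: rs0
    EOF
    mongod --config /tmp/rs/mongod_1.conf --fork
    # Repeat with mongod_2.conf (port 27018, data2) and mongod_3.conf
    # (port 27019, data3), then initiate the set from any of them:
    mongosh --port 27017 --eval '
      rs.initiate({
        _id: "rs0",
        members: [
          { _id: 0, host: "localhost:27017" },
          { _id: 1, host: "localhost:27018" },
          { _id: 2, host: "localhost:27019" }
        ]
      })'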

From my answer above, you can use the exact same steps for running 3 instances on a host machine to run 3 instances in a “single” container, provided that it has the resources to handle them:

  • disk space dedicated to that container to hold the data of 3 mongod instances
  • CPU power dedicated to that container to run 3 mongod instances

While you can do the above for experimentation, it is not advised for production because, as noted above, when you lose that single host you lose your data.

A good way to start multiple instances on the same host is to use mlaunch.
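
For instance, a quick local setup with mlaunch (part of the mtools package; directory and ports below are just examples) might look like:

    python3 -m pip install mtools pymongo psutil   # mlaunch ships with mtools
    mlaunch init --replicaset --nodes 3 --port 27017 --dir ./rs-data
    mlaunch list                                   # show the running members
    mlaunch stop && rm -rf ./rs-data               # clean up when done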

Thanks for your reply, but to be clear, I’m not running 3 instances in a single container (by “single host” above I meant a single server, sorry for the misunderstanding). I have 3 containers: one for the primary node and the other 2 containers for the secondary members.

Please read my other answer, above the one you quoted, for this, plus how to run 3 containers.

Thanks, I will read them carefully. Once I find a solution for this, I’ll let you guys know.

I have one very beginner question: is it a good way to start MongoDB like this?
I wrote a docker compose file for MongoDB with the image mongo:latest. Once I completed setting up the MongoDB container, I went inside it and cloned and installed a Flask application, so this container contains:

  1. A mongod instance running on port 27017
  2. A Flask app running on port 5000

I then docker-commit this container as an image to Docker Hub with a new tag, for example hoanglh/first_release:v1, then I pull this image and run it on another machine.
My question is: if I install MongoDB and the Flask app in the same container, is it good practice or OK to do so?

It is good for starters, to understand how things work, but you tie the app and the db together, and thus you will not be able to scale it in the future.
The db needs lots of space and should run at stable ports when you want a replica set. The app does not need to store anything and should be independent, so you can run it behind load balancers (hundreds of app instances, for example). That is the basic reason for using multiple containers. You also need to learn networking so you can tie containers together.
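
As a rough illustration of that separation (the network name, container names, image tag, and environment variable are all hypothetical), the db and the app can live in separate containers on a shared docker network:

    docker network create appnet
    docker run -d --name db --network appnet mongo:6.0
    # The Flask image is assumed to read its connection string from an
    # environment variable and to reach MongoDB by container name:
    docker run -d --name web1 --network appnet -p 5000:5000 \
      -e MONGO_URI="mongodb://db:27017/mydb" your-flask-image:v1
    # The stateless app can then be scaled independently of the database:
    docker run -d --name web2 --network appnet -p 5001:5000 \
      -e MONGO_URI="mongodb://db:27017/mydb" your-flask-image:v1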

Hi everyone,

Does someone have the answer to this question? I have the same problem, and I fixed “Connection refused” by ensuring that:

  • the members can ping each other (ICMP inbound is open)
  • they share the same keyFile with chmod 400 (read-only; see the sketch after this post)

But I still get “stateStr” : “(not reachable/healthy)”, and when I check my mongod.log it shows error code 18, AuthenticationFailed. Details in the topic:
[MongoDB replicaSet error AuthenticationFailed (code 18)]

Many thanks!!
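
For reference, a minimal sketch of the keyFile setup being described (paths, user, and host names are hypothetical); a keyFile that differs between members, or that the mongod user cannot read, is a common cause of that authentication error:

    openssl rand -base64 756 > /etc/mongo-keyfile
    chmod 400 /etc/mongo-keyfile
    chown mongodb:mongodb /etc/mongo-keyfile        # the user mongod runs as
    # Copy the very same file to the other members:
    scp /etc/mongo-keyfile node2:/etc/mongo-keyfile
    scp /etc/mongo-keyfile node3:/etc/mongo-keyfile
    # And reference it in every member's mongod.conf:
    #   security:
    #     keyFile: /etc/mongo-keyfile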

I haven’t tried this before, but there seems to be another way to connect containers on different host machines: an overlay network and swarm.

Initializing a docker swarm needs 3 host machines (a manager and workers; the manager can also be a worker) and some ports opened on each host.

I think it is just like trying to set up a VPN but in the docker’s way. This “3” is a bit annoying though.

Anyways, you may want to give it a shot now that we have more topic names available to check :wink:
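
If anyone wants to try it, a rough, untested sketch (manager IP, join token, and names are placeholders) would be along these lines; between the hosts, ports 2377/tcp, 7946/tcp+udp, and 4789/udp need to be open:

    docker swarm init --advertise-addr <manager-ip>        # on the manager
    docker swarm join --token <token> <manager-ip>:2377    # on each worker
    docker network create --driver overlay --attachable mongo-net
    # Containers attached to mongo-net can reach each other by name
    # across hosts, e.g.:
    docker run -d --name mongo1 --network mongo-net mongo:6.0 \
      --replSet rs0 --bind_ip_all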

Hi everyone,

I successfully established the MongoDB replica set with a hybrid setup: mongod processes running directly on the server outside containers, and members running inside containers. There are 7 members distributed across 3 different machines, and on the same machine members run both in containers and directly from the shell.

My way does not need a subnet, swarm mode, or any VPN. I just expose the ports directly from the containers to the outside, and through that the members can reach each other without being blocked.
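
In other words (a sketch with made-up host addresses and ports, not the exact commands used above): each containerized member publishes its own host port, and every member is addressed by the host's IP and that published port, so containers and plain mongod processes can all reach one another:

    # Containerized members, each with a distinct published host port:
    docker run -d --name m1 -p 27021:27017 mongo:6.0 --replSet rs0 --bind_ip_all
    docker run -d --name m2 -p 27022:27017 mongo:6.0 --replSet rs0 --bind_ip_all
    # Members started directly on a host run mongod with --replSet rs0 as usual.
    # From the primary, add everyone by host IP and published port:
    mongosh --host 10.0.0.11 --port 27017 --eval '
      rs.add("10.0.0.11:27021"); rs.add("10.0.0.12:27022")'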

Many thanks for your effort and support!!

Hi @Khiem_Nguy_n, glad to hear back and to know you could solve the problem. But can you please do the following two things:

  • reply also to your other topic, so others will know the problem was solved
  • describe the missing part a bit more clearly: was it a missing port mapping all this time, or a firewall, or something else?

By the way, 7 is one of those high numbers that may have unseen port problems. Have you tested the network by “stepping down” the primary (rs.stepDown()) and also by removing members from the majority by shutting them down individually (logging in to a member directly, not to the replica set)?
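
Something along these lines, with hypothetical ports, is what I mean by testing:

    # Force an election so another member has to become primary:
    mongosh --port 27017 --eval 'rs.stepDown()'
    # Shut one member down cleanly by connecting to it directly
    # (connect to the member itself, not via the replica set URI):
    mongosh --port 27018 --eval 'db.getSiblingDB("admin").shutdownServer()'
    # Watch how the remaining members react:
    mongosh --port 27019 --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'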