Docker-compose ReplicaSets - getaddrinfo ENOTFOUND

As the title suggests, I am using docker-compose to run 3 mongo containers. I have attached the rs.status() log and a screenshot of Studio 3T scanning my ports for the members.

I tried many variations of the connection string and ports based on the mongo documentation. I am new at this and still learning, so any info helps!


Hi @TheAdrianReza_N_A and welcome to the MongoDB Community :muscle: !

Have you set your IP addresses correctly in your bindIp network configuration? Did you include the IP address of your client in that setting in all 3 config files?

If that’s not it, could you please share your config file and maybe your docker-compose.yml, so we have a bit more information to work with?
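For reference, the relevant part of a mongod config file looks something like this (a minimal sketch; 192.0.2.10 is a placeholder for your client’s IP):

```yaml
# Sketch of the net section of mongod.conf.
# 192.0.2.10 is a placeholder: replace it with the IP your client connects from,
# or bind to 0.0.0.0 for a throwaway dev setup (never in prod).
net:
  port: 27017
  bindIp: 127.0.0.1,192.0.2.10
```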

Also, just to confirm: “mongo-rs0-1”, “mongo-rs0-2” and “mongo-rs0-3” are 3 different physical servers, correct?


I do not believe I have done much in the network configuration. Feel free to suggest ways I can improve my docker-compose.yml file! It’s a boilerplate mongo-rs docker container I found.

And yes they are on 3 different servers.

mongo.conf -

replication:
  oplogSizeMB: 1024
  replSetName: rs0

Really appreciate the quick response! Let me know if there is anything else I can get you to help my situation!

From my understanding, docker-compose is a tool that starts multiple containers that will work together on the same machine.

So the way I understand it, your 3 “mongo-rs0-X” containers will be started on the same machine which makes me want to ask a simple question:

Replica Sets are here for one main reason: High Availability in prod. If your 3 nodes depend on some piece of hardware they have in common (same power source, same disk bay, etc.), you are not really HA, because that piece of equipment can fail and bring all your nodes down at once. Which is a big NO NO in prod.

That’s the reason why it’s a good practice to deploy your nodes in different data centers.

I also see you are using --smallfiles, a deprecated option that only applied to MMAPv1 (which is gone now), and --oplogSize 128 is definitely a terrible idea.

So, based on this, I think you are trying to deploy a development or test environment here, but then I really don’t see the point of deploying 3 nodes on the same machine. A single node replica set would most probably be good enough, no?

Here is the docker command I use to start an ephemeral single replica set node on my machine when I need to hack something:

docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:4.4.3 --replSet=test && sleep 4 && docker exec mongo mongo --eval "rs.initiate();"

I actually made an alias out of it which is in my ~/.bash_aliases file:

alias mdb='docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:4.4.3 --replSet=test && sleep 4 && docker exec mongo mongo --eval "rs.initiate();"'

And, because of the --rm option, I can just destroy the container and everything it contains (volumes included) with a simple:

docker stop mongo

Is this what you were looking for, or do you really need to make these 3 nodes on the same machine work?



Make sure that you have added the replica set nodes to the /etc/hosts file on the host machine, like the example below:

<host-ip> mongoset1 mongoset2 mongoset3

Note: <host-ip> is the IP of your host machine, and mongoset1, mongoset2 and mongoset3 are the nodes (members) of the replica set.
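A quick way to check that such an entry actually took effect (a sketch; mongoset1 is the example hostname from above). The getaddrinfo ENOTFOUND error from the driver means exactly this lookup failing on the client machine:

```shell
# "localhost" should always resolve; "mongoset1" only will once it is in /etc/hosts.
getent hosts localhost && echo "localhost resolves"
getent hosts mongoset1 >/dev/null 2>&1 || echo "mongoset1 does not resolve yet"
```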


It’s nonsense to run 3 members of the same RS on the same machine. Running multiple data-bearing mongod processes on the same machine shouldn’t happen.

The only exception to that rule would be if you are learning how RS works and you want to experiment while learning.


" It’s a nonsense to run 3 members of the same RS on the same machine. Running multiple data bearing mongod on the same machine shouldn’t exist."

How are we supposed to test transactions locally? We are already using docker compose for our local setup, so not having this functionality would make it impossible to test Mongo transactions. How are you testing your transactions? Are you using them? PS: for the record, I think this is the only reason you would want to set up replicas locally, or to mirror your hosted env for testing, or for educational purposes. This feature does belong in mongod, however.

Hey Maxime -

I almost find the “but why” meme offensive, because I am here ONLY because my team needs to test mongo transactions, and it was YOUR TEAM that implemented them in a way where they can only be tested with this replica configuration. So to come here, having spent my morning trying to get this to work, and see that you meme the OP for doing this when you created the problem is :exploding_head:

I’m sorry that you found that a bit offensive. I was being a bit sarcastic to REALLY drive home why it doesn’t make sense. If you read my entire post, the answer and justification are in it.

I explained in my answer why it’s a bad idea and I also explained the solution: Single Node Replica Set.

Transactions, Change Streams and a few other features in MongoDB rely on the special oplog collection that only exists in Replica Set deployments. BUT you can set up a Single Node Replica Set that contains only a single Primary node, and all these features will work just as well as in a 7-node Replica Set.

So again, I reiterate:

It’s nonsense to run 3 members of the same RS on the same machine.

Use a Single Node RS instead. Same features, but it uses 3X fewer resources.
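For example, once the single node RS from the alias above is up, a multi-document transaction goes through just fine (a sketch; the test database and the txdemo collection are made-up names, and this assumes the "mongo" container from the alias is running):

```shell
# Run a transaction through the legacy mongo shell inside the container.
docker exec mongo mongo --quiet --eval '
  var session = db.getMongo().startSession();
  var coll = session.getDatabase("test").txdemo;
  session.startTransaction();
  coll.insertOne({ n: 1 });
  coll.insertOne({ n: 2 });
  session.commitTransaction();
  session.endSession();
'
```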


Since the original question basically is “How can I access a replica set inside a docker network from the host”, this solution might be useful:

  1. Create a public DNS entry that points to “localhost” on a domain you own, for example →
  2. Use this single host as the name in the replica set config (use only one mongo in the set)
  3. In your docker setup, override the DNS so this name resolves to the docker host running the replica. In docker-compose, the network aliases section can be used for this

This will result in name resolution that allows access to mongo both from inside docker and from the host (given the right exposed port).

Note that the usual solution to this problem is “modify your local /etc/hosts file”, which works too, but requires every dev/system in your organization to modify system files.
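For step 3, the docker-compose override can look something like this (a sketch; the service name and mongo.example-dev.com are made-up placeholders for the DNS name from step 1):

```yaml
# docker-compose.yml fragment. The network alias makes "mongo.example-dev.com"
# resolve to this container inside the docker network, while the public DNS
# entry for the same name points at 127.0.0.1 for processes on the host.
services:
  mongo:
    image: mongo:6.0.5
    command: --replSet rs0
    ports:
      - "27017:27017"
    networks:
      default:
        aliases:
          - mongo.example-dev.com
```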


So in order to have a docker instance of a replica set MongoDB, I need to modify the local /etc/hosts file?
How has that behavior ever made it into a production release?

What about environments I do not have access to the system configuration (eg. CI/CD pipelines)?

@MaBeuLux88 are you really a MongoDB employee? This is for testing purposes.

@TheAdrianReza_N_A I believe it has something to do with the replica set config. You can try using the bitnami images with MONGODB_ADVERTISED_HOSTNAME set to localhost; if my own tests are successful, I will post back.

Sorry for the half response, but I had to chime in on this nonsense with Maxime Beugnet!

Hi @Marco_Maldonado and welcome to the MongoDB Community :muscle: !

Yes I am. It’s written right here. :point_up_2:

I think I’ll write a blog post about this because apparently nobody wants to hear that a single node RS for a localhost dev environment is just fine. :smiley:

This is what I use daily to run a localhost dev environment:

alias mdb='docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:6.0.5 --replSet=RS && sleep 5 && docker exec mongo mongosh --quiet --eval "rs.initiate();"'
alias m='docker exec -it mongo mongosh --quiet'

It works perfectly fine. I can use ACID transactions, change streams, …

If I want to keep the data, I can use a volume but I don’t need to.
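For instance, a variant of the alias above with a named volume (a sketch; mongo-data is a made-up volume name). The --rm flag removes the container on stop, but a named volume survives:

```shell
# Same ephemeral single node RS, except /data/db now lives in the named volume
# "mongo-data" and survives container restarts. rs.initiate() errors harmlessly
# if the volume already holds an initiated replica set.
docker run --rm -d -p 27017:27017 -h $(hostname) \
  -v mongo-data:/data/db \
  --name mongo mongo:6.0.5 --replSet=RS \
  && sleep 5 && docker exec mongo mongosh --quiet --eval "rs.initiate();"
```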



@Marco_Maldonado, Maxime is actually correct. You don’t need a multinode cluster for testing or doing things in MongoDB.

I don’t really agree with the “nonsense” comment, but I do understand the sentiment: unless your test environment is intended to directly evaluate performance and environment-level impact in a production-like setup, there really isn’t much need for a multinode replica set.

Otherwise, I do agree with @MaBeuLux88.


This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.