Thank you for the prompt response.
Regarding number 1: it is preserved, at least for me (under Ubuntu 22.04), but one can't control where the data is stored, the way one can with the community mongo image.
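For comparison, with the community image you can pick the data location with an ordinary volume mount at /data/db (a minimal sketch; the host path is just an example):

# Community mongo image: the data directory is /data/db, so a bind
# mount puts the data wherever you want it on the host.
docker run -d --name mongo-dev \
  -p 27017:27017 \
  -v "$PWD/mongo-data:/data/db" \
  mongo:7

With the local Atlas image, by contrast, the data directory is managed internally by the deployment, which is exactly the limitation above.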
For our team, number 2 is definitely a priority: if the cluster breaks, it becomes unusable and has to be re-created, which leads to the same data loss as in the first case.
I can't confirm number 3 in my Ubuntu environment either.
@David_Vincent bumping the other question.
For #2: can you share the details of how you’re stopping and starting the container after the initial “docker run… bash”?
I would like to add some input to this discussion because we’d like to persist our local Atlas development environment, too.
For #2: Some team members use Macs, others use Windows with Docker on a Vagrant machine (VirtualBox provider), so for the latter the Docker service runs in a virtual Linux system.
The containers are usually only stopped when we restart our computers. That, however, happens regularly, e.g. to apply operating system updates.
Related to the current issues: I seem unable to establish a connection to a local Mongo Atlas Docker container from another Docker container within my local environment. Is such a connection not feasible?
I can connect to the local Atlas container using mongosh, which indicates the container is indeed accepting connections. Despite this, a connection between my application in a separate container and the Mongo Atlas container remains unsuccessful.
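In case it helps, the usual culprit here is connecting via localhost from the application container: inside a container, localhost refers to that container itself, not to the host or to the Atlas container. A minimal sketch, assuming the mongodb/atlas image from the docs, a deployment listening on port 27778, and a shared user-defined bridge network (the atlas-net and atlas-local names are hypothetical, and this is untested in every setup):

# Create a shared user-defined bridge network.
docker network create atlas-net

# Start the local Atlas deployment on that network under a known name
# (--privileged is needed because the image runs containers of its own).
docker run -d --name atlas-local --network atlas-net --privileged \
  -p 27778:27778 mongodb/atlas \
  sh -c "atlas deployments setup my_deployment --type local --port 27778 \
    --username root --password root --bindIpAll --skipSampleData --force \
    && sleep infinity"

# From the host, localhost works because of the published port:
mongosh "mongodb://root:root@localhost:27778/?directConnection=true"

# From another container on the same network, address the Atlas container
# by name; localhost there would point at the application container itself:
docker run --rm --network atlas-net mongo:7 \
  mongosh "mongodb://root:root@atlas-local:27778/?directConnection=true" \
  --eval 'db.runCommand({ ping: 1 })'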
We are also running into all of the issues described here. I did manage to find a workaround for #2 (container restarts): you can intercept the container's stop signal in your entrypoint script and pause the deployment:
atlas-entrypoint
#!/usr/bin/env bash

# Reuse the existing deployment if the Atlas CLI already knows about it.
DEPLOYMENT_INFO=$(atlas deployments list | grep 'my_deployment')

if [[ -n "$DEPLOYMENT_INFO" ]]; then
  # Restart the existing deployment.
  atlas deployments start my_deployment
else
  # Create a new deployment.
  atlas deployments setup my_deployment --type local --port 27778 \
    --username root --password root --bindIpAll --skipSampleData --force
fi

# Pause the deployment whenever this container is shut down, to avoid corruption.
function graceful_shutdown() {
  atlas deployments pause my_deployment
}
trap 'graceful_shutdown' EXIT

# Sleep in the background and wait on it, so the shell can act on the stop
# signal immediately instead of blocking on a foreground child.
sleep infinity &
wait $!
I haven't tested it extensively, but it at least works when you run docker compose down.
Importantly, this does not work if you use tail -f /dev/null as suggested in the docs: bash defers trap handling while a foreground child is running, so the stop signal is never acted on in time. I had to switch to the sleep and wait commands shown above.
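For completeness, a sketch of how this entrypoint could be wired up with plain docker run, under the same assumptions as above (mongodb/atlas image; the script is saved as atlas-entrypoint and the container name is hypothetical):

# Mark the script executable and mount it in as the entrypoint.
chmod +x atlas-entrypoint
docker run -d --name atlas-local --privileged \
  -p 27778:27778 \
  -v "$PWD/atlas-entrypoint:/usr/local/bin/atlas-entrypoint:ro" \
  --entrypoint /usr/local/bin/atlas-entrypoint \
  mongodb/atlas

# Stopping the container now pauses the deployment via the EXIT trap.
# Allow a generous grace period so the pause can finish before Docker
# escalates to SIGKILL:
docker stop -t 60 atlas-local

With docker compose, the equivalent knob is stop_grace_period on the service.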
I haven't been able to get persistence between runs working, either. I'm assuming it has to do with the cluster not being available when the compose stack first runs, and with the data directory not being in the usual place. This is a killer for me; I can't see how anybody could use this as is, frankly.
Thanks John, I'll check it out! I'd moved on to just using a cloud instance in the meantime, but I'd prefer a local solution for dev purposes. Thanks again, sir.
Hi Jakub and MongoDB team,
I am wondering if you have any update, or an ETA for an update, on issue #3 summarized by Jakub earlier. We are using MongoDB Atlas local deployments for integration tests, and solving this issue by caching the MongoDB binaries would be a big help in speeding things up.
Thanks in advance,
Leo