Introducing a Local Experience for Atlas, Atlas Search, and Atlas Vector Search with the Atlas CLI

Thank you both! Really appreciate the thorough description of the steps you’re taking.

It seems there are a couple of issues here:

  1. When using docker-compose, data is not persisted across runs (or, more precisely, across `up`s)
  2. When using docker run…, the deployment is corrupted if it is not paused before the container restarts
  3. When using docker-compose, MongoDB binaries are not cached, so they are re-downloaded on every run

Would it unblock you (for the time being) if #2 were fixed? I’m trying to figure out the priorities here.

In the meantime, I’ll regroup internally to work out the best course of action to get you help quickly.

Thank you for the prompt response.
In regards to numer 1. It is preserved, at least for me (under ubuntu 22.04), but the one can’t control where the data is stored as in community mongo image.
For our team number 2 is definitely a priority since if cluster breaks it will becomes unusable and has to be re-created which leads to data loss same as it would be in the 1st case.
Can’t confirm number 3 under my ubuntu environment either.
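For comparison, this is how we control the data location with the community mongo image, whose data directory is documented as /data/db (the host path and service name below are illustrative):

```yaml
# Community image: the data directory is documented (/data/db),
# so an ordinary volume mount decides where the data lives.
services:
  mongodb_community:
    image: 'mongo:7.0'
    ports:
      - '27017:27017'
    volumes:
      - './mongo-data:/data/db'  # host path is up to you
```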

Thanks Igor,

For #2: can you also share the details of how you’re stopping and starting the container after the initial “docker run… bash”?


#2 is also the priority for us. We think #1 might be doable if #2 is solved.

@David_Vincent bumping the other question:
For #2: can you share the details of how you’re stopping and starting the container after the initial “docker run… bash”?

I would like to add some input to this discussion because we’d like to persist our local Atlas development environment, too.

For #2: Some team members use Macs, some Windows with Docker on a Vagrant machine (provider virtualbox), so for the latter, the Docker service runs on a virtual Linux system.
The containers are usually only stopped when we restart our computers. That, however, happens regularly, e.g. to apply operating system updates.

Related to the current issues: I seem to be unable to connect to a local MongoDB Atlas Docker container from another Docker container in my local environment. Is such a connection not feasible?
I can connect to the local Atlas container using ‘mongosh’, which indicates the container is accepting connections. Despite this, my application in a separate container still fails to connect to the Atlas container:

pymongo.errors.ServerSelectionTimeoutError: atlasdb:27778: [Errno 111] Connection refused, Timeout: 5.0s, Topology Description: <TopologyDescription id: *****, topology_type: Single, servers: [<ServerDescription ('atlasdb', 27778) server_type: Unknown, rtt: None, error=AutoReconnect('atlasdb:27778: [Errno 111] Connection refused')>]>
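For reference, a minimal sketch of the setup I am aiming for (service names, image tags, and the URI are illustrative). From what I have read, both containers need to share a Docker network (compose services on the default network reach each other by service name), the deployment must be created with --bindIpAll so mongod listens on all interfaces, and the client URI may need directConnection=true:

```yaml
services:
  atlasdb:
    image: 'mongodb/atlas:v1.14.2'
    privileged: true
    # The deployment inside should be created with --bindIpAll,
    # e.g. atlas deployments setup ... --port 27778 --bindIpAll
  app:
    image: 'my-app:latest'  # illustrative
    depends_on:
      - atlasdb
    environment:
      # directConnection bypasses replica-set discovery, which can
      # otherwise advertise a hostname unreachable from this container.
      MONGODB_URI: 'mongodb://root:root@atlasdb:27778/?directConnection=true'
```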

Thank you

Persistence is a major setback for our team as well. If anyone has managed to solve it, please share.

We are also running into all of the issues described here. I did manage to find a workaround for #2 (container restarts). You can intercept the container kill signal in your entrypoint script and pause the deployment:

atlas-entrypoint

#!/usr/bin/env bash

DEPLOYMENT_INFO=$(atlas deployments list | grep 'my_deployment')

if [[ -n "$DEPLOYMENT_INFO" ]]; then
    # Restart a deployment
    atlas deployments start my_deployment
else
    # Create a new deployment
    atlas deployments setup my_deployment --type local --port 27778 --username root --password root --bindIpAll --skipSampleData --force
fi

# Pause the deployment whenever this container is shut down, to avoid corruption.
function graceful_shutdown() {
    atlas deployments pause my_deployment
}
trap 'graceful_shutdown' EXIT

sleep infinity &
wait $!

docker-compose.yml

...
mongodb_atlas:
  container_name: 'mongodb_atlas'
  image: 'mongodb/atlas:v1.14.2'
  ports:
    - '27778:27778'
  privileged: true
  entrypoint: '/home/scripts/atlas-entrypoint'
  volumes:
    - './scripts:/home/scripts'

Haven’t tested it extensively but it at least works when you run docker compose down.
Importantly, this does not work if you keep the container alive with tail -f /dev/null as suggested in the docs, since bash does not run the trap while a foreground command is executing; I had to switch to the sleep-in-background plus wait pattern shown above.

Hello! Sharing my answer to another post here in case it might help others:

I haven’t been able to get persistence between runs working, either. I’m assuming it has to do with the cluster not being available when the compose stack first runs, and the data directory not being in the usual place. This is a killer for me; frankly, I can’t see how anybody could use this as is.

Hey @William_Hatch, I was able to persist the data by using an entrypoint script to manage the podman containers. Take a look; hopefully it will help you out: Mongodb/atlas docker container - unable to start deployment if the container restart


Thanks John, I’ll check it out! I’d moved on to just using a cloud instance in the meantime, but I’d prefer a local solution for dev purposes. Thanks again, sir.


Hi Jakub and MongoDB team,
I am wondering if you have any update, or an ETA for an update, on issue #3 as summarized by Jakub earlier. We are using MongoDB Atlas local deployments for integration tests, and solving this issue by caching the MongoDB binaries would be a big help in speeding things up.
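In case it helps others in the meantime, the workaround we are experimenting with is mounting a named volume over the image’s internal container storage. This rests on the assumption that the mongodb/atlas image runs the deployment with podman and keeps its storage, including the downloaded MongoDB binaries, under /var/lib/containers:

```yaml
services:
  mongodb_atlas:
    image: 'mongodb/atlas:v1.14.2'
    privileged: true
    volumes:
      # Assumption: the inner podman storage lives here; persisting it
      # keeps downloaded binaries (and deployment data) across runs.
      - 'atlas_containers:/var/lib/containers'
volumes:
  atlas_containers:
```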
Thanks in advance,
Leo