Atlas Local Deployments can stop processing search index management requests

When using a MongoDB Atlas local deployment and working with search indexes, the service will altogether stop processing index management requests if at any point you attempt to create an index with the same name as one that already exists on that collection (say, if you forget to check whether the index exists before asking the server to create it). When this happens, mongot emits this error roughly every second:

E MONGOT [incremental-config-cycle-updater] [c.x.m.c.m.DesiredConfigStateUpdater] Supplied configuration violated invariant, not updating: duplicate index names for the same collection: <redacted>

The container will then silently fail to process any subsequent index management requests, for any database or collection, until it is paused and restarted or recreated.
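For reference, here's a minimal sketch of the sequence that wedges it for me, assuming the Node.js driver and a local deployment listening on the default port; the database, collection, and index names are just for illustration:

import { MongoClient } from "mongodb";

// Assumes an Atlas local deployment listening on the default port.
const client = new MongoClient("mongodb://localhost:27017/?directConnection=true");

async function reproduce() {
  const coll = client.db("test").collection("movies");

  // The first creation succeeds and mongot builds the index.
  await coll.createSearchIndex({
    name: "default",
    definition: { mappings: { dynamic: true } },
  });

  // Creating the same name again is what wedges mongot on the local
  // deployment; a hosted cluster returns a DuplicateIndex error instead.
  await coll.createSearchIndex({
    name: "default",
    definition: { mappings: { dynamic: true } },
  });
}

reproduce().finally(() => client.close());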

This appears to be a bug in the local deployment container image, because when I tried to reproduce the issue on a live M10 Atlas cluster running MongoDB 8.x, I instead received a DuplicateIndex command error, which is what I would expect to happen.

At the time of writing, I'm using this linux/arm64 image:

REPOSITORY                    TAG       IMAGE ID       CREATED      SIZE
mongodb/mongodb-atlas-local   latest    eb676cc510e7   2 days ago   1.31GB

Thanks!
Matt Quinn


We are facing the same issue. Did you manage to solve it? @Matt_Quinn

@Jeremy_Care, for now the best I've been able to do is work around the issue by checking that a search index with the same name doesn't already exist before trying to create one. It's probably just good defensive coding anyway, but I'm not used to having to do it, since Mongo no-ops when you try to create a duplicate standard index and a genuine hosted Atlas deployment throws a command error that is easy to catch and discard.
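Roughly, the guard looks like this (a sketch with the Node.js driver; createSearchIndexIfMissing is just an illustrative helper name, not something from the driver):

import { Collection, Document } from "mongodb";

// Illustrative helper: only ask the server to create the search index
// if listSearchIndexes doesn't already report one with that name.
async function createSearchIndexIfMissing(
  coll: Collection,
  name: string,
  definition: Document
): Promise<void> {
  const existing = await coll.listSearchIndexes(name).toArray();
  if (existing.length > 0) {
    return; // already there, so don't send a duplicate create request
  }
  await coll.createSearchIndex({ name, definition });
}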

Hey!

For us, that wasn't really the issue. We were getting this error because we were not cleaning up the search indexes in our unit tests. We checked and didn't have any duplicate indexes in any of our clusters; each index name included a unique number.

We solved it by removing our indexes in the afterEach hook, and now it's working fine.
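For anyone else hitting this, the cleanup is essentially the following (a sketch assuming a Jest/Mocha-style afterEach and the Node.js driver; coll stands in for whatever collection handle your test setup already provides):

import { Collection } from "mongodb";

declare const coll: Collection; // provided by our test setup

afterEach(async () => {
  // Drop every search index the test created so the next test can
  // recreate its own without ever producing a duplicate name.
  for await (const index of coll.listSearchIndexes()) {
    await coll.dropSearchIndex(index.name);
  }
});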