Are there performance implications of index keys above the old 1KB limit?

I have an application that pre-dates the change in MongoDB 4.2 that removed the Index Key Limit.

Documents in my application are indexed by a unique slug, which can be of any length (within reason). To get around the limit, I store a hash and index on that field instead. The hashing is done at the application level and uses MD5 for speed. I have yet to experience a collision.
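
For context, this is roughly what the current approach looks like. This is only a minimal sketch in Python/pymongo; my application isn't necessarily Python, and the database, collection, and field names (`app`, `articles`, `slug_hash`) are placeholders:

```python
import hashlib

from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["app"]["articles"]  # placeholder database/collection names

# Unique index on the fixed-size hash rather than the (potentially long) slug itself.
collection.create_index([("slug_hash", ASCENDING)], unique=True)


def save_document(slug: str, payload: dict) -> None:
    # Hash the slug at the application level; MD5 keeps the key at 32 hex characters.
    slug_hash = hashlib.md5(slug.encode("utf-8")).hexdigest()
    collection.update_one(
        {"slug_hash": slug_hash},
        {"$set": {**payload, "slug": slug, "slug_hash": slug_hash}},
        upsert=True,
    )


def find_by_slug(slug: str) -> dict | None:
    # Lookups also go through the hash, never the slug field directly.
    slug_hash = hashlib.md5(slug.encode("utf-8")).hexdigest()
    return collection.find_one({"slug_hash": slug_hash})
```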

I am currently overhauling my data model completely and wondering whether to keep the hashing system as it is, or index the slug field directly. The performance overhead in my application is no doubt minimal, but not non-existent, particularly when a single script execution performs thousands of hashes. I would ideally like to get rid of it for simplicity.

So, to ask a concrete question: are there any performance implications on the MongoDB side when storing large index keys? I can enforce a maximum slug length if needed, so advice on what a good limit might be would be appreciated.
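
If I did go with a direct index plus a length cap, I imagine it would look something like the sketch below. The 512-character limit and the names are just assumptions for illustration:

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["app"]  # placeholder database name

# Cap the slug length with a JSON Schema validator (512 characters is an arbitrary example).
db.command(
    "collMod",
    "articles",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "properties": {
                "slug": {"bsonType": "string", "maxLength": 512},
            },
        }
    },
    validationLevel="moderate",
)

# Index the slug directly, with no application-level hashing.
db["articles"].create_index([("slug", ASCENDING)], unique=True)
```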

Hi @timw

It is my understanding that the old 1KB limit was primarily a constraint of the legacy MMAPv1 storage engine. WiredTiger is a much more modern storage engine and doesn't have this limit, so the index key limitation was removed along with the MMAPv1 storage engine in 4.2.

As a result, I don’t believe there would be any performance issue for large index keys. Having said that, it’s probably best to test it yourself with your expected workload, just to be sure :slight_smile:
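
If you do want to measure it, one rough way is to build both indexes on a representative dataset and compare their on-disk sizes and query plans. A minimal sketch, assuming the same placeholder names as above:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["app"]  # placeholder database name

# Compare on-disk index sizes (in bytes) between the hashed and direct approaches.
stats = db.command("collStats", "articles")
print(stats["indexSizes"])  # e.g. {'_id_': ..., 'slug_1': ..., 'slug_hash_1': ...}

# Confirm a lookup by slug uses an index scan (IXSCAN) rather than a collection scan.
plan = db["articles"].find({"slug": "some-example-slug"}).explain()
print(plan["queryPlanner"]["winningPlan"])
```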

Best regards
Kevin