Could Fixed-Size Data Chunks Replace Traditional Indexing in NoSQL Architectures?

Hi all,

As NoSQL systems like MongoDB evolve, architectural features such as indexing, compression, and memory optimization tend to interact in complex ways. I’ve been exploring whether fixed-size data chunks (for example, 64–80 bytes each) could offer a new approach to indexing and data access.

The concept is to store and retrieve data using discrete, memory-aligned blocks that map onto logical structures (like segments or pages). Because every chunk has a known size and position, the system could compute a record’s location arithmetically and load only the relevant memory range, skipping runtime query evaluation and potentially reducing or replacing the need for traditional index creation. A rough sketch of what I mean is below.
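
To make this concrete, here is a minimal, purely illustrative sketch in Python (not tied to MongoDB’s actual storage engine) of how fixed-size chunks turn a record ID into a direct byte-offset read with no index lookup. The 64-byte chunk size, the file name, and the simple zero-padded record layout are all assumptions for the example.

```python
import mmap
import os

CHUNK_SIZE = 64  # assumed fixed chunk size in bytes (the post suggests 64-80)

def write_chunks(path, records):
    """Write each record into its own fixed-size, zero-padded chunk."""
    with open(path, "wb") as f:
        for rec in records:
            payload = rec.encode("utf-8")
            assert len(payload) <= CHUNK_SIZE, "record must fit in one chunk"
            f.write(payload.ljust(CHUNK_SIZE, b"\x00"))

def read_chunk(path, record_id):
    """Load only the chunk for record_id: the offset is pure arithmetic, no index."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            offset = record_id * CHUNK_SIZE
            chunk = mm[offset:offset + CHUNK_SIZE]
            return chunk.rstrip(b"\x00").decode("utf-8")

if __name__ == "__main__":
    path = "chunks.dat"
    write_chunks(path, ['{"name": "ada"}', '{"name": "linus"}', '{"name": "grace"}'])
    print(read_chunk(path, 1))  # -> {"name": "linus"}
    os.remove(path)
```

Obviously this only works cleanly when records fit into fixed-size slots; variable-length documents would need padding or an overflow scheme, which is part of what I’m curious about.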

Some potential benefits:

  • Faster access to target data without scanning entire collections.
  • Lower memory overhead, especially for memory-hungry operations such as joins ($lookup) or groupings ($group).
  • Simplified architecture, avoiding performance hits from multiple overlapping features (indexing, compression, etc.).
  • Decompression that doesn’t depend on repeated data patterns, so it holds up even when the data is large and diverse.

Has anyone experimented with this kind of chunk-based data access model in MongoDB or similar NoSQL systems? I’d love to hear thoughts on:

  • Whether this could simplify or optimize index-heavy collections,
  • How chunking strategies could integrate with JSON-like document models,
  • Whether similar ideas are being considered in future MongoDB architectures.

Looking forward to your insights!