MongoDB 4.4: Scale Without Fear or Friction

Mat Keep

#Releases

Welcome to part 2 of our three-part What's New series.

In the previous post, we announced that MongoDB 4.4 is now generally available and ready for production usage. We also covered enhancements to the MongoDB Query Language (MQL) – including the new Union aggregation pipeline stage and custom aggregation expressions – along with the availability of Rust and Swift drivers.

As discussed in that post, bringing 4.4 to market was a significant project. The MongoDB engineering team invested close to 270,000 hours and closed over 5,200 tickets. Today, we'll look at how those efforts have made our distributed systems architecture even more flexible and performant for your applications.

Sharding in MongoDB 4.4

MongoDB’s native sharding is highly valued by developers. It enables you to scale your database out horizontally across multiple nodes and geographic regions to accommodate application growth and to colocate data with your distributed applications and users. It’s elastic, so you can scale your database cluster out and in at any time to match capacity to demand, and it’s transparent to your applications.

MongoDB sharding is also highly flexible, allowing data to be partitioned by range, hash, or zone – the latter implemented as Global Clusters in our fully-managed MongoDB Atlas global cloud database service. With this flexibility, you can shard your database to match both your query patterns and data residency needs. This is something you don’t get from most other distributed databases that purely hash a partition key, resulting in data being randomly sprayed across a cluster.
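
Zone sharding, for example, takes just a few shell helpers to configure. Here is a minimal mongosh sketch of the three strategies; the namespaces, shard name, zone name, and fields are all hypothetical:

```javascript
// Range sharding: partition orders on a meaningful key range.
sh.shardCollection("sales.orders", { order_date: 1 })

// Hash sharding: scatter events evenly on a hashed key.
sh.shardCollection("sales.events", { device_id: "hashed" })

// Zone sharding: tag a shard for a region, pin a key range to that zone,
// then shard the collection on a compatible key.
sh.addShardToZone("shard-eu-1", "EU")
sh.updateZoneKeyRange("sales.users", { country: "A" }, { country: "F" }, "EU")
sh.shardCollection("sales.users", { country: 1 })
```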

Even with the flexibility of MongoDB sharding, the inability to modify either the shard key or shard key value has been a frustration for some developers. This is because choosing a shard key that could not accommodate changing application requirements could result in data being unevenly distributed across the cluster, leading to inefficient use of provisioned resources and potentially poor performance.

Through developments in both the prior MongoDB 4.2 release and the new 4.4 release, sharding becomes even more flexible and adaptable:

  • MongoDB 4.2 introduced the ability to modify shard key values, using a distributed, multi-document ACID transaction to change the placement of a document in a cluster. This is useful when you want to rehome a document to a different geographic region or age data out to a lower storage tier (see the sketch after this list).
  • MongoDB 4.4 brings the ability to refine the shard key itself, and to create compound hashed shard keys to allow for an even more fine-grained distribution of data within the cluster.
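
For illustration, here is what a 4.2-style shard key value change could look like in mongosh. It's a minimal sketch: the sales.orders namespace, the { region: 1 } shard key, and the order_id field are all hypothetical. The update must run inside a transaction (or as a retryable write), and the filter must match on the full shard key:

```javascript
// Rehome a single order document from one region (and zone) to another.
const session = db.getMongo().startSession();
const orders = session.getDatabase("sales").orders;

session.startTransaction();
try {
  orders.updateOne(
    { region: "us-east", order_id: 1234 },  // filter includes the full shard key
    { $set: { region: "eu-west" } }         // new shard key value relocates the doc
  );
  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
}
```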

With these enhancements, sharding your database is now even easier, giving you more flexibility in selecting your shard key and making ongoing scaling operations more seamless as your applications evolve.

Refinable Shard Keys

With the ability to define and refine your shard keys at any time, you can now adapt data distribution across your cluster as your database grows and applications evolve, without impacting system availability.


You can start scaling out with a simple data distribution strategy and then refine the shard key as needed by adding a suffix to it. For example, you might select customer_id as the shard key for your orders collection. As customers place more orders, you want to split new order data evenly across more shards to accommodate growing sales volumes. Adding order_id as a suffix to the shard key provides a more evenly balanced distribution of the orders collection across the cluster.
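
In mongosh, that refinement might look like the following sketch (the sales.orders namespace is hypothetical). Note that refineCollectionShardKey requires an index that supports the new key:

```javascript
// Initial strategy: shard the orders collection on customer_id alone.
sh.shardCollection("sales.orders", { customer_id: 1 })

// Later: build an index that supports the refined key...
db.getSiblingDB("sales").orders.createIndex({ customer_id: 1, order_id: 1 })

// ...then add order_id as a suffix to the shard key (MongoDB 4.4+).
db.adminCommand({
  refineCollectionShardKey: "sales.orders",
  key: { customer_id: 1, order_id: 1 }
})
```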

Refining the shard key is non-blocking and transparent, with no impact on application availability. Of course, if documents have to be rebalanced across shards after the shard key is refined, you will incur additional computational and I/O overhead, so factor that into your planning.

And remember: even with refinable shard keys, it is still important to evaluate your shard key selection carefully. No amount of refinement can overcome a poor initial choice. The documentation provides guidance on choosing an appropriate shard key.

Note that you currently cannot refine your shard keys if you are using MongoDB Atlas Global Clusters.

Compound Hashed Shard Keys

By adding support for hashing any single field in a compound shard key, you get higher database throughput by more evenly distributing load across shards, without losing data locality or creating hot shards as your application scales.

You have the flexibility to hash either the suffix or the prefix of the compound shard key. This gives you more fine-grained and even partitioning of documents for specific use cases, such as:

  1. When data is distributed across a subset of shards co-located within a geographic region to meet data residency requirements.
  2. When a field of your preferred shard key has a monotonically increasing value, such as a customer_id where your newest customers are also your most active, resulting in most traffic hitting a single shard.

Prior to MongoDB 4.4, you could only hash a single-field shard key. To accommodate the use cases above, you had to implement hashing on the application side, and store the hash within the document; a non-hashed compound shard key would then be defined across the client-hashed field and one or more other fields.
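
With 4.4, the hash moves server-side: any single field of a compound shard key can be declared hashed, at either position. A sketch, again with hypothetical namespaces:

```javascript
// Hashed suffix: preserve locality on region while scattering monotonically
// increasing customer_id values evenly within each region's shards.
sh.shardCollection("sales.customers", { region: 1, customer_id: "hashed" })

// Hashed prefix: distribute on the hash first, keeping order_date available
// for range queries.
sh.shardCollection("sales.orders_archive", { customer_id: "hashed", order_date: 1 })
```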

Bringing support for hashed fields in compound shard keys allows you to create more scalable and performant applications by balancing read and write distribution with data locality, while reducing application and schema complexity. Note that compound hashed shard keys are currently not supported if you are using MongoDB Atlas Global Clusters.

Hedged Reads for Consistent Low Latencies

High latency directly correlates to lost revenue. A Google study measured bounce rates increasing 50% when page load times exceeded 3 seconds, while Akamai found that each 100ms website delay resulted in ecommerce conversion rates declining by 7%.

To minimize p95 and p99 latencies, the mongos query router can now be configured with Hedged Reads, which submit each read to multiple replicas in a sharded cluster and return results to the client as soon as the quickest node responds. This prevents queries from waiting on a single node that may be busy syncing to disk, building an index, or suffering a transient network or server issue.


When you specify the nearest read preference in the driver, the mongos will automatically hedge reads to the two closest nodes, as measured by network latency.
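
From the shell, that might look like this sketch (hypothetical collection and filter). With nearest, hedging is on by default; for other non-primary read preferences, the 4.4 shell's cursor.readPref() accepts a hedge options document as its third argument:

```javascript
// Hedged automatically under the "nearest" read preference (4.4+).
db.orders.find({ customer_id: 1234 }).readPref("nearest")

// Explicitly enable hedging for another non-primary read preference.
db.orders.find({ customer_id: 1234 })
  .readPref("secondaryPreferred", null, { enabled: true })
```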

As a result of Hedged Reads, your users get higher and more predictable performance by reducing the possibility of long-tail query latency outliers caused by the degradation of a single node.

Additional Scalability and Performance Enhancements in MongoDB 4.4

Streaming Replication

Oplog messages are now continuously streamed from the primary to secondary nodes, improving the freshness of secondary reads and reducing latency for applications using the majority write concern.

Prior to MongoDB 4.4, secondary replicas would poll the primary, receiving new oplog messages in 16MB batches. The secondary would then have to send a getMore command and wait on a network round-trip before receiving the next batch of messages.

While “mileage may vary” based on your specific environment, internal tests of streaming replication have measured performance improvements of up to 50% on high-load, high-latency networks using w:majority writes.
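
The writes that benefit are those waiting on majority acknowledgment, as in this sketch (hypothetical collection and document):

```javascript
// Returns once a majority of replica set members have the write; faster
// oplog delivery to secondaries shortens that wait.
db.orders.insertOne(
  { customer_id: 1234, total: 99.95 },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)
```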

Hidden Indexes

To make it faster and more efficient to tune performance as your application evolves, you can now hide an index from the query planner to evaluate the impact of removing it, without actually dropping the index.

If you later determine the index is required, then you simply unhide it, avoiding the expense of a full index rebuild. The hidden index is fully maintained alongside all other indexes on your collection, and can be hidden and unhidden on-demand.
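
The workflow in mongosh is a pair of helpers, sketched here with a hypothetical collection and index name:

```javascript
// Hide an existing index from the planner to test life without it (4.4+).
db.orders.hideIndex("customer_id_1")

// ...observe query plans and performance; the index is still maintained...

// Restore it instantly, with no rebuild.
db.orders.unhideIndex("customer_id_1")

// New indexes can also be created hidden, then exposed once validated.
db.orders.createIndex({ status: 1 }, { hidden: true })
```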

Simultaneous Indexing

New indexes are now built simultaneously on the primary and all secondaries, and become available once all eligible nodes in the replica set have successfully created the index. This new approach reduces the chance of replication lag, ensuring queries targeting secondaries are served fresher data. Note that a maximum of 3 concurrent index builds is supported with this new method.
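
The new commitQuorum argument to createIndex controls how many data-bearing voting members must be ready before the build commits; the default is "votingMembers". A sketch with a hypothetical collection:

```javascript
// The build starts on all replica set members simultaneously and commits
// once a majority of voting members have finished (commitQuorum, new in 4.4).
db.orders.createIndex(
  { order_date: 1 },
  { name: "order_date_idx" },
  "majority"
)
```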

Getting Started with MongoDB 4.4

A good way to learn more about all of the features and enhancements in our latest release is to watch the on-demand What’s New in MongoDB 4.4 webinar.

Alternatively, if you want to dive straight in, there are multiple ways you can get going:

  • Spin up 4.4 in the cloud using the fully-managed and global MongoDB Atlas database service.
  • Or download 4.4 and run it on your own infrastructure (select 4.4.x under Version).
  • Review the documentation in the 4.4 Release Notes.

We have one more post to go in this series. Next up I’ll cover resilience and security.