Retail Reference Architecture Part 3: Query Optimization and Scaling

Series:
- Building a Flexible, Searchable, Low-Latency Product Catalog
- Approaches to Inventory Optimization
- Query Optimization and Scaling
- Recommendations and Personalizations

In part one of this series on reference architectures for retail, we discussed how to use MongoDB as the persistence layer for a large product catalog. In part two, we covered the schema and data model for an inventory system. Today we'll cover how to query and update inventory, plus how to scale the system.

Inventory Updates and Aggregations

At the end of the day, a good inventory system needs to be more than just a system of record for retrieving static data. We also need to be able to perform operations against our inventory, including updates when inventory changes, and aggregations to get a complete view of what inventory is available and where.

The first of these operations, updating the inventory, is both straightforward and nearly as performant as a standard query, meaning our inventory system will be able to handle the high volume of updates we would expect to receive. To do this with MongoDB we simply retrieve an item by its 'productId', then execute an in-place update on the variant we want to update using the $inc operator:

db.inventory.update(
    { "storeId": "store100", "productId": "20034", "vars.sku": "sku11736" },
    { "$inc": { "vars.$.q": 20 } }
)

For aggregations of our inventory, the aggregation pipeline framework in MongoDB gives us many valuable views of our data beyond simple per-store inventory by allowing us to take a set of results and apply multi-stage transformations. For example, let's say we want to find out how much inventory we have for all variants of a product across all stores. To get this we could create an aggregation request like this:

db.inventory.aggregate([
    { $match: { productId: "20034" } },
    { $unwind: "$vars" },
    { $group: { _id: "$vars.sku", count: { $sum: "$vars.q" } } }
])

Here, we are retrieving the inventory for a specific product from all stores, then using the $unwind operator to expand our variants array into a set of documents, which are then grouped and summed. This gives us a total inventory count for each variant that looks like this:

{ "_id": "sku11736", "count": 101752 }

Alternatively, we could also have matched on 'storeId' rather than 'productId' to get the inventory of all variants for a particular store.

Thanks to the aggregation pipeline framework, we are able to apply many different operations on our inventory data to make it more consumable for things like reports, and to gain real insights into the information available. Pretty awesome, right? But wait, there's more!

Location-based Inventory Queries

So far we've primarily looked at what retailers can get out of our inventory system from a business perspective, such as tracking and updating inventory, and generating reports, but one of the most notable strengths of this setup is the ability to power valuable customer-facing features.

When we began architecting this part of our retail reference architecture, we knew that our inventory would need to do more than just give an accurate snapshot of inventory levels at any given time; it would also need to support the type of location-based querying that has become expected in consumer mobile and web apps. Luckily, this is not a problem for our inventory system. Since we decided to duplicate the geolocation data from our stores collection into our inventory collection, we can very easily retrieve inventory relative to user location.
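Before looking at those queries, it helps to picture the documents they run against. The sketch below shows roughly what an inventory document might look like, inferred from the fields used throughout this series; the field names and values are illustrative assumptions rather than the exact reference schema, and the 2dsphere index shown is an assumed setup step that the location-based queries below rely on:

// Rough shape of an inventory document, inferred from the queries in this
// series (illustrative only, not the exact reference schema)
{
    "storeId": "store100",
    "productId": "20034",
    "location": { "type": "Point", "coordinates": [-82.8006, 40.0908] },  // duplicated from the stores collection
    "vars": [
        { "sku": "sku11736", "q": 101 }  // one entry per variant, with its quantity on hand
    ]
}

// A 2dsphere index on the duplicated 'location' field is what supports the
// geoNear queries that follow (index creation shown here as an assumed step)
db.inventory.createIndex({ "location": "2dsphere" })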
Returning to the geoNear command that we used earlier to retrieve nearby stores, all we need to do is add a simple query to return real-time information to the consumer, such as the available inventory of a specific item at all the stores near them:

db.runCommand({
    geoNear: "inventory",
    near: { type: "Point", coordinates: [-82.8006, 40.0908] },
    maxDistance: 10000,
    spherical: true,
    limit: 10,
    query: { "productId": "20034", "vars.sku": "sku1736" }
})

Or the 10 closest stores that have the item they are looking for in stock:

db.runCommand({
    geoNear: "inventory",
    near: { type: "Point", coordinates: [-82.8006, 40.0908] },
    maxDistance: 10000,
    spherical: true,
    limit: 10,
    query: { "productId": "20034",
             "vars": { $elemMatch: { "sku": "sku1736",     // returns only documents with this sku in the vars array
                                     "q": { $gt: 0 } } } } // quantity greater than 0
})

Since we indexed the 'location' attribute of our inventory documents, these queries are also very performant, a necessity if the system is to support the type of high-volume traffic commonly generated by consumer apps, while also supporting all the transactions from our business use case.

Deployment Topology

At this point, it's time to celebrate, right? We've built a great, performant inventory system that supports a variety of queries, as well as updates and aggregations. All done! Not so fast. This inventory system has to support the needs of a large retailer. That means it has to not only be performant for local reads and writes, it must also support requests spread over a large geographic area. This brings us to the topic of deployment topology.

Datacenter Deployment

We chose to deploy our inventory system across three datacenters, one each in the west, central and east regions of the U.S. We then sharded our data based on the same regions, so that all stores within a given region would execute transactions against a single local shard, minimizing any latency across the wire. And lastly, to ensure that all transactions, even those against inventory in other regions, were executed against the local datacenter, we replicated all three shards in each datacenter.

Since we are using replication, there is the issue of eventual consistency should a user in one region need to retrieve data about inventory in another region, but assuming a good data connection between datacenters and low replication lag, the impact is minimal and worth the trade-off for the decrease in request latency, compared to making requests across regions.

Shard Key

Of course, when designing any sharded system we also need to carefully consider what shard key to use. In this case, we chose {storeId: 1, productId: 1} for two reasons. The first was that using the 'storeId' ensured all the inventory for each store was written to the same shard. The second was cardinality. Using 'storeId' alone would have been problematic, since even if we had hundreds of stores, we would be using a shard key with relatively low cardinality, a definite problem that could lead to an unbalanced cluster if we are dealing with an inventory of hundreds of millions or even billions of items. The solution was to also include 'productId' in our shard key, which gives us the cardinality we need, should our inventory grow to a size where multiple shards are needed per region.
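As a quick sketch of how that choice gets applied, the commands below shard the collection on the compound key. The database and collection names match the tag commands in the next section, but these setup commands are an assumption on our part rather than steps quoted from the series:

// Enable sharding on the database, then shard the inventory collection
// on the compound {storeId, productId} key discussed above
sh.enableSharding("inventoryDB")
sh.shardCollection("inventoryDB.inventory", { storeId: 1, productId: 1 })

With ranged sharding on this key, all documents for a given store sit in contiguous chunks, which is what makes the per-region tag ranges in the next section possible.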
Shard Tags

The last step in setting up our topology was ensuring that requests were sent to the appropriate shard in the local datacenter. To do this, we took advantage of tag-aware sharding in MongoDB, which associates a range of shard key values with a specific shard or group of shards. To start, we created a tag for the primary shard in each region:

- sh.addShardTag("shard001", "west")
- sh.addShardTag("shard002", "central")
- sh.addShardTag("shard003", "east")

Then assigned each of those tags to a range of stores in the same region:

- sh.addTagRange("inventoryDB.inventory", {storeId: 0}, {storeId: 100}, "west")
- sh.addTagRange("inventoryDB.inventory", {storeId: 100}, {storeId: 200}, "central")
- sh.addTagRange("inventoryDB.inventory", {storeId: 200}, {storeId: 300}, "east")

In a real-world situation, stores would probably not fall so neatly into ranges for each region, but since we can assign whichever stores we want to any given tag, down to the level of assigning a single storeId to a tag (as sketched below), we have the flexibility to accommodate our needs even if storeIds are more discontinuous. Here we are simply assigning by range for the sake of simplicity in our reference architecture.
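For instance, because tag ranges have an inclusive lower bound and an exclusive upper bound, a single store can be pinned to a region on its own. The storeId used here is a made-up example rather than one from the reference data:

// Pin only storeId 142 to the 'central' region; the range [142, 143)
// covers exactly one store (hypothetical storeId for illustration)
sh.addTagRange("inventoryDB.inventory", { storeId: 142 }, { storeId: 143 }, "central")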

Recap

Overall, the process of creating our inventory system was pretty simple, requiring relatively few steps to implement. More important than the finished system itself is the process we used to design it. In MongoDB, careful attention to schema design, indexing and sharding is critical for building and deploying a setup that meets your use case while ensuring low latency and high performance, and as you can see, MongoDB offers a lot of tools for accomplishing this.

Up next in the final part of our retail reference architecture series: scalable insights, including recommendations and personalization!

Learn More

To discover how you can re-imagine the retail experience with MongoDB, read our white paper. In this paper, you'll learn about the new retail challenges and how MongoDB addresses them. Learn more about how leading brands differentiate themselves with technologies and processes that enable the omni-channel retail experience. Read our guide on the digitally oriented consumer.

<< Read Part 2 | Read Part 4 >>

May 20, 2015