
Building a Real-Time, Dynamic Seller Dashboard on MongoDB

Karthic Subramanian, Katya Kamenieva · 7 min read · Published Dec 21, 2022 · Updated Apr 02, 2024
One of the key aspects of being a successful merchant is knowing your market. Understanding your top-selling products, trending SKUs, and top customer locations helps you plan, market, and sell effectively. As a marketplace, providing this visibility and these insights to your sellers is crucial. For example, SHOPLINE has helped over 350,000 merchants reach more than 680 million customers via e-commerce, social commerce, and offline point-of-sale (POS) transactions. With key features such as inventory and sales management tools and data analytics, merchants have everything they need to build a successful online store.
In this article, we are going to look at how a single query on MongoDB can power a real-time view of top-selling products, with a deep dive into the top-selling regions for each.
Real-time view of top 5 products sold

Status Quo: stale data

In the relational world, such a dashboard would require multiple joins across at least four distinct tables: seller details, product details, channel details, and transaction details.
Entity relationship diagram showing the 4 different tables to be joined for a merchant dashboard view
This increases complexity, data latency, and costs for providing insights on real-time, operational data. Often, organizations pre-compute these tables with up to a 24-hour lag to ensure a better user experience.

How can MongoDB help deliver real-time insights?

With MongoDB's Query API, we can deliver such dashboards in near real-time, working directly on operational data. The required information for each sales transaction can be stored in a single collection.
Each document would look as follows:
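Below is a representative document, sketched here as a Python dict based on the schema of the sample_supplies.sales collection (the values are illustrative, and prices, stored as Decimal128 in the real collection, are shown as floats for brevity):

```python
import datetime

sale = {
    "saleDate": datetime.datetime(2017, 3, 23, 21, 6, 49),
    "items": [
        {
            "name": "printer paper",
            "tags": ["office", "stationary"],
            "price": 40.01,   # Decimal128 in the real collection
            "quantity": 2,
        },
        {
            "name": "envelopes",
            "tags": ["stationary", "office", "general"],
            "price": 19.95,
            "quantity": 8,
        },
    ],
    "storeLocation": "Denver",
    "customer": {
        "gender": "M",
        "age": 42,
        "email": "john.doe@example.com",  # placeholder address
        "satisfaction": 4,
    },
    "couponUsed": True,
    "purchaseMethod": "Online",
}
```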
This specific document is from the “sales” collection within the “sample_supplies” database, available as sample data when you create an Atlas cluster. Start free on Atlas and try out this exercise yourself. MongoDB's flexible schema and versioning make it simple to add a “seller” field to this document, similar to the customer field, and manage it in your application. From a data modeling perspective, the polymorphic pattern is ideal for our current use case.

Desired output

In order to build a dashboard showcasing the top five products sold over a specific period, we would want to transform the documents into the following sorted array:
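An illustrative shape for that sorted array, using product names from the sample data (the volumes are hypothetical):

```python
# Desired dashboard input: products sorted by total quantity sold, descending.
top_products = [
    {"_id": "envelopes", "total_volume": 1298},
    {"_id": "printer paper", "total_volume": 1101},
    {"_id": "notepad", "total_volume": 1010},
    {"_id": "pens", "total_volume": 927},
    {"_id": "binder", "total_volume": 824},
]
```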
With just the “_id” and “total_volume” fields, we can build a chart of the top five products. To deliver an improved seller experience, the same single query can also power a deep-dive chart showing the top five locations for each product and the quantity sold at each.
The output for each item would look like this:
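Sketched with hypothetical numbers, one item in the result could look like this (the top_regions keys are store locations from the sample data):

```python
# One element of the final result: total volume for the product,
# plus its top five regions as a {region: quantity} embedded document.
item = {
    "_id": "envelopes",
    "total_volume": 1298,
    "top_regions": {
        "Denver": 340,
        "Seattle": 310,
        "London": 280,
        "Austin": 200,
        "New York": 130,
    },
}
```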
With the Query API, this transformation can be done in real time in the database with a single query. In this example, we go a bit further and build another transformation on top to improve the user experience. In fact, on our Atlas developer data platform, this becomes significantly easier when you leverage Atlas Charts.

Getting started

  1. Set up your Atlas Cluster and load sample data “sample_supplies.”
  2. Connect to your Atlas cluster through Compass or open the Data Explorer tab on Atlas.
In this example, we can use the aggregation builder in Compass to build the following pipeline.
(Tip: Click “Create new pipeline from text” to copy the code below and easily play with the pipeline.)
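Here is a sketch of that pipeline, written as Python dicts in pymongo style (field names follow the sample_supplies.sales schema; the 2017 date range is illustrative, and in Compass you would paste the equivalent shell syntax):

```python
from datetime import datetime

pipeline = [
    # 1. Filter to the reporting window (parametrize these dates as needed).
    {"$match": {"saleDate": {"$gte": datetime(2017, 1, 1),
                             "$lt": datetime(2018, 1, 1)}}},
    # 2. Unwind the embedded items array so each line item is its own document.
    {"$unwind": "$items"},
    # 3. Total quantity sold per (item, region) pair.
    {"$group": {"_id": {"item": "$items.name", "region": "$storeLocation"},
                "quantity": {"$sum": "$items.quantity"}}},
    # 4. Move quantity inside _id, then promote _id to the document root.
    {"$addFields": {"_id.quantity": "$quantity"}},
    {"$replaceRoot": {"newRoot": "$_id"}},
    # 5. Per item: overall volume plus the top five regions by quantity sold.
    {"$group": {"_id": "$item",
                "total_volume": {"$sum": "$quantity"},
                "top_regions": {"$topN": {"n": 5,
                                          "sortBy": {"quantity": -1},
                                          "output": ["$region", "$quantity"]}}}},
    # 6. Keep only the five best-selling items.
    {"$sort": {"total_volume": -1}},
    {"$limit": 5},
    # 7. Optional: reshape top_regions from [[region, qty], ...] to {region: qty}.
    {"$set": {"top_regions": {"$arrayToObject": "$top_regions"}}},
]
```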

Aggregations with the Query API

Keep scrolling to see the following code examples in Python, Java, and JavaScript.
This short but powerful pipeline processes our data through the following stages:
  • First, it filters our data down to the specific subset we need: in this case, sales transactions from the specified date range. It’s worth noting that you can parametrize the inputs to the $match stage to filter dynamically based on user choices.
Note: Beginning our pipeline with this filter stage significantly improves processing times. With the right index, this entire operation can be extremely fast and reduce the number of documents to be processed in subsequent stages.
  • To fully leverage the polymorphic pattern and the document model, we store the items bought in each order as an embedded array. The second stage unwinds this array so the pipeline can work with each element individually. We then group the unwound documents by item and region and use $sum to calculate the total quantity sold.
  • Ideally, at this stage we would want our documents to have three data points: the item, the region, and the quantity sold. However, at the end of the previous stage, the item and region are in an embedded object, while quantity is a separate field. We use $addFields to move quantity within the embedded object, and then use $replaceRoot to use this embedded _id document as the source document for further stages. This quick maneuver gives us the transformed data we need as a single document.
  • Next, we group the items as per the view we want on our dashboard. In this example, we want the total volume of each product sold, and to make our dashboard more insightful, we could also get the top five regions for each of these products. We use $group for this with two operators within it:
    • $sum to calculate the total quantity sold.
    • $topN to create a new array of the top five regions for each product and the quantity sold at each location.
  • Now that we have the data transformed the way we want, we use a $sort and $limit to find the top five items.
  • Finally, we use $set to convert the array of the top five regions per item to an embedded document with the format {region: quantity}, making it easier to work with objects in code. This is an optional step.
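To make the indexing note above concrete, the supporting index for the $match stage is simply an ascending key on saleDate, shown here as a pymongo index specification (creating it requires a live connection, e.g. db.sales.create_index(sale_date_index)):

```python
# Ascending index on saleDate: with this in place, the $match stage
# becomes an index scan instead of a full collection scan.
sale_date_index = [("saleDate", 1)]
```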
Note: The $topN operator was introduced in MongoDB 5.2. To test this pipeline on Atlas, you would need an M10 or higher cluster. Alternatively, you can download the MongoDB Community version and test through Compass on your local machine.

What would you build?

Adding visibility into the top five products and the top-selling regions is just one part of the dashboard; by leveraging MongoDB and the Query API, we can deliver near real-time visibility into live operational data.
In this article, we saw how to build a single query that can power multiple charts on a seller dashboard. What would you build into your dashboard views? Join our vibrant community forums to discuss more.
For reference, here’s what the code blocks look like in other languages.
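The article originally closes with the same pipeline expressed in Python, Java, and JavaScript. As a sketch of the Python version, here is a minimal pymongo wrapper that builds the pipeline from the stages described above and runs it (the function and parameter names are our own, and the URI is a placeholder):

```python
from datetime import datetime


def top_products_pipeline(start: datetime, end: datetime, n: int = 5) -> list:
    """Aggregation pipeline: top-n products sold in [start, end),
    each with its top-n regions by quantity."""
    return [
        {"$match": {"saleDate": {"$gte": start, "$lt": end}}},
        {"$unwind": "$items"},
        {"$group": {"_id": {"item": "$items.name", "region": "$storeLocation"},
                    "quantity": {"$sum": "$items.quantity"}}},
        {"$addFields": {"_id.quantity": "$quantity"}},
        {"$replaceRoot": {"newRoot": "$_id"}},
        {"$group": {"_id": "$item",
                    "total_volume": {"$sum": "$quantity"},
                    "top_regions": {"$topN": {"n": n,
                                              "sortBy": {"quantity": -1},
                                              "output": ["$region", "$quantity"]}}}},
        {"$sort": {"total_volume": -1}},
        {"$limit": n},
        {"$set": {"top_regions": {"$arrayToObject": "$top_regions"}}},
    ]


def top_products(uri: str, start: datetime, end: datetime) -> list:
    """Run the report against sample_supplies.sales on the given cluster."""
    from pymongo import MongoClient  # requires pymongo installed

    sales = MongoClient(uri)["sample_supplies"]["sales"]
    return list(sales.aggregate(top_products_pipeline(start, end)))
```

A dashboard backend would call top_products with your Atlas connection string and the seller-selected date range.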
