Genevieve Broadhead


Break Down Silos with a Data Mesh Approach to Omnichannel Retail

Omnichannel experiences are increasingly important for customers, yet still hard for many retailers to deliver. In this article, we’ll cover an approach to unlock data from legacy silos and make it easy to operate across the enterprise — perfect for implementing an omnichannel strategy.

Establishing an omnichannel retail strategy

An omnichannel strategy connects multiple, siloed sales channels (web, app, store, phone, etc.) into one cohesive and consistent experience. This strategy allows customers to purchase through multiple channels with a consistent experience (Figure 1). Most established retailers started with a single point of sale or “channel” — the first store — then moved to multiple stores and introduced new channels like ecommerce, mobile, and B2B. Omnichannel is the next wave in this journey, offering customers the ability to start a journey on one channel and end it on another.

Figure 1: Omnichannel experience examples.

Why are retailers taking this approach?

In a super-competitive industry, an omnichannel approach lets retailers maximize great customer experience, with a subsequent effect on spend and retention. Looking at recent stats, Omnisend found that purchase frequency is 250% higher on omnichannel, and Harvard Business Review’s research saw omnichannel customers spend 10% more online and 4% more in-store.

Omnichannel: What's the challenge?

So, if all retailers want to provide these capabilities to their customers, why aren’t they? The answer lies in the complex, siloed data architectures that underpin their application architecture. Established retailers who have built up their business over time traditionally incorporated multiple off-the-shelf products (e.g., ERP, PIMS, CMS, etc.) running on legacy data technologies into their stack (mainframe, RDBMS, file-based). With this approach, each category of data is stored in a different technology, platform, and rigid format — making it impossible to combine this data to serve omnichannel use cases (e.g., in-store stock + ecommerce to offer same-day click and collect). See Figure 2.

Figure 2: Data sources for omnichannel.

The next challenge is the separation of operational and historical data — older data is moved to archives, data lakes, or warehouses. Perhaps you can see today’s stock in real time, but you can’t compare it to stock on the same day last year because that is held in a different system. Any business comparison occurs after the fact. To meet the varied volume and variety of requests, retailers must extract, transform, and load (ETL) data into different databases, creating a complex, disjointed web of duplicated data. Figure 3 shows a typical retailer architecture: a document database for key-value lookup, a cache added for speed, wide column storage for analytics, graph databases to look up three degrees of separation, time series to track changes over time, etc.

Figure 3: An example of a typical data architecture sprawl in modern retailers.

The problem is that ETL’d data becomes stale as it moves between technologies, lagging behind real time and losing context. This sprawl of technology is complex to manage and difficult to develop against — inhibiting retailers from moving quickly and adapting to new requirements. If retailers want to create experiences that can be used by consumers in real time — operational or analytical — this architecture does not give them what they need. Additionally, if they want to use AI or machine learning models, they need access to current behavior for accuracy.
Thus, the obstacle to delivering omnichannel experiences is a data problem that requires a data solution. Let's look at a smart approach to fixing it.

Modern retailers are taking a data mesh approach

Retail architectures have gone through many iterations, starting from vendor solutions per use case, moving toward a microservices approach, and landing on domain-driven design (Figure 4).

Vendor Applications
* Each vendor decides the framework and governance of the data layer. The enterprise has no control over the app or data.
* Data is not interoperable between components.

Microservices
* Microservices pull data from the API layer.
* DevOps teams control their microservices, but data is managed by a centralized enterprise team.

Domain-Driven Design
* Microservices and core datasets are combined into bounded contexts by business function.
* DevOps teams control microservices AND data.

Figure 4: Architecture evolution.

Domain-driven design has emerged through an understanding that the team with domain expertise should have control over the application layer and its associated data — this is the “bounded context” for their business function. This means they can change the data to innovate quickly, without reliance on another team. Of course, if data remains in its bounded context only, we end up with the same situation as the commercial off-the-shelf (COTS) and legacy architecture model. Where we see value is when the data in each domain can be used as a product throughout the organization.

Data as a product is a core data mesh concept — it includes data, metadata, and the code and infrastructure to use it. Data as a product is expected to be discoverable (searchable), addressable, self-identifying, and interoperable (Figure 5). In a retail example, the product, customer, and store can be thought of as bounded contexts. The product bounded context contains the product data and the microservices/applications that are built for product use cases. But, for a cross-domain use case like personalized product recommendations, the data from both customer and product domains must be available “as a product.”

Figure 5: Bounded contexts and data as a product.

What we’re creating here is a data mesh — an enterprise data architecture that combines intentionally distributed data across distinctly defined, bounded contexts. It is a business domain-oriented, decentralized data ownership and architecture, where each domain makes its data available as an interoperable “data product.” The key is that the data layer must serve all real-time workloads that are required of the business — both operational and real-time analytical (Figure 6).

Figure 6: Data mesh.

Why use MongoDB for omnichannel data mesh

Let’s look at the data layer requirements for a data mesh move to be successful and how MongoDB can meet them.

Capable of handling all operational workloads (a short query sketch follows this list):
* An expressive query language, with support for joins, ACID transactions, and time series (IoT) collections, makes it a great fit for multiple workloads.
* MongoDB is known for its performance and speed, and the ability to use secondary indexes means that several workloads can run performantly.
* Search is key for retail applications — MongoDB Atlas has the Lucene search engine built in for full-text search with no data movement.
* Omnichannel experiences often involve mobile interaction. MongoDB Realm and Flexible Device Sync can seamlessly ensure consistency between mobile and backend.
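Here is the short query sketch referenced above. It is a minimal illustration rather than code from the article: it assumes hypothetical products and store_stock collections in a retail database and uses a single PyMongo aggregation with $lookup to join ecommerce product data with per-store stock, the kind of operational query behind a same-day click-and-collect check. A compound secondary index keeps the join performant.

```python
from pymongo import MongoClient

# Hypothetical connection string, database, and collection names for illustration.
client = MongoClient("mongodb://localhost:27017")
db = client["retail"]

# A compound secondary index keeps the per-store stock lookup fast.
db.store_stock.create_index([("sku", 1), ("quantity", 1)])

# Which stores can fulfil a same-day click-and-collect order for this SKU?
pipeline = [
    {"$match": {"sku": "SKU-12345"}},
    {
        "$lookup": {                      # join the product domain with store stock
            "from": "store_stock",
            "localField": "sku",
            "foreignField": "sku",
            "as": "availability",
        }
    },
    {"$unwind": "$availability"},
    {"$match": {"availability.quantity": {"$gt": 0}}},
    {
        "$project": {
            "_id": 0,
            "sku": 1,
            "name": 1,
            "store": "$availability.store_id",
            "in_stock": "$availability.quantity",
        }
    },
]

for offer in db.products.aggregate(pipeline):
    print(offer)
```

In a data mesh, the stock data would typically live in the store domain's bounded context and be consumed as a data product, as discussed below.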
Capable of handling analytical workloads:
* MongoDB’s distributed architecture means analytical workloads can run on a real-time data set, without ETL or additional technology and without disturbing operational workloads.
* For real-time analytical use cases, the aggregation framework can be used to perform powerful data transformations and run ad hoc exploratory queries.
* For business intelligence or reporting workloads, data can be queried via Atlas SQL or piped through the BI Connector to other data tools (e.g., Tableau and Power BI).

Capable of serving data as a product: Data is most often served as a product via API.
* MongoDB’s BSON-based document model maps well to JSON-based API payloads for speed and ease.
* MongoDB Atlas provides both the Data API and the GraphQL API, fully hosted.
* Depending on the performance needed, direct access may also be required. MongoDB has drivers for all common programming languages, meaning that other teams using different languages can easily interact with it. Rules for access must of course be defined, and one option is to use MongoDB App Services.
* Real-time data can also be published to Apache Kafka topics using the MongoDB Kafka Connector, which can act as both a sink and a source for data. For example, one bounded context could publish data in real time to a named Kafka topic, allowing another context to consume it and store it locally to serve latency-sensitive use cases.
* The tunable schema allows for flexibility in non-product fields, while schema validation capabilities enforce specific fields and data types in a collection to provide consistent datasets (a minimal validation sketch appears at the end of this article).

Resilient, secure, and scalable:
* MongoDB Atlas has a 99.995% uptime guarantee and provides auto-healing capability, with multi-region and multi-cloud resiliency options.
* MongoDB provides the ability to scale up or down to meet your application requirements — vertically and horizontally.
* MongoDB follows a best-in-class security protocol.

Choose the flexible data mesh approach

Providing customers with omnichannel experiences isn’t easy, especially with legacy siloed data architectures. Omnichannel requires a way of making your data work easily across the organization in real time, giving access to data to those who need it while also giving the power to innovate to the domain experts in each field. A data mesh approach provides the capability and flexibility to continuously innovate.

Ready to build deeper business insights with in-app analytics and real-time business visibility? Read our new white paper: Application-Driven Analytics: In-App and Real-Time Insights for Retailers.
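As referenced in the data-as-a-product list above, here is a minimal schema-validation sketch. It is illustrative only (the connection string, database, collection, and field names are hypothetical, not from the article) and shows how a product domain team could enforce the fields every consumer of its data product can rely on, while leaving other fields flexible.

```python
from pymongo import MongoClient
from pymongo.errors import CollectionInvalid

client = MongoClient("mongodb://localhost:27017")
db = client["product_domain"]

# Contract for the "products" data product: these fields must always be present
# with the expected types; any additional fields remain flexible.
product_validator = {
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["sku", "name", "price", "updated_at"],
        "properties": {
            "sku": {"bsonType": "string"},
            "name": {"bsonType": "string"},
            "price": {"bsonType": "decimal"},
            "updated_at": {"bsonType": "date"},
        },
    }
}

try:
    # Create the collection with the validator attached.
    db.create_collection("products", validator=product_validator)
except CollectionInvalid:
    # Collection already exists: update its validator in place.
    db.command("collMod", "products", validator=product_validator)
```

Consumers in other bounded contexts can then build against the required fields with confidence, while the owning team stays free to evolve everything else.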

January 10, 2023

MACH Aligned for Retail: Headless

The MACH Alliance is a non-profit organization fostering the adoption of composable architecture principles, namely Microservices, API-First, Cloud-Native SaaS, and Headless. MongoDB, among many other technology companies, is a member of this Alliance, enabling developers to adopt these principles in their applications. In this article, we’ll focus on the fourth principle championed by the MACH Alliance: Headless. Let’s dive in.

What is headless?

A headless architecture is one where the layers or components of the architecture are decoupled. The “heads” (i.e., frontends) operate independently from the backend logic or “core body” microservices and share data via API. This concept is key to a successful shift toward microservices — without decoupling the architectural layers, you’re running on a modern monolith. Looser coupling also increases frontend flexibility and the pace of change, promotes reusability of core features, and reduces downtime because there is no single point of failure.

Headless applied to retail

Retail was one of the first industries to embrace headless architectures, with the term coined in 2012 by Dirk Hoerig, founder of commercetools. These concepts were originally applied to building modern ecommerce solutions and are now being expanded to any application in the IT stack. In this model, the head can be an ecommerce web frontend, a mobile app, or an internal frontend system for stock management. The core body components support the heads (Figure 1). They can be a payment system, a checkout solution, a product catalog, or a warehouse management application.

Figure 1: The “head” and “core body” components, sharing data via APIs.

Customers and their experiences are at the heart of retail. Adopting headless principles can greatly help companies meet rapidly changing customer requirements and stand out from the competition. Customers require a seamless journey between mobile, web applications, and in-store, with data and logic consistent across channels. New channels might also need to be added, such as integration with social media, to reach a younger customer base. Retailers might need to sell in multiple regions or across product lines, requiring them to adopt multiple frontends to serve different customer groups without having to rewrite or duplicate the whole IT stack. New features might need to be added quickly to reflect competitors’ moves, without tracing changes back through every component of the stack or experiencing downtime. Internal workforce systems can follow similar principles.

The common denominators of these example use cases are speed of change and frontend flexibility, avoiding downtime, and reusability of the backend components. Headless solutions enable developers to avoid duplicating effort by reusing the core capabilities of applications and adapting them to various target systems and use cases. Those principles save developers’ time and can be leveraged to provide a seamless experience to customers, as the underlying data layer and workflows are shared across multiple services offering similar functionalities. Headless architectures also come with the following advantages.

Bring new features to market faster

New features and MVPs can be introduced with minimal impact on other application components.
Release cycles can be managed efficiently via a microservice architecture relying on different squads, and new releases can be pushed to production when ready, independently of the work of other squads. For example, a retailer can expand into a new country quickly by developing a country-specific frontend that reuses existing core components and requires no backend downtime.

Scale to meet seasonal demand

Companies can independently scale application components where and when required. For example, increased user traffic might require more resources to support frontend components, leaving the backend untouched, and vice versa. In an ecommerce scenario, this can take the form of expected deviations from a seasonality standpoint (e.g., end-of-month transactions following salary distribution, holiday shopping) or unplanned variations (e.g., influencer marketing). Thus, this model can result in:
* Cost savings: A headless architecture running in the cloud lets you take full advantage of the pay-as-you-go model, paying only for the infrastructure required by each frontend or backend component.
* Improved customer experience: Develop highly available and responsive applications so that customer experience is never limited by the computing resources behind it.

Leverage best-of-breed technologies

Headless architectures can help companies gain greater flexibility in deploying and managing the IT stack, allowing them to:
* Focus on value-add development: A composable headless architecture enables companies to choose to build or buy individual components in the stack. Because the components are decoupled, the stack is easier to unpick than if it were fully integrated, as the APIs can be redirected to the new solution more easily. This approach lets companies put their development activity into value-added functionality should a best-of-breed vendor solution arrive on the market delivering core functionality.
* Avoid vendor lock-in: Decoupling also allows for more seamless technology switches should companies decide to bring development back in-house or change vendors.
* Improve talent acquisition and retention: Deploying in a flexible and composable manner lets development teams choose the programming languages and tools they feel best match the requirements, allowing companies to attract and retain top talent.

Less downtime with faster troubleshooting

A headless architecture also makes it easier to pinpoint which single layer or component is the root cause of an issue, as opposed to troubleshooting in monolithic applications where dependencies can be difficult to map. Fewer dependencies mean less downtime; when a change or failure occurs in one component, it doesn’t affect the whole stack. For ecommerce retailers, any downtime can have a direct impact on revenue, so an architecture that supports a move towards 24/7 uptime is ideal. Removing data silos and sharing data across multiple journeys also enables companies to implement truly omnichannel experiences and leverage the datasets for other downstream processes, such as user personalization and analytics.

Learn how Boots is using MongoDB Atlas to standardize their infrastructure via an API- and microservice-driven approach.

How can MongoDB help?

Headless architectures require a strong data layer to reap all the above-mentioned benefits. MongoDB includes several key features that enable developers to speed up the delivery of new features and bug fixes, scale with minimal effort, and leverage APIs to share data with the different components of the stack.
Deliver faster with no downtime

MongoDB provides a flexible document model that easily adapts to the needs of different microservices and supports adding new features and data fields without having to rethink the underlying data schema or experience downtime. Let’s consider a product catalog microservice that uses a particular API to read data from certain fields. A second microservice can be developed requiring the same set of fields as the first, along with a few new ones, connecting via a new API. MongoDB allows the change to be made with no downtime for the product catalog microservice and its related API (a short sketch at the end of this article illustrates this).

Scale effortlessly

Adding new features and services will likely require scaling the data layer to cater to higher storage and workload demands. MongoDB, through its sharding capabilities, enables a distributed architecture by horizontally scaling the data layer and distributing data across multiple servers. This approach can provide better efficiency than a single high-speed, high-capacity server (vertical scaling) for building highly responsive retail solutions.

Support composable architectures

MongoDB also possesses strong API capabilities to support a microservice-based backend architecture and make data accessible and shareable across components (Figure 2). These capabilities include APIs and drivers supporting a dozen programming languages, such as C, Python, Node.js, and Scala. The MongoDB Unified Query API allows working with data of any type, including time series, arrays, and geospatial data. MongoDB Atlas, MongoDB’s developer data platform, comes with the Atlas Data API, which lets you programmatically create, read, update, and delete data stored on Atlas clusters through standard HTTPS requests. The Atlas GraphQL API allows fine-tuning of API requests by returning only the required data (e.g., information about a particular customer or product).

Figure 2: MongoDB supports a headless architecture via APIs.

Data availability and resiliency should also be considered when adopting headless architectures. MongoDB Atlas clusters are highly available and backed by an industry-leading uptime SLA of 99.995% across all cloud providers. If a primary node becomes unavailable, MongoDB Atlas will automatically fail over in seconds. Clusters can also be deployed across multiple cloud regions to weather the unlikely event of a total region outage, or even across multiple cloud platforms.

Summary

Adopting a headless architecture is paramount for retailers wanting to enhance customer experience and build more resilient applications. MongoDB, with its leading database offering, API layer, and high availability, is strongly suited to meet the requirements of modern applications.

Read our previous blog posts in the MACH series covering Microservices, API-First, and Cloud-Native SaaS.
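Here is the short sketch referenced under "Deliver faster with no downtime." It is a hypothetical illustration (the connection string, collection, and field names are made up, not from the article) of a newer microservice adding fields to a shared catalog collection while the original product catalog service keeps reading only the fields it already knows, with no schema migration and no downtime.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical cluster and collection shared by two microservices.
client = MongoClient("mongodb://localhost:27017")
catalog = client["ecommerce"]["products"]

# The original product catalog microservice writes these fields...
catalog.insert_one({
    "sku": "SKU-12345",
    "name": "Trail running shoe",
    "price": 89.99,
})

# ...while a newer microservice adds extra fields (reviews, stock check time)
# without a schema migration or downtime for the existing service.
catalog.insert_one({
    "sku": "SKU-67890",
    "name": "Waterproof jacket",
    "price": 129.99,
    "reviews": [{"rating": 5, "comment": "Great fit"}],
    "last_stock_check": datetime.now(timezone.utc),
})

# The original service keeps reading only the fields it already understands.
for doc in catalog.find({}, {"_id": 0, "sku": 1, "name": 1, "price": 1}):
    print(doc)
```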

November 30, 2022

MACH Aligned for Retail: Cloud-Native SaaS

MongoDB is an active member of the MACH Alliance, a non-profit group of technology companies fostering the adoption of composable architecture principles that promote agility and innovation. Each letter in the MACH acronym corresponds to a different concept that should be leveraged when modernizing heritage solutions and creating brand-new experiences: MACH stands for Microservices, API-first, Cloud-native SaaS, and Headless. In previous articles in this series, we explored the importance of Microservices and the API-first approach. Here, we will focus on the third principle championed by the alliance: Cloud-native SaaS. Let’s dive in.

What is cloud-native SaaS?

Cloud-native SaaS solutions are vendor-managed applications developed in and for the cloud, leveraging all the capabilities the cloud has to offer, such as fully managed hosting, built-in security, auto-scaling, cross-regional deployment, automatic updates, built-in analytics, and more.

Why is cloud-native SaaS important for retail?

Retailers are pressed to transform their digital offerings to meet rapidly shifting consumer needs and remain competitive. Traditionally, this means establishing areas of improvement for your systems and instructing your development teams to refactor components to introduce new capabilities (e.g., analytics engines for personalization or mobile app support) or to streamline architectures to make them easier to maintain (e.g., moving from monolith to microservices). These approaches can yield good results but require a substantial investment in time, budget, and internal technical knowledge.

Now, retailers have an alternative tool at their disposal: cloud-native SaaS applications. These solutions are readily available off the shelf and require minimal configuration and development effort. Adopting them as part of your technology stack can accelerate the transformation and time to market of new features, while not requiring specific in-house technical expertise. Many cloud-native SaaS solutions focused on retail use cases are available (see Figure 1), including Vue Storefront, which provides a front-end presentation layer for ecommerce, and Amplience, which enables retailers to customize their digital experiences.

Figure 1: Some MACH Alliance members providing retail solutions.

At the same time, in-house development should not be totally discarded, and you should aim to strike the right balance between the two options based on your objectives. Figure 2 shows the pros and cons of the two approaches.

Figure 2: Pros and cons of cloud-native SaaS and in-house approaches.

MongoDB is a great fit for cloud-native SaaS applications

MongoDB’s product suite is cloud-native by design and is a great fit if your organization is adopting this principle, whether you prefer to run your database on-premises, leveraging MongoDB Community and Enterprise Advanced, or as SaaS with MongoDB Atlas. MongoDB Atlas, our developer data platform, is particularly suitable in this context. It supports the three major cloud providers (AWS, GCP, Azure) and leverages the cloud platforms’ features to achieve cloud-native principles and design:
* Auto-deployment & auto-healing: Database clusters are provisioned, set up, and healed automatically, reducing operational and DBA effort.
* Automatically scalable: Built-in auto-scaling capabilities enable the database RAM, CPU, and storage to scale up or down depending on traffic and data volume.
  A MongoDB Serverless instance allows you to abstract the infrastructure even further, paying only for the resources you need.
* Globally distributed: The global nature of the retail industry requires data to be efficiently distributed to ensure high availability and compliance with data privacy regulations, such as GDPR, while implementing strict privacy controls. MongoDB Atlas leverages the flexibility of the cloud with its replica set architecture and multi-cloud support, meaning that data can be easily distributed to meet complex requirements (a small connection sketch at the end of this article shows one way to take advantage of this).
* Secure from the start: Network isolation, encryption, and granular auditing capabilities ensure data is only accessible to authorized individuals, thereby maintaining confidentiality.
* Always up to date: Security patches and minor upgrades are performed automatically, with no intervention required from your team. Major releases can be integrated effortlessly, without modifying the underlying OS or working with package files.
* Monitorable and reliable: MongoDB Atlas provides a set of utilities for real-time reporting of database activity, helping you monitor and improve slow queries, visualize data traffic, and more. Backups are also fully managed, ensuring data integrity.

Independent software vendors (ISVs) increasingly rely on capabilities like these to build cloud-native SaaS applications addressing retail use cases. For example, Commercetools offers a fully managed ecommerce platform underpinned by MongoDB Atlas (see Figure 3). Their end-to-end solution provides retailers with the tools to transform their ecommerce capabilities in a matter of days, instead of building a solution in-house. Commercetools is also a MACH Alliance member, fully embracing the composable architecture paradigms explored in this series. Adopting Commercetools as your ecommerce platform of choice lets you automatically scale your ecommerce as traffic increases, and it integrates with many third-party systems, ranging from payment platforms to front-end solutions. Additionally, its headless nature and strong API layer allow your front end to be adapted to your brands, currencies, and geographies.

Commercetools runs on and natively ingests data from MongoDB. Leveraging MongoDB for your other home-grown applications means that you can standardize your data estate while taking advantage of the many capabilities the MongoDB data platform has to offer. The same principles can be applied to other SaaS solutions running on MongoDB.

Figure 3: MongoDB Atlas and Commercetools capabilities.

Find out more about the MongoDB partnership with Commercetools. Learn how Commercetools enabled Audi to integrate its in-car commerce solution and adapt it to 26 countries.

MongoDB supports your home-grown applications

MongoDB offers a powerful developer data platform, providing the tools to leverage composable architecture patterns and build differentiating experiences in-house. The same benefits of MongoDB’s cloud-native architecture explored earlier also apply in this context and are leveraged by many retailers globally, such as Conrad Electronics, which runs its B2B ecommerce platform on MongoDB Atlas.

Summary

Cloud-native principles are an essential component of modern systems and applications. They support ISVs in developing powerful SaaS applications and can be leveraged to build proprietary systems in-house. In both scenarios, MongoDB is strongly positioned to deliver on the cloud-native capabilities that should be expected from a modern data platform.
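Here is the small connection sketch referenced in the "Globally distributed" bullet above. It is a hypothetical illustration (the connection string, database, and collection names are placeholders, not from the article) of a regional frontend using PyMongo to read from the nearest member of a multi-region Atlas cluster, so each region serves low-latency reads while writes still go to the primary.

```python
from pymongo import MongoClient, ReadPreference

# Placeholder Atlas connection string; a real multi-region cluster URI goes here.
client = MongoClient(
    "mongodb+srv://user:password@retail-cluster.example.mongodb.net/?retryWrites=true&w=majority"
)

# Route reads to the nearest replica set member, so a frontend deployed in
# Europe reads from a European node while writes are still sent to the primary.
products = client.get_database(
    "ecommerce", read_preference=ReadPreference.NEAREST
)["products"]

print(products.find_one({"sku": "SKU-12345"}))
```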
Stay tuned for our final blog of this series on Headless and check out our previous blogs on Microservices and API-first.

September 22, 2022