Francesco Baldissera


VertexAI and MongoDB for Intelligent Retail Pricing

In today's competitive retail environment, the ability to quickly adjust pricing in response to market trends, consumer demand, and competitors' moves is not just an advantage — it's essential for survival. This is where dynamic pricing comes into play: a strategic lever businesses can pull in their quest for market share. Dynamic pricing goes beyond changing numbers; it's a strategic approach that reflects the dynamic nature of the market, powered by data-driven insights that enable prices to be adjusted in real time for maximum effectiveness. This shift toward a more agile, data-driven pricing strategy underscores a broader trend in the business world: the recognition of data as a foundational element in decision-making processes. By leveraging real-time data, businesses can ensure their pricing strategies are not only responsive to market fluctuations but also aligned with their overall business objectives, driving retail competitiveness to new heights. Let's uncover how integrating both platforms empowers developers to deliver best-in-class, data-driven applications.

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Google Cloud: A platform for real-time analytics and AI

Google Cloud stands out as a powerhouse in real-time analytics and artificial intelligence (AI), offering the infrastructure necessary for dynamic pricing strategies and other data-driven business approaches. It is designed to facilitate big data analysis, machine learning, and operational agility. Its built-in tools form the backbone of an effective dynamic pricing strategy: Vertex AI for advanced machine learning models built with best-in-class MLOps practices, and Pub/Sub for real-time messaging and real-time data ingestion.
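To make the ingestion step concrete, here is a minimal sketch of publishing a customer-behavior event to a Pub/Sub topic with the Google Cloud Python client. The project ID, topic name, and event fields are illustrative placeholders, not part of the reference architecture.

```python
# Hedged sketch: publishing a customer-behavior event (JSON) to Pub/Sub.
# Project and topic names are illustrative placeholders.
import json

def encode_event(event: dict) -> bytes:
    """Pub/Sub message payloads are bytes; events are published as JSON."""
    return json.dumps(event, sort_keys=True).encode("utf-8")

def publish_event(project_id: str, topic_id: str, event: dict) -> str:
    from google.cloud import pubsub_v1  # deferred: requires GCP credentials

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    # publish() returns a future; result() blocks until the message is acked
    return publisher.publish(topic_path, data=encode_event(event)).result()
```

Because Pub/Sub decouples the publisher from its subscribers, the storefront emitting these events never needs to know which downstream services (pricing, analytics, training pipelines) consume them.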
By harnessing the power of Google Cloud, retailers can analyze vast quantities of data in real time, from current market trends to customer behavior and competitor pricing. This enables businesses to make informed decisions swiftly, adjusting their pricing strategies to reflect ever-changing market conditions.

MongoDB: Flexible data modeling and rapid application development

MongoDB complements Google Cloud with a high-performance, document-based database whose flexible data model enables rapid application development. For pricing data in particular, where variants may exist for different store sizes or countries, this flexibility makes it easy to store complex or hierarchical data. In addition, polymorphic capabilities let you use a single interface to represent different types, making your system more flexible. This also supports scalability, since new types can be integrated easily, and it improves efficiency by allowing the same operation to behave differently depending on the object, reducing code redundancy. The flexible schema also enables seamless integration with AI models. MongoDB Atlas supports workload isolation, ensuring dedicated resources for AI tasks and smooth operation alongside core application workloads. Additionally, change streams and triggers can capture real-time updates to pricing data, allowing the AI model to be called for immediate analysis and adaptation, and enabling in-app analytics that give retailers a competitive edge.

Figure 1: MongoDB replica set: Workload isolation

In the dynamic pricing reference architecture, Atlas collections function as an ML feature store. By leveraging the capabilities of MongoDB Atlas as a developer data platform, we can embed real-time, automated decision-making into our e-commerce applications and reduce operational overhead for both business operations and MLOps model fine-tuning.
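The change streams mentioned above can be consumed directly from a driver. The following sketch watches a pricing collection for inserts and updates with PyMongo; the connection string, database, and collection names are illustrative assumptions.

```python
# Hedged sketch: reacting to pricing updates with a MongoDB change stream.
# URI, database, and collection names below are placeholders.

def price_change_pipeline() -> list:
    """Aggregation pipeline that narrows a change stream to inserts
    and updates, the events relevant to price adaptation."""
    return [
        {"$match": {"operationType": {"$in": ["insert", "update"]}}}
    ]

def watch_prices(uri: str = "mongodb+srv://<cluster>.mongodb.net"):
    from pymongo import MongoClient  # deferred: needs a live cluster

    coll = MongoClient(uri)["retail"]["products"]
    # full_document="updateLookup" yields the post-image of updated docs
    with coll.watch(price_change_pipeline(),
                    full_document="updateLookup") as stream:
        for change in stream:
            yield change["fullDocument"]
```

Each document yielded here could be forwarded to the model for immediate re-scoring, which is the in-app analytics loop the article describes.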
This is achieved through a streamlined approach to data management: real-time, automated decision-making, workload isolation, change streams and triggers for immediate updates, and seamless integration with AI models.

Dynamic pricing microservice overview

Building an event-driven AI architecture that leverages MongoDB Atlas on Google Cloud is straightforward. We can summarize our dynamic pricing microservice by describing the components of its architecture, what they are used for, and how they interact with each other:

Figure 2: The technology components of a dynamic pricing microservice and what they are used for.

Handling data sources

The proposed solution uses Google Cloud Pub/Sub to ingest data sources such as customer behavior events in JSON format. Pub/Sub scales to handle a large number of messages and distributes them efficiently to many subscribers, in part because it allows parallel processing of messages across multiple servers or instances. Publish/subscribe messaging is a fundamental pattern in event-driven architectures, where the flow of the program is determined by events or messages; it supports reactive programming and makes the system more responsive and efficient.

Data transformation and model training

We'll use Vertex AI Notebooks to clean the data and train a TensorFlow model. This model will learn the non-linear relationship between customer events, product names, and prices, enabling it to calculate the optimal predicted price.

Orchestrating

Using Cloud Functions, we orchestrate the customer events coming from the Pub/Sub topic: they are converted into tensors and stored in a MongoDB Atlas collection. This collection acts as a feature store, a centralized repository designed to store, manage, and serve features for machine learning (ML) models.
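The orchestration step above boils down to turning a raw JSON event into a flat numeric feature vector and storing it in the feature-store collection. The event fields, encodings, and document shape below are illustrative assumptions, not the reference implementation.

```python
# Hedged sketch: converting a customer-behavior event into a "tensor"
# document for an Atlas feature-store collection. Field names and the
# encoding scheme are hypothetical.

ACTION_CODES = {"view": 0.0, "add_to_cart": 1.0, "purchase": 2.0}

def event_to_tensor(event: dict) -> list:
    """Encode an event as [action_code, price, quantity]."""
    return [
        ACTION_CODES.get(event.get("action", "view"), 0.0),
        float(event.get("price", 0.0)),
        float(event.get("quantity", 1)),
    ]

def feature_document(event: dict) -> dict:
    """Document shape stored in the feature-store collection."""
    return {
        "product_id": event["product_id"],
        "tensor": event_to_tensor(event),
        # document versioning pattern: lets us manage training datasets
        # granularly as the feature schema evolves
        "schema_version": 1,
    }
```

A Cloud Function subscribed to the topic would call `feature_document` on each message and `insert_one` the result into the Atlas collection.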
Features represent individual measurable properties or characteristics used by ML models to make predictions or decisions. MongoDB's document model flexibility, paired with the document versioning pattern, allows us to design time-sensitive chunks of events and granularly manage the training datasets for our models.

Serving

The Cloud Function uses the event tensor to invoke our trained model, which is served on a Vertex AI endpoint. The model returns a predicted price that is then inserted into our product catalog stored in MongoDB, so our e-commerce application can read the price change in real time.

Dynamic pricing architecture: Putting it all together

In the following architecture diagram, the blue data flow illustrates how customer event data is ingested into a Pub/Sub topic. A push subscription forwards messages from the topic to a Cloud Function, which orchestrates the transformation from raw event to tensor and calls the model endpoint to update the predicted price in our MongoDB product catalog collection. This architectural approach isolates the raw event streams and lets us build different services around them, reacting in real time for dynamic pricing or asynchronously for model training. With every component loosely coupled, a failure in one part does not bring the whole system down; publishers and subscribers can continue processing without waiting on other components to receive or publish messages.

Figure 3: Dynamic pricing architecture integrating Google Cloud components with MongoDB Atlas as a feature store

For businesses, this translates into more precise and responsive pricing strategies. In the model building and optimization phase, by using TensorFlow within Google Cloud Vertex AI notebooks, retailers can harness deep learning capabilities. A neural network model can analyze intricate patterns and relationships within large datasets.
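The serving step described above can be sketched as follows. The Vertex AI endpoint resource name, Atlas URI, and catalog field names are placeholders; the update-document builder is a hypothetical helper for illustration.

```python
# Hedged sketch of the serving step: invoke the model on a Vertex AI
# endpoint, then write the predicted price back to the product catalog.

def build_price_update(predicted_price: float) -> dict:
    """Update document applied to the catalog entry for a product."""
    return {"$set": {"pred_price": round(float(predicted_price), 2)}}

def predict_and_update(product_id: str, tensor: list) -> None:
    # deferred imports: these calls need GCP credentials and a live cluster
    from google.cloud import aiplatform
    from pymongo import MongoClient

    endpoint = aiplatform.Endpoint(
        "projects/<project>/locations/<region>/endpoints/<endpoint-id>"
    )
    prediction = endpoint.predict(instances=[tensor])
    price = prediction.predictions[0]

    catalog = MongoClient("<atlas-uri>")["retail"]["products"]
    catalog.update_one({"_id": product_id}, build_price_update(price))
```

Because the storefront reads prices from the same catalog collection, the new `pred_price` is visible to the application as soon as the update commits.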
This is how businesses can capture nuanced market dynamics, customer behavior, and pricing elasticity with greater accuracy, leading to more optimized pricing decisions. But even the best models must be consistently optimized: maintaining model effectiveness requires continuous adaptation. Regularly evaluating accuracy and performing feature engineering keeps your models sensitive to market changes, which underscores the importance of retraining as a core principle of a continuous-improvement data science approach. Using MongoDB Atlas as your operational data layer means your feature store is always accessible, reducing downtime and improving the efficiency of machine learning operations. Cross-region deployments can also bring features closer to where machine learning models are trained or served, reducing latency and improving model performance.

Get started

The integration of Google Cloud and MongoDB presents a straightforward approach to modernizing dynamic pricing strategies. Leveraging real-time analytics, flexible data modeling, and a reactive microservices architecture, it empowers businesses to achieve operational efficiencies and gain a competitive advantage in their pricing strategies. For retailers looking to elevate their pricing strategies, a strategic pairing of both technologies is well worth considering. For a deeper dive into integrating the different components of this architecture, check out our GitHub repository, and visit our AI resource page to learn more about building AI-powered apps with MongoDB.

April 17, 2024

How Atlas Edge Server Bridges the Gap Between Connected Retail Stores and the Cloud

Efficient operations and personalized customer experiences are essential to the success of retail businesses. In today's competitive retail industry, retailers need to streamline their operations, optimize inventory management, and personalize the customer experience to stay ahead. At MongoDB .local London, we announced the private preview of MongoDB Atlas Edge Server, a powerful platform that empowers retailers to achieve these goals even when connectivity is low or intermittent.

What is edge computing, and why is it so relevant for retail?

The retail industry's growing investment in edge computing, projected to reach $208 billion by 2023, confirms the strategic shift retailers are willing to make to reach new markets and enhance their offerings. And for good reason: in scenarios where connectivity is unreliable, edge computing allows operations to continue uninterrupted. Edge computing is a strategic technology approach that brings computational power closer to where data is generated and processed, such as in physical retail stores or warehouses. Instead of relying solely on centralized data centers, edge computing deploys distributed computing resources at the edge of the network. Investment in edge computing has evolved from initial hesitation to accelerated growth. As the technology continues to mature and demonstrate its value, retailers are likely to embrace it further, bringing applications, compute, and data as close as possible to where they are used. Let's dig into how MongoDB addresses the challenges any retailer faces when deploying or enhancing in-store servers with edge computing.

Connected store: How MongoDB's versatile deployment from edge to cloud powers critical retail applications
Currently, many retail stores operate with an on-site server acting as the backbone for several critical applications within the store ecosystem. With an on-site server, data doesn't have to travel long distances to be processed, which can significantly reduce latency. This setup is often also more reliable because it doesn't depend on internet connectivity: if the internet goes down, the store can continue to operate, since the essential services run on the local network. This is crucial for applications that require real-time access to data, such as point-of-sale (POS) systems, inventory management, and workforce-enablement apps for customer service.

The need for sync: Seamless edge-to-cloud integration

The main driver for retailers taking a hybrid approach is that they want the low latency and reliability of an on-site server coupled with the scalability and power of cloud computing for their overall IT stack. The on-site server ensures that the devices and systems critical to sales-floor operations — RFID tags and readers for stock management, mobile scanners for associates, and POS systems for efficient checkout — remain functional even with intermittent network connectivity. This data must be synced to the retailer's cloud-based application stack so they have a view of what's happening across their stores. Traditionally this was done with an end-of-day batch job or nightly upload. The next generation of these architectures aims to give real-time access to the same data set, seamlessly reflecting changes made in-store or in the cloud. This needs to be achieved without a lag between the store and the cloud, and without building and maintaining complex data-sync or conflict-resolution logic, whose complexity can cause discrepancies between the online and offline sides of the store's operations.
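In practice, an in-store application prefers the local database endpoint and falls back to the cloud. As a hedged sketch (hostnames, port, and database names here are illustrative assumptions): Atlas Edge Server exposes a MongoDB wire protocol endpoint, so a standard driver such as PyMongo can connect to it just like a regular deployment.

```python
# Hedged sketch: local-first connection logic for an in-store app.
# The edge hostname/port and the Atlas URI are placeholders.

def pick_uri(edge_reachable: bool,
             edge_uri: str = "mongodb://edge-server.store.local:27021",
             cloud_uri: str = "mongodb+srv://<cluster>.mongodb.net") -> str:
    """Prefer the on-premises Edge Server endpoint when it is reachable."""
    return edge_uri if edge_reachable else cloud_uri

def open_store_db(edge_reachable: bool):
    from pymongo import MongoClient  # deferred: needs a running server

    client = MongoClient(pick_uri(edge_reachable),
                         serverSelectionTimeoutMS=2000)
    return client["store"]["inventory"]
```

The key point is that application code is unchanged either way; Edge Server handles syncing local writes to Atlas in the background once connectivity returns.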
For any retailer wanting to benefit from both edge and cloud computing, it makes sense to simplify the architecture and focus on delivering the value-added features that delight customers and differentiate them from competitors.

Low-latency edge computing with Atlas Edge Server and the components it uses to achieve data consistency and accuracy across layers

This is where Atlas Edge Server steps in to bridge the gap. Edge Server runs on-premises and handles sync between local devices as well as bi-directional sync between the edge server and Atlas. It not only provides a rapid and reliable in-store connection but also introduces a tiered synchronization mechanism, ensuring that data is efficiently synced with the cloud. Devices are interconnected through synchronized data layers, from on-premises systems to the cloud, and building mobile apps is simplified thanks to the Atlas Device SDK, which supports multiple programming languages, development frameworks, and cloud providers. Additionally, Atlas Device Sync automatically handles conflicts, eliminating the need to write complex conflict-resolution code. The diagram below shows the current architecture for a connected store whose devices use the Atlas Device SDK and Atlas Device Sync, an ideal solution for syncing devices to the Atlas backend.

A high-level overview of the architecture for connected devices in a retail space with MongoDB Device Sync and MongoDB Atlas when connectivity is unreliable

In a store with Atlas Edge Server, the devices sync to Atlas on-premises. All changes made on the edge or on the main application database are synced bidirectionally. If the store server goes offline or loses connectivity, the devices can still access the local database and update it, so the store can keep running its operations normally. When the server comes back online, the changes on both sides (edge and cloud) are reconciled, with conflict resolution built into the sync server.
A high-level overview of the architecture for connected devices in a retail space with MongoDB Device Sync and MongoDB Atlas, solving connectivity issues by implementing an on-premises Atlas Edge Server

Deploying Atlas Edge Server in-store turns connected stores into dynamic, customer-centric hubs of innovation. This transformation produces advantageous business outcomes, including:

Enhanced inventory management — The hybrid model facilitates real-time monitoring of logistics, enabling retailers to meticulously track in-store stock as shipments come in and sales or orders are processed. By processing data locally and syncing with the cloud, retailers gain immediate insights, allowing for more precise inventory control and timely restocking.

Seamless operational workflows — The reliability of edge computing ensures that essential sales tools, such as RFID systems, handheld scanners, workforce apps, and POS terminals, remain operational even during connectivity hiccups. Meanwhile, the cloud component helps ensure that all data is backed up and accessible for analysis, leading to more streamlined store operations.

Customized shopping experiences — With the ability to analyze data on the spot (at the edge) and harness historical data from the cloud, retailers can create highly personalized shopping experiences. This approach enables real-time, tailored product recommendations and promotions, enhancing customer engagement and satisfaction.

Conclusion

With Atlas Edge Server, MongoDB is committed to meeting the precise needs of modern retail stores and their diverse use cases. Without seamless synchronization of data between edge devices and the cloud, delivering the offline functionality behind modern, next-generation workforce applications and in-store technologies like POS systems is daunting. Retailers need ready-made solutions so they don't have to deal with the complexities of in-house, custom development.
This allows them to channel their development efforts toward the value-added, differentiating features that directly benefit their customers by improving in-store operations. With Atlas Edge Server, we aim to empower retailers to deliver exceptional customer experiences and thrive in the ever-evolving retail landscape. Ready to revolutionize your retail operations with cutting-edge technology? Discover how MongoDB Atlas Edge Server can transform your store into a dynamic, customer-centric hub. Don't let connectivity issues hold you back. Embrace the future of retail with Atlas Edge Server!

November 30, 2023

Boost the Accuracy of E-commerce Search Results with Atlas Vector Search

Artificial intelligence's (AI) growth has led to transformative advancements in the retail industry, including natural language processing, image recognition, and data analysis. These capabilities are pivotal to enhancing the efficiency and accuracy of e-commerce search results. E-commerce, characterized by its vast product catalogs and diverse customer base, generates enormous amounts of data every day. From user preferences and search histories to product reviews and purchase patterns — and add to that the images, video, and audio associated with product campaigns and user search — the data is both a goldmine and a challenge. Traditional search mechanisms, which rely on exact keyword matches, are inadequate for such nuanced and voluminous data. This is where vector search comes into play as the perfect data mining tool. As a sophisticated search mechanism, it leverages AI-driven algorithms to understand the intrinsic relationships between data points. This enables it to discern complex patterns, similarities, and contexts that conventional keyword-based searches might overlook. Let's dig deeper into the differences between traditional keyword-matching search and vector search, and answer questions like: What types of queries does vector search improve in the retail search landscape? What are the challenges associated with it? And how can your business tap into the competitive advantage it represents?

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Traditional Keyword Matching vs. Vector Search

Traditional search functionality for e-commerce platforms — keyword matching, typo tolerance, autocomplete, highlighting, facets, and scoring — is often built in-house or implemented on top of search engines such as Apache Lucene, Atlas Search, or Elasticsearch, relying heavily on textual metadata descriptions.
While this has served the industry well for years, it often falls short of understanding the nuanced needs of modern consumers. For instance, a customer might be looking for a "blue floral summer dress," but if the product description lacks these terms, it might not appear in the search results, even if it perfectly matches the visual description.

Figure 1: Because embeddings numerically encode the meaning of documents, semantically close documents are geometrically close as well.

Vector search is a method that finds similar items in a dataset based on their vector representations, offering a more efficient and accurate way to sift through large datasets. Instead of relying on exact matches, it uses mathematical techniques to measure the similarity between vectors, allowing it to retrieve items that are semantically similar to the user's query even when the query and the item descriptions don't share exact keywords.

Figure 2: Data flow diagram showing, at a high level, how applications, vector embedding algorithms, and search engines work together.

One great thing about vector search is that any type of data, i.e., text, images, or sound, can be encoded and then queried, creating a much more comprehensive way to improve the relevance of your search results. Let's explore examples of queries that involve context, intent, and similarity.

Visual similarity queries

Query: "Find lipsticks in shades similar to this coral lipstick."
Vector search benefit: Vector search can recognize the color tone and undertones of the specified lipstick and suggest similar shades from the same or different brands.
Data type: image or text

Contextual queries

Query: "Affordable running shoes for beginners."
Vector search benefit: Vector search can consider both the price range and the context of "beginners," leading to relevant shoe suggestions tailored to the user's experience level and budget.
Data type: text, audio (voice)

Natural language queries

Query: "Show me wireless noise-canceling headphones under $100."
Vector search benefit: Capturing intent. Vector search can parse the query's intent to filter headphones with specific features (wireless, noise-canceling) and a price constraint, offering products that precisely match the request.
Data type: text, audio (voice)

Complementary product queries

Query: "Match this dress with elegant heels and a clutch."
Vector search benefit: Vector search can comprehend the user's request to create a coordinated outfit by suggesting shoes and accessories that complement the selected dress.
Data type: text, audio (voice), image

Challenging landscape, flexible stack

Now that we've explored different queries and the data types that could be used in vector embeddings for search, we can see how much more information can be used to deliver more accurate results and fuel growth. Let's consider some of the challenges associated with a vector search data workflow, and how MongoDB Atlas Vector Search helps bridge the gap between challenges and opportunities.

Data overload

The sheer volume of products and user-generated data can be overwhelming, making it challenging to offer relevant search results. Embedding different types of data inputs, like images, audio (voice), and text queries, for later use with vector search simplifies this workload. Storing your vector embeddings in the same shared operational data layer your applications are built on, and generating search indexes based on those vectors, makes it simple to add context to your application's search functionality.
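A semantic query of the kind described above runs as an aggregation pipeline. Here is a hedged sketch using the Atlas Vector Search `$vectorSearch` stage; the index name, field path, and collection are illustrative, and `query_vector` is assumed to come from the same embedding model used to encode the catalog.

```python
# Hedged sketch: approximate nearest-neighbor product search with
# Atlas Vector Search. Index, path, and collection names are placeholders.

def vector_search_pipeline(query_vector, limit=5, index="product_vec_index"):
    """Aggregation pipeline for a semantic search over product embeddings."""
    return [
        {"$vectorSearch": {
            "index": index,
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": limit * 20,  # oversample candidates for recall
            "limit": limit,
        }},
        {"$project": {"name": 1, "price": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

def search_products(query_vector):
    from pymongo import MongoClient  # deferred: needs an Atlas cluster

    coll = MongoClient("<atlas-uri>")["shop"]["products"]
    return list(coll.aggregate(vector_search_pipeline(query_vector)))
```

Oversampling `numCandidates` relative to `limit` is a common trade-off: it improves recall of the approximate search at a modest latency cost.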
Using Atlas Vector Search combined with MongoDB App Services, you can reduce operational overhead by creating a trigger that fires when a new document is created in a collection, automatically calls the embedding API of your choice with that document, and stores the returned embedding back in the same document in your collection.

Figure 3: Storing vectors alongside the data simplifies the overall architecture of your application.

As the number of documents or vectors grows, efficient indexing structures keep search performance reasonable. Simply creating an index on the embedding field lets you take advantage of optimized retrieval, reduce computational load, and accelerate performance, especially for nearest-neighbor search tasks, where the goal is to find the items most similar to a given query. Altogether, the combination of MongoDB's vector search capabilities with App Services and indexing provides a robust, scalable solution for real-time responsiveness. An indexed vector search database can return query results rapidly, making it suitable for applications like recommendation engines or live search interfaces.

Changing consumer behavior

Developing an effective vector search solution involves understanding the nuances of the retail domain. Retailers must consider factors like seasonality, trends, and user behavior to improve the accuracy of search results. To overcome this challenge, retailers need to be able to adjust their business model by categorizing their product catalogs and user data according to different criteria. All of this vast amount of information can be embedded to build more comprehensive criteria for relevance, but first it needs to be properly captured and organized. This is where the value of the flexible document model comes into play.
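The index on the embedding field mentioned above can be created programmatically. This is a hedged sketch using PyMongo's search-index API (available in recent driver versions); the index name, field path, and dimension count are assumptions that must match your embedding model.

```python
# Hedged sketch: defining a vector index for Atlas Vector Search.
# 1536 dimensions is illustrative; use your embedding model's output size.

def vector_index_definition(dims=1536, path="embedding") -> dict:
    """Atlas Vector Search index definition for one embedding field."""
    return {
        "fields": [{
            "type": "vector",
            "path": path,
            "numDimensions": dims,
            "similarity": "cosine",  # cosine suits normalized text embeddings
        }]
    }

def create_index(coll) -> None:
    from pymongo.operations import SearchIndexModel  # recent PyMongo versions

    model = SearchIndexModel(definition=vector_index_definition(),
                             name="product_vec_index",
                             type="vectorSearch")
    coll.create_search_index(model)
```

Index builds run in the background on Atlas, so existing query traffic is not blocked while the vector index is constructed.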
The document model lets you define different fields and attributes for each category of data, capturing the various categorization criteria. Retailers can also use embedded subdocuments to associate relevant information with products or customers. For instance, you can embed a subdocument containing marketing campaign data, engagement channels, and geographic location within products to track their performance. As categorization criteria evolve, dynamic schema evolution allows you to add or modify fields without disrupting existing data, a flexibility that easily accommodates changing business needs. Retailers may also use embedded arrays to record purchase history for customers: each array element represents a transaction, including product details and purchase date, facilitating segmentation based on recency and frequency. By embedding these different data types and leveraging the flexibility of the document model, retailers can create a comprehensive, dynamic system that categorizes data according to diverse criteria in a fast and resilient way. This enables personalized search experiences and enhanced customer engagement in the e-commerce space.

Sitting on a goldmine

Every retailer worldwide now realizes that with their customer data, they are sitting on a goldmine. The right enabling technologies allow them to build better experiences for their customers while infusing their applications with automated, data-driven decision-making. Retailers offering more intuitive and contextual search results can ensure their customers find what they're looking for by personalizing the relevance of their search results, enhancing satisfaction, and increasing the likelihood of successful transactions.
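The modeling patterns described earlier (an embedded subdocument for campaign data on a product, and an embedded array of transactions on a customer) can be sketched as plain documents. All field names here are hypothetical examples, not a prescribed schema.

```python
# Illustrative documents for the embedded-subdocument and embedded-array
# patterns. Field names are hypothetical.

def make_product(product_id: str, name: str, price: float) -> dict:
    return {
        "_id": product_id,
        "name": name,
        "price": price,
        # embedded subdocument: campaign performance travels with the product
        "campaign": {"channel": "email", "region": "EMEA", "clicks": 0},
    }

def record_purchase(customer: dict, product_id: str, date: str) -> dict:
    """Append a transaction to the customer's embedded purchase history,
    supporting recency/frequency segmentation."""
    customer.setdefault("purchase_history", []).append(
        {"product_id": product_id, "date": date}
    )
    return customer

customer = record_purchase({"_id": "c1"}, "p42", "2023-10-01")
```

Because both patterns keep related data in a single document, a single read retrieves everything needed to personalize a search result, with no joins required.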
The future of e-commerce search lies in harnessing the power of technologies like Atlas Vector Search. It is not just another vector search database; as part of the developer data platform, it gives developers an integrated set of data and application services. For retailers, the message is clear: to offer unparalleled shopping experiences, embracing and integrating vector search functionality on a performant, reliable platform that simplifies data organization and storage is not just beneficial, it's essential. To learn more, see How to Implement Databricks Workflows and Atlas Vector Search for Enhanced E-commerce Search Accuracy in our developer guide, and check out our GitHub repository with the full code for deploying an AI-enhanced e-commerce search solution.

October 11, 2023

Fusing MongoDB and Databricks to Deliver AI-Augmented Search

With customers' attention increasingly dispersed across channels, platforms, and devices, competition in the retail industry rages relentlessly. The customer's search experience on your storefront is the cornerstone of capitalizing on your Zero Moment of Truth, the point in the buying cycle where the consumer's impression of a brand or product is formed. Imagine a customer, Sarah, eager to buy a new pair of hiking boots. Instead of wandering aimlessly through page after page of search results, she expects to find her ideal pair easily. The smoother her search, the more likely she is to buy. Yet achieving this seamless experience isn't a walk in the park for retailers. Enter the dynamic duo of MongoDB and Databricks. By equipping their teams with this powerful tech stack, retailers can harness the might of real-time in-app analytics. This not only streamlines the search process but also infuses AI and advanced search functionality into e-commerce applications. The result? An app that not only meets Sarah's current expectations but anticipates her future needs. In this blog, we'll walk through the main reasons to implement an AI-augmented search solution by integrating both platforms. Let's embark!

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

A solid foundation for your data model

For an e-commerce site built around the principles of an event-driven and MACH architecture, the data layer needs to ingest and transform data from a number of different sources. Heterogeneous data, such as the product catalog, user behavior on the e-commerce front end, comments and ratings, search keywords, and customer lifecycle segmentation, is all necessary to personalize search results in real time.
This calls for a flexible model such as MongoDB's documents, and a platform that can easily take in data from many sources: APIs, CSV files, and Kafka topics through the MongoDB Kafka Connector. MongoDB's translytical capabilities, combining transactional (OLTP) and analytical (OLAP) workloads, offer real-time data processing and analysis, enabling you to simplify your workloads while ensuring timely responsiveness and cost-effectiveness. With the data platform servicing the operational needs of the application, what about adding in AI? Combining MongoDB with Databricks through the MongoDB Spark Connector lets you easily train your models on your operational data from MongoDB, and trigger them in real time to augment your application while the customer is using it.

Centralization of heterogeneous data in a robust yet flexible operational data layer

The foundation of an effective e-commerce data layer is a solid yet flexible operational data platform. With one in place, orchestrating ML models to run at specific times or in response to events, along with the crucial data transformation, metadata enrichment, and data featurization they require, becomes a simple, automated task for optimizing search result pages and delivering a frictionless purchasing process. Check out this blog for a tutorial on achieving near real-time ingestion using the Kafka Connector with MongoDB Atlas, and data processing with Databricks Spark user-defined functions.

Adding relevance to your search engine results pages

To achieve optimal product positioning on the search engine results page (SERP) after a user performs a query, retailers are challenged with creating a business score for each product's relevance. This score incorporates factors such as stock levels, competitor prices, and price elasticity of demand. These business scores are complex real-time analyses calibrated against many factors: a perfect use case for AI.
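To make the idea of a business score concrete, here is a purely illustrative, hand-coded heuristic combining the factors named above. In the actual solution this scoring would be learned by a model trained in Databricks rather than weighted by hand; the function and weights below are assumptions for illustration only.

```python
# Purely illustrative heuristic for a product "business score"; the real
# architecture trains a model in Databricks instead of hand-coding weights.

def business_score(stock_level: int, our_price: float,
                   competitor_price: float, elasticity: float) -> float:
    """Combine inventory pressure, price competitiveness, and demand
    elasticity into a single relevance score in [0, 1]."""
    stock_factor = min(stock_level / 100.0, 1.0)           # favor in-stock items
    price_factor = min(competitor_price / our_price, 1.0)  # favor competitive prices
    demand_factor = 1.0 / (1.0 + abs(elasticity))          # favor stable demand
    score = 0.4 * stock_factor + 0.4 * price_factor + 0.2 * demand_factor
    return round(score, 3)
```

A model replaces these fixed weights with ones learned from conversion data, but the inputs and the single scalar output are the same: the score becomes a sort key for the SERP.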
Adding AI-generated relevance to your SERPs lets you accurately predict and display the search results most relevant to users' queries, leading to higher engagement and increased click-through rates, while also helping businesses optimize their content based on the operational context of their markets. Ingestion into the MongoDB Atlas document model lays the groundwork for this, and with the MongoDB Spark Streaming Connector, companies can persist their data into Databricks, taking advantage of its data cleansing and complex transformation capabilities, making it the ideal framework for delivering batch training and inference models.

Diagram of the full architecture integrating MongoDB Atlas and Databricks for an e-commerce store, real-time analytics, and search

MongoDB App Services acts as the mortar of the solution, overlaying the intelligence layer in an event-driven way that is not only real-time but also cost-effective, keeping both your applications and your business processes nimble. Make sure to check out this GitHub repository to understand in depth how this is achieved.

Data freshness

Once the business score is calculated, the next challenge is delivering it through your application's search feature. With MongoDB Atlas's native workload isolation, operational data is continuously available on dedicated analytics nodes deployed in the same distributed cluster and exposed to analysts within milliseconds of being stored in the database. And data freshness matters beyond analytics use cases: by combining operational data with the analytics layer, retailers power in-app analytics and build outstanding user experiences across their customer touchpoints.
Considering MongoDB Atlas Search 's advanced features such as faceted search, auto-complete, and spell correction, retailers can rest assured of a more intuitive and user-friendly search experience, not only for their customers but also for their developers, as it minimizes the tax of operational complexity by bundling all of these capabilities in the same platform. App-driven analytics is a competitive advantage over traditional warehouse analytics Additionally, the search functionality is optimized for performance, enabling businesses to handle high search query volumes without compromising the user experience. The business score generated from the AI models trained and deployed with Databricks provides the central discriminator for where in the SERPs each specific product appears, keeping your search engine fueled with relevance and securing the delivery of a high-quality user experience. Conclusion Search is a key part of the buying process for any customer. Showing customers exactly what they are looking for without making them invest too much time in the browsing stage reduces friction in the buying process, but as we’ve seen, it is not always easy technically. Empower your teams with the right tech stack to take advantage of the power of real-time in-app analytics with MongoDB and Databricks. It’s the simplest way to build AI and search capabilities into your e-commerce app and respond to current and future market expectations. Check out the video below and this GitHub repository for all the code needed to integrate MongoDB and Databricks and deliver a real-time machine learning solution for AI-augmented search.
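Putting the pieces of this post together, a SERP query can combine an Atlas Search autocomplete match with a re-rank on the precomputed business score. This is a minimal sketch: the index name (`default`) and the field names (`name`, `businessScore`) are assumptions, not the actual index definition from the repository.

```python
def serp_pipeline(query_text, limit=50):
    """Aggregation pipeline sketch: Atlas Search autocomplete on the
    product name, re-ranked by a precomputed businessScore field.
    Index name ('default') and field names are assumptions."""
    return [
        {
            "$search": {
                "index": "default",
                # Autocomplete requires the path to be mapped as an
                # autocomplete field in the Atlas Search index.
                "autocomplete": {"query": query_text, "path": "name"},
            }
        },
        {"$limit": limit},
        # Re-rank the text matches by the AI/rule-generated score.
        {"$sort": {"businessScore": -1}},
    ]

# With pymongo: db.products.aggregate(serp_pipeline("sneaker"))
```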

September 19, 2023

Amplifying Retail Operations with Generative AI and Vector Search: The Unexplored Potential

In the hyper-competitive world of retail, industry leaders are constantly looking for new ways to revolutionize the customer experience and optimize operations. That's where generative AI and vector search come into play. Both offer transformative potential across multiple retail use cases, from personalized marketing campaigns to efficient inventory management, making them indispensable for those aiming to stay at the industry's cutting edge. In this blog, let's explore together how generative AI and vector search help retailers address inefficiencies and setbacks in their operations to break new ground, and how the MongoDB Atlas developer data platform is a perfect fit for achieving this. Traditional obsolescence Traditionally, retailers have relied on manual, rule-based systems and basic predictive models to navigate their complex landscapes. However, these systems often fall short of handling the sheer volume and diversity of data generated in retail environments. As a result, personalized customer targeting, inventory forecasting, and other crucial operations are not only complex but inefficient. The direct implications of such complex and inefficient systems are lost sales, lost revenue, overstocked or understocked inventories, and, most importantly, missed opportunities to establish deeper customer relationships. In reaction, some retailers have started to explore advanced AI and machine learning (ML) solutions. But integrating these technologies into existing systems is often a herculean task. It involves dealing with data silos, understanding complex AI models, and investing significantly in infrastructure and expertise. 
Additionally, these solutions often fail to deliver the desired return on investment due to the complexity of implementing, managing, and scaling them to accommodate evolving needs. With the document model and unified API, you future-proof your applications. Inventory operations, customer experience, and product development are some of the areas where retailers can leverage generative AI and vector search, so let's dig deeper into them to fully understand the challenges and opportunities. The quest for operational excellence It all starts with evolving your inventory management so that it constitutes the core layer on top of which to build generative AI models that analyze and categorize large amounts of product data in real time, facilitating efficient inventory forecasting. This can help retailers accurately predict demand and avoid overstocking or understocking scenarios. It would enable operational efficiency at every level of the supply chain. Imagine your back-of-house (BOH) and front-of-house (FOH) operations running in real time, with AI-enhanced data flowing through the chain via offline-first applications. It would let you understand how your customers flow through each channel (delivering true omnichannel capabilities) and how they interact with your products, and use that data to create new revenue streams: using LLMs to understand things like commonly-bought-together items to improve in-store and digital visual merchandising, triggering intelligent auto-replenishment in the supply chain, and offering new ways to add context to user search. 
Customer-brand interactions reimagined Addressing the challenges of managing your product catalog efficiently means that, later on, it will be easier to streamline experiences based on real-time product recommendations, personalized marketing campaigns, and intelligent customer support. Generative AI models require massive amounts of high-quality training data to generate meaningful and accurate outputs. If the training data is biased, incomplete, or of low quality, the results can be unreliable. The flexibility of the MongoDB Atlas document model combined with Atlas Device Sync is the perfect foundation on which to build your recommendation models or your centralized customer experience applications. Using MongoDB Atlas as your central data layer ensures your generative AI models are fed the right data in real time, and on top of that lets you create an intelligence layer for your applications. Add Atlas Vector Search to your application architecture so it can handle large volumes of data efficiently, quickly searching through high-dimensional vector spaces. This can accelerate the retrieval of training data and the generation of different types of content with generative AI, with improved accuracy enabled by semantic search finding the data points in the training set most similar to any prompt, whether text, images, or video. Feeding LLMs with your own dynamic data keeps them from "hallucinating," improving your search experience and customer support. 
As a result, you can evolve your customers' brand experiences by improving product recommendation models and customer support efforts, providing accurate solutions that resonate with customer queries, even with vague or partial inputs. This means extending the shopping experience through personalization, helping customers navigate product catalogs and make selections based on their preferences and needs, and allowing customers to search for products using images, which is key in sectors such as fashion or home decor. Another valuable use case would be using LLMs for sentiment analysis on customer reviews, social media comments, and other forms of customer feedback to determine the overall sentiment toward a product, brand, or service, providing valuable insights for marketing and product development teams. Streamlined product development and marketing Without a clear understanding of what the customer needs or wants, a product may fail to find a market. Market research and customer engagement are crucial for successful product development. In crowded markets, standing out from competitors can be challenging. Unique value propositions and innovative features are essential to distinguish a product. Speed is essential in today's market landscape. Delays in product development can lead to missed opportunities, especially when dealing with trends or technological advancements. Generative AI can analyze large amounts of customer data and identify trends, preferences, and needs. 
By generating insights from this data, it can help develop products that better fit customer needs, creating opportunities for cross-selling or upselling through product recommendations. Generative AI can also extend certain product marketing efforts by improving the design and content marketing phases. By enriching content generation for specific products with context generated from your Customer 360 view and front-of-house and back-of-house data, retailers could create automated growth loops for their product lines across different channels, maximizing revenue without sacrificing large amounts of resources. By connecting your marketing channel data with your business context data through the MongoDB Atlas unified API, a fully managed middleware service, and pairing it with Atlas Vector Search, any retailer can maximize the ROI of the promotion phase of their go-to-market strategy, driving their marketing efforts in a truly data-driven way. AI-automated growth is only possible by reducing friction between the different data silos so data can flow freely. If you want to learn more about building semantic search with MongoDB Atlas and OpenAI, don't miss our guide on How to Do Semantic Search in MongoDB Using Atlas Vector Search .

July 21, 2023

Amplifying Retail Operations with Generative AI and Vector Search: The Unexplored Potential

In the hyper-competitive world of retail, industry leaders are continually looking for new ways to revolutionize the customer experience and optimize operations. That's where generative AI and vector search come into play. Both offer transformative potential in myriad retail use cases, from personalized marketing campaigns to efficient inventory management, making them indispensable for those aiming to stay at the industry's cutting edge. In this blog, let's explore together how generative AI and vector search help retailers address inefficiencies and setbacks in their operations to break new ground, and how the MongoDB Atlas developer data platform is a perfect fit for achieving this. Check out our AI resource page to learn more about building AI-powered apps with MongoDB. The traditional playbook Traditionally, retailers have relied on manual, rule-based systems and basic predictive models to navigate their complex landscapes. However, these systems often fall short when it comes to handling the sheer volume and diversity of data generated in retail environments. As a result, personalized customer targeting, inventory forecasting, and other crucial operations are not only complex but downright inefficient. The direct implications of such complex and inefficient systems are lost sales and revenue, overstocked or understocked inventories, and, most importantly, missed opportunities for establishing deeper customer relationships. As a reaction, some retailers have started to explore advanced AI and machine learning (ML) solutions. But integrating these technologies into existing systems is often a herculean task. It involves dealing with data silos, understanding complex AI models, and investing significantly in infrastructure and expertise. Additionally, these solutions often don't deliver the desired return on investment due to the sheer complexity of implementing, managing, and scaling them to accommodate evolving needs. 
With the document model and unified API, you future-proof your retail operations. Inventory operations, customer experience, and product development are some of the areas in which retailers can leverage generative AI and vector search, so let's dig deeper into them to fully understand the challenges and opportunities. The quest for operational excellence It all starts with evolving your Inventory Management so that it constitutes the core layer on top of which to build the generative AI models that analyze and categorize large amounts of product data in real time, facilitating efficient inventory forecasting. This can help retailers accurately predict demand and avoid overstocking or understocking scenarios. It would allow operational efficiency at every level of the supply chain. Imagine your back-of-house (BOH) and front-of-house (FOH) operations running on real-time, AI-enhanced data flowing through the chain with offline-first applications. It would allow you to understand how your customers flow through every channel (rendering real omnichannel capabilities) and how they are interacting with your products, and to use that data to create new revenue streams: using LLMs to understand things like commonly bought-together items to improve in-store and digital visual merchandising, triggering intelligent auto-replenishment in the supply chain, and providing new ways of adding relevance to user search. Better customer experience Addressing the challenges of managing your product catalog efficiently means that later on it will be easier to streamline experiences based on real-time product recommendations, personalized marketing campaigns, and intelligent customer support. Generative AI models require massive amounts of high-quality training data to generate meaningful and accurate outputs. If the training data is biased, incomplete, or low quality, the results can be unreliable. 
The flexibility of the MongoDB Atlas document model, paired with Atlas Device Sync, is the perfect baseline on top of which to build your recommendation models or your centralized customer experience applications. Using MongoDB Atlas as your central data layer ensures that your generative AI models will be fed the right data in real time, and on top of that lets you create an intelligence layer for your applications. Add Atlas Vector Search to your application architecture so it can handle large volumes of data efficiently by quickly searching through high-dimensional vector spaces. This can accelerate the retrieval of training data and the generation of AI outputs, with improved accuracy: semantic search finds the data points in the training set most similar to any given input, whether text, images, or video. Feeding LLMs with your own dynamic data prevents them from "hallucinating," improving your search experience and customer support. As a result, you can evolve your customers' brand experiences by improving your product recommendation models and customer support efforts, providing accurate solutions that resonate with customer queries even with vague or partial inputs. This means extending the shopping experience through personalization, helping customers navigate product catalogs and make selections based on their preferences and needs, and allowing customers to search for products using images, which is key in sectors such as fashion or home decor. Another valuable use case would be using LLMs to enable sentiment analysis on customer reviews, social media comments, and other forms of customer feedback to determine the overall sentiment toward a product, brand, or service, providing valuable insights for marketing and product development teams. Streamlined product development and marketing Without a clear understanding of what the customer needs or wants, a product may fail to find a market. 
Market research and customer engagement are crucial for successful product development. In crowded markets, standing out from competitors can be challenging. Unique value propositions and innovative features are essential to distinguish a product. Speed is of the essence in today's fast-paced market landscape. Delays in product development can lead to missed opportunities, especially when dealing with trends or technological advancements. Generative AI can analyze large amounts of customer data and identify trends, preferences, and needs. By generating insights from this data, it can aid in developing products that better cater to customer needs, creating opportunities for cross-selling or upselling through product recommendations. Generative AI can also extend certain product marketing efforts by improving the content creation and marketing phases. By enriching content generation for specific products with context generated from your Customer 360 view and front-of-house and back-of-house data, retailers could create automated growth loops for their product lines across different channels, maximizing revenue without sacrificing huge amounts of resources. By connecting their marketing channel data with their business context data through the MongoDB Atlas unified API , a fully managed middleware service, and pairing it with Atlas Vector Search, any retailer can maximize the ROI of the promotion phase of their go-to-market strategy, driving their marketing efforts forward in a truly data-driven way. Automated, AI-fueled growth is only possible by reducing friction between data silos so data flows through the system. If you want to know more about how to build a semantic search solution using MongoDB Atlas and OpenAI, check out our guide on How to Do Semantic Search in MongoDB Using Atlas Vector Search .
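The semantic search flow this post describes comes down to a `$vectorSearch` aggregation stage over stored embeddings. Below is a minimal sketch: the index name (`product_vector_index`), the embedding field (`embedding`), and the candidate multiplier are illustrative assumptions, and the query vector would come from your own embedding model.

```python
def vector_search_pipeline(query_vector, k=10):
    """Sketch of an Atlas Vector Search pipeline for semantic product
    retrieval. Index and field names are assumptions; query_vector is
    produced by whatever embedding model you use (text, image, etc.)."""
    return [
        {
            "$vectorSearch": {
                "index": "product_vector_index",
                "path": "embedding",
                "queryVector": query_vector,
                # Over-fetch candidates for better recall, then keep k.
                "numCandidates": k * 10,
                "limit": k,
            }
        },
        {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

# With pymongo: db.products.aggregate(vector_search_pipeline(embed("red sofa")))
```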

July 20, 2023

Take Your Pricing Strategy to the Next Level with MongoDB and Databricks

Deploying real-time analytics solutions with the right tech stack can have transformative benefits. Retailers want to grow their brand or improve the customer experience with value-based pricing while remaining competitive and profitable. Despite business expectations of building ever more "data-driven" operations for the sake of efficiency, companies often fail in that endeavor, and at the core of this are their struggles with real-time analytics. In this blog, we'll explore the architecture in Figure 1 and elaborate on the advantages of integrating MongoDB Atlas and Databricks as a perfect pairing for pricing strategies using real-time AI. The solution we'll describe integrates concepts from event-driven architecture on the data generation and ingestion side, real-time ETL process orchestration, machine learning, and microservices. Let's get started! Figure 1: Simplified architecture of a real-time price generation solution Reduce friction with flexibility Pricing data complexity for a retailer with an extensive, diverse product line increases due to factors such as seasonal campaigns, global expansion, and new product introductions. Tracking and analyzing historical prices and product offerings becomes harder as they evolve throughout the year. Analytics solutions built around event-driven architectures try to explain what is happening in a specific system or application based on any significant occurrence, such as user behavior, data updates, system alerts, or sensor readings. Deciding which events are crucial to understanding your customers and instrumenting your business model around them is when things start to get complicated. 
Especially when it comes to instrumenting data models with traditional relational database management systems and their disadvantage when pairing that tabular data structure with object-oriented applications. A retailer's inability to adapt its data model to customer behavior quickly translates into friction in operations and, as a consequence, a weaker presence in the market; for example, poor pricing strategies compared to competitors due to a lack of information about historical price points and how they vary across the product range. Figure 2: An inflexible data model is a limit on innovation That friction is contagious throughout an organization's entire value chain, affecting the business's semantic layer (a bridge between technical data structures and business users' understanding of them), generating data inconsistencies, increasing the time needed to generate analytical value and, ultimately, eroding the organization's data culture. Your company's conceptual model's ability to adapt to constantly changing user behavior helps reduce that friction significantly, since its flexibility allows for more intuitive data modeling of real-world events. For your real-time pricing strategy challenges, the MongoDB Atlas document model, with its embedding and extended reference capabilities, becomes the perfect tool for the job, since it enables faster feature development, sales growth, the rollout of tested operations and, as a consequence, the ability to retain digital talent in your company. 
The combination of high-performance queries and horizontal scalability lends robustness to any solution that needs to deal with the heavy clickstream loads e-commerce sites typically carry while still responding with real-time, data-driven features. Its ease of integration with other platforms, thanks to its API generation capabilities and its range of drivers, makes it a perfect solution on which to build both the operational and the intelligence layers, avoiding vendor lock-in and letting data scientists easily leverage different frameworks to work with fresh, high-quality data. In addition, its distributed-by-default principles, following a set of best practices, guarantee that your operational database can handle the necessary workload. From what? To why? The intelligence layer To unlock meaningful market growth, and do so at scale, your analytics efforts have to evolve from merely understanding what is happening by analyzing historical data toward understanding why the events in your operational layer are happening, and even trying to predict them. For a dynamic pricing solution, retailers would need access to historical price data points for their product lines and to model them through ETL (Extract, Transform, Load) processes to feed machine learning models. This process is usually complicated and brittle with traditional data warehouses, often incurring data duplication that makes everything more expensive and harder to manage. 
Figure 3: Reduced friction thanks to the seamless integration of the different data layers The advantage of using MongoDB Atlas as your operational database is that, through Aggregation Pipelines, you can shape your data in whatever way makes the most sense for your context, and then, through MongoDB App Services , you can instrument Triggers and serverless Functions to simplify the whole orchestration and consume the data in Databricks using the MongoDB Atlas Spark connector . Databricks provides a dynamic way to work with your advanced analytics models, writing Python code in notebooks hosted on its clusters. You can take advantage of its MLflow integration to register experiments that can later be turned into models and eventually deployed to an endpoint. This way, by transforming your data and integrating your operational layer, through connectors and API calls with triggers and functions, with your intelligence layer for machine learning, you can easily build a dynamic pricing solution that lets you generate market growth for your organization, from the operational core up to the semantic layer acting as a bridge between the technical aspects of data storage and the business requirements of data analysis. Uncover new growth opportunities Designing a real-time analytics solution with MongoDB Atlas and Databricks is not only the fastest way to unlock your team's ability to devise pricing strategies; it also lays the cornerstone for building automated rules for more complex solutions. 
Other ways to automate your application with AI-driven insights could include optimizing your marketing mix budget using each product's price elasticity, adding another analytical layer of customer segmentation data to achieve personalized dynamic pricing, or optimizing your supply chain with real-time sales forecasts. By leveraging MongoDB Charts or the MongoDB BI Connector , you can feed your business dashboards, turning that semantic layer of the business model into the central point of alignment for your teams. Foundations for growth Modern e-commerce sites unleash the power of real-time analytics and automation to create better customer experiences and a deeper approach to customer analytics, unlocking the power of machine learning to discover trends in behavioral data and effectively turning companies into automated growth machines. If you want to learn how to build a simple dynamic pricing solution integrating MongoDB Atlas and Databricks, make sure to read this guide .

July 12, 2023

Fueling Pricing Strategies with MongoDB and Databricks

Deploying real-time analytics solutions with the right tech stack can have transformative benefits. Retailers want to grow their brand name or improve customer experience with value-based pricing, whilst remaining competitive and cost-effective. Despite their ambition to become "data-driven" operations, companies often fail in their endeavors, and at the core of this is the struggle to do real-time analytics. We will explore the architecture in Figure 1 and discuss the advantages of integrating MongoDB Atlas and Databricks as a perfect pairing for retail pricing strategies using real-time AI. The solution we'll describe integrates concepts from event-driven architecture on the data generation and ingestion side, real-time analytics orchestration, machine learning, and microservices. Let's get started! Figure 1:  Overview of a dynamic pricing solution architecture Reduce friction with flexibility Pricing data complexity for a retailer with a diverse product line increases due to factors like seasonal sales, global expansion, and new product introductions. Tracking and analyzing historical prices and product offerings becomes more challenging as they change throughout the year. Analytics solutions built around event-driven architectures try to explain what is happening in a specific system or solution based on any significant occurrence, such as user actions, data updates, system alerts, or sensor readings. Deciding which occurrences are crucial to understanding your customers and instrumenting your business model around them is when things start to become more intricate. Especially when trying to instrument your data models using traditional relational database management systems, with their disadvantage when it comes to pairing that tabular data structure with object-oriented applications. 
The inability of a retailer to adapt its data model to customer behavior quickly translates into friction and a weaker presence in the market; for example, poor pricing strategies compared to competitors because they lack information on historic price points and how they vary between products. Figure 2:  An inflexible data model is a roadblock for innovation That friction is contagious throughout the whole value chain of an organization, affecting the semantic layer of the business (a bridge between the technical data structures and business users' understanding), generating data inconsistencies, and reducing time to insight. The capacity of your conceptual data model to adapt to ever-changing customer behavior helps reduce that friction significantly, as its flexibility allows for more intuitive data modeling of real-world events. For your real-time pricing strategy challenges, the MongoDB Atlas document model, with its embedding and extended reference capabilities, becomes the perfect tool for the job, as it allows for faster feature development, stronger test-driven growth, and, as a consequence, talent retention. In combination with its high-performance queries and horizontal scalability, the solution becomes robust: it will handle the high throughput of clickstreams on your e-commerce applications and still be able to respond with real-time, data-driven decision-making features. Its ease of integration with other platforms, thanks to strong API capabilities and drivers, makes it the perfect solution on top of which to build your business's operational and intelligence layers, as you'll avoid vendor lock-in and data scientists can easily leverage AI frameworks to work with fresh data. Its distributed-by-default principle, plus adherence to best practices, guarantees that your operational data layer will handle the workload needed. 
As the AI algorithms analyze vast amounts of historical and real-time data to make pricing decisions, having a distributed platform like MongoDB enables efficient storage, processing, and retrieval of data across multiple nodes. From what? To why? The intelligence layer To unlock meaningful market growth and achieve it at scale, your analytics need to evolve from just understanding what is happening, by querying and analyzing historical data, to understanding why the events measured in your operational data layer are happening, and even further, trying to forecast them. For a pricing solution, retailers would need to gather historical pricing point data for their product lines and shape it through ETL (Extract, Transform, Load) pipelines to feed machine learning algorithms. This process is often complicated and brittle using the traditional data warehousing approach, often incurring data duplication that makes it difficult and costly to manage. Figure 3:  Reduced friction thanks to seamless integration of the different data layers The advantage of using MongoDB Atlas as your operational data layer solution is that through its Aggregation Pipelines you can shape your data in any way you need, and then through MongoDB App Services you can instrument Triggers and Functions to simplify this process and consume that data in Databricks via the MongoDB Spark connector . Databricks provides you with a streamlined way of working with your models, by writing Python code in notebooks on hosted clusters. You can leverage its MLflow integration to register experiments, which can then be turned into models deployed over an endpoint. 
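The Trigger-and-Function orchestration mentioned above boils down to filtering change events: only pricing-relevant updates should invoke the model endpoint. The sketch below is a pure-Python version of that filter, following the shape of MongoDB change-event documents; the watched field names are illustrative assumptions, not the reference implementation.

```python
def should_rescore(change_event):
    """Decide whether a change-stream event warrants calling the
    pricing model, i.e. the kind of filter an Atlas Trigger's match
    expression or Function would apply. The event shape follows
    MongoDB change-event documents; the watched fields (price, stock,
    competitorPrice) are assumptions for illustration."""
    op = change_event.get("operationType")
    if op not in {"insert", "update", "replace"}:
        return False
    if op != "update":
        # Inserts and full replacements always get rescored.
        return True
    # For partial updates, rescore only when a pricing-relevant
    # field actually changed.
    updated = (change_event.get("updateDescription", {})
               .get("updatedFields", {}))
    return any(f in updated for f in ("price", "stock", "competitorPrice"))
```

Keeping the filter this narrow is what makes the event-driven approach cost-effective: the model endpoint is only called when the score could actually change.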
By transforming your data and integrating your operational layer with your intelligence layer for machine learning and AI, through connectors and API calls via Triggers and Functions, you can easily build a pricing solution that generates market growth for your organization from its core, with a semantic layer acting as a bridge between the technical aspects of data storage and the business requirements of data analysis. Uncover new growth opportunities Designing a real-time analytics solution with MongoDB Atlas and Databricks is not only the fastest way to unlock your team's ability to devise pricing strategies; it also lays the cornerstone for building automated rules into more complex solutions. Other ways of automating your retail application with AI-driven insight include: optimizing your marketing mix budget based on each product's price elasticity, adding an analytical layer of customer segmentation data to achieve targeted dynamic pricing, or optimizing your supply chain with real-time sales forecasting. By taking advantage of MongoDB Charts or the MongoDB BI Connector, you can fuel your business dashboards, making the semantic layer of the business model the central point for your teams' alignment. Foundations for growth Modern e-commerce sites unleash the power of real-time analytics and automation to create better experiences for customers and a more profound approach to customer analytics, unlocking the power of machine learning to discover trends in behavioral data and effectively turning companies into automated growth machines. If you want to learn how to build a dynamic pricing solution integrating MongoDB Atlas and Databricks, see our step-by-step How To Guide, with all necessary code available on GitHub.
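For instance, the price-elasticity rule mentioned above can be sketched as a standard arc-elasticity calculation over two observed price/demand points (the formula is standard; the numbers below are made up for illustration):

```python
def arc_elasticity(p1, q1, p2, q2):
    """Arc price elasticity of demand between two (price, quantity) points."""
    pct_change_q = (q2 - q1) / ((q1 + q2) / 2)  # midpoint % change in demand
    pct_change_p = (p2 - p1) / ((p1 + p2) / 2)  # midpoint % change in price
    return pct_change_q / pct_change_p

# Price raised from 10 to 12, observed demand fell from 100 to 80 units:
e = arc_elasticity(10, 100, 12, 80)  # -11/9, roughly -1.22
```

An elasticity below -1, as here, signals price-sensitive demand, so an automated rule might steer marketing budget away from further price increases on that product.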

June 20, 2023

Three Ways Retailers Use MongoDB for Their Mobile Strategy

Mobile experiences are a crucial aspect of a retail omnichannel strategy. Retailers strive to create a consistent customer experience as consumers switch between online, in-store, and mobile channels. This presents a complex data management challenge, as views across customer and workforce mobile applications need real-time access to the same data sets, both on- and offline. Let's dive into three ways retailers are tackling omnichannel data challenges with MongoDB mobile solutions. Mobile solutions to omnichannel challenges Achieving data centricity across channels Before building any mobile omnichannel solution, you first have to solve the data-centricity problem. Established retailers tend to have fragmented and siloed data both in-store and online, which needs to be combined in real time to facilitate omnichannel experiences. Consider Marks & Spencer's loyalty program, which was part of a key strategic initiative to increase customer retention and drive multichannel sales. This required a data-centric solution to gain deep insight into customer behavior. As data size and traffic grew, the legacy solution couldn't scale. The company addressed this problem by re-platforming the Sparks mobile application backend onto MongoDB Atlas, a high-performance data platform capable of expanding vertically and horizontally to deal with the heavy read/write throughput of a data-driven enterprise. The Sparks customer mobile app caters to more than 8 million unique customers and is capable of calculating more than 15 million unique offers a day. The flexibility of the document model allowed the company to respond to market trends or new user behavior and update its analytical framework accordingly. Taking advantage of the translytical data platform capabilities, business teams could classify and track customers, products, content, and promotions across any stage of the value chain, unlocking new revenue streams, all in real time. 
No matter which channel a customer is engaging with, which device they're browsing on, or their geographical location, Marks & Spencer is able to cater to its customers' needs and use data to keep improving what the brand has to offer. Delivering a cohesive omnichannel retail brand experience It has become increasingly difficult for retailers to deliver a consistent experience across multiple channels. Think of the associated complexity of capturing and serving the right data at the right time: extensive product catalogs, complex and changing categorization, regional nuances and language challenges for a global footprint, diverse seasonal sales, promotions, and more. Because customers are engaging in "phygital" behavior, browsing the online product catalog while also walking through the store, enabling your workforce to respond to customer questions becomes crucial to delivering the expected brand experience. Retailers are creating workforce enablement mobile apps for complex store management operations and global inventory browsing, which require synced data to achieve a connected store. For a company like 7-Eleven, whose value proposition is "Be the first choice for convenience. Anytime. Anywhere," enabling its workforce became a critical issue for maintaining brand value. Using an omnichannel approach, 7-Eleven deployed a custom mobile device using MongoDB Realm, MongoDB's unified mobile platform, to manage its in-store inventory system. Leveraging the power of Atlas Device Sync, MongoDB's service for syncing data across devices, users, and backends, 7-Eleven's front-line staff can start using devices immediately, without waiting minutes for data to download on initial startup, which increases data accuracy, especially around real-time stock management. 
Easy access to correct, real-time product information boosts 7-Eleven's convenience-centered brand offering and secures a cohesive brand experience across both digital and brick-and-mortar stores. Brand cohesion depends on efficient order management and visual merchandising. As customers window shop, employees need to analyze real-time stock data from the retailer's supply chain as it passes from the warehouse through in-store processes like stock delivery, visual merchandising, and product returns management, in order to optimize store operations and create seamless experiences. Imagine the case of a fashion retailer with data gathered from RFID product tagging, scanned through mobile devices in the store. It can reduce total operating costs through optimized order processing, enabling logistics managers to discover pain points in the supply chain, forecast demand, and avoid stock-outs thanks to real-time triggers and alerts with auto-replenishment capabilities. Retailers can also optimize in-store merchandising based on customer shopping behavior data gathered from garment SKUs scanned on the store racks or in fitting rooms with beacon-like RFID scanners. By viewing the movement of items and measuring things like how many times items are tried on relative to how often they're purchased, companies can understand how product assortment and movement affect purchasing intention, and relocate clothes based on that information. Thanks to MongoDB's real-time architectures, with Kafka managing the event streaming and MongoDB Realm providing a simple, fully integrated way to sync real-time inventory data to MongoDB Atlas, companies can turn a deeper understanding of customer behavior into a competitive advantage while reducing total operational costs. 
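As a small illustration of that try-on metric, RFID scan events per SKU can be reduced to a purchase-per-try-on rate that flags garments tried often but rarely bought. The event shape here is a made-up assumption for the sketch:

```python
from collections import Counter

def tryon_conversion(events):
    """Compute purchases-per-try-on for each SKU from scan events.

    Each event is assumed to be a dict like
    {"sku": ..., "type": "try_on" | "purchase"}.
    """
    tries, buys = Counter(), Counter()
    for event in events:
        bucket = tries if event["type"] == "try_on" else buys
        bucket[event["sku"]] += 1
    # Rate is only defined for SKUs that were tried on at least once.
    return {sku: buys[sku] / n for sku, n in tries.items()}

events = [
    {"sku": "JACKET-7", "type": "try_on"},
    {"sku": "JACKET-7", "type": "try_on"},
    {"sku": "JACKET-7", "type": "purchase"},
    {"sku": "SCARF-2", "type": "try_on"},
]
rates = tryon_conversion(events)  # JACKET-7: 0.5, SCARF-2: 0.0
```

A low rate like SCARF-2's suggests an item that attracts interest but fails to convert, which is exactly the signal merchandisers would use to reposition or reprice it.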
Data collection and its associated change events often occur in variable-latency, low-network-availability scenarios, like warehouses, delivery trucks, or store buildings, creating the third challenge we will address next: network consistency. Dealing with network consistency issues Networks with variable latency across store floors, whether due to server distance, content-heavy applications, or simply network congestion during sales periods, can generate the unwanted byproduct of data inconsistencies. Apps can restart or shut down at any time due to bugs, excessive memory use, or other apps working in the background. To address these issues, businesses need an on-device database with offline synchronization: an offline-first approach. The luxury market in particular expects perfection, with items ranging from $20,000 to more than $100,000. Customers expect more than just a purchase: they expect personalized experiences. One of our customers in luxury retail regularly holds pop-up events, often in destinations without network signal, which creates the need for a reliable mobile app with access to customer data, like purchase histories, to provide that personalized experience. For example, providing fast and easy check-in for customers at these events is critical to their experience-centric business. Thanks to MongoDB Realm, their event tablets always work, even when internet connections are poor. Capitalizing on MongoDB Realm's local data persistence, which stores user data on the device, and combining it with Atlas Device Sync, the retailer has a top-performing mobile app that keeps working offline, storing user data locally and syncing it back to the database once connectivity is restored. This approach allows the company to build an uninterrupted and unified 360-degree customer management platform spanning web and mobile touchpoints, with relevant data always up to date. 
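Conceptually, the offline-first pattern boils down to persisting writes locally and replaying them when connectivity returns. The sketch below is a plain-Python illustration of that idea only; it is not the Realm or Atlas Device Sync API, which handles this (plus conflict resolution) for you:

```python
class OfflineFirstStore:
    """Toy offline-first store: writes always land locally; a queue of
    pending writes is drained to the remote side on reconnect."""

    def __init__(self):
        self.local = {}    # on-device state, always writable
        self.remote = {}   # stand-in for the cloud database
        self.pending = []  # writes made while offline
        self.online = False

    def write(self, key, value):
        self.local[key] = value                # local write always succeeds
        if self.online:
            self.remote[key] = value
        else:
            self.pending.append((key, value))  # replay once back online

    def reconnect(self):
        self.online = True
        for key, value in self.pending:        # drain the queue in order
            self.remote[key] = value
        self.pending.clear()
```

In the pop-up event scenario, a check-in recorded on a tablet with no signal behaves like a `write` while offline: the staff member sees it immediately in local state, and it reaches the backend on the next `reconnect`.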
Figure 4: Bi-directional device-to-cloud sync with MongoDB Realm Sync Conclusion Mobile experiences are crucial in today's retail landscape, but integrating them can be challenging. By leveraging mobile channels, retailers can differentiate their brands, increase customer loyalty, and expand their reach in a fiercely competitive industry — staying ahead of the game and thriving in the omnichannel era. Learn how to quickly launch and scale secure mobile apps on our MongoDB for Mobile site. Thank you to Karolina Ruiz Rogelj for her contributions to this post. Want to learn more about modernizing retail experiences for your customers? Join one of two webinar sessions on May 17: European Time Zone Americas Time Zone

April 17, 2023