Applications

Customer stories, use cases, and experiences of MongoDB

Digital Receipts: Mining for Customer & Business Insight with MongoDB

Imagine walking out of your favorite store and moments later receiving a personalized recommendation for a matching item, based not only on what you just bought, but your entire purchase history. This level of tailored experience has long been difficult to achieve in brick-and-mortar retail, but that’s changing thanks to digital receipts. Digital receipts are gaining traction, with Realtimes UK reporting that a quarter of UK retailers now offer them exclusively. In physical stores, traditional paper receipts represent missed opportunities: static, one-time records that serve little purpose beyond proof of purchase. In contrast, digital receipts unlock a dynamic stream of customer insights, which are a gateway to AI-powered personalization, enabling retailers to transform sales data into timely, relevant recommendations. Retailers are also seeing greater adoption of their customer loyalty apps by embedding features like digital receipts and personalized offers, giving shoppers more reasons to engage after leaving the store.

Retailers are increasingly investing in digital receipts, and MongoDB enables them to digitize in-store transactions, understand shopper behavior, and deliver personalized product suggestions immediately after checkout. With MongoDB’s flexible document model, retailers can efficiently store and analyze rich transactional data, powering real-time personalization and adaptive customer experiences. It’s a smarter, data-driven approach to customer engagement, built for the physical retail world.

The challenge in capturing the in-store customer journey

Personalized shopping experiences are a proven driver of customer loyalty and revenue, but to deliver them effectively, retailers need a complete view of each customer’s journey. For retailers who have a brick-and-mortar presence, that’s where the gap lies. Today, many retailers are making personalization decisions based on incomplete data.
While loyalty programs and customer profiles may capture some purchase history, in-store transactions often go unrecorded or take too long to turn into actionable insights. Paper receipts dominate the checkout process, and without a digital trail, these interactions are lost to the retailer’s systems. This means that even a highly engaged, in-store shopper may appear invisible when it comes to targeting and recommendations.

The impact of this is twofold. First, it limits the retailer’s ability to offer relevant product suggestions, personalized promotions, or timely follow-ups, missing key opportunities to increase basket size and repeat visits. Second, it affects the customer experience, particularly in the retailer’s mobile app. Shoppers who frequent physical stores often find that their app doesn’t reflect their recent purchases or preferences, making it feel disconnected and less useful.

By digitizing receipts, retailers can close this gap. Every in-store purchase becomes a rich source of insight, directly tied to the customer profile. This enables more accurate, real-time personalization, both right after checkout and in future interactions. It also adds meaningful value to the retailer’s mobile app: customers see their full purchase history, receive smarter recommendations, and access personalized offers that feel relevant. The business impact is significant: better personalization drives more revenue, while a more engaging app experience leads to higher adoption, increased usage, and stronger loyalty.

Getting the most out of day-to-day data: Building a digital receipt solution

Retailers aiming to enhance personalization must first digitize in-store transactional data, particularly the information generated at checkout from point-of-sale (POS) systems. However, the majority of existing POS systems have fixed, non-changeable data formats, designed primarily for payment processing.
These systems often vary across store locations, lack integration with customer profiles, and don't support rapid data access. To address these challenges, retailers should centralize transaction data from all stores into a consistent and accessible format. Ensuring each purchase is reliably linked to a customer identity, through loyalty sign-ins or digital prompts, and storing that information in a manner that supports immediate, personalized engagement is crucial. Integration with POS systems is essential, allowing retailers to capture transaction data instantly and store it.

A flexible document model (like MongoDB’s) stores structured, unstructured, and AI-ready data in one format, making it ideal for managing complex customer profiles and purchase histories. It captures detailed transaction data, including items, prices, context, and nested info like product attributes, preferences, and loyalty activity, all within a single document.

Figure 1. MongoDB’s document model contains the data used to render the digital receipts. This image shows how MongoDB's document model supports digital receipts by instantly ingesting all receipt details. It features a MongoDB document (left) containing both purchased product information and personalized recommendations, and the digital receipt as a PDF (right).

The document model also makes the data instantly usable for personalization engines and AI models, without the need for heavy transformation or complex joins across multiple systems. If a retailer has several brands or types of POS systems whose data arrives in different formats, the flexible document model allows that data to be combined more easily, including fast onboarding when new system types are introduced. Seamless integration allows connectivity with existing POS systems and third-party analytics tools, reducing friction in adoption.
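To make the single-document idea concrete, here is a minimal sketch of what such a receipt document might look like. All field names are illustrative assumptions, not a MongoDB-prescribed schema: the point is that line items, loyalty context, and recommendations live together in one document.

```python
# Illustrative digital-receipt document (hypothetical field names).
# One document holds the transaction, nested line items, and loyalty context.
receipt = {
    "receiptId": "R-2025-0612-0042",
    "customerId": "C-88217",            # linked via loyalty sign-in
    "store": {"id": "S-014", "city": "Amsterdam"},
    "purchasedAt": "2025-06-12T14:31:00Z",
    "items": [
        {
            "sku": "SKU-1001",
            "name": "Trail running shoes",
            "price": 89.95,
            "attributes": {"size": "42", "color": "blue"},  # nested product attributes
        },
        {"sku": "SKU-2040", "name": "Running socks", "price": 9.95},
    ],
    "total": 99.90,
    "loyalty": {"pointsEarned": 100, "tier": "gold"},
    "recommendations": [  # filled in by a personalization engine
        {"sku": "SKU-3377", "name": "Hydration vest", "reason": "bought_with"},
    ],
}

# Because items of varying shapes coexist in one array, receipts coming from
# differently formatted POS systems can land in the same collection.
total = round(sum(item["price"] for item in receipt["items"]), 2)
print(total)
```

Since the document needs no joins to be rendered or analyzed, the same structure can feed both the receipt view in the app and downstream recommendation models.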
MongoDB enables this through features like real-time data ingestion with change streams, flexible data connectors for systems like Kafka, and an API-driven approach that supports REST. Combined with MongoDB Atlas’s multi-cloud deployment support, retailers can connect and scale across diverse infrastructures without needing to re-architect their existing systems.

Retailers can surface digital receipts directly in the customer-facing app, enhancing the post-purchase experience. Shoppers gain instant access to their full purchase history, enabling features like receipt lookups, easy reorders, warranty tracking, and personalized product suggestions. This drives more app adoption and keeps customers engaged beyond the store visit. To support this experience at scale, retailers need an architecture that can handle high volumes of receipt data from numerous store locations. MongoDB Atlas supports this through horizontal scalability and workload isolation, ensuring operational workloads like customer app interactions remain fast and reliable as data grows. Some retailers optimize storage by keeping receipt metadata in MongoDB while storing the full receipt in an object store like Azure Blob Storage or Google Cloud Storage, enabling a cost-effective approach.

Figure 2. Architecture diagram showing the Digital Receipts components.

MongoDB’s ability to serve real-time queries with low latency ensures that every tap or search in the app feels instant, helping reinforce customer trust and satisfaction. This makes the app not just a digital companion but a key driver of loyalty and repeat visits. By making digital receipts easily accessible in the app, alongside personalized recommendations and seamless post-purchase interactions, retailers create a more engaging and convenient experience that keeps customers coming back.
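The metadata-in-MongoDB, full-receipt-in-object-storage split mentioned above can be sketched as follows. The document shape, object-key layout, and field names are assumptions for illustration, not a prescribed pattern:

```python
# Hypothetical sketch of the storage split: keep queryable receipt metadata
# in MongoDB and push the bulky rendered receipt (e.g. a PDF) to an object
# store, keeping only a pointer in the document.
def split_receipt(receipt: dict, pdf_bytes: bytes, bucket: str) -> tuple[dict, str]:
    """Return (metadata_doc, object_key); the caller uploads pdf_bytes to the
    object store and inserts metadata_doc into MongoDB."""
    object_key = f"receipts/{receipt['customerId']}/{receipt['receiptId']}.pdf"
    metadata_doc = {
        "receiptId": receipt["receiptId"],
        "customerId": receipt["customerId"],
        "purchasedAt": receipt["purchasedAt"],
        "total": receipt["total"],
        "itemSkus": [item["sku"] for item in receipt["items"]],  # enough for search/recs
        "pdf": {"bucket": bucket, "key": object_key, "bytes": len(pdf_bytes)},
    }
    return metadata_doc, object_key

meta, key = split_receipt(
    {"receiptId": "R-1", "customerId": "C-9", "purchasedAt": "2025-06-12",
     "total": 12.5, "items": [{"sku": "SKU-7"}]},
    pdf_bytes=b"%PDF-...",
    bucket="retailer-receipts",
)
print(key)  # receipts/C-9/R-1.pdf
```

The app queries the small metadata documents for lookups and recommendations, and fetches the full PDF from the object store only when the shopper opens the receipt.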
Increased app adoption leads to more touchpoints, better data collection, and more opportunities to upsell or cross-sell, ultimately boosting revenue and retention. A notable example of a retailer leveraging MongoDB for digital receipts is Albert Heijn, the largest supermarket chain in the Netherlands. By utilizing MongoDB Atlas, Albert Heijn developed a digital receipts feature within their customer-facing app, providing shoppers with real-time and historical insights into their in-store purchases. This adoption of MongoDB Atlas led to annual savings of 25%, improved developer productivity, and a more efficient customer experience.

Retailers use digital receipt data to improve personalized recommendations by combining purchase history, preferences, and behavior. Digitized receipts enable tracking of items, frequency, and context, allowing real-time linking of in-store purchases to customer profiles for more accurate, timely offers.

Figure 3. Diagram showing the Digital Receipts process flow. The image illustrates the digital receipts process: 1. A customer makes a purchase in-store, 2. receives a digital receipt via email or SMS, 3. verifies it through an app, 4. accesses purchase history and personalized recommendations, and 5. can repurchase items through the app.

Using MongoDB’s aggregation pipelines and change streams, retailers can process data efficiently and enable AI-driven personalization immediately after checkout. This streamlined handling of structured and unstructured receipt data supports rapid analysis of customer preferences and purchasing patterns. MongoDB's workload isolation ensures that analytical processes do not impact the performance of customer-facing applications, maintaining a seamless user experience. Retailers can enhance customer engagement by leveraging this data to offer personalized promotions, loyalty rewards, and cross-selling opportunities.

Ready to embrace digital receipts?
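As one example of the aggregation pipelines mentioned above, a recommender can summarize a shopper's purchase history by category right after checkout. The collection and field names below are assumptions; the pipeline stages (`$match`, `$unwind`, `$group`, `$sort`, `$limit`) are standard MongoDB aggregation operators and would run as `db.receipts.aggregate(pipeline)` with a driver such as PyMongo:

```python
# Sketch of an aggregation pipeline that surfaces a shopper's most-purchased
# categories -- the kind of signal a recommendation engine can act on.
customer_id = "C-88217"
pipeline = [
    {"$match": {"customerId": customer_id}},          # this shopper's receipts
    {"$unwind": "$items"},                            # one doc per line item
    {"$group": {                                      # purchases per category
        "_id": "$items.category",
        "purchases": {"$sum": 1},
        "spend": {"$sum": "$items.price"},
    }},
    {"$sort": {"purchases": -1}},                     # most-bought first
    {"$limit": 5},                                    # top five categories
]
print([next(iter(stage)) for stage in pipeline])
# ['$match', '$unwind', '$group', '$sort', '$limit']
```

Because the heavy lifting happens inside the database, the app only receives the handful of summary documents it needs to render suggestions.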
Digital receipts are reshaping how brick-and-mortar retailers unlock customer insights and deliver AI-driven personalization. With MongoDB Atlas, retailers can instantly analyze transactional data, customer preferences, and purchase history within a flexible document model, powering real-time, tailored recommendations that increase basket size, drive repeat purchases, and boost conversions. Beyond personalization, digital receipts reduce printing costs and support sustainability by eliminating paper waste, while offering customers a convenient, app-based way to access and search past purchases.

The real value lies in the data: by capturing rich, real-time insights from every in-store transaction, retailers can unify physical and digital touchpoints, improving customer engagement and business agility. MongoDB’s scalable architecture and real-time processing empower retailers to adapt quickly to changing behavior and deliver seamless, data-driven experiences. Now is the time to modernize your customer engagement strategy. Digital receipts aren’t just a convenience; they’re a competitive advantage! Discover how MongoDB Atlas can help you deliver seamless customer experiences across all channels through our solutions page.

June 12, 2025
Applied

Strengthening Security: Bug Bounty and GitHub Secret Scanning

Today, MongoDB is announcing two important updates that further strengthen its security posture:

- The free tier of MongoDB Atlas is now included in the company’s public bug bounty program.
- MongoDB has joined the GitHub secret scanning program.

These updates empower MongoDB to identify and remediate security risks earlier in the development lifecycle. MongoDB has long been committed to proactively tackling security challenges, so the decision to open MongoDB Atlas to responsible testing by the security researcher community was an easy one. Its collaboration with GitHub further strengthens this approach by enabling the detection and validation of exposed MongoDB-specific credentials. Together, these efforts help protect customer data and support secure application development at scale.

Expanding MongoDB’s bug bounty program to include MongoDB Atlas

The free tier of MongoDB Atlas is now a part of the company’s public bug bounty program. This fully managed, multi-cloud database powers mission-critical workloads for thousands of customers, ranging from large enterprises to small startups and individual developers. MongoDB’s bug bounty program has already paid out over $140,000 in bounties to security researchers and has resolved over 130 bug reports. Integrating Atlas into the bug bounty program is the next step in hardening the database’s security posture, enabling earlier discovery and remediation of potential risks.

The cyberthreat landscape is evolving faster than ever. Where organizations once faced a narrower set of risks, today’s threats are more diverse and sophisticated. These include emerging risks like generative AI misuse and supply chain compromises, alongside persistent threats such as phishing, software vulnerabilities, and insider attacks. One proven way to stay ahead of these threats is by working with the security research community through bug bounty programs.
Researchers can help identify and report vulnerabilities early, enabling organizations to fix issues before attackers exploit them. Security researchers are expanding their expertise to address new attack vectors, according to HackerOne. In fact, 56% now specialize in API vulnerabilities and 10% focus on AI and large language models.1 With MongoDB Atlas now included in the company’s bug bounty program, customers can expect:

- Continuous, real-world testing by a diverse security research community.
- Systems designed for faster detection of vulnerabilities than traditional penetration testing.
- Stronger confidence in MongoDB’s ability to safeguard sensitive data.

By bringing MongoDB Atlas into its bug bounty program, MongoDB is doubling down on transparency, collaboration, and proactive defense. This is a critical step in reinforcing customer trust and ensuring MongoDB Atlas remains secure as threats evolve.

Partnering with GitHub to detect credential leaks faster

Building on its commitment to proactive threat detection, MongoDB has also joined GitHub’s secret scanning partner program to better protect customers from credential exposure. This program enables service providers like MongoDB to include their custom secret token formats in GitHub’s secret scanning functionality. This capability actively scans repositories to detect accidental commits of secrets such as API keys, credentials, and other sensitive data. Through this partnership, when GitHub detects a match of MongoDB Atlas–specific secrets, it will notify MongoDB. Then MongoDB can securely determine if the credential is active. As a result, MongoDB can rapidly identify potential security risks and notify customers.

Stolen credentials remain one of the most common and damaging threats in cybersecurity. Stolen credentials have been involved in 31% of data breaches in the past decade, according to a Verizon report.
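To illustrate the kind of credential leak secret scanning catches, here is a deliberately simplified pattern for MongoDB connection strings with inline credentials. GitHub's actual partner patterns for MongoDB are not public and are far more precise; this regex is an assumption for illustration only:

```python
import re

# Simplified, illustrative pattern for a MongoDB connection URI that
# embeds credentials (user:password) -- the shape a secret scanner flags.
CONN_STRING = re.compile(
    r"mongodb(?:\+srv)?://"      # scheme, with optional DNS-seedlist form
    r"([^:/\s]+):([^@/\s]+)@"    # user:password -- the secret part
    r"[^\s\"']+"                 # host and options
)

def find_leaked_credentials(text: str) -> list[str]:
    """Return the usernames of any credential-bearing URIs found in text."""
    return [m.group(1) for m in CONN_STRING.finditer(text)]

sample = 'client = MongoClient("mongodb+srv://appUser:hunter2@cluster0.example.net/db")'
print(find_leaked_credentials(sample))  # ['appUser']
```

A URI without embedded credentials (e.g. one relying on environment-based auth) would not match, which is why scanners focus on the `user:password@` form.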
Credential stuffing, where bad actors use stolen credentials to access unrelated services, is the most common attack type for web applications.2 These breaches are particularly harmful, taking an average of 292 days to detect and contain.3 By participating in GitHub’s secret scanning program, MongoDB helps ensure that MongoDB Atlas customers benefit from:

- Faster detection and remediation of exposed credentials.
- Reduced risk of unauthorized access or data leaks.
- More secure, developer-friendly workflows by default.

Staying ahead of evolving security threats

MongoDB is continuously evolving to help developers and enterprises stay ahead of security risks. By expanding its public bug bounty program to include MongoDB Atlas and by partnering with GitHub to detect exposed credentials in real time, MongoDB is deepening its investment in proactive, community-driven security. These updates reflect a broader commitment to helping developers and organizations build secure applications, detect risks early, and respond quickly to new and emerging threats.

Learn more about these programs:

- MongoDB’s bug bounty program on HackerOne
- GitHub’s secret scanning partner program

1 Hacker-Powered Security Report, 8th Edition, HackerOne
2 Verizon Data Breach Investigations Report, 2024
3 IBM Cost of a Data Breach Report, 2024

May 27, 2025
Applied

Innovating with MongoDB | Customer Successes, May 2025

Welcome back to MongoDB’s bi-monthly roundup of customer success stories! In this series, we’ll share inspirational examples of how organizations around the globe are working with MongoDB to succeed and address critical challenges in today’s multihyphenate (fast-paced, ever-evolving, always-on) world. This month’s theme—really, it could be every month’s theme—is adaptability. It’s almost cliché but true: adaptability has never been more essential to business success. Factors like the increasing amount of data in the world (currently almost 200 zettabytes) and the rise of AI mean that organizations everywhere have to adapt to fundamental changes—in what work looks like, how software is developed and managed, and what end-users expect. So this issue of “Innovating With MongoDB” includes stories of MongoDB customers leveraging our database platform’s flexible schema, seamless scalability, and fully integrated AI capabilities to adapt to what’s next, and to build the agile foundations needed for real-time innovation and dynamic problem-solving. Read on to learn how MongoDB customers like LG U+, Citizens Bank, and L’Oréal aren’t just adapting to change—they’re leading it.

LG U+

LG U+, a leader in mobile, internet, and AI transformation, operates one of Korea's largest customer service centers, handling 3.5 million calls per month. To tackle inefficiencies and improve consultation quality, LG U+ developed Agent Assist on MongoDB Atlas. Leveraging MongoDB Atlas Vector Search, LG U+ integrates vector and operational data, unlocking real-time insights such as customer intent detection and contextual response suggestions. Within four months, LG U+ increased resource efficiency by 30% and reduced processing time per call by 7%, resulting in smoother interactions between agents and customers. By paving the way for intelligent AI solutions, LG U+ can deliver more reliable and personalized experiences for its customers.
Citizens Bank

Citizens Bank, a 200-year-old financial institution, undertook a significant technological transformation to address evolving fraud challenges. In 2023, the bank initiated an 18-20 month overhaul of its fragmented fraud management systems, shifting from legacy, batch-oriented processes to a comprehensive, cloud-based platform on MongoDB Atlas on AWS. This transition enables real-time fraud detection, significantly reducing losses and false positives. Importantly, the new platform provides Citizens Bank customers with enhanced security and a smoother, more reliable banking experience. With Atlas’ flexible schema and cloud-based capabilities, Citizens Bank can quickly implement new fraud prevention strategies in minutes instead of weeks. The bank is now experimenting with MongoDB Atlas Search and generative AI to improve predictive accuracy and stay ahead of emerging fraud patterns.

Through our partnership with The Stack, learn how our customers are achieving extraordinary results with MongoDB. This exclusive content could spark the insights you need to drive your business forward.

BioIntelliSense

BioIntelliSense is revolutionizing patient monitoring. Their BioButton® wearable device continuously captures vital signs and transmits the data to the BioDashboard™. This platform allows clinicians to monitor patients, access patient information, and receive near real-time alerts about potential medical conditions. After outgrowing its legacy SQL database, BioIntelliSense reengineered the end-to-end architecture of BioDashboard™ using MongoDB Atlas on AWS, Atlas Search, and MongoDB Time Series Collections. The new system now scales to support hundreds of thousands of concurrent patients while ensuring 100% uptime. By optimizing their use of MongoDB 8.0, BioIntelliSense also identified 25% of their spend that can be redirected to support future innovation.
Enpal

Enpal, a German start-up, is addressing climate change by developing one of Europe's largest renewable energy networks through solar panels, batteries, and EV chargers. Beyond infrastructure, Enpal fosters a community interconnected through data from over 65,000 devices. By utilizing MongoDB Atlas with native time series collections, Enpal efficiently manages 200+ real-time data streams from these devices. This innovative approach forms a virtual power plant that effectively supports the energy transition and is projected to reduce processing costs by nearly 60%. MongoDB enables Enpal to manage large data volumes cost-effectively while providing precise, real-time insights that empower individuals to make informed energy decisions.

Video spotlight: L’Oréal

Before you go, be sure to watch one of our recent customer videos featuring the world's largest cosmetics company, L’Oréal. See why L'Oréal's Tech Accelerator team says migrating to MongoDB Atlas was like “switching from a family car to a Ferrari.”

Want to get inspired by your peers and discover all the ways we empower businesses to innovate for the future? Visit our Customer Success Stories hub to see why these customers, and so many more, build modern applications with MongoDB.

May 20, 2025
Applied

Unlocking Literacy: Ojje’s Journey With MongoDB

In the rapidly evolving landscape of education technology, one startup is making waves with a bold mission to revolutionize how young minds learn to read. Ojje is redefining literacy education by combating one of the most pressing issues in education today—reading proficiency. To do so, Ojje leverages groundbreaking technology to ensure every child can access the world of stories, at their own pace, in their own language. That transformative change is powered by a strategic partnership with MongoDB.

Meet Ojje: A vision beyond words

From electric cars to diabetes apps, Adrian Chernoff has been at the forefront of breakthrough innovations. Now, as the Founder and CEO of Ojje, he's channeling his passion for invention and entrepreneurship into something deeply personal and universally important—literacy. At its core, Ojje is an adaptive literacy learning platform that offers stories in 15 different reading levels, available in both English and Spanish. Grounded in the science of reading, it features elements like read-aloud functionality and dyslexia-friendly fonts to engage every learner. Ojje is not just a tool—it’s a gateway to personalized literacy education. Ojje's mission is to reach every learner by providing materials that are leveled, accessible, and engaging. By doing so, Ojje aims to vastly improve reading outcomes across K-12 education.

Solving a literacy crisis with innovative solutions

With literacy rates in the U.S. alarmingly low—almost 70% of low-income fourth grade students cannot read at a basic level according to the National Literacy Institute—Ojje's mission couldn't be more crucial. Chernoff and his team developed their platform in response to teachers' complaints about the stark lack of appropriate reading materials available to students. Schools needed a tool that could effortlessly cater to varying reading abilities within a single classroom.
Ojje fills this gap by offering a dynamic platform that adapts to individual students’ needs, allowing educators to personalize instruction. The potential to genuinely connect with every student is realized through Ojje’s innovative use of technology.

Powered by MongoDB

At the root of every great tech innovation is an infrastructure that allows it to flourish. For Ojje, MongoDB is that foundation. As a startup, speed and adaptability are vital, and MongoDB’s flexible document model provides just that. It allows the Ojje team to launch rapidly, scale efficiently, and handle a variety of data structures seamlessly—all without the cumbersome need for rigid schemas. “MongoDB handles everything from structured data to student performance tracking, without unnecessary overhead,” Chernoff said. “The platform scales with our needs, and the built-in monitoring tools give our team confidence as usage grows.”

Why MongoDB? For Ojje, it was about the flexibility to handle educational content, ensure secure data handling for students, and offer scalability for thousands of classrooms. MongoDB proved to be the perfect fit, offering a balance of adaptability and comprehensive data management. Working with MongoDB also offered Ojje access to the MongoDB for Startups program, providing essential Atlas credits, valuable technical resources, and access to our vast network of partners. This support played a crucial role during Ojje’s developmental stages and early launch, helping to position the company for successful growth and innovation.

What’s next for Ojje?

With an eye towards broadening their impact, the Ojje team plans to expand its library to include STEM materials and engaging biographies, alongside enhancing existing content. Additionally, Ojje will introduce tools for educators to track each reader’s progress in real time, further personalizing instruction.
“We believe every student deserves the chance to love reading—and every teacher deserves tools that make that possible,” Chernoff said. “That’s why we’re building Ojje: To make literacy more accessible, engaging, and joyful. When students can learn to read and read to learn, it transforms not only their K–12 experience but their entire future.” In an exciting development, Ojje will soon unveil Ojje at Home. This initiative aims to extend literacy support beyond the classroom, providing families with valuable resources to join their children on the journey to literacy.

Building a future where every child reads

Ojje's combination of strategic foresight, cutting-edge technology, and genuine passion for educational impact makes it a standout player in the education sector. By partnering with MongoDB, the company has created a robust, adaptive platform that not only meets the demands of today’s classrooms but is poised to address future literacy challenges. As the digital landscape continues to evolve, so must our methods of teaching and learning. Ojje is leading the charge, ensuring that every child has the opportunity to love reading and reap the lifelong benefits it brings.

Interested in MongoDB but not sure where to start? Check out our quick start guides for detailed instructions on deploying and using MongoDB.

May 15, 2025
Applied

Unlocking BI Potential with DataGenie & MongoDB

Business intelligence (BI) plays a pivotal role in strategic decision-making. Enterprises collect massive amounts of data yet struggle to convert it into actionable insights. Conventional BI is reactive, constrained by predefined dashboards, and human-dependent, thus making it error-prone and non-scalable. Businesses today are data-rich but insight-poor. Enter DataGenie, powered by MongoDB—BI reimagined for the modern enterprise.

DataGenie autonomously tracks millions of metrics across the entire business datascape. It learns complex trends like seasonality, discovers correlations & causations, detects issues & opportunities, connects the dots across related items, and delivers 5 to 10 prioritized actionable insights as stories in natural language to non-data-savvy business users. This enables business leaders to make bold, data-backed decisions without the need for manual data analysis. With advanced natural language capabilities through Talk to Data, users can query their data conversationally, making analytics truly accessible.

The challenges: Why DataGenie needed a change

DataGenie processes large volumes of enterprise data on a daily basis for customers, tracking billions of time series metrics and performing anomaly detection autonomously to generate deep, connected insights for business users. The diagram below represents the functional layers of DataGenie.

Figure 1. DataGenie’s functional layers.

Central to the capability of DataGenie is the metrics store, which stores, rolls up, and serves billions of metrics. At DataGenie, we were using an RDBMS (PostgreSQL) as the metrics store. As we scaled to larger enterprise customers, DataGenie processed significantly higher volumes of data. The complex feature sets we were building also required enormous flexibility and low latency in how we store & retrieve our metrics.
DataGenie had multiple components that served different purposes, and all of these had to be scaled independently to meet our sub-second latency requirements. We had used PostgreSQL as the metrics store for quite some time and had tried to squeeze it to the maximum extent possible, at the cost of flexibility:

- Since we over-optimized the structure for performance, we lost the flexibility we required to build our next-gen features, which were extremely demanding.
- We defaulted to PostgreSQL for storing the insights (i.e., stories), again optimized for storage and speed, hurting us on the flexibility front.
- For the vector store, we had been using ChromaDB for storing all our vector embeddings. As the data volumes grew, the most challenging part was maintaining the data sync.
- We had to use a different data store for the knowledge store and yet another technology for caching.

The major problems we had were as follows:

- Rigid schema that hindered flexibility for evolving data needs.
- High latency & processing cost due to extensive preprocessing to achieve the desired structure.
- Slow development cycles that hampered rapid innovation.

How MongoDB gave DataGenie a superpower

After extensive experimentation with time-series databases, document databases, and vector stores, we realized that MongoDB would be the perfect fit for us, since it solved all our requirements with a single database.

Figure 2. MongoDB data store architecture.

Metrics store

When we migrated to MongoDB, we achieved a remarkable reduction in query latency. Previously, complex queries on 120 million documents took around 3 seconds to execute. With MongoDB's efficient architecture, we brought this down to an impressive 350-500 milliseconds for 500M+ docs, representing an 85-90% improvement in query speed at a much larger scale. Additionally, for storing metrics, we transitioned to a key-value pair schema in MongoDB.
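The key-value schema shift can be sketched as follows. The exact document shape DataGenie uses is not public, so the field names and key encoding here are assumptions; the idea is that many per-dimension metric rows fold into one document, which is how document counts can drop by an order of magnitude:

```python
# Hypothetical sketch of a key-value metrics schema: collapse many
# (metric, dimensions) rows into a single document whose "values" map
# holds one entry per dimension combination.
def rows_to_kv_doc(metric: str, day: str, rows: list[dict]) -> dict:
    """Collapse per-dimension metric rows into one key-value document."""
    values = {}
    for row in rows:
        # Encode the dimension combination as a stable string key.
        key = "|".join(f"{k}={row['dims'][k]}" for k in sorted(row["dims"]))
        values[key] = row["value"]
    return {"metric": metric, "day": day, "values": values}

rows = [
    {"dims": {"region": "EU", "channel": "web"}, "value": 120},
    {"dims": {"region": "EU", "channel": "app"}, "value": 80},
    {"dims": {"region": "US", "channel": "web"}, "value": 200},
]
doc = rows_to_kv_doc("orders", "2025-05-01", rows)
print(len(doc["values"]))  # 3 rows folded into 1 document
```

Reading a whole day of a metric then touches one document instead of many rows, which also helps the latency numbers quoted above.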
This change allowed us to reduce our data volume significantly—from 300 million documents to just 10 million documents—thanks to MongoDB's flexible schema and optimized storage. This optimization not only reduced our storage footprint for metrics but also enhanced query efficiency.

Insights store

By leveraging MongoDB for the insight service, we eliminated the need for extensive post-processing, which previously consumed substantial computational resources. This resulted in a significant cost advantage, reducing our Spark processing costs by 90% or more (from $80 to $8 per job). Querying 10,000+ insights took a minute before. With MongoDB, the same task is now completed in under 6 seconds—a 10x improvement in performance. MongoDB’s flexible aggregation pipeline was instrumental in achieving these results. For example, we extensively use dynamic filter presets to control which insights are shown to which users, based on their role & authority. The MongoDB aggregation pipeline dynamically adapts to user configurations, retrieving only the data that’s relevant.

LLM service & vector store

The Genie+ feature in DataGenie is our LLM-powered application that unifies all DataGenie features through a conversational interface. We leverage MongoDB as a vector database to store KPI details, dimensions, and dimension values. Each vector document embeds essential metadata, facilitating fast and accurate retrieval for LLM-based queries. By serving as the vector store for DataGenie, MongoDB enables efficient semantic search, allowing the LLM to retrieve contextual, relevant KPIs, dimensions, and values with minimal latency, enhancing the accuracy and responsiveness of Genie+ interactions. Additionally, integrating MongoDB Atlas Search for semantic search significantly improved performance. It provided faster, more relevant results while minimizing integration challenges. MongoDB’s schema-less design and scalable architecture also streamlined data management.
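The dynamic filter presets described above can be sketched as a function that turns a per-user configuration into the leading `$match` stage of an aggregation pipeline. The preset fields and insight schema below are assumptions for illustration, not DataGenie's actual model:

```python
# Hypothetical sketch: build an aggregation pipeline from a user's filter
# preset so each role only sees the insights it is entitled to.
def build_insights_pipeline(preset: dict, limit: int = 10) -> list[dict]:
    match: dict = {"status": "published"}
    if "regions" in preset:
        match["region"] = {"$in": preset["regions"]}
    if "minSeverity" in preset:
        match["severity"] = {"$gte": preset["minSeverity"]}
    return [
        {"$match": match},            # role-based visibility filter
        {"$sort": {"priority": -1}},  # most important stories first
        {"$limit": limit},
    ]

regional_manager = {"regions": ["EU"], "minSeverity": 3}
pipeline = build_insights_pipeline(regional_manager, limit=5)
print(pipeline[0]["$match"]["region"])  # {'$in': ['EU']}
```

Because the filter is assembled per request, adding a new preset field changes one branch of the builder rather than a fixed SQL view, which is the flexibility gain the text describes.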
Knowledge store & cache MongoDB's schema-less design enables us to store complex, dynamic relationships and scale them with ease. We also shifted to using MongoDB as our caching layer. Previously, having separate data stores made syncing and maintenance cumbersome. Centralizing this information in MongoDB simplified operations, enabled automatic syncing, and ensured consistent data availability across all features. With MongoDB, DataGenie is reducing time-to-market for feature releases Although we started the MongoDB migration to solve only our existing scalability and latency issues, we soon realized that just by migrating to MongoDB, we could imagine even bigger and more demanding features without engineering limitations. Figure 3. MongoDB + DataGenie integration. The DataGenie engineering team refers to this as its "v2 magic moment," since migrating to MongoDB made it much easier and more flexible to roll out the following new features: DataGenie Nirvana: A delay in the supply chain for a raw material can cascade into a revenue impact. Conventional analytics relies on complex ETL pipelines and data marts to unify disparate data and deliver connected dashboard metrics. DataGenie Nirvana eliminates the need for a centralized data lake by independently generating aggregate metrics from each source and applying advanced correlation and causation algorithms on the aggregated data to detect hidden connections. DataGenie Wisdom: Wisdom leverages an agentic framework and knowledge stores to achieve two outcomes: Guided onboarding: Onboarding a new use case in DataGenie is as simple as explaining the business problem and success criteria and sharing sample data - DataGenie autonomously configures itself for relevant metrics tracking to deliver the desired outcome. Next best action: DataGenie autonomously surfaces insights - like a 10% brand adoption spike in a specific market and customer demographic.
By leveraging enterprise knowledge bases and domain-specific learning, DataGenie would propose targeted marketing campaigns as the Next Best Action for this insight. Powered by Genie: DataGenie offers powerful augmented analytics that can be quickly configured for any use case and integrated through secure, high-performance APIs. This powers data products in multiple verticals, including Healthcare & FinOps, to deliver compelling augmented analytics as a premium add-on, drastically reducing their engineering burden and GTM risk. All of these advanced features require enormous schema flexibility, low latency aggregation, and a vector database that’s always in sync with the metrics & insights. That’s exactly what we get with MongoDB! Powered by MongoDB Atlas, DataGenie delivers actionable insights to enterprises, helping them unlock new revenue potential and reduce costs. The following are some of the DataGenie use cases in Retail: Demand shifts & forecasting: Proactively adjust inventory or revise marketing strategies based on product demand changes. Promotional effectiveness: Optimize marketing spend by understanding which promotions resonate with which customer segments. Customer segmentation & personalization: Personalize offers based on customer behavior and demographics. Supply chain & logistics: Minimize disruptions by identifying potential bottlenecks and proposing alternative solutions. Inventory optimization: Streamline inventory management by flagging potential stockouts or overstock. Fraud & loss prevention: Detect anomalies in transaction data that may signal fraud or errors. Customer retention & loyalty: Propose retention strategies to address customer churn. Staffing optimization: Optimize customer support staffing. Final thoughts Migrating to MongoDB did more than just solve DataGenie’s scalability and latency challenges - it unlocked new possibilities. 
The flexibility of MongoDB allowed DataGenie to innovate faster and conceptualize new features such as Nirvana, Wisdom, and ultra-efficient microservices. This transformation stands as a proof point for other product companies considering partnering with MongoDB. The partnership between DataGenie and MongoDB is a testament to how the right technology choices can drive massive business value, improving performance, scalability, and cost-efficiency. Ready to unlock deeper retail insights? Head over to our retail page to learn more. Check out our Atlas Learning Hub to boost your MongoDB skills.

April 16, 2025
Applied

Driving Retail Loyalty with MongoDB and Cognigy

Retail is one of the fastest-moving industries, often the very first to leverage cutting-edge AI to create next-gen experiences for its customers. One of the latest areas we're seeing retailers invest in is agentic AI: they are creating conversational chatbot "agents" that pull real-time information from their systems, use natural language processing to create conversational responses to customer queries, and then take action, completing tasks and solving problems. In this race to stay ahead of the competition, retailers are struggling to bring these agents to market quickly and don't always have the AI skills in-house. Many are looking to the broad ecosystem of off-the-shelf solutions to leverage the best of what's already out there, reducing time to market for their AI agents and leaving the AI models and integrations to the experts in the field. Some of the most successful retail conversational AI agents we've seen are built on Cognigy, a global leader in customer service solutions. With Cognigy, retailers are quickly spinning up conversational AI agents on top of their MongoDB data to create personalized conversational experiences that not only meet but anticipate customer expectations. Increasingly, whether retailers can offer customers immediate, seamless interactions is key to retaining their loyalty. Why next-gen conversational AI matters in retail Customer loyalty has been declining yearly, and customers are moving to retailers who can provide an elevated experience at every interaction. According to HubSpot's 2024 annual customer service survey, 90% of customers expect an immediate response to their inquiries, highlighting how speed has become a critical factor in customer satisfaction. Additionally, 45.9% of business leaders prioritize improving customer experience over product and pricing, demonstrating that in retail, speed and personalization are no longer optional: they define whether a customer stays or moves on.
The chatbots of the past that relied on simple rules-based engines and static data don't meet customers' new expectations: they lack real-time business context and can generate misleading answers because they aren't trained on the retailer's in-house data sets. This is where Cognigy's AI agents can create a more compelling experience: these intelligent systems integrate real-time business data with the capabilities of LLMs, enabling AI-driven experiences that are not only personalized but also precise and controlled. Instead of leaving responses open to interpretation, retailers can customize interactions, guide users through processes, and ensure AI-driven recommendations align with actual inventory, customer history, and business rules. This level of contextual understanding and action creates trust-driven experiences that foster loyalty. Having quality data and the ability to harness it effectively is the only way to meet the expectations customers have today. Doing so requires a data layer that is fast, flexible, and high-performing at the scale of your business operations, because winning companies must store and manage their information efficiently. This is where MongoDB, a general-purpose database, truly shines. It is designed to manage your constantly evolving business data, such as inventory, orders, transaction history, and user preferences. MongoDB's document model stands out in the retail industry, offering the flexibility and scalability businesses need to thrive in today's fast-paced environment. Cognigy can use this real-time operational data from MongoDB as a direct input to build, run, and deploy conversational AI agents at scale. With just a few clicks, businesses can create AI-driven chatbots and voice agents powered by large language models (LLMs), following their business workflows in a smooth and easy-to-implement way.
These agents can seamlessly engage with customers across various channels, including phone lines (still a major driver of customer interactions), website chat, Facebook Messenger, and WhatsApp, offering personalized interactions. On the back end, Cognigy is built on MongoDB as its operational data store, taking full advantage of MongoDB's scalability and high performance to ensure that its conversational AI systems can efficiently process and store large volumes of real-time data while maintaining high availability and reliability. The power of combining AI agents with real-time business data transforms personalization from a static concept into a dynamic, ever-evolving experience that makes customers feel truly recognized and understood at every touchpoint. By harnessing these intelligent systems, retailers can go beyond generic interactions to deliver seamless, relevant, and engaging experiences that naturally strengthen customer relationships. Ultimately, true personalization isn't just about efficiency; it's about creating meaningful connections that drive lasting customer engagement and loyalty. Let's look at what this looks like in the Cognigy interface when you're creating a flow for your chatbot: What's happening behind the scenes? Figure 1 below shows an example customer journey and demonstrates how Cognigy and MongoDB work together to use real-time data to give reliable, conversational responses to customer questions: Figure 1. An agentic AI conversational flow with Cognigy pulling user and order data from MongoDB. This user's journey starts when they make a purchase on a retailer's ecommerce application. The platform securely stores the order details, including product information, customer data, and order status, in MongoDB. To coordinate the delivery, the user reaches out via a chatbot or phone conversation orchestrated by Cognigy AI agents, which use advanced large language models (LLMs) to understand the user's inquiries and respond in a natural, conversational tone.
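The lookup an agent performs at this point in the journey can be sketched as below. This is a hedged, self-contained illustration: the in-memory `orders` list stands in for a real collection (with pymongo it would be `db.orders.find_one(...)`), and all field names are assumptions for the example.

```python
# Illustrative sketch of an AI agent's data-access step: fetch the customer's
# order document and turn it into a conversational reply. The "collection"
# here is an in-memory list so the example runs without a live database.

orders = [
    {"_id": "o-1001", "customer_id": "c-42", "status": "shipped",
     "delivery_slot": "2025-04-12T10:00"},
]

def find_order(collection, customer_id):
    """Return the customer's most recent order document, or None."""
    matches = [d for d in collection if d["customer_id"] == customer_id]
    return matches[-1] if matches else None

def answer_order_status(collection, customer_id):
    """Compose the grounded reply the agent sends to the customer."""
    order = find_order(collection, customer_id)
    if order is None:
        return "I couldn't find any orders for your account."
    return (f"Your order {order['_id']} is {order['status']}, "
            f"with delivery scheduled for {order['delivery_slot']}.")
```

Grounding the reply in the retrieved document (rather than letting the LLM guess) is what keeps the agent's answers aligned with actual order state.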
The AI agent retrieves the necessary user information and order details from MongoDB, configured as the data source, drawing on real-time data that is always up to date. By understanding the user's query, the agent retrieves the appropriate database information and is also able to update the database with any relevant information generated during the conversation, such as modifying a delivery appointment. As the user schedules their delivery, Cognigy updates the information directly in MongoDB, leveraging features like triggers and change streams to seamlessly synchronize real-time data with other key systems in the customer journey, such as inventory management and delivery providers. This ensures personalized user experiences at every interaction. Shaping the future of customer service with MongoDB and Cognigy Delivering responsive, personalized customer service is more essential than ever. By combining MongoDB's flexible, versatile, and performant data management with Cognigy's powerful conversational AI, businesses can create seamless, real-time interactions that keep customers engaged. The future of customer service is fast, dynamic, and seamlessly integrated into business operations. With MongoDB and Cognigy, organizations can harness the power of AI to automate and personalize customer interactions in real time, without the need for extensive development efforts. The MongoDB-Cognigy integration enables businesses to scale context-driven interactions, strengthen customer relationships, and exceed expectations while building lasting customer loyalty. Learn more about how Cognigy built a leading conversational AI solution with MongoDB on our customer story page. Need a solution for your retail business? Head over to our retail solutions page to learn how MongoDB supports retail innovation. Read our blog to learn how to enhance retail solutions with retrieval-augmented generation (RAG).

April 10, 2025
Applied

MongoDB: Gateway to Open Finance and Financial Data Access

This is the second in a two-part series about open finance and the importance of a flexible data store to open finance innovation. Check out part one here! Open finance is reshaping the financial services industry, pushing traditional institutions to modernize with a data-driven approach. Consumers increasingly expect personalized experiences, making innovation key to customer retention and satisfaction. According to a number of studies, 1 financial services are undergoing rapid transformation, driven primarily by the impact of Banking-as-a-Service (BaaS), embedded banking services, and AI. All of these initiatives are mainly powered by API services intended for data sharing, which have become must-have technical capabilities for financial institutions. Open finance can also unlock massive opportunities for continuous innovation. As a result, financial institutions must provision themselves with the right tools and expertise to be fully aware of the potential risks and challenges of embarking on such a "data-driven" journey. Now, let's dive deeper into an application of open finance with MongoDB. MongoDB as the open finance data store Integrating diverse financial data while ensuring its security, compliance, and scalability represents a series of considerable challenges for financial institutions. Bringing together data from a variety of backend systems entails a set of complex hurdles for financial ecosystem participants—banks, fintechs, and third-party providers (TPPs). First, they need to be able to handle structured, semi-structured, and increasingly unstructured data types. Then, cybersecurity and regulatory compliance concerns must be addressed. What's more, an increase in data-sharing scenarios can open up potential vulnerabilities, which leads to the risk of breach exposure and cyber-attacks (and, therefore, possible legal penalties and/or reputational damage). Figure 1. The power of open finance.
To implement open finance strategies, organizations must first determine the role they will play: whether they act as data holders, in charge of sharing the data with TPPs, or as data users, able to provide enhanced financial capabilities to end-users. Then, they must choose the most suitable technology for the data management strategy—and this is where MongoDB comes in, functioning as the operational data store. Let's explore how MongoDB can play a crucial role for both actors—data holders and data users—through an open finance functional prototype. Open finance in action: Aggregated financial view for banking users Figure 2 below shows a digital application from a fictional bank—Leafy Bank—that allows customers to aggregate all their bank accounts into a single platform. Figure 2. Architecture of MongoDB as the open finance data store. Four actors are involved in this scenario: a) the customer (end user); b) the data user (Leafy Bank); c) the data holder (an external institution); and d) the open finance data store (MongoDB Atlas). Now let's go through the steps from the customer experience. Step 1. Log in to the banking application Once logged in, the Leafy Bank digital banking application allows users to aggregate their external bank accounts. This is done behind the scenes, through a RESTful API request that will usually interchange data in JSON format. For the Leafy Bank prototype, we are using MongoDB and FastAPI together, exposing and consuming RESTful APIs and thereby taking advantage of MongoDB Atlas's high performance, scalability, and flexibility. Figure 3. Logging in to the banking application. Step 2. User authentication and authorization A crucial step to ensure security and compliance is user consent. End-users are responsible for granting access to their financial information (authorization). In our case, Leafy Bank emulates OAuth 2.0 authentication.
It generates the corresponding tokens for securing the service communication between participants. To achieve efficient interoperability without security issues, data holders must enable a secure technological "fence" for sharing data while preventing the operational risk of exposing core systems. Figure 4. User authorization. Step 3. Data exposure After authorization has been granted, Leafy Bank will fetch the corresponding account data from the data custodians—external banks (in our fictional scenario, Green Bank or MongoDB Bank)—via APIs. Usually, participants expose customers' financial data (accounts, transactions, and balances) through their exposed services in JSON format to ensure compatibility and seamless data exchange. Because MongoDB stores data in BSON, a superset of JSON, it provides a significant advantage by allowing seamless storage and retrieval of JSON-like data—making it an ideal backend for open finance. Figure 5. Data exposure. Step 4. Data fetching The retrieved financial data is then pushed into the open finance data store—in our case, MongoDB Atlas—where it is centrally stored. Unlike rigid relational databases, MongoDB uses a flexible schema model, making it easy for financial institutions to aggregate diverse data structures from different sources. This makes it ideal for dynamic ecosystems and easy to adapt without costly migrations or downtime. Figure 6. Data fetching from data holder into MongoDB Atlas Data Store.
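Steps 3 and 4 above amount to normalizing holder-specific JSON payloads into one document shape before storing them. The sketch below is illustrative only: the external banks' field names (`acct_no`, `bal`, `accountId`, etc.) are invented for the example, not from the Leafy Bank prototype.

```python
# Hedged sketch of normalizing account payloads from different data holders
# into a common document shape before inserting into the open finance data
# store. All external field names are assumptions for illustration.

def normalize_account(source, payload):
    """Map a holder-specific JSON payload to a common account document."""
    if source == "green_bank":
        return {
            "source": source,
            "account_id": payload["acct_no"],
            "balance": payload["bal"],
            "currency": payload["ccy"],
        }
    if source == "mongodb_bank":
        return {
            "source": source,
            "account_id": payload["accountId"],
            "balance": payload["balance"]["amount"],
            "currency": payload["balance"]["currency"],
        }
    raise ValueError(f"unknown data holder: {source}")

doc = normalize_account("green_bank", {"acct_no": "GB-1", "bal": 250.0, "ccy": "EUR"})
# doc is ready for collection.insert_one(doc); MongoDB's flexible schema also
# allows keeping holder-specific extra fields side by side if needed.
```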
They can unveil useful insights for building unique services meant to enhance customers' financial well-being. Figure 7. Data retrieval from MongoDB Atlas Data Store. Step 6. Bank connected! In the end, customers can view all their finances in one place, while banks can offer competitive, data-driven, tailored services. Figure 8. Displaying the bank connection in Leafy Bank. Demo in action Now, let's combine these steps into a real-world demo application: Figure 9. Leafy Bank - MongoDB as the Open Finance Data Store. Advantages of MongoDB for open finance Open finance presents opportunities for all ecosystem participants. On the one hand, bank customers can benefit from tailored experiences. For personal financial management, it can provide end-users with central visibility of their bank accounts. And open finance can enable extended payment initiation services, financial product comparison, enhanced insurance premium assessments, more accurate loan and credit scoring, and more. From a technical standpoint, MongoDB can empower data holders, data users, and TPPs to achieve open finance solutions. By offering a flexible schema, banks can adapt to open finance's evolving requirements and regulatory changes while avoiding the complexity of rigid schemas, yet allowing secure and manageable schema validation if required. Furthermore, a scalable (vertical and horizontal) and cloud-native (multi-cloud) platform like MongoDB can simplify data sharing in JSON format, which has been widely adopted as the de facto data interchange format, making it ideal for open finance applications. Internally, MongoDB uses BSON, the binary representation of JSON, for efficient storage and data traversal. MongoDB's rich extensions and connectors support a variety of frameworks for RESTful API development. Besides FastAPI, there are libraries for Express.js (Node.js), Django (Python), Spring Boot (Java), and Flask (Python).
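The aggregation behind the "global position" view in step 5 could be expressed as a pipeline like the one below. This is a hedged sketch: the collection and field names (`customer_id`, `currency`, `balance`, `account_id`) are assumptions for illustration, not the prototype's actual schema.

```python
# Sketch of an aggregation pipeline computing a customer's global position:
# total balance per currency across all aggregated accounts. Pipelines are
# plain dicts, so they can be built and inspected without a live server.

def global_position_pipeline(customer_id):
    """Group the customer's accounts by currency and sum their balances."""
    return [
        {"$match": {"customer_id": customer_id}},
        {"$group": {
            "_id": "$currency",                    # one result per currency
            "total_balance": {"$sum": "$balance"},
            "accounts": {"$push": "$account_id"},
        }},
        {"$sort": {"_id": 1}},
    ]

pipeline = global_position_pipeline("cust-42")
# Run with: db.accounts.aggregate(pipeline)
```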
The goal is to empower developers with an intuitive and easy-to-use data platform that boosts productivity and performance. Additionally, MongoDB offers key features like its aggregation pipeline , which is designed to process data more efficiently by simplifying complex transformations, real-time analytics, and detailed queries. Sophisticated aggregation capabilities from MongoDB allow financial institutions to improve their agility while maintaining their competitive edge, all by having data as their strategic advantage. Lastly, MongoDB provides financial institutions with critical built-in security controls, including encryption, role-based access controls (RBAC), and auditing. It seamlessly integrates with existing security protocols and compliance standards while enforcing privileged access controls and continuous monitoring to safeguard sensitive data, as detailed in the MongoDB Trust Center . Check out these additional resources to get started on your open finance journey with MongoDB: Read part-one of our series to discover why a flexible data store is vital for open finance innovation. Explore our GitHub repository for an in-depth guide on implementing this solution. Visit our solutions page to learn more about how MongoDB can support financial services.

April 1, 2025
Applied

How Cognistx’s SQUARY AI is Redefining Information Access

In a world where information is abundant but often buried, finding precise answers can be tedious and time-consuming. People spend hours a week simply searching for the information they need. Cognistx, an applied AI startup and a member of the MongoDB for Startups program, is on a mission to eliminate this inefficiency. Through its flagship product, SQUARY AI, the company is building tools to make information retrieval faster, more reliable, and radically simpler. As Cognistx seeks to unlock the future of intuitive search with speed, accuracy, and innovation, MongoDB Atlas serves as a reliable backbone for the company’s data operations. A company journey: From bespoke AI projects to a market-ready solution Cognistx started its journey with a focus on developing custom AI solutions for clients. Over time, the company identified a common pain point across industries: the need for efficient, high-quality tools to extract actionable insights from large volumes of data. This realization led it to pivot toward a product-based approach, culminating in the development of SQUARY AI—a next-generation intelligent search platform. SQUARY AI’s first iteration was born out of a bespoke project. The goal was to build a smart search engine capable of extracting answers to open-ended questions across multiple predefined categories. Early on, the team incorporated features like source tracking to improve trustworthiness and support human-assisted reviews, ensuring that the AI’s answers could be verified and trusted. Seeing the broader potential of its technology, Cognistx began using advancements in natural language processing and machine learning, transforming its early work into a stand-alone product designed for diverse industries. 
The evolution of SQUARY AI: Using state-of-the-art large language models Cognistx initially deployed traditional machine learning approaches to power SQUARY AI’s search capabilities, such as conversation contextualization and multihop reasoning (the ability to combine information from multiple sources to form a more complete answer). Before the rise of large language models (LLMs), this was no small feat. Today, SQUARY AI incorporates state-of-the-art LLMs to elevate both speed and precision. The platform uses a combination of retrieval-augmented generation (RAG), custom text-cleaning methods, and advanced vector search techniques. MongoDB Atlas integrates seamlessly into this ecosystem. MongoDB Atlas Vector Search powers SQUARY AI’s advanced search capabilities and lays the groundwork for even faster and more accurate information retrieval. With MongoDB Atlas, the company can store vectorized data alongside the rest of its operational data. There’s no need to add a separate, stand-alone database to handle vector search. MongoDB Atlas serves as both the operational data store and vector data store. Cognistx offers multiple branches of SQUARY AI, including: SQUARY Chat: Designed for public-facing or intranet deployment, these website chatbots provide instant, 24/7 access to website content, eliminating the need for human agents. It also empowers website owners with searchable, preprocessed AI insights from user queries. These analytics enable organizations to directly address customer needs, refine marketing strategies, and ensure that their sites contain the most relevant and valuable information for their audiences. SQUARY Enterprise: Built with businesses in mind, this enterprise platform helps companies retrieve precise answers from vast and unorganized knowledge bases. Whether it’s assisting employees or streamlining review processes, this tool helps organizations save time, improve team efficiency, and deliver actionable insights. 
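The retrieval step in a RAG flow like the one described above is typically an Atlas `$vectorSearch` aggregation stage. The sketch below is illustrative: the index name (`embeddings_index`) and field names (`embedding`, `text`, `source`) are assumptions, not SQUARY AI's actual configuration.

```python
# Hedged sketch of an Atlas Vector Search retrieval pipeline for RAG:
# find the k chunks most similar to the query embedding, then project
# the fields the LLM needs as grounding context.

def semantic_search_pipeline(query_vector, k=5):
    """Build a $vectorSearch + $project pipeline for top-k retrieval."""
    return [
        {"$vectorSearch": {
            "index": "embeddings_index",   # assumed Atlas Search index name
            "path": "embedding",           # assumed vector field
            "queryVector": query_vector,
            "numCandidates": 20 * k,       # oversample, then keep the top k
            "limit": k,
        }},
        {"$project": {
            "text": 1,
            "source": 1,
            "score": {"$meta": "vectorSearchScore"},
        }},
    ]

pipeline = semantic_search_pipeline([0.1, 0.2, 0.3], k=3)
# db.chunks.aggregate(pipeline) would return the 3 most similar chunks,
# which are then passed to the LLM along with the user's question.
```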
One of the standout features of SQUARY AI is its AI-driven metrics, which assess system performance and provide insights into user interests and requirements. This is particularly valuable for public-facing website chatbots. A powerful database: How MongoDB powers SQUARY AI Cognistx attributes much of its technical success to MongoDB. The company's history with MongoDB spans years, and its trust in MongoDB's performance and reliability made the database the obvious choice for powering SQUARY AI. "MongoDB has been pivotal in our journey," said Cognistx Data Scientist Ihor Markevych. "The scalable, easy-to-use database has allowed us to focus on innovating and refining SQUARY AI without worrying about infrastructure constraints. With MongoDB's support, we've been able to confidently scale as our product grows, ensuring both performance and reliability." The team's focus when selecting a database was on cost, convenience, and development effort. MongoDB checked all those boxes, said Markevych. The company's expertise with MongoDB, coupled with years of consistent satisfaction with its performance, made it the obvious choice. With no additional ramp-up effort necessary, the team was able to deploy very quickly. In addition to MongoDB Atlas Vector Search, the other critical feature of MongoDB is its scalability, which Markevych described as seamless. "Its intuitive structure enables us to monitor usage patterns closely and scale up or down as needed. This flexibility ensures we're always operating efficiently without overcommitting resources," Markevych said. The MongoDB for Startups program has also been instrumental in the company's success. The program provides early-stage startups with free MongoDB Atlas credits, technical guidance, co-marketing opportunities, and access to a network of partners.
With help from MongoDB technical advisors, the Cognistx team is now confidently migrating data from OpenSearch to MongoDB Atlas to achieve better performance at a reduced cost. The free MongoDB Atlas credits enabled the team to experiment with various configurations to optimize the product further. It also gained access to a large network of like-minded innovators. “The MongoDB for Startups community has provided invaluable networking opportunities, enhancing our visibility and connections within the industry,” Markevych said. The future: Scaling for more projects Looking ahead, Cognistx is focusing on making SQUARY AI even more accessible and customizable. Key projects include automating the onboarding process, which will enable users to define and fine-tune system behavior from the start. The company also aims to expand SQUARY AI’s availability across various marketplaces. With a successful launch on AWS Marketplace, the company next hopes to offer its product on WordPress, making it simple for businesses to integrate SQUARY Chat into their websites. Cognistx is continuing to refine SQUARY AI’s balance between speed, accuracy, and usability. By blending cutting-edge technologies with a user-centric approach, the company is shaping the future of how people access and interact with information. See it in action Cognistx isn’t just building a tool; it’s building a movement toward intuitive, efficient, and conversational search. Experience the possibilities for yourself— schedule a demo of SQUARY AI today . To get started with vector search in MongoDB, visit our MongoDB Atlas Vector Search Quick Start guide .

March 26, 2025
Applied

Embracing Open Finance Innovation with MongoDB

The term "open finance" is increasingly a topic of discussion among banks, fintechs, and other financial services providers—and for good reason. Open finance, as the next stage of open banking, expands the scope of data sharing beyond traditional banking to include investments, insurance, pension funds, and more. To deliver these enhanced capabilities, financial service providers need a versatile and flexible data store that can seamlessly manage a wide array of financial data. MongoDB serves as an ideal solution, providing a unified data platform that empowers financial services providers to integrate various data sources, enabling real-time analytics, efficient data retrieval, and scalability. These capabilities are pivotal in enhancing customer experiences, providing users with a comprehensive view of their finances, and empowering them with greater visibility and control over their own data. By adopting MongoDB, financial services can seamlessly adapt to the growing demands of open finance and deliver innovative, data-driven solutions. Open finance's past and future As highlighted in a study conducted by the Cambridge Centre for Alternative Finance, 1 the terms 'open banking' and 'open finance' vary globally. Acknowledging these differences, we'll focus on the model displayed in Figure 1 due to its widespread adoption and relevance in our study. Figure 1. The three waves of innovation in financial services. The development of open finance started with open banking, which required banks to promote innovation by allowing customers to share their financial data with third-party service providers (TPPs) and by letting those TPPs—fintech and techfin companies—initiate transactions on their behalf, initially solely in the context of payments. This proved to be an effective way to promote innovation and thus led to a broader spectrum of financial products, adding loans, mortgages, savings, pensions, insurance, investments, and more.
Leading to this new directive, commonly referred to as: open finance. If we take a step further—regardless of its final implementation—a third development called open data suggests sharing data beyond the traditional boundaries of the financial services industry (FSI), exponentially increasing the potential for financial services by moving into cross-sector offerings, positioning FSI as a horizontal industry rather than an independent vertical as it was previously known. Who and what plays a role in open finance? Among the different actors across open finance, the most important are: Consumers: End-users empowered to grant or revoke consent to share their data primarily through digital channels. Data holders: These are mainly financial services companies, and thereby consumer data custodians. They are responsible for controlling the data flow across the different third-party providers (TPPs). Data users: Data users are common third-party providers offering their services based on consumers’ data (upon request/consent). Connectivity providers: Trusted intermediaries that facilitate data flow, also known as TSPs in the EU and UK, and Account Aggregators in India. Regulatory authorities: Set standards, oversee processes, and may intervene in open finance implementation. They may vary according to the governance type. The interactions between all these different parties define the pillars for open finance functioning: Technology: Ensures secure data storage and the exposure-consumption of services. Standards: Establishes frameworks for data interchange schemas. Regulations and enforceability: Encompasses security policies and data access controls. Participation and trust: Enables traceability and reliability within a regulated ecosystem. Figure 2. High-level explanation of data sharing in open finance. 
Drivers behind open finance: Adoption, impact, and compliance Open finance seeks to stimulate innovation by promoting competition, safeguarding consumer privacy, and ensuring market stability—ultimately leading to economic growth. Additionally, it has the potential to provide financial institutions with greater access to data and better insights into consumers' preferences, allowing them to tailor their offerings and to enhance user experiences. This data sharing between the ecosystem’s participants requires a regulated set of rules to ensure data protection, security, and compliance according to each jurisdiction. As seen in Figure 3 below, there are two broad drivers of open finance adoption: regulation-led and market-driven adoption. Whether organizations adopt open finance depends on factors like market dynamics, digital readiness, and regulatory environment. Figure 3. An illustrative example of open finance ecosystem maturity. Even though there is not one single, official legal framework specifying how to comply with open finance, countries around the world have crafted their own specific set of norms as guiding principles. Recent market research reports reveal how several countries are already implementing open finance solutions, each coming from different starting points, with their own economic goals and policy objectives. In Europe, the Revised Payment Services Directive (PSD2) combined with the General Data Protection Regulation (GDPR) form the cornerstone of the regulatory framework. The European Commission published a proposal in June 2023 for a regulation on a framework for Financial Data Access 2 (FiDA) set to go live in 2027. 3 In the UK, open finance emerged from the need to address the market power held by a few dominant banks. In India, open finance emerged as a solution to promote financial inclusion by enabling identity verification for accounts opening through the national ID system. 
The aim is to create a single European data space – a genuine single market for data, open to data from across the world – where personal as well as non-personal data, including sensitive business data, are secure and businesses also have easy access to an almost infinite amount of high-quality industrial data, boosting growth and creating value, while minimising the human carbon and environmental footprint. 4 Build vs. buy: Choosing the right open finance strategy One of the biggest strategic decisions financial institutions face is whether to build their own open finance solutions in-house or buy from third-party open finance service providers. Both approaches come with trade-offs: Building in-house provides full ownership, flexibility, and control over security and compliance. While it requires significant investment in infrastructure, talent, and ongoing maintenance, it can deliver a lower total cost of ownership (TCO) in the long run, avoid vendor lock-in, and offer complete traceability—reducing reliance on external providers and eliminating “black box” risks. Institutions that build their own solutions also benefit from customization to fit specific business needs and evolving regulations. Buying from a provider accelerates time to market and reduces development costs while ensuring compliance with industry standards. However, it introduces potential challenges such as vendor lock-in, limited customization, and integration complexities with existing systems. For financial institutions that prioritize long-term cost efficiency, compliance control, and adaptability, the build approach offers a strategic advantage—though it comes with its own set of challenges. What are the challenges and why do they matter? As open finance continues to evolve, it brings significant opportunities for innovation—but also introduces key challenges that financial institutions and fintech companies must navigate.
These challenges impact efficiency, security, and compliance, ultimately influencing how quickly new financial products and services can reach the market. 1. Integration of data from various sources Open finance relies on aggregating data from multiple institutions, each with different systems, APIs, and data formats. This complexity leads to operational inefficiencies, increased latency, and higher costs associated with data processing and infrastructure maintenance. Without seamless integration, financial services struggle to provide real-time insights and a frictionless user experience. 2. Diverse data types Financial data comes in various formats—structured, semi-structured, and unstructured—which creates integration challenges. Many legacy systems operate with rigid schemas that don’t adapt well to evolving data needs, making it difficult to manage new financial products, regulations, and customer demands. Without flexible data structures, innovation is slowed, and interoperability between systems becomes a persistent issue. 3. Data security With open finance, vast amounts of sensitive customer data are shared across multiple platforms, increasing the risk of breaches and cyberattacks. A single vulnerability in the ecosystem can lead to data leaks, fraud, and identity theft, eroding customer trust. Security vulnerabilities have financial consequences and can result in legal examination and long-term reputational damage. 4. Regulatory compliance Navigating a complex and evolving regulatory landscape is a major challenge for open finance players. Compliance with data protection laws, financial regulations, and industry standards—such as GDPR or PSD2—requires constant updates to systems and processes. Failure to comply can lead to legal penalties, substantial fines, and loss of credibility—making it difficult for institutions to operate confidently in a global financial ecosystem. 
These challenges directly impact the ability of financial institutions to innovate and launch new products quickly. Integration issues, security concerns, and regulatory complexities contribute to longer development cycles, operational inefficiencies, and increased costs—ultimately slowing the time to market for new financial services. In a highly competitive industry where speed and adaptability are critical, overcoming these challenges is essential for success in open finance. MongoDB as the open finance data store To overcome open finance’s challenges, a flexible, scalable, secure, and high-performing data store is required. MongoDB is an ideal solution, as it offers a modern, developer-friendly data platform that accelerates innovation while meeting the critical demands of financial applications. Seamless integration with RESTful JSON APIs According to OpenID’s 2022 research , most open finance ecosystems adopt RESTful JSON APIs as the standard for data exchange, ensuring interoperability across financial institutions, third-party providers, and regulatory bodies. MongoDB’s document-based model natively supports JSON, making it the perfect backend for open banking APIs. This enables financial institutions to ingest, store, and process API data efficiently while ensuring compatibility with existing and emerging industry standards. Flexible data model for seamless integration Open finance relies on diverse data types from multiple sources, each with different schemas. Traditional relational databases require rigid schema migrations, often causing downtime and disrupting high-availability services. MongoDB's document-based model—with its flexible schema—offers an easy, intuitive, and developer-friendly solution that eliminates bottlenecks, allowing financial institutions to adapt data structures dynamically, all without costly migrations or downtime. 
This ensures seamless integration of structured, semi-structured, and unstructured data, increasing productivity and performance while remaining cost-effective—enabling faster iteration, reduced complexity, and continuous scalability. Enterprise-grade security and compliance Security and compliance are non-negotiable requirements in open finance, where financial data must be protected against breaches and unauthorized access. MongoDB provides built-in security controls, including encryption, role-based access controls, and auditing. It seamlessly integrates with existing security protocols and compliance standards, ensuring adherence to regulations such as GDPR and PSD2. MongoDB also enforces privileged access controls and continuous monitoring to safeguard sensitive data, as outlined in the MongoDB Trust Center. Reliability and transactional consistency Financial applications demand zero downtime and high availability, especially when processing transactions and real-time financial data. MongoDB’s replica sets ensure continuous availability, while its support for ACID transactions guarantees data integrity and consistency—critical for handling sensitive financial operations such as payments, lending, and regulatory reporting. The future of open finance The evolution of open finance is reshaping the financial industry, enabling seamless data-sharing while introducing new challenges in security, compliance, and interoperability. As financial institutions, fintechs, and regulators navigate this shift, the focus remains on balancing innovation with risk management to build a more inclusive and efficient financial ecosystem. For organizations looking to stay ahead in this landscape, choosing the right technology stack is crucial. MongoDB provides the flexibility, scalability, and security needed to power the next generation of open finance applications—helping financial institutions accelerate innovation while ensuring compliance and data integrity.
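The JSON affinity described above can be made concrete: a payload from an open finance REST API can be persisted in MongoDB nearly as-is, with no schema-mapping layer in between. The payload fields below are illustrative, not taken from any particular open banking standard, and the driver call is shown only as a comment so the sketch stays self-contained.

```python
import json
from datetime import datetime, timezone

# Hypothetical response from an open-banking "accounts" API endpoint.
api_payload = json.dumps({
    "accountId": "acc-123",
    "currency": "EUR",
    "balances": [{"type": "available", "amount": 1520.75}],
})

def to_document(raw: str) -> dict:
    """Turn an API payload into a document ready for insertion."""
    doc = json.loads(raw)
    # Enrich with operational metadata; no migration needed to add fields later.
    doc["ingestedAt"] = datetime.now(timezone.utc).isoformat()
    return doc

doc = to_document(api_payload)
# With pymongo, the document is stored without any relational mapping step:
#   db.accounts.insert_one(doc)
```

Because the document model mirrors the API's JSON shape, a new field added by a data holder flows through ingestion unchanged instead of triggering a schema migration.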
In Part 2 of our look at open finance, we’ll explore a demo from the Industry Solutions team that leverages MongoDB to implement an open finance strategy that enhances customer experience, streamlines operations, and drives financial accessibility. Stay tuned! Head over to our GitHub repo to view the demo. Visit our solutions page to learn more about how MongoDB can support financial services. 1 CCAF, The Global State of Open Banking and Open Finance (Cambridge: Cambridge Centre for Alternative Finance, Cambridge Judge Business School, University of Cambridge, 2024). 2 “The Financial Data Access (FiDA) Regulation,” financial-data-access.com, 2024, https://www.financial-data-access.com/ 3 Maout, Thierry, “What is Financial Data Access (FiDA), and how to get ready?”, July 16th, 2024, https://www.didomi.io/blog/financial-data-access-fida?315c2b35_page=2 4 European Commission (2020), COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS, EUR-Lex.

March 25, 2025
Applied

Innovating with MongoDB | Customer Successes, March 2025

Hello and welcome! This is the first installment of a new bi-monthly blog series showcasing how companies around the world are using MongoDB to tackle mission-critical challenges. As the leading database for modern applications, MongoDB empowers thousands of organizations to harness the power of their data and to drive creativity and efficiency across industries. This series will shine a light on some of those amazing stories. From nimble startups to large enterprises, our customers are transforming data management, analytics, and application development with MongoDB's flexible schema, scalability, and robust cloud services. What do I mean? Picture retailers like Rent the Runway improving customer experiences with real-time analytics, fintech companies such as Koibanx speeding up and securing transaction processes, and healthcare companies like Novo Nordisk optimizing the path to regulatory approvals. With MongoDB, every developer and organization can fully tap into the potential of their most valuable resource: their data. So please read on—and stay tuned for more in this blog series!—to learn about the ingenuity of the MongoDB customer community, and how they’re pushing the boundaries of what's possible. Lombard Odier Lombard Odier , a Swiss bank with a legacy dating back to 1796, transformed its application architecture with MongoDB to stay at the forefront of financial innovation. Confronted with the challenge of modernizing its systems amidst rapid digital and AI advancements, the bank leveraged MongoDB’s modernization services and generative AI to streamline its application upgrades. This initiative resulted in up to 60x faster migration of simple code and slashed regression testing from three days to just three hours. By transitioning over 250 applications to MongoDB, including its pivotal portfolio management system, Lombard Odier significantly reduced technical complexity and empowered its developers to focus on next-generation technologies. 
SonyLIV SonyLIV faced challenges with its over-the-top (OTT) video-streaming platform. Their legacy relational database had poor searchability, complex maintenance, and slow content updates. Critically, it lacked the scalability necessary to support 1.6 million simultaneous users. To power their new CMS— ‘Blitz’—SonyLIV selected MongoDB Atlas’s flexible document model to improve performance and lower search query latency by 98%. Collaborating with MongoDB Professional Services , SonyLIV optimized API latency using MongoDB Atlas Search and Atlas Online Archive , effectively managing over 500,000 content items and real-time updates. With their new high-performing, modern solution in place, SonyLIV can now deliver flawless customer experiences to the world, faster. Swisscom Swisscom , Switzerland's leading telecom and IT service provider, harnessed MongoDB to enrich its banking sector insights with AI. Faced with the challenge of streamlining access to its extensive library of over 3,500 documents, Swisscom utilized MongoDB Atlas and MongoDB Atlas Vector Search capabilities to transform unstructured data into precise, relevant content summaries in seconds. In just four months, Swisscom launched a production-ready platform with improved relevance, concrete answers, and transparency. The project sets a new standard in Swiss banking, and showcases Swisscom's commitment to driving the digital future with advanced AI solutions. Victoria’s Secret Victoria's Secret’s e-commerce platform processes thousands of transactions daily across over 2.5 billion documents on hundreds of on-premises databases. Experiencing high costs and operational constraints with its monolithic architecture, the retailer initially adopted CouchDB but faced challenges like data duplication and limited functionality. In 2023, Victoria's Secret migrated to MongoDB Atlas on Azure , achieving zero downtime while optimizing performance and scalability. 
Over four months, they successfully migrated more than four terabytes of data across 200 databases, reducing CPU core usage by 75% and achieving a 240% improvement in API performance. The move to MongoDB also allowed the retailer to introduce additional products, like MongoDB Atlas Vector Search, resulting in significant operational efficiencies and cost savings. Video spotlight Before you go, be sure to watch one of our recent customer videos featuring the Danish pharmaceutical giant Novo Nordisk. Discover how Novo Nordisk leveraged MongoDB and GenAI to reduce the time it takes to produce a Clinical Study Report (CSR) from 12 weeks to 10 minutes. Want to get inspired by your peers and discover all the ways we empower businesses to innovate for the future? Visit our Customer Success Stories hub to see why these customers, and so many more, build modern applications with MongoDB.

March 18, 2025
Applied

Modernizing Telecom Legacy Applications with MongoDB

The telecommunications industry is currently undergoing a profound transformation, fueled by innovations in 5G networks, the growth of Internet of Things applications, and the rapid rise of AI. To capitalize on these technologies, companies must effectively handle increasing volumes of unstructured data, which now represents up to 90% of all information, while also developing modern applications that are flexible, high-performance, and scalable. However, the telecommunications industry's traditional reliance on relational databases such as PostgreSQL presents a challenge to modernization. Their rigid structures limit adaptability and can lead to decreased performance as table complexity grows. With this in mind, this blog post explores how telecom companies can modernize their legacy applications by leveraging MongoDB’s modern database and its document model. With MongoDB, telecom companies can take advantage of the latest industry innovations while freeing their developers from the burdens of maintaining legacy systems. Navigating legacy system challenges Legacy modernization refers to the process of updating a company’s IT infrastructure to align it with the latest technologies and workflows, and ultimately advancing and securing strategic business goals. For telecom companies, this modernization involves overcoming the limitations of their legacy systems, which hinder adjustment to changing market conditions that demand greater system scalability and availability to run real-time operations. The main drawbacks of legacy technologies like relational databases stem from their design, which wasn’t built to support the data processing capabilities required for modern telecom services. These limitations, as illustrated in Figure 1 below, include rigid data schemas, difficulty handling complex data formats, limited scaling ability, and higher operational costs for maintenance. Figure 1. The limitations of legacy systems. 
Expanding on these limitations, relational databases depend on a predefined schema, which becomes difficult to modify once established, as changes entail extensive restructuring efforts. In telecommunications, handling growing data volumes from connected devices and 5G networks can rapidly become burdensome and costly due to frequent CPU, storage, and RAM upgrades. Over time, technology lock-in can further escalate costs by hindering the transition to alternative solutions. Altogether, these factors hold back modernization efforts, urging telecoms to transform their legacy systems with newer technologies. To overcome these challenges, telecom companies are replacing these legacy systems with modern applications that effectively provide them with greater scalability, enhanced security, and high availability, as shown in Figure 2. However, achieving this transition can be a daunting task for some organizations due to the complexity of current systems, a lack of internal technical expertise, and the hurdles of avoiding downtime. Therefore, before transforming their outdated systems, telecom companies must carefully select the appropriate technologies and formulate a modernization strategy to facilitate this transition. Figure 2. Characteristics of modern applications. Getting onboard with MongoDB Enter MongoDB. The company’s document-oriented database offers a flexible data model that processes any information format, easily adapting to specific application requirements. MongoDB Atlas—MongoDB’s unified, modern database—delivers a robust cloud environment that efficiently manages growing data volumes through its distributed architecture, ensuring seamless connectivity and enhanced performance. Moreover, as telecom providers prioritize cybersecurity and innovation, MongoDB includes robust security measures—comprising encryption, authentication, authorization, and auditing—to effectively protect sensitive information and ensure regulatory compliance.
Additionally, leveraging MongoDB’s document model with built-in Atlas services like Vector Search, Atlas Charts, and Stream Processing allows telecommunications organizations to streamline advanced industry use cases, including single customer view, AI integrations, and real-time analytics. Figure 3. Core MongoDB features for modernization. Recognizing these benefits, leading telecom companies like Nokia, Swisscom, and Vodafone have successfully modernized their applications with MongoDB. However, selecting the right technology is only part of the modernization process. In order to ensure a successful and effective modernization project, organizations should establish a comprehensive modernization strategy. This process typically follows one of the following three paths: Data-driven modernization: this approach transfers all data from the legacy system to the new environment and then migrates applications. Application-driven modernization (all-or-nothing): this approach executes all reads and writes for new applications in the new data environment from the start, but leaves the business to decide when to retire existing legacy applications. Iterative modernization (one-step-at-a-time): this approach blends the previous paths, starting with the modernization of the least complex applications and incrementally moving on to more complex ones. Read this customer story to learn more about telecoms migrating to MongoDB. With this overview complete, let's dive into the migration process by examining the iterative modernization of a telecom billing system. Modernizing a telecom billing system Telecom billing systems often consist of siloed application stacks segmented by product lines like mobile, cable, and streaming services. This segmentation leads to inefficiencies and overly complex architectures, highlighting the need to simplify these structures.
With this in mind, imagine a telecom company that has decided to modernize its entire billing system to boost performance and reduce complexity. In the initial stage, telecom developers can assess the scope of the modernization project, scoring individual applications based on technical sustainability and organizational priorities. Applications with high scores undergo further analysis to estimate the re-platforming effort required. Later on, a cross-functional team selects the first component to migrate to MongoDB, initiating the billing system modernization. This journey then follows the steps outlined in Figure 4: Figure 4. The modernization process. First, developers analyze legacy systems by examining the codebase and the underlying architecture of the chosen billing system. Then, developers create end-to-end tests to ensure the application functions correctly when deployed. Later, developers design an architecture that incorporates managerial expectations of the desired application. Next, developers rewrite and recode the legacy application to align with the document model and develop APIs for MongoDB interaction. Following this, developers conduct user tests to identify and resolve any existing application bugs. Finally, developers migrate and deploy the modernized application in MongoDB, ensuring full functionality. Throughout this process, developers can leverage MongoDB Relational Migrator to streamline the transition. Relational Migrator helps developers with data mapping and modeling, SQL object conversion, application code generation, and data migration—corresponding to steps three, four, and five. Additionally, telecom companies can accelerate modernization initiatives by leveraging MongoDB Professional Services for dedicated, tailored end-to-end migration support. Our experts work closely with you to provide customized assistance, from targeted technical support and development resources to strategic guidance throughout the entire project. 
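The rewrite step above, aligning the legacy application with the document model, often boils down to collapsing joined relational rows into a single embedded document, replacing a runtime JOIN with one read. A minimal sketch with hypothetical billing tables (names and fields are illustrative, not from any real schema):

```python
# Hypothetical rows from a legacy billing schema: a customer table
# and an invoice table linked by foreign key.
customers = [{"id": 1, "name": "Acme Telecom User"}]
invoices = [
    {"customer_id": 1, "period": "2025-01", "amount": 42.50},
    {"customer_id": 1, "period": "2025-02", "amount": 39.90},
]

def to_billing_document(customer: dict, all_invoices: list) -> dict:
    """Embed a customer's invoices in one document, the shape a tool like
    Relational Migrator helps derive during data mapping and modeling."""
    return {
        "_id": customer["id"],
        "name": customer["name"],
        "invoices": [
            {"period": i["period"], "amount": i["amount"]}
            for i in all_invoices
            if i["customer_id"] == customer["id"]
        ],
    }

docs = [to_billing_document(c, invoices) for c in customers]
```

Whether to embed or reference invoices depends on access patterns and document growth; embedding suits data that is read together, as a billing statement typically is.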
Building on this initial project, telecom companies can progressively address more complex applications, refining their approach to support a long-term modernization strategy. Next steps By revamping legacy applications with MongoDB, telecom companies can improve their operations and gain a competitive edge with advanced technology. This shift allows telcos to apply the latest innovations and frees developers from the burdens of maintaining legacy systems. Start your journey to migrate core telecom applications to MongoDB Atlas by visiting our telecommunications solutions page to learn more. To learn how to upgrade your telco legacy systems with MongoDB, start with the following resources: Visit our professional services page to learn more about MongoDB Consulting YouTube: Relational Migrator Explained in 3 minutes White paper: Unleash Telco Transformation with an Operational Data Layer White paper: Modernization: What’s Taking So Long?

March 17, 2025
Applied

ZEE5: A Masterclass in Migrating Microservices to MongoDB Atlas

ZEE5 is a leading Indian over-the-top (OTT) video-streaming platform that delivers streamed content via Internet-connected devices. The platform offers a wide variety of content—movies, TV shows, web series, and original programming—across multiple genres and languages. Owned by Zee Entertainment Enterprises Limited, ZEE5 produces over 260 hours of content daily, with a monthly active user base of more than 119.5 million users across 190 countries. ZEE5’s operations and customer satisfaction depend on its backend infrastructure being robust and scalable enough to handle immense traffic and complex workflows. In order to future-proof its infrastructure and maintain its competitive edge, the company needed to streamline operations and enhance its database management capabilities. This included migrating its entire OTT platform—100+ microservices and 80+ databases—to Google Cloud. Pramod Prakash, Senior Vice President of Engineering at ZEE5, took the stage at MongoDB.local Bangalore in 2024 to share insights into how ZEE5 managed this migration without hindering performance or disrupting its services. “It was a massive project which required a very carefully orchestrated migration plan,” said Prakash. Massive migration, zero downtime: Challenge accepted ZEE5’s team embarked on an ambitious journey to migrate a total of 40+ microservices (out of its 100+ microservices) to MongoDB Atlas. These were previously running on the Community Edition of MongoDB and on other NoSQL databases. One of the challenges of this migration was to ensure continuous data flow for the platform’s 119.5 million streaming users. To do so, Prakash and his team created multiple environments using a change data capture tool. This ensured continuous replication of data so the user experience would not be impacted. “We had to build four environments: dev, QA [Quality Assurance], UAT [User Acceptance Testing], and production,” explained Prakash.
“We needed to keep testing and verifying each environment, and then finally enter the production phase when we migrated the data and moved the traffic.” The approach involved migrating production data twice: first for testing and then for the final cutover. This was to minimize any data loss. ZEE5 used MongoDB Atlas’ integrated tools mongosync and mongomirror. The tools helped achieve an essential goal: avoiding any downtime. “We migrated this entire mammoth application with zero downtime!” said Prakash. “We have not stopped ZEE5’s operations at all.” “The second important thing is the performance: you want to be 100% sure that the entire scale and peak traffic will work seamlessly within the new cloud environment,” added Prakash. ZEE5 relied on the support of MongoDB Professional Services (PS). The PS team helped architect and plan the entire migration strategy. They also accompanied Prakash’s team step by step to ensure there would be no unexpected disruptions. The production environment was built and tested rigorously before the final migration to ensure seamless performance at peak traffic levels. “We iterated until we were 100% sure that the new environment was ready to take up ZEE5’s peak traffic. Functionally, it was all perfect,” said Prakash. The power of the Atlas platform According to Prakash, the power of MongoDB Atlas lies in the fact that it offers a fully managed platform. “There is no maintenance overhead at all,” he said. “All upgrades happen automatically without any downtime. We are also leveraging auto-scaling capabilities and point-in-time recovery.” All of this enables efficient handling of varying traffic loads without manual intervention. Additionally, data recovery capabilities are enhanced, and most importantly, the engineering team can prioritize application development rather than operational maintenance.
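The change-data-capture approach described here can be pictured as replaying an ordered stream of change events against the target until it mirrors the source. The toy sketch below shows that replay loop; the event shape is a hypothetical simplification, not the mongosync or mongomirror wire format.

```python
def apply_change(target: dict, event: dict) -> None:
    """Apply one change event to the target store, keeping it in sync."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        target[key] = event["doc"]
    elif op == "delete":
        target.pop(key, None)

# Ordered change stream captured from the source while it keeps serving traffic.
source_events = [
    {"op": "insert", "key": "user-1", "doc": {"plan": "free"}},
    {"op": "update", "key": "user-1", "doc": {"plan": "premium"}},
    {"op": "insert", "key": "user-2", "doc": {"plan": "free"}},
    {"op": "delete", "key": "user-2"},
]

target: dict = {}
for event in source_events:   # in practice a continuous replication loop
    apply_change(target, event)
```

Because replication runs continuously, the source stays live during migration and the final cutover only has to switch traffic once the target has converged, which is what makes the zero-downtime cutover possible.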
As of February 2025, MongoDB Atlas supports a total of seven key use cases at ZEE5: payments, subscriptions, plans and coupons, video engineering, Zee Music (users’ preferences and playlists), content metadata, and the platform’s communication engine (SMS and email notifications). Looking ahead, ZEE5 is working on more use cases powered by MongoDB. For example, the company is looking to completely migrate its master data source for content metadata to MongoDB Atlas. ZEE5 is also considering relying on MongoDB Atlas to support and enhance its search and recommendation capabilities. Interested in learning how MongoDB is powering other companies’ applications? Head over to our customer case studies hub to read the latest stories. Visit our product page to learn more about MongoDB Atlas.

March 11, 2025
Applied
