Applications

Customer stories, use cases, and experiences with MongoDB

Customer Service Expert Wati.io Scales Up on MongoDB

Wati.io is a software-as-a-service (SaaS) platform that empowers businesses to develop conversation-driven strategies to boost growth. Founded by CEO Ken Yeung in 2019, Wati started as a chatbot solution for large enterprises, such as banks and insurance companies. However, over time, Yeung and his team noticed a growing need among small and medium-sized businesses (SMBs) to manage customer conversations more effectively. To address this need, Wati used MongoDB Atlas and built a solution based on the WhatsApp Business API. It enables businesses to manage and personalize conversations with customers, automate responses, improve commerce functions, and enhance customer engagement.

Speaking at MongoDB.local Hong Kong in September 2024, Yeung said, “The current solutions on the market today are not good enough. Especially for SMBs [that] don’t have the same level of resources as enterprises to deal with the number of conversations and messages that need to be handled every day.”

Supporting scale: From MongoDB Community Edition to MongoDB Atlas

“From the beginning, we relied on MongoDB to handle high volumes of messaging data and enable businesses to manage and scale their customer interactions efficiently,” said Yeung. Wati originally used MongoDB Community Edition, as the company saw the benefits of a NoSQL model from the beginning. As the company grew, it realized it needed a scalable infrastructure, so Wati transitioned to MongoDB Atlas. “When we started reaching the 2 billion record threshold, we started having some issues. Our system slowed down, and we were not able to scale it,” said Yeung.

Atlas has now become an essential part of Wati’s infrastructure, helping the company store and process millions of messages each month for over 10,000 customers in 165 countries. “Transitioning to a new platform—MongoDB Atlas—seamlessly was critical because our messaging system needs to be on 24/7,” said Yeung. Wati collaborated closely with the MongoDB Professional Services and MongoDB Support teams, and in a few months it was able to rearchitect the deployment and data model for future growth and demand. The work included optimizing Wati’s database by breaking it down into clusters. Wati then focused on extracting connections, such as conversations, and dividing and categorizing data within the clusters—for example, qualifying data as cold or hot based on read and write frequencies. This architecture underpins the platform’s core features, including automated customer engagement, lead qualification, and sales management.

Deepening search capabilities with MongoDB Atlas Search

For Wati’s customers, the ability to search through conversation histories and company documents to retrieve valuable information is a key function. This often requires searching through millions of records to rapidly find answers so that they can respond to customers in real time. By using MongoDB Atlas Search, Wati improved its search capabilities, ultimately helping its business customers perform more advanced analytics and improve their customer service agents’ efficiency and customer reporting. “[MongoDB] Atlas Search is really helpful because we don’t have to do a lot of technical integration, and minimal programming is required,” said Yeung.

Looking ahead: Using AI and integrating more channels

Wati expects to continue collaborating with MongoDB to add more features to its platform and keep innovating at speed.
The company is currently exploring how to build more AI capabilities into Wati KnowBot, as well as how it can expand its integration with other conversation platforms and channels such as Instagram and Facebook. To learn more about MongoDB Atlas, visit our product page. To get started with MongoDB Atlas Search, visit the Atlas Search product page.
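The conversation-search workflow described above maps naturally onto an Atlas Search aggregation. The following is a minimal, illustrative sketch only, assuming a hypothetical messages collection with a message field and a default Atlas Search index; it is not Wati’s actual schema.

```python
from pymongo import MongoClient

# Connect to an Atlas cluster (connection string is a placeholder).
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["support"]["messages"]  # hypothetical database/collection names

# Full-text search over conversation history with Atlas Search.
# Assumes an Atlas Search index named "default" covering the "message" field.
pipeline = [
    {
        "$search": {
            "index": "default",
            "text": {"query": "refund status", "path": "message"},
        }
    },
    {"$limit": 10},
    {"$project": {"conversationId": 1, "message": 1, "score": {"$meta": "searchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(doc["conversationId"], doc["message"])
```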

November 25, 2024
Applied

Hanabi Technologies Uses MongoDB to Power AI Assistant, Hana

For all the hype surrounding generative AI, cynics tend to view the few real-world implementations as little more than “fancy chatbots.” But for Abhinav Aggarwal, CEO of Hanabi Technologies, the idea of a generative AI-powered bot that is more than just an assistant was intriguing. “I’d been using ChatGPT since it launched,” said Aggarwal. “That got me thinking: How could we make a chatbot that was like a team member?” And with that concept, Hana was born.

The problem with bots

“Most generative AI chatbots do not act like people; they wait for a command and give a response,” said Aggarwal. “We wanted to create a human-like chatbot that would proactively help people based on what they wanted—automating reminders, for example, or fetching time zones from your calendar to correctly schedule meetings.”

Hanabi’s flagship product, Hana, is an AI assistant designed to enhance team collaboration within Google Chat, working in concert with Google Workspace and its suite of products. “Our target customers are smaller companies of between 10 and 50 people. At this size you’re not going to build your own agent from scratch,” he said. Hana integrates with Google APIs to deliver a human-like assistant that chimes in with helpful interventions, such as automatically setting reminders and making sure meetings are booked in the right time zone for each participant. “Hana is designed to bring AI to smaller companies and help them collaborate in a space where they are already working—Google Workspace,” Aggarwal explained.

The MongoDB Atlas solution

For Hana to act like a member of the team, Hanabi needed to process massive amounts of data to support advanced features like retrieval-augmented generation (RAG) for better information retrieval across Google Docs and many other sources. And with a rapidly growing user base of over 600 organizations and 17,000+ installs, Hanabi also required a secure, scalable, and high-performing data storage solution. MongoDB Atlas provided a flexible document model, a built-in vector database, and scalable cloud-based infrastructure, freeing Hanabi engineers to build new features for Hana rather than focusing on rote tasks like data extract, transform, and load processes or manual scaling and provisioning. Now, MongoDB Atlas handles a variety of responsibilities:

- Scalability and security: MongoDB Atlas’s auto-scaling and automatic backup features have enabled Hanabi to seamlessly grow its user base without the need for manual database management.
- RAG: MongoDB Atlas plays a critical role in Hana’s RAG functionality. The platform enables Hanabi to split Google Docs into small sections, create embeddings, and store these sections in Atlas’s vector database.
- Development processes: According to Aggarwal, MongoDB’s flexibility in managing changing schemas has been essential to the company’s fast-paced development cycle.
- Data visualization: Using MongoDB Atlas Charts has enabled Hanabi to create comprehensive dashboards for real-time data visualization. This has helped the team track usage, set reminders, and optimize performance without needing to build a manual dashboard.

Impact and results

With MongoDB Atlas, Hanabi can successfully scale Hana to meet the demands of its rapidly expanding user base. The integration is also enabling Hana to offer powerful features like automatic interactions with customers, advanced information retrieval from Google Docs, and manually added memory snippets, making it an essential tool for teams around the world.
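To make the RAG workflow described above concrete, here is a minimal, illustrative sketch of the chunk-embed-store-retrieve loop using PyMongo and MongoDB Atlas Vector Search. The collection name, index name, and embedding function are hypothetical placeholders, not Hanabi’s implementation.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
chunks = client["hana"]["doc_chunks"]  # hypothetical database/collection names


def embed(text: str) -> list[float]:
    """Placeholder: call your embedding model and return a vector whose
    dimension matches the Atlas Vector Search index definition."""
    raise NotImplementedError


def ingest(doc_id: str, body: str, chunk_size: int = 1000) -> None:
    # Split a document into fixed-size sections, embed each, and store them.
    sections = [body[i : i + chunk_size] for i in range(0, len(body), chunk_size)]
    chunks.insert_many(
        [
            {"doc_id": doc_id, "order": n, "text": s, "embedding": embed(s)}
            for n, s in enumerate(sections)
        ]
    )


def retrieve(question: str, k: int = 5) -> list[dict]:
    # Assumes an Atlas Vector Search index named "vector_index" on the "embedding" field.
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": embed(question),
                "numCandidates": 100,
                "limit": k,
            }
        },
        {"$project": {"text": 1, "doc_id": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(chunks.aggregate(pipeline))
```

The retrieved sections would then be passed to the language model as context alongside the user’s question.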
Next steps

Hanabi plans to continue integrating more tools into Hana while expanding its reach to personal Gmail users. The company is also rolling out a new automatic-interaction feature, further enhancing Hana’s ability to proactively assist users without direct commands. MongoDB Atlas remains a key component of Hanabi’s stack, alongside Google Kubernetes Engine, NestJS, and LangChain, enabling Hanabi to focus on innovating to improve the customer experience.

Tech stack

- MongoDB Atlas
- Google Kubernetes Engine
- NestJS
- LangChain

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free MongoDB Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com.

November 21, 2024
Applied

3 Ways MongoDB EA Azure Arc Certification Serves Customers

One reason more than 50,000 customers across industries choose MongoDB is the freedom to run anywhere—across major cloud providers, on-premises in data centers, and in hybrid deployments. This is why MongoDB is always working to meet customers where they are. For example, many customers choose MongoDB Atlas (which is available in more than 115 cloud regions across major cloud providers) for a fully managed experience. Other customers choose MongoDB Enterprise Advanced (EA) to self-manage their database deployments to meet specific on-premises or hybrid requirements.

To that end, we’re pleased to announce that MongoDB EA is one of the first certified Microsoft Azure Arc-enabled Kubernetes applications, which provides customers even more choice of where and how they run MongoDB. Customer adoption of Azure Arc has grown by leaps and bounds. This new certification, and the launch of MongoDB EA as an Arc-enabled Kubernetes application on Azure Marketplace, means that more customers will be able to leverage the unparalleled security, availability, durability, and performance of MongoDB across environments with the centralized management of their Kubernetes deployments.

“We are very excited to have MongoDB available for our customers on the Azure Marketplace. By extending Azure Arc’s management capabilities to your MongoDB deployments, customers gain the benefit of centralized governance, enhanced security, and deeper insights into database performance. Azure Arc makes hybrid database management with MongoDB efficient and consistent. Collaboration between MongoDB and Microsoft represents an opportunity for many of our customers to further accelerate their digital transformation when building enterprise-class solutions with Azure Arc.”
Christa St Pierre, Partner Group Manager, Azure Edge Devices, Microsoft

Here are three ways the launch of MongoDB EA on Azure Marketplace for Arc-enabled Kubernetes applications gives customers greater flexibility.

1. MongoDB EA supports multi-Kubernetes cluster deployments, simplifies management

MongoDB Enterprise Advanced seamlessly integrates market-leading MongoDB capabilities along with robust enterprise support and tools for self-managed deployments at any scale. This powerful solution includes advanced automation, comprehensive auditing, strong authentication, reliable backup, and insightful monitoring capabilities, all of which work together to ensure security compliance and operational efficiency for organizations of any size.

The relationship between MongoDB and Kubernetes is one of strong synergy. With Kubernetes, MongoDB EA really can run anywhere, such as a single deployment spanning on-premises and more than one public cloud Kubernetes cluster. Customers can use the MongoDB Enterprise Kubernetes Operator, a key component of MongoDB Enterprise Advanced, to simplify the management and automation of self-managed MongoDB deployments in Kubernetes. This includes tasks like creating and updating deployments, managing backups, and integrating with various Kubernetes services. The ability of the MongoDB Enterprise Kubernetes Operator to deploy and manage MongoDB deployments that span multiple Kubernetes clusters significantly enhances resilience, improves disaster recovery, and minimizes latency by allowing data to be co-located closer to where it is needed, ensuring optimal performance and reliability.

2. Azure Arc complements MongoDB EA, providing centralized management
While MongoDB Enterprise Advanced is already among a select group of databases capable of operating across multiple Kubernetes clusters, it is now also supported in Azure Arc-enabled Kubernetes environments. Azure Arc enables the standardized management of Kubernetes clusters across various environments—including in Azure, on-premises, and even other clouds—while harnessing the power of Azure services. Azure Arc accomplishes this by extending the Azure control plane to standardize security and governance across a wide range of resources and locations. For instance, organizations can centrally monitor all of their Azure Arc-enabled Kubernetes clusters using Azure Monitor for containers, or they can enforce threat protection at scale using Microsoft Defender for Kubernetes. This centralized control significantly reduces the complexity of managing Kubernetes clusters running anywhere, as customers can oversee all resources and apply consistent security and compliance policies across their hybrid environment.

3. Customers can leverage the resilience of MongoDB EA and the centralized governance of Azure Arc

Together, these solutions empower organizations to build robust applications across a wide array of environments, whether on-premises or in multi-cloud settings. The combination of MongoDB Enterprise Advanced and the MongoDB Enterprise Kubernetes Operator simplifies the deployment of MongoDB across Kubernetes clusters, allowing organizations to fully leverage enhanced resilience and geographic distribution that surpasses the capabilities of a single Kubernetes cluster. Azure Arc further enhances this synergy by providing centralized management for all of these Kubernetes clusters, regardless of where they are running. For customers running entirely in the public cloud, we recommend using MongoDB’s fully managed developer data platform, MongoDB Atlas.

If you’re interested in learning more, we invite you to explore the Azure Marketplace listing for MongoDB Enterprise Advanced for Arc-enabled Kubernetes applications. Please note that aside from use for evaluation and development purposes, this offering requires the purchase of a MongoDB Enterprise Advanced subscription. For licensing inquiries, we encourage you to reach out to MongoDB at https://www.mongodb.com/contact to secure your license and begin harnessing the full potential of these powerful solutions.
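To illustrate the operator pattern described in this article, here is a minimal sketch that creates a MongoDB replica set custom resource with the Kubernetes Python client. The resource fields shown are a simplified approximation of the MongoDB Enterprise Kubernetes Operator’s MongoDB custom resource; the referenced ConfigMap and credentials names are assumptions, and field names should be checked against the current operator documentation before use.

```python
from kubernetes import client, config

# Load kubeconfig for the cluster where the MongoDB Enterprise Kubernetes Operator runs.
config.load_kube_config()

# Simplified approximation of a MongoDB custom resource managed by the operator.
# Exact fields (Ops Manager project/credentials references, versions) vary by setup.
replica_set = {
    "apiVersion": "mongodb.com/v1",
    "kind": "MongoDB",
    "metadata": {"name": "my-replica-set", "namespace": "mongodb"},
    "spec": {
        "type": "ReplicaSet",
        "members": 3,
        "version": "7.0.0",
        "opsManager": {"configMapRef": {"name": "my-project"}},  # assumed ConfigMap name
        "credentials": "my-credentials",                          # assumed secret name
    },
}

# The operator watches for this resource and reconciles the actual deployment.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="mongodb.com",
    version="v1",
    namespace="mongodb",
    plural="mongodb",
    body=replica_set,
)
```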

November 19, 2024
Applied

Accelerating MongoDB Migration to Azure with Microsoft Migration Factory

Migrating MongoDB workloads from on-premises solutions or other cloud platforms to MongoDB Atlas on Azure has never been simpler, thanks to Microsoft’s Cloud Migration Factory (CMF). This newly created program is perfect for organizations using MongoDB Enterprise Advanced or Community Edition that are ready to modernize. By transitioning to MongoDB Atlas—an integrated suite of data and application services—customers can simplify their database management, enhance performance, and reduce operational complexity, unlocking new potential and value from their data.

Why the Microsoft Cloud Migration Factory (CMF)?

The Microsoft CMF offers hands-on delivery for eligible workloads to accelerate customer journeys on Azure at no cost. With repeatable best practices, robust tools, structured processes, and a skilled resource pool, the Microsoft CMF delivery model mitigates technical risk and accelerates deployments with optimized architectures to maximize platform benefits. The MongoDB Migration Factory, meanwhile, is a comprehensive program designed to help organizations migrate their existing databases to MongoDB. This program provides a structured approach, tools, and best practices to ensure a smooth and efficient migration process.

Microsoft CMF is partnering with MongoDB Migration Factory to jointly deliver migrations of MongoDB Enterprise Advanced or Community Edition deployments to MongoDB Atlas on Azure in a secure, optimized, and customer-focused way. This comprehensive migration approach enables businesses to leverage Azure for their MongoDB-based solutions with speed, confidence, best practices, and minimal disruption risk at an optimized cost.

“This joint delivery offering from Microsoft Cloud Migration Factory (CMF) and MongoDB Migration Factory is designed to accelerate AI transformation priorities for our customers by driving the migrations to MongoDB Atlas on Azure with speed and quality,” said Rashida Hodge, Corporate Vice President of Azure Data and AI at Microsoft. “We have delivered thousands of customer engagements with the CMF model across all Azure workloads, making it a proven approach for accelerating cloud journeys with Microsoft-owned delivery.”

Why MongoDB Atlas on Azure?

MongoDB Atlas on Azure combines MongoDB’s robust document data platform with Azure’s scalability and advanced cloud services, making it ideal for high-performance applications. Offering features like automatic scaling, high availability, and comprehensive security, MongoDB Atlas on Azure supports diverse workloads, including transaction processing, in-app analytics, and full-text search. Integrations with Azure services—including Azure Synapse Analytics, Microsoft Fabric, and Power BI—enhance MongoDB Atlas’s analytics and visualization capabilities, and compliance with standards like HIPAA and GDPR ensures data privacy, enabling organizations to focus on innovation in a secure, scalable environment.

Figure 1: MongoDB Atlas on Azure integrations ecosystem

Migrating MongoDB Community Edition or Enterprise Advanced to MongoDB Atlas on Azure

Migrating from MongoDB Community Edition or MongoDB Enterprise Advanced to MongoDB Atlas on Azure offers numerous benefits, including enhanced scalability, security, and operational efficiency. MongoDB Atlas is a fully managed, cloud-based solution that simplifies database management by handling tasks like automatic scaling, high availability, and data backup.
Leveraging Azure’s infrastructure, Atlas provides integrated services such as Azure Active Directory for improved authentication and identity management, and global cloud coverage to reduce latency by deploying clusters closer to users. MongoDB Atlas on Azure also includes robust security features like encryption at rest and in transit, network isolation, and advanced access controls, meeting compliance standards. These features are often difficult to implement in a self-managed environment. Additionally, Atlas offers advanced monitoring and automated tuning tools for optimizing database performance and resource usage, helping to reduce costs over time.

For organizations considering migration to MongoDB Atlas, Microsoft CMF offers end-to-end guidance, providing a clear roadmap for every stage of the migration process, from initial validation to post-migration testing. With flexible migration paths that cater to a range of needs, Microsoft CMF supports live migrations using tools like mongosync and offline migrations with MongoDB’s native tools, enabling everything from minimal-downtime transitions to complete re-hosting. Best of all, Microsoft CMF is a complimentary service, which means that organizations don’t need to worry about budgets and can focus on the transition to MongoDB Atlas on Azure.

“In collaboration with MongoDB Professional Services, the CSX team leveraged MongoDB and Microsoft Migration Factory to migrate a mission-critical railroad transportation app quickly and seamlessly with zero downtime.”
John Maio, Department Head, Enterprise Data & Analytics at CSX

Getting started

Microsoft CMF’s structured approach guides organizations through each critical milestone to ensure a smooth migration process. For those interested in migrating their MongoDB setup to Azure, contact MongoDB today to take advantage of this free migration opportunity and experience the ease of MongoDB Atlas on Azure with Microsoft CMF support.
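Migrations like those described above typically end with a verification pass before cutover. The following is a simple, illustrative sketch (not part of the CMF or mongosync tooling) that compares document counts per collection between a source deployment and the target Atlas cluster using PyMongo; connection strings and database names are placeholders.

```python
from pymongo import MongoClient

# Placeholder connection strings for the source deployment and target Atlas cluster.
source = MongoClient("mongodb://source-host:27017")
target = MongoClient("mongodb+srv://<user>:<password>@target.example.mongodb.net")

DATABASES = ["orders", "customers"]  # hypothetical databases to verify


def compare_counts() -> list[str]:
    """Return the namespaces whose document counts differ after migration."""
    mismatches = []
    for db_name in DATABASES:
        for coll_name in source[db_name].list_collection_names():
            src_count = source[db_name][coll_name].count_documents({})
            dst_count = target[db_name][coll_name].count_documents({})
            if src_count != dst_count:
                mismatches.append(f"{db_name}.{coll_name}: {src_count} != {dst_count}")
    return mismatches


if __name__ == "__main__":
    diffs = compare_counts()
    print("All counts match" if not diffs else "\n".join(diffs))
```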

November 19, 2024
Applied

MongoDB Database Observability: Integrating with Monitoring Tools

This post is the final in a three-part series on leveraging database observability. Welcome back to our series on Leveraging Database Observability! Our previous post showcased a real-world use case highlighting how MongoDB Atlas’s observability tools effectively tackle database performance challenges. Whether you’re a developer, DBA, or DevOps engineer, our mission is to empower you to harness the full potential of your data through our observability suite. Integrating Atlas metrics with your central enterprise observability tools can simplify your operations. By seamlessly working with popular observability tools, our approach helps teams streamline workflows and enhance visibility across systems.

Integrating MongoDB Atlas with third-party monitoring tools

MongoDB’s developer data platform combines all essential data services for building modern applications within a unified experience. Our purpose-built observability tools for Atlas environments offer automatic monitoring and optimization, guiding diagnostics tailored specifically for MongoDB. Additionally, we extend Atlas metrics into your existing enterprise observability stack, enabling seamless integration without replacing your current tools. This creates a consolidated, single-pane view that unifies Atlas telemetry with other tech and application metrics, ensuring comprehensive visibility into both database and full-stack performance. This integration empowers you to monitor, receive alerts, and make data-driven decisions within your existing workflows, driving greater efficiency. Below is a quick guide to modifying integration settings through the Atlas UI for the popular integrations we support:

1. Navigate to the Project Integrations page in Atlas.
2. Choose the organization and project you want to configure from the navigation bar.
3. On the Project Integrations page, select the third-party services you’d like to integrate.
4. Configure the chosen services with the required API keys and regions.

Critical integrations for your observability platform

With Atlas’s Datadog and Prometheus integrations, you can send critical MongoDB metrics to these platforms, empowering detailed, real-time monitoring. Through Datadog, you can track database operation counts, query efficiency, and resource usage, ideal for pinpointing bottlenecks and managing resources. Similarly, Prometheus enables you to monitor essential metrics like query times, connection rates, and memory usage, supporting flexible tracking of database health and performance. Both integrations facilitate proactive detection of issues, alert configuration for resource thresholds, and a cohesive view of Atlas data when visualized in Grafana.

Atlas’s integration with PagerDuty streamlines incident management by sending metrics like performance alerts, billing anomalies, and security events directly to PagerDuty. This integration records incidents automatically, notifies teams upon alerts, and supports two-way syncing, ensuring resolved alerts in Atlas are reflected in PagerDuty. It enables efficient incident response and resource allocation to maintain system stability.

With Atlas integrations for Microsoft Teams and Slack, you can route key metrics—such as query latency, disk usage, and throughput—to these channels for timely updates. Teams can use these insights for real-time performance monitoring, incident response, and collaboration. Notifications through these platforms ensure your team stays informed on database performance, storage health, and user activity changes as they occur.
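The same integration settings can also be managed programmatically. The sketch below shows the general shape of such a call using Python’s requests library against the Atlas Administration API; the endpoint path, payload fields, and key values are assumptions to be checked against the current API documentation rather than a verified recipe.

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders: Atlas programmatic API keys and the target project (group) ID.
PUBLIC_KEY = "<public-key>"
PRIVATE_KEY = "<private-key>"
PROJECT_ID = "<project-id>"

# Configure the Datadog integration for a project (fields are illustrative;
# consult the Atlas Administration API reference for the exact schema).
url = f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{PROJECT_ID}/integrations/DATADOG"
payload = {"type": "DATADOG", "apiKey": "<datadog-api-key>", "region": "US"}

response = requests.post(
    url,
    json=payload,
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    timeout=30,
)
response.raise_for_status()
print(response.json())
```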
Use case: Centralized observability with MongoDB Atlas, Datadog, and Slack

Let’s walk through a hypothetical scenario for ShopSmart, an e-commerce company that leverages MongoDB Atlas to manage its product catalog and customer data. As traffic surges, the DevOps team faces challenges in monitoring application performance and database health effectively. To tackle these challenges, the team leverages MongoDB Atlas’s integrations with Datadog and Slack, creating a powerful observability ecosystem.

- Integrating MongoDB Atlas with Datadog: The team pushes key MongoDB Atlas metrics into Datadog, such as query performance, connection counts, and Atlas Vector Search metrics. With Datadog, they can visualize these metrics and correlate overall MongoDB performance with their other applications. Out-of-the-box monitors and dedicated dashboards allow the team to track metrics like throughput, average read/write latency, and current connections. This visibility helps pinpoint bottlenecks in real time, ensuring optimal database performance and improving overall application responsiveness.
- Setting up alerts in Datadog: The team configures alerts for critical metrics like high query latency and increased error rates. When thresholds are breached, Datadog instantly notifies the team. This proactive approach allows the team to address potential performance issues before they impact customers.
- Integrating Datadog with Slack: To ensure fast communication, alerts are sent directly to the dedicated Slack channel, “ShopSmart-Alerts.” This integration fosters seamless collaboration, enabling the team to discuss and resolve issues in real time.

With these integrations, ShopSmart’s engineering team can monitor performance quickly and address issues efficiently. The unified observability approach enhances operational efficiency, improves the customer experience, and supports ShopSmart’s competitive edge in the e-commerce industry. By leveraging MongoDB Atlas, Datadog, and Slack, the team ensures scalable performance and drives continuous innovation.

Conclusion

MongoDB Atlas empowers developers and organizations to achieve unparalleled observability and control over their database environments. By seamlessly integrating with central enterprise observability tools, Atlas enhances your ability to monitor performance metrics and ensures you can do so within your existing workflows. This means you can focus on building modern applications confidently, knowing you have the insights and alerts necessary to maintain optimal performance. Embrace the power of MongoDB Atlas and transform your approach to database management—because your applications can thrive when your data is observable.

And that wraps up our Leveraging Database Observability series! We hope you learned something new and found value in these discussions. Sign up for MongoDB Atlas, our cloud database service, to see database observability in action. To dive deeper and expand your knowledge, check out this learning byte for more insights on the MongoDB observability suite and how it can enhance your database performance.

November 14, 2024
Applied

MongoDB Helps Asian Retailers Scale and Innovate at Speed

More retailers across ASEAN are looking to the document database model to support the expansion of their businesses and respond quickly to ever-more-rapidly changing customer demands. Here are two stories shared during our MongoDB.local events in Indonesia and Malaysia in September 2024.

Simplicity and offline availability: EasyEat empowers merchants to optimize dining experiences with MongoDB Atlas

EasyEat delivers a software-as-a-service (SaaS) point-of-sale (POS) system tailored for restaurants. It simplifies daily operations, optimizes costs, and enhances customer satisfaction for merchants that provide food delivery and pickup services. The platform launched in 2020, and in less than four years it has grown to serve over 1,300 merchants and over four million consumers across Malaysia and Indonesia.

Speaking at MongoDB.local Kuala Lumpur in September 2024, Deepanshu Rawat, Engineering Manager at EasyEat, explained how MongoDB Atlas empowered EasyEat to rapidly scale its operations across both the merchant POS and consumer applications. EasyEat’s move from a SQL database to MongoDB Atlas also delivered greater flexibility, enabling faster product development and ease of use for the engineering team. For EasyEat, MongoDB Atlas is more than just a database. The retailer is making full use of the developer data platform’s unique features, including:

- Analytics node: EasyEat must regularly provide reports to its merchants. These queries tend to be complex, taking significant time to process and putting an excessive load on the system. “With MongoDB Atlas’s analytics node, we are able to process those heavy queries without it impacting our daily operations,” said Rawat. (See the sketch at the end of this article for how reporting queries can be routed to analytics nodes.)
- Atlas Triggers: EasyEat uses this feature to perform a range of asynchronous operations. “Using Atlas Triggers helps us optimize the performance of our applications,” said Rawat.
- MongoDB Atlas Search: EasyEat has started using MongoDB Atlas Search to execute faster and more efficient searches as its platform’s user base grows. “Atlas Search enables us to make searches in our user application very smooth, and on our end, we don’t face any delay or latency issues,” said Rawat.

In addition, EasyEat is exploring a few other capabilities on MongoDB, including online archiving. The company is also considering how it can use generative AI via MongoDB Atlas Vector Search to build a personalized recommendations engine.

From 10 seconds to 1: Alfamart drives 1,000% efficiency using MongoDB Atlas

Alfamart is a leading retailer with over 19,000 stores across Indonesia and the Philippines. It serves 18.1 million customers and handles approximately 4.6 million retail transactions daily. Speaking at MongoDB.local Jakarta in September 2024, Alfamart’s Chief Technology Officer, Bambang Setyawan Djojo, shared insights into how the company has used MongoDB Atlas to sustain massive scale and to power its digital transformation.

The 2015-2020 period was critical for Alfamart. It was in the midst of rapid expansion and had an ambitious digital transformation agenda. In early 2020, as the COVID-19 pandemic began, Alfamart’s offline transactions plummeted while its online transactions soared. “The growth of online transactions was not linear but exponential,” said Setyawan Djojo. “This was the moment: We knew we needed the tools to adapt quickly and go to market fast. This is when we decided to look for a new database.” With its previous SQL database, Alfamart struggled to handle the growing data load, particularly during peak hours.
MongoDB Atlas’s flexible document database model delivered greater efficiency for Alfamart’s team of 350 developers. It also smoothly accommodated Alfamart’s need for sudden and significant scaling up. “Fast processing times are critical to keep our customers happy,” said Setyawan Djojo. “It used to take us 10 seconds to scan members during peak hours, but with MongoDB, it is now below one second.” Setyawan Djojo added, “MongoDB helped us eliminate a lot of downtime compared to our previous SQL database.”

MongoDB Atlas’s auto-scaling capabilities were a game changer for Alfamart. “MongoDB can automatically scale up and down depending on the usage of resources and performance. So during peak times, the database can scale up, and once the transaction peak is passed, it can scale back down,” said Setyawan Djojo. Looking ahead, Alfamart plans to continue exploring the potential of the MongoDB Atlas platform to further increase productivity, efficiency, and flexibility.

Visit our solutions page to learn more about how MongoDB is helping retailers innovate worldwide. Check out our quick-start guide to get started with MongoDB Atlas Vector Search today. Visit our product page to learn more about MongoDB Atlas Search.
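As referenced in the EasyEat story above, Atlas analytics nodes let heavy reporting queries run on dedicated nodes so they do not compete with operational traffic. Here is a minimal PyMongo sketch of how an application might route a reporting aggregation to those nodes; the namespace, pipeline, and field names are illustrative, and the nodeType tag value should be confirmed against the Atlas read preference documentation.

```python
from pymongo import MongoClient
from pymongo.read_preferences import Secondary

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")

# Route heavy reporting queries to Atlas analytics nodes (tagged nodeType: ANALYTICS)
# so operational reads and writes on the other nodes are unaffected.
reports = client["pos"]["orders"].with_options(  # hypothetical namespace
    read_preference=Secondary(tag_sets=[{"nodeType": "ANALYTICS"}])
)

# Example reporting aggregation: daily sales totals per merchant.
pipeline = [
    {"$match": {"status": "PAID"}},
    {"$group": {"_id": {"merchant": "$merchantId", "day": "$orderDate"},
                "total": {"$sum": "$amount"}}},
    {"$sort": {"_id.day": -1}},
]

for row in reports.aggregate(pipeline):
    print(row)
```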

November 12, 2024
Applied

MongoDB: Powering Digital Natives

Today's rapidly evolving digital landscape is dominated by digital native companies driving innovation. These are companies born in the digital age that operate through digital channels, with a business model enabled by technology and data. They are not only adept at using technology but are also reshaping the way software is developed and deployed. This article delves into the challenges and opportunities facing digital natives in modern application development, with a particular focus on the complexities of managing data. We’ll explore how the right data platform can empower your digital native organization to build high-quality software faster, adapt to changing market demands, and unlock the full potential of your business.

Strong foundations: The four pillars of tech-fueled growth for digital natives

Achieving explosive growth requires a strong foundation built on specific principles, which empower rapid scaling and success. Here, we explore the four key pillars that fuel tech-driven growth for digital natives:

- Product-market fit, fast: As a digital native, you must continuously ship and iterate products to achieve a quick product-market fit. This builds customer trust and captures opportunities before competitors can in an evolving market.
- Data and AI-driven decisions: You must leverage data to personalize experiences, automate processes, and guide product decisions. A robust data architecture feeds real-time data into AI models, enabling data-driven decisions organization-wide.
- Balance of freedom and control: Your developers must have the freedom to choose technologies, even as your organization maintains control over the infrastructure to manage risks and costs at scale. Selected technologies must integrate within your overall technology estate.
- Extensible and open technologies: You must explore disruptive technologies while maintaining existing systems. Freedom from platform and vendor lock-in enables quick adoption of innovations, from current generative AI capabilities to future technological advances.

Data: The unsolved challenge in modern application development

From cloud platforms and managed services to gen AI code assistants, advancements have transformed how engineering teams build, ship, and run applications: Agile methods and programmatic APIs streamline development, while CI/CD and infrastructure as code automate processes. Containerization, microservices, and serverless architectures enable modularity, while new languages and frameworks boost capabilities. Enhanced logging and monitoring tools provide deep application health insights.

Figure 1: Tools and processes to maximize velocity.

But none of these advancements address where developers spend most of their time—data. In fact, 73% of developers say that working with data is the hardest part of building an application or feature. So why is data the problem? Traditionally, selecting a database, often an open-source relational one, is the first step in development. However, these databases can struggle with the characteristics of modern data: it’s high volume, unstructured, and constantly evolving. As applications mature and their data demands grow, development teams may encounter challenges with achieving scalability and maintaining service resilience. Some teams turn to NoSQL databases, but even then they find there are limited capabilities, pushing them back to relational databases.
As the application gains traction, the business’s appetite for innovation grows, compelling development teams to incorporate an expanding array of database technologies. This results in architectural sprawl, imposing on teams the challenges of mastering, sustaining, and harmonizing new technologies. Concurrently, the dynamic technology landscape undergoes constant evolution, demanding that teams adjust swiftly. As a result, self-contained, autonomous teams encounter these hurdles repeatedly, highlighting the pressing need for streamlined solutions to mitigate complexity and enhance agility.

Figure 2: The evolving tech landscape.

Data sprawl: A major threat to developer productivity and business agility

Data sprawl is slowing everyone down. The more systems we add, the harder it is for developers to keep up. Each new database brings its own unique language, format, and way of working. This creates a huge headache for managing everything—from buying new systems to making sure they all work together securely. It’s a constant battle to keep data accessible, consistent, and backed up across all these different platforms.

Figure 3: Teams building on separate stacks leads to data sprawl and manageability issues across the organization.

Data sprawl compromises every single one of the four outcomes your technology foundation should be providing, yielding the opposite results:

- Missed opportunities, lost customers: Fragmented development experiences consume time as engineers struggle with multiple technologies, frameworks, and extract, transform, and load mechanisms for duplicating data between systems. This slows down releases, degrades digital product quality, and impedes engineers from achieving product-market fit and effective competition.
- Flying blind: With your operational data siloed across multiple systems, you lack the data foundations necessary to use live data in shaping customer experiences or reacting to market changes. This is because you are unable to feed reliable, consistent, real-time data into your AI models to take action within the flow of the application or to provide the business with up-to-the-second visibility into operations.
- High attrition, high costs: Complex data architecture impacts development team culture, leading to siloed knowledge, inefficient collaboration, and decreased developer satisfaction. This complexity also consumes substantial resources in maintaining existing systems, diverting resources from new projects that are vital for business competition in new markets.
- Disruption from new technologies: Dependence on any one cloud provider can stifle innovation for development teams by restricting access to the latest technologies. Developers are confined to the tools and services offered by a single provider, hindering their ability to explore and integrate new, potentially more efficient, or advanced technologies.

Speed: A unified developer experience for building high-quality software faster

In today’s digital world, speed is king. Your customers expect seamless experiences, but clunky applications leave them frustrated. Traditional databases can be a bottleneck, struggling to keep pace with your ever-evolving data and slowing down development. The future of data is here, and it’s flexible: a data platform built for digital natives. It leverages a flexible document model, letting you store and work with your data exactly how you need it.
This eliminates rigid structures and complex migrations, freeing your developers to focus on what matters—building amazing applications faster.

- Flexible document data models empower developers to handle today’s rapidly evolving application data (80%+ unstructured) that relational databases struggle with.
- MongoDB documents are richly typed, boosting developer productivity by eliminating the need for lengthy schema migrations when implementing new features.
- Developers get to use their preferred tools and languages. Through its drivers and integrations, MongoDB supports all of the most popular programming languages, frameworks, integrated development environments, and AI code assistance tools.
- MongoDB scales! It starts small and scales globally. Built for elasticity and horizontal scaling, it handles massive workloads without app changes.

Figure 4: A unified developer experience, integrating all necessary data services for building sophisticated modern applications.

Introducing MongoDB Atlas: a fully managed cloud database built for the modern developer. It enables the integration of real-time data from devices with AI capabilities (through vector embeddings and large language models) to personalize user experiences. Stream processing empowers constant data analysis, while in-app analytics provides real-time insights without needing separate data warehouses, all while automatically managing data movement and storage for cost-effectiveness. MongoDB Atlas simplifies database management with the following:

- Easy deployment via UI, API, CLI, Kubernetes, and infrastructure as code tools.
- Automated operations for cost-effective performance and real-time monitoring.

MongoDB Atlas customer success stories: Development with speed, scale, and efficiency

Delivery Hero

Delivery Hero, a global leader in online food delivery, leverages MongoDB Atlas to power its rapid service. Founded in 2011, Delivery Hero now serves millions of customers in over 70 countries through brands like PedidosYa, foodpanda, and Glovo. Having replaced its legacy SQL database, Delivery Hero optimized operations and bolstered performance by using MongoDB Atlas. By leveraging MongoDB Atlas Search, Delivery Hero revolutionized its search functionality, ensuring a seamless user experience for its extensive customer base through simplified indexing and real-time data accuracy. MongoDB’s scalability has empowered Delivery Hero to manage over 100 million products in its catalog without encountering latency issues, enabling the company to expand its services while maintaining peak performance. This agility, coupled with MongoDB’s cost-effectiveness, has enabled Delivery Hero to swiftly adapt to evolving customer demands, solidifying its position in the fiercely competitive delivery market.

“MongoDB Atlas Search was a game changer. We ran a proof of concept and discovered how easy it is to use. We can index in one click, and because it’s a feature of MongoDB, we know data is always up-to-date and accurate.”
Andrii Hrachov, Principal Software Engineer, Delivery Hero

Read the full customer story to learn more.

Coinbase

Coinbase, a prominent cryptocurrency exchange boasting 245,000 ecosystem partners and managing assets worth $273 billion, trusts MongoDB to handle its extensive data workload. As the company grew, MongoDB scaled seamlessly to accommodate the increased demand.
To further improve performance in the fast-paced crypto world, Coinbase partnered with MongoDB to develop a system that significantly accelerated data transfer to reporting tools, reducing processing time from days to a mere 5-6 hours. This near real-time data access enables Coinbase to rapidly analyze trends and make informed decisions, maintaining a competitive edge in the ever-evolving crypto landscape. Watch Coinbase's full session at MongoDB.local Austin 2024 to learn more.

MongoDB: Your flexible platform for digital growth

With MongoDB, you can freely explore, experiment, develop, and deploy according to your digital-native business needs. If you would like to learn more about how MongoDB can empower your digital-native business to conquer market trends, visit:

- Innovate With AI: The Future Enterprise
- Application-Driven Intelligence: Defining the Next Wave of Modern Apps
- AI-Driven Real-Time Pricing with MongoDB and Vertex AI
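To ground the document-model flexibility discussed above, here is a small, illustrative PyMongo example showing documents with different shapes coexisting in one collection and a new field being adopted without a schema migration; the collection and field names are invented for demonstration.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
products = client["shop"]["products"]  # hypothetical namespace

# Documents in the same collection can have different shapes: no ALTER TABLE,
# no migration window; new fields simply appear on new documents.
products.insert_many([
    {"name": "T-shirt", "price": 19.99, "sizes": ["S", "M", "L"]},
    {"name": "Gift card", "price": 25.00, "delivery": {"type": "email"}},
])

# Later, a new feature adds review scores to some products only.
products.update_one({"name": "T-shirt"}, {"$set": {"rating": {"avg": 4.6, "count": 212}}})

# Queries can still address both old and new shapes.
for doc in products.find({"price": {"$lt": 30}}, {"_id": 0, "name": 1, "rating": 1}):
    print(doc)
```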

November 7, 2024
Applied

Gamuda Puts AI in Construction with MongoDB Atlas

Gamuda Berhad is a leading Malaysian engineering and construction company with operations across the world, including in Australia, Taiwan, Singapore, Vietnam, the United Kingdom, and more. The company is known for its innovative approach to construction through the use of cutting-edge technology. Speaking at MongoDB.local Kuala Lumpur in August 2024, John Lim, Chief Digital Officer at Gamuda, said: “In the construction industry, AI is increasingly being used to analyze vast amounts of data, from sensor readings on construction equipment to environmental data that impacts project timelines.”

One of Gamuda’s priorities is determining how AI and other tools can impact the company’s methods for building large projects across the world. For that, the Gamuda team needed the right infrastructure, with a database equipped to handle the demands of modern AI-driven applications. MongoDB Atlas fulfilled all the requirements and enabled Gamuda to deliver on its AI-driven goals.

Why Gamuda chose MongoDB Atlas

“Before MongoDB, we were dealing with a lot of different databases and we were struggling to do even simple things such as full-text search,” said Lim. “How can we have a tool that's developer-friendly, helps us scale across the world, and at the same time helps us to build really cool AI use cases, where we're not thinking about the infrastructure or worrying too much about how things work but are able to just focus on the use case?”

After some initial conversations with MongoDB, Lim’s team saw that MongoDB Atlas could help it streamline its technology stack, which was becoming very complex and time-consuming to manage. MongoDB Atlas provided the optimal balance between ease of use and powerful functionality, enabling the company to focus on innovation rather than database administration. “I think the advantage that we see is really the speed to market. We are able to build something quickly. We are fast to meet the requirements to push something out,” said Lim. Chi Keen Tan, Senior Software Engineer at Gamuda, added, “The team was able to use a lot of developer tools like MongoDB Compass, and we were quite amazed by what we can do. This [ability to search the items within the database easily] is just something that’s missing from other technologies.”

Being able to operate MongoDB on Google Cloud was also a key selling point for Gamuda: “We were able to start on MongoDB without any friction of having to deal with a lot of contractual problems and billing and setting all of that up,” said Lim.

How MongoDB is powering more AI use cases

Gamuda uses MongoDB Atlas and functionalities such as Atlas Search and Vector Search to bring a number of AI use cases to life. This includes work implemented on Gamuda’s Bot Unify platform, which Gamuda built in-house using MongoDB Atlas as the database. By using documents stored in SharePoint and other systems, this platform helps users write tenders quicker, find out about employee benefits more easily, or discover ways to improve design briefs. “It’s quite incredible. We have about 87 different bots now that people across the company have developed,” Lim said.

Additionally, the team has developed the Gamuda Digital Operating System (GDOS), which can optimize various aspects of construction, such as predictive maintenance, resource allocation, and quality control. MongoDB’s ability to handle large volumes of data in real time is crucial for these applications, enabling Gamuda to make data-driven decisions that improve efficiency and reduce costs.
Specifically, MongoDB Atlas Vector Search enables Gamuda’s AI models to quickly and accurately retrieve relevant data, improving the speed and accuracy of decision-making. It also helps the Gamuda team find patterns and correlations in the data that might otherwise go unnoticed. Gamuda’s journey with MongoDB Atlas is just beginning as the company continues to explore new ways to integrate technology into its operations and expand to other markets. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start page.
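The retrieval pattern described above can be expressed as an Atlas Vector Search query with a metadata pre-filter, so results come only from the relevant project's documents. The sketch below is purely illustrative, not Gamuda's implementation: the namespace, embedding function, project field, and index name are assumptions, and the filter field must be declared in the vector index definition.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
passages = client["knowledge"]["passages"]  # hypothetical namespace


def embed(text: str) -> list[float]:
    """Placeholder for whatever embedding model the application uses."""
    raise NotImplementedError


# Vector retrieval restricted to one project's documents. Assumes a vector index
# named "vector_index" on "embedding" with "project" declared as a filter field.
pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",
            "path": "embedding",
            "queryVector": embed("tunnel boring machine maintenance interval"),
            "filter": {"project": "MRT-Line-2"},
            "numCandidates": 200,
            "limit": 5,
        }
    },
    {"$project": {"_id": 0, "text": 1, "source": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for hit in passages.aggregate(pipeline):
    print(hit["source"], round(hit["score"], 3))
```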

October 22, 2024
Applied

Empower Innovation in Insurance with MongoDB and Informatica

For insurance companies, determining the right technology investments can be difficult, especially in today's climate where technology options are abundant but their future is uncertain. As is the case with many large insurers, there is a need to consolidate complex and overlapping technology portfolios. At the same time, insurers want to make strategic, future-proof investments to maximize their IT expenditures. What does the future hold, however? Enter scenario planning. Using the art of scenario planning, we can find some constants in a sea of uncertain variables, and we can more wisely steer the organization when it comes to technology choices. Consider the following scenarios:

- Regulatory disruption: A sudden regulatory change forces re-evaluation of an entire market or offering.
- Market disruption: Vendor and industry alliances and partnerships create disruption and opportunity.
- Tech disruption: A new CTO directs a shift in the organization's cloud and AI investments, aligning with a revised business strategy.

What if you knew that one of these three scenarios was going to play itself out in your company but weren’t sure which one? How would you invest now to prepare for one of the three? At the same time that insurers are grappling with technology choices, they’re also facing clashing priorities:

- Running the enterprise: supporting business imperatives and maintaining the health and security of systems.
- Innovating with AI: maintaining a competitive position by investing in AI technologies.
- Optimizing spend: minimizing technology sprawl and technical debt, and maximizing business outcomes.

Data modernization

What is the common thread among all these plausible future scenarios? How can insurers apply scenario planning principles while bringing diverging forces into alignment? There is one constant in each scenario, and that’s the organization’s data—if it’s hard to work with, any future scenario will be burdened by this fact. One of the most critical strategic investments an organization can make is to ensure data is easy to work with. Today, we refer to this as data modernization, which involves removing the friction that manifests itself in data processing and ensuring data is current, secure, and adaptable. For developers, who are closest to the data, this means enabling them with a seamless and fully integrated developer data platform along with a flexible data model.

In the past, data models and databases would remain unchanged for long periods. Today, this approach is outdated. Consolidation creates a data model problem, resulting in a portfolio with relational, hierarchical, and file-based data models—or, worst of all, a combination of all three. Add to this the increased complexity that comes with relational models, including supertype-subtype conditional joins and numerous data objects, and you can see how organizations wind up with a patchwork of data models and overly complicated data architecture.

A document database, like MongoDB Atlas, stores data in documents and is often referred to as a non-relational (or NoSQL) database. The document model offers a variety of advantages and specifically excels in data consolidation and agility:

- Serves as the superset of all other data model types (relational, hierarchical, file-based, etc.)
- Consolidates data assets into elegant single views, capable of accommodating any data structure, format, or source (see the sketch at the end of this article)
- Supports agile development, allowing for quick incorporation of new and existing data
- Eliminates the lengthy change cycles associated with rigid, single-schema relational approaches
- Makes data easier to work with, promoting faster application development

By adopting the document model, insurers can streamline their data operations, making their technology investments more efficient and future-proof.

The challenges of making data easier to work with include data quality. One significant hurdle insurers continue to face is the lack of a unified view of customers, products, and suppliers across various applications and regions. Data is often scattered across multiple systems and sources, leading to discrepancies and fragmented information. Even with centralized data, inconsistencies may persist, hindering the creation of a single, reliable record. For insurers to drive better reporting, analytics, and AI, there's a need for a shared data source that is accurate, complete, and up-to-date. Centralized data is not enough; it must be managed, reconciled, standardized, cleansed, and enriched to maintain its integrity for decision-making. Mastering data management across countless applications and sources is complex and time-consuming. Success in master data management (MDM) requires business commitment and a suite of tools for data profiling, quality, and integration. Aligning these tools with business use cases is essential to extract the full value from MDM solutions, although the process can be lengthy.

Informatica’s MDM solution and MongoDB

Informatica’s MDM solution has been developed to answer the key questions organizations face when working with their customer data: “How do I get a 360-degree view of my customer, partner, and supplier data?” and “How do I make sure that my data is of the highest quality?” The Informatica MDM platform helps ensure that organizations around the world can confidently use their data and make business decisions based on it. Informatica’s entire MDM solution is built on MongoDB Atlas, including its AI engine, Claire.

Figure 1: Everything you need to modernize the practice of master data management.

Informatica MDM solves the following challenges:

- Consolidates data from overlapping and conflicting data sources.
- Identifies data quality issues and cleanses data.
- Provides governance and traceability of data to ensure transparency and trust.

Insurance companies typically have several claim systems that they’ve amassed over the years through acquisitions, with each one containing customer data. The ability to relate that data together and ensure it’s of the highest quality enables insurers to overcome data challenges. MDM capabilities are essential for insurers who want to make informed decisions based on accurate and complete data. Below are some of the different use cases for MDM:

- Modernize legacy systems and processes (e.g., claims or underwriting) by effectively collecting, storing, organizing, and maintaining critical data
- Improve data security and improve fraud detection and prevention
- Effective customer data management for omni-channel engagement and cross- or up-sell
- Data management for compliance, avoiding or predicting in advance any possible regulatory issues

“Given we already leverage the performance and scale of MongoDB Atlas within our cloud-native MDM SaaS solution and share a common focus on high-value, industry solutions, this partnership was a natural next step. Now, as a strategic MDM partner of MongoDB, we can help customers rapidly consolidate and sunset multiple legacy applications for cloud-native ones built on a trusted data foundation that fuels their mission-critical use cases.”
Rik Tamm-Daniels, VP of Strategic Ecosystems and Technology at Informatica

Taking the next step

For insurance companies navigating the complexities of modern technology and data management, MDM combined with powerful tools like MongoDB and Informatica provides a strategic advantage. As insurers face an uncertain future with potential regulatory, market, and technological disruptions, investing in a robust data infrastructure becomes essential. MDM ensures that insurers can consolidate and cleanse their data, enabling accurate, trustworthy insights for decision-making. By embracing data modernization and the flexibility of document databases like MongoDB, insurers can future-proof their operations, streamline their technology portfolios, and remain agile in an ever-changing landscape. Informatica’s MDM solution, underpinned by MongoDB Atlas, offers the tools needed to master data across disparate systems, ensuring high-quality, integrated data that drives better reporting, analytics, and AI capabilities.

If you would like to discover more about how MongoDB and Informatica can help you on your modernization journey, take a look at the following resources:

- Unify data across the enterprise for a contextual 360-degree view and AI-powered insights with Informatica’s MDM solution
- Automating digital underwriting with machine learning
- Claim management using LLMs and vector search for RAG
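As referenced above, here is a small, hypothetical illustration of what a consolidated single-view customer document might look like once records from several claim and policy systems are merged into one MongoDB document. The structure and field names are invented for demonstration and are not Informatica's MDM data model.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
customers = client["insurance"]["customer_360"]  # hypothetical namespace

# One document per customer, consolidating records that originally lived in
# separate claim, policy, and CRM systems. Source lineage is kept on each
# embedded record so the consolidated view stays traceable.
customers.insert_one({
    "customerId": "C-10042",
    "name": {"first": "Dana", "last": "Reyes"},
    "contacts": [{"type": "email", "value": "dana.reyes@example.com"}],
    "policies": [
        {"policyId": "AUTO-889", "sourceSystem": "legacy-claims-A", "status": "active"},
        {"policyId": "HOME-213", "sourceSystem": "acquired-claims-B", "status": "lapsed"},
    ],
    "claims": [
        {"claimId": "CL-5512", "policyId": "AUTO-889", "amount": 4200, "status": "settled"}
    ],
})

# A single query now answers what previously required joins across systems.
doc = customers.find_one({"customerId": "C-10042"}, {"_id": 0, "name": 1, "policies.status": 1})
print(doc)
```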

October 22, 2024
Applied

Built With MongoDB: Buzzy Makes AI Application Development More Accessible

AI adoption rates are sky-high and showing no signs of slowing down. One of the driving forces behind this explosive growth is the increasing popularity of low- and no-code development tools that make this transformative technology more accessible to tech novices. Buzzy, an AI-powered no-code platform that aims to revolutionize how applications are created, is one such company. Buzzy enables anyone to transform an idea into a fully functional, scalable web or mobile application in minutes. Buzzy developers use the platform for a wide range of use cases, from a stock portfolio tracker to an AI t-shirt store. The only way the platform could support such diverse applications is by being built upon a uniquely versatile data architecture. So it’s no surprise that the company chose MongoDB Atlas as its underlying database.

Creating the buzz

Buzzy’s mission is simple but powerful: to democratize the creation of applications by making the process accessible to everyone, regardless of technical expertise. Founder Adam Ginsburg—a self-described husband, father, surfer, geek, and serial entrepreneur—spent years building solutions for other businesses. After building and selling an application that eventually became the IBM Web Content Manager, he created a platform allowing anyone to build custom applications quickly and easily. Buzzy initially focused on white-label technology for B2B applications, which global vendors brought to market. Over time, the platform evolved into something much bigger.

The traditional method of developing software, as Ginsburg puts it, is dead. Ginsburg observed two major trends that contributed to this shift: the rise of artificial intelligence (AI) and the design-centric approach to product development exemplified by tools like Figma. Buzzy set out to address two major problems. First, traditional software development is often slow and costly. Small-to-medium-sized business (SMB) projects can take anywhere from $50,000 to $250,000 and nine months to complete. Due to these high costs and lengthy timelines, many projects either fail to start or run out of resources before they’re finished. The second issue is that while AI has revolutionized many aspects of development, it isn’t a cure-all for generating vast amounts of code. Generating tens of thousands of lines of code using AI is not only unreliable but also lacks the security and robustness that enterprise applications demand. Additionally, the code generated by AI often can’t be maintained or supported effectively by IT teams. This is where Buzzy found a way to harness AI effectively, using it in a co-pilot mode to create maintainable, scalable applications.

Buzzy’s original vision was focused on improving communication and collaboration through custom applications. Over time, the platform’s mission shifted toward no-code development, recognizing that these custom apps were key drivers of collaboration and business effectiveness. The Buzzy UX is highly streamlined so even non-technical users can leverage the power of AI in their apps. Initially, Buzzy's offerings were somewhat rudimentary, producing functional but unpolished B2B apps. However, the platform soon evolved. Instead of building its own user experience (UX) and user interface (UI) capabilities, Buzzy integrated with Figma, giving users access to the design-centric workflow they were already familiar with. The advent of large language models (LLMs) provided another boost to the platform, enabling Buzzy to accelerate AI-powered development.
What sets Buzzy apart is its unique approach to building applications. Unlike traditional development, where code and application logic are often intertwined, Buzzy separates the "app definition" from the "core code." This distinction allows for significant benefits, including scalability, maintainability, and better integration with AI. Instead of handing massive chunks of code to an AI system—which can result in errors and inefficiencies—Buzzy gives the AI a concise, consumable description of the application, making it easier to work with. Meanwhile, the core code, written and maintained by humans, remains robust, secure, and high-performing. This approach not only simplifies AI integration but also ensures that updates made to Buzzy’s core code benefit all customers simultaneously, an efficiency that few traditional development teams can achieve. Flexible platform, fruitful partnership The partnership between Buzzy and MongoDB has been crucial to Buzzy’s success. MongoDB’s Atlas developer data platform provides a scalable, cost-effective solution that supports Buzzy’s technical needs across various applications. One of the standout features of MongoDB Atlas is its flexibility and scalability, which allows Buzzy to customize schemas to suit the diverse range of applications the platform supports. Additionally, MongoDB’s support—particularly with new features like Atlas Vector Search—has allowed Buzzy to grow and adapt without complicating its architecture. In terms of technology, Buzzy’s stack is built for flexibility and performance. The platform uses Kubernetes and Docker to run Node.js services, with MongoDB as the database. Native clients are powered by React Native, using SQLite and WebSockets for communication with the server. On the AI side, Buzzy leverages several models, with OpenAI as the primary engine for fine-tuning its AI capabilities. Thanks to the MongoDB for Startups program, Buzzy has received critical support, including Atlas credits, consulting, and technical guidance, helping the startup continue to grow and scale. With the continued support of MongoDB and an innovative approach to no-code development, Buzzy is well-positioned to remain at the forefront of the AI-driven application development revolution. A Buzzy future Buzzy embodies the spirit of innovation in its own software development lifecycle (SDLC). The company is about to release two game-changing features that will take AI-driven app development to the next level: Buzzy FlexiBuild, which will allow users to build more complex applications using just AI prompts, and Buzzy Automarkup, which will allow Figma users to easily mark up screens, views, lists, forms, and actions with AI in minutes. Ready to start bringing your own app visions to life? Try Buzzy and start building your application in minutes for free. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start guide.
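To make the separation of "app definition" and "core code" described above more concrete, here is a purely hypothetical sketch of the idea—not Buzzy's actual data model or API. A compact, declarative app definition is stored as a MongoDB document, and only a short summary of it, rather than thousands of lines of generated code, is handed to an LLM copilot. The database, collection, field names, and the summarize_for_llm helper are all illustrative assumptions.

```python
# Hypothetical illustration only - not Buzzy's actual schema or integration code.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # or your Atlas connection string
apps = client["nocode_platform"]["app_definitions"]  # assumed database/collection names

# A concise, declarative "app definition": data models, screens, and actions,
# kept separate from the human-maintained core code that renders and runs it.
portfolio_tracker = {
    "name": "Stock Portfolio Tracker",
    "models": [
        {"name": "Holding", "fields": ["ticker", "shares", "costBasis"]},
    ],
    "screens": [
        {"name": "Dashboard", "lists": ["Holding"], "charts": ["valueOverTime"]},
        {"name": "AddHolding", "form": "Holding"},
    ],
    "actions": [{"on": "AddHolding.submit", "do": "insert", "model": "Holding"}],
}
apps.insert_one(portfolio_tracker)

def summarize_for_llm(app_doc: dict) -> str:
    """Build the small, consumable description an AI copilot can reason about,
    instead of handing it tens of thousands of lines of generated code."""
    models = ", ".join(m["name"] for m in app_doc["models"])
    screens = ", ".join(s["name"] for s in app_doc["screens"])
    return f"App '{app_doc['name']}' with models [{models}] and screens [{screens}]."

print(summarize_for_llm(portfolio_tracker))
```

Because the definition is data rather than code, improvements to the shared, human-maintained runtime can benefit every app built on the platform without touching individual definitions.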

October 18, 2024
Applied

Unlocking Seamless Data Migrations to MongoDB Atlas with Adiom

As enterprises continue to scale, the need for powerful, seamless data migration tools becomes increasingly important. Adiom, founded by industry veterans with deep expertise in data mobility and distributed systems, is addressing this challenge head-on with its open-source tool, dsync. By focusing on high-stakes, production-level migrations, Adiom has developed a solution that works effortlessly with MongoDB Atlas and makes large-scale migrations to it from NoSQL databases faster, safer, and more predictable. The real migration struggles Enterprises often approach migrations with apprehension, and for good reason. When handling massive datasets powering mission-critical services or user-facing applications, even small mistakes can have significant consequences. Adiom understands these challenges deeply, particularly when migrating to MongoDB Atlas. Here are a few of the common pain points that enterprises face: Time-consuming processes: Moving large datasets involves extensive planning, testing, and iteration. What’s more, enterprises need migrations that are repeatable and can handle the same dataset efficiently multiple times—something traditional tools often struggle to provide. Risk management: From data integrity issues to downtime during the migration window, the stakes are high. Tools that worked for smaller datasets and in lower-tier environments no longer meet the requirements. Custom migration scripts often introduce unforeseen risks, while other databases come with their own unique limitations. Cost overruns: Enterprises frequently encounter hidden migration costs—whether it's provisioning special infrastructure, reworking application code for compatibility with migration plans, or paying SaaS vendors by the row. These complications can balloon the overall migration budget or send the project into an approval death spiral. To make things even more complicated, these pains feed into each other: the longer the project takes, the more risks need to be accounted for, the longer the planning and testing, and the bigger the cost. Adiom’s dsync: Power and simplicity in one tool Dsync was built with these challenges in mind. Designed specifically for large production workloads, dsync enables enterprises to handle complex migrations more easily, lowering the hurdles that typically slow down the process and reducing risk and uncertainty. Here’s why dsync stands out: Ease of deployment: Starting with dsync is incredibly simple. All it takes is downloading a single binary—there’s no need for specialized infrastructure, and it runs seamlessly on VMs or Docker. Users can monitor migrations through the command line or a web interface, offering flexibility to suit the team's preferences. Resilience and safety: dsync is not only efficient, but it’s also resumable. Should a migration be interrupted, there’s no need to start over. This means that migrations can continue smoothly from where they left off, reducing the risk of downtime and minimizing the complexity of the process. Verification: dsync is designed to protect the integrity of migrated data, with embedded verification mechanisms that automatically check for consistency between the source and destination databases after migration. Security: dsync doesn't store data, doesn't send it outside the organization other than to the designated destination, and supports network encryption.
No hidden costs: As an open-source tool, dsync eliminates the need to onboard expensive SaaS solutions or purchase licenses in the early stages of the process. It operates independently of third-party vendors, giving enterprises flexibility and control over their migrations without the additional financial burden. Enhancing MongoDB customers' experiences For MongoDB customers, the ability to migrate data quickly and efficiently can be the key to unlocking new products, features, and cost savings. With dsync, Adiom provides a solution that can accelerate migrations, reduce risks, and enable enterprises to leverage MongoDB Atlas without the usual headaches. Faster time-to-market: By significantly accelerating migrations, dsync allows companies to take advantage of MongoDB Atlas offerings and integrations sooner, offering a direct path to quicker returns on investment. Self-service and support: Many migrations can be handled entirely in-house, thanks to dsync’s intuitive design. However, for organizations that need additional guidance, Adiom offers support and has partnered with MongoDB Professional Services and PeerIslands to offer comprehensive coverage during the migration process. Five compelling advantages of migrating to MongoDB Flexible schema: MongoDB’s schema-less design reduces development time by up to 30% by allowing you to change data structures as requirements evolve. Scalability: You can scale MongoDB to multiple petabytes of data seamlessly using sharding. High performance: MongoDB helps improve read and write speeds by up to 50% compared to traditional databases. Expressive Query API: Its advanced querying capabilities reduce query writing time and increase execution efficiency by 70%. Partner ecosystem: MongoDB’s strong partner ecosystem helps with service integrations, AI capabilities, purpose-built solutions, and other significant competitive differentiators. Conclusion Dsync is more than just a migration tool—it’s a powerful engine that abstracts away the complexity of managing large datasets across different systems. By seamlessly tying together initial data copying, change data capture, and all the nuances of large-scale migrations, dsync lets enterprises focus on building their future, not on the logistics of data transfer. For those interested in technical details, some of those logistics and nuances can be found in our CEO’s blog. With Adiom and dsync, enterprises no longer have to choose between performance, correctness, and ease of use when planning a migration from another NoSQL database. Dsync provides an enterprise-grade solution that enables faster, more secure, and more reliable migrations. By partnering with MongoDB, Adiom supports you in continuing to innovate without being held back by the limitations of legacy databases. Try dsync yourself or contact Adiom for a demo. Head over to our product page to learn more about MongoDB Atlas.
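dsync ships with its own embedded verification, but the underlying idea—checking that source and destination agree once the copy and change data capture phases settle—can be illustrated with a minimal, generic sketch. This is not dsync's implementation; the connection strings, namespace, and fingerprinting approach below are assumptions chosen for brevity.

```python
# Generic post-migration consistency check - not dsync's internals.
import hashlib
from bson import json_util
from pymongo import MongoClient

# Assumed endpoints: a MongoDB-compatible source and a MongoDB Atlas destination.
SOURCE_URI = "mongodb://source-host:27017"
DEST_URI = "mongodb+srv://cluster0.example.mongodb.net"

def collection_fingerprint(uri: str, db: str, coll: str) -> tuple[int, str]:
    """Return (document count, order-insensitive digest of per-document hashes)."""
    collection = MongoClient(uri)[db][coll]
    count = collection.count_documents({})
    combined = 0
    for doc in collection.find({}):
        # Canonicalize each document so field order doesn't affect the hash.
        canonical = json_util.dumps(doc, sort_keys=True).encode("utf-8")
        combined ^= int.from_bytes(hashlib.sha256(canonical).digest()[:8], "big")
    return count, f"{combined:016x}"

# Compare the source and destination once the migration has settled.
src = collection_fingerprint(SOURCE_URI, "shop", "orders")  # hypothetical namespace
dst = collection_fingerprint(DEST_URI, "shop", "orders")
print("consistent" if src == dst else f"mismatch: source={src}, destination={dst}")
```

A production tool would do far more (sampling, checkpointing, verifying while changes are still flowing), but the sketch captures the principle of automated source-versus-destination comparison.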

October 15, 2024
Applied

From Chaos to Control: Real-Time Data Analytics for Airlines

Delays are a significant challenge for the airline industry. They disrupt travel plans, erode customer loyalty, and inflict substantial financial losses. In an industry built on precision and punctuality, even minor setbacks can have cascading effects. Whether due to adverse weather conditions or unforeseen technical issues, these delays ripple through flight schedules, affecting both passengers and operations managers. While neither group is typically at fault, the ability to quickly reallocate resources and return to normal operations is crucial. To mitigate these disruptions and restore passenger trust, airlines must have the tools and strategies to quickly identify delays and efficiently reallocate resources. This blog explores how a unified platform with real-time data analysis can be a game-changer in this regard, especially when it comes to saving costs. The high cost of delays Delays from disruptions, like weather events or crew unavailability, pose major challenges for the airline industry. According to some studies, these delays have a significant financial impact, costing European airlines an average of €4,320 per hour per flight. They also create operational challenges like crew disruptions and reduced airplane availability, leading to further delays, which is known in the industry as delay propagation. To address these challenges, airlines have traditionally focused on optimizing their pre-flight planning processes. However, while planning is crucial, effective recovery strategies are equally essential for minimizing the impact of disruptions. Unfortunately, many airlines have underinvested in recovery systems, leaving them ill-prepared to respond to unexpected events. The consequences of this imbalance include: Delay propagation: Initial delays can cascade, causing widespread schedule disruptions. Financial and operational damage: Increased costs and inefficiencies strain airline resources. Customer dissatisfaction: Poor disruption management leads to negative passenger experiences. The power of real-time data analysis In response to the significant challenges posed by flight delays, a real-time unified platform offers a powerful solution designed to enhance how airlines manage disruptions. Event-driven architectural approach The diagram below showcases an event-driven architecture that can be used to build a robust and decoupled platform that supports real-time data flow between microservices. In an event-driven architecture, services or components communicate by producing and consuming events, which is why this architecture relies on Pub/Sub (messaging middleware) to manage data flows. Moreover, MongoDB’s flexible document model and ability to handle high volumes of data make it ideal for event-driven systems. Combined with Pub/Sub, these capabilities offer a powerful foundation for modern applications that require scalability, flexibility, and real-time processing. Figure 1: Application architecture In this architecture, the blue line in the diagram shows the operational data flow. The data simulation is triggered by the application’s front end and is initialized in the FastAPI microservice. The microservice, in turn, starts publishing airplane sensor data to the custom Pub/Sub topics, which forward this data to the rest of the architecture's components, such as cloud functions, for data transformation and processing. The data is processed in each microservice, including the creation of analytical data, as shown by the green lines in the diagram.
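As a rough sketch of the operational ("blue line") flow described above, the snippet below publishes simulated airplane sensor readings to a Pub/Sub topic for downstream cloud functions to transform and store. The project ID, topic name, and message fields are illustrative assumptions rather than the demo repo's actual code.

```python
# Minimal sketch: a service publishing simulated sensor readings to Pub/Sub.
import json
import time
from google.cloud import pubsub_v1

PROJECT_ID = "airline-ops-demo"   # hypothetical GCP project
TOPIC_ID = "airplane-telemetry"   # hypothetical topic

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

def publish_reading(flight_id: str, fuel_kg: float, ground_speed_kts: float) -> None:
    """Publish one sensor reading; downstream cloud functions transform and store it."""
    message = {
        "flightId": flight_id,
        "ts": time.time(),
        "fuelKg": fuel_kg,
        "groundSpeedKts": ground_speed_kts,
    }
    future = publisher.publish(topic_path, data=json.dumps(message).encode("utf-8"))
    future.result()  # block until Pub/Sub acknowledges the message

publish_reading("BA2490", fuel_kg=18250.0, ground_speed_kts=447.0)
```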
Afterward, data is written to MongoDB and fetched by the application to provide the user with organized, up-to-date information regarding each flight. This leads to more precise and detailed analysis of real-time data for flight operations managers. As a result, new and improved opportunities for resource reallocation can be explored, helping to minimize delays and reduce associated costs for the company. Microservice overview As mentioned earlier, the primary goal is to create an event-driven, decoupled architecture built on integrations between MongoDB and Google Cloud services. The following components contribute to this: FastAPI: Serves as the main data source, generating data for analytical insights, predictions, and simulation. Telemetry data: Pulls and transforms operational data published to the Pub/Sub topic in real time, storing it in a MongoDB time series collection for aggregation and optimization. Application data: Subscribed to a different Pub/Sub topic, this service handles static operational data, including the initial route, recalculated route, and disruption status. It is triggered only when one of these fields changes, and it then updates the corresponding MongoDB collection. Vertex AI integration—analytical data flow: A cloud function triggered by Pub/Sub messages that executes data transformations and forwards data to the machine learning (ML) model deployed on Vertex AI. Predictions are then stored in MongoDB. MongoDB: A flexible, scalable, and real-time data solution Building a unified real-time platform for the airline industry requires efficient management of massive, diverse datasets. From aircraft sensor data to flight cost calculations, data processing and management are central to operations. To meet these demands, the solution needs a flexible data platform capable of handling multiple data types and integrating with various systems. This enables airlines to extract valuable insights from their data and develop features that improve operations and the passenger experience. Real-time data processing is a must-have feature. This allows airlines to receive immediate alerts about delays, minimizing disruptions and ensuring smooth operations. In fast-paced airport environments, where every minute counts, real-time data processing is indispensable. For example, integrating MongoDB with Google Cloud's Vertex AI allows for the real-time processing and storage of airplane sensor data, transforming it into actionable insights. Business benefits This solution provides real-time access to critical flight data, enabling efficient cost management and operational planning. Immediate access to this information allows flight operations managers to plan ahead, reallocate existing resources, or even initiate recovery procedures to mitigate the consequences of an identified delay. Moreover, its ML model customization ensures adaptability to various use cases. Regarding the platform’s long-term sustainability, it has been purposely designed to integrate highly reliable and scalable products in order to excel in three key areas: Scalability The platform’s compatibility with both horizontal and vertical scaling is clearly demonstrated by its integral design. The decoupled architecture illustrates how this solution is divided into different components—and therefore instances—that work together as a cohesive whole.
Vertical scalability can be achieved by simply increasing the computing power allocated to the deployed Vertex AI model, if needed. Availability The decoupled architecture exemplifies the central importance of availability in any project’s design. Using separate paths to introduce operational and analytical data into the database allows issues to be handled in a way that remains invisible to end users. Latency Optimizing the connections between components and integrations within the product is key to achieving the desired results. Using Pub/Sub as the asynchronous messaging service helps minimize unnecessary delays and avoid holding resources needlessly. Get started! To sum up, this blog has explored how MongoDB can be integrated into an airline flight management system, offering significant benefits in terms of cost savings and enhanced customer experience. Check out our AI resource page to learn more about building AI-powered apps with MongoDB, and try out the demo yourself via this repo. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start page.
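To complement the publisher sketch above, here is a similarly hypothetical consumer for the telemetry service described in the microservice overview: it pulls sensor messages from a Pub/Sub subscription and stores them in a MongoDB time series collection. The subscription, database, and collection names reuse the assumptions from the earlier sketch and are not taken from the demo repository.

```python
# Sketch of the telemetry microservice: Pub/Sub subscription -> MongoDB time series.
import json
from datetime import datetime, timezone

from google.cloud import pubsub_v1
from pymongo import MongoClient
from pymongo.errors import CollectionInvalid

db = MongoClient("mongodb://localhost:27017")["flight_ops"]  # or your Atlas URI
try:
    # Time series collections optimize storage and aggregation of sensor readings.
    db.create_collection(
        "telemetry",
        timeseries={"timeField": "ts", "metaField": "flightId", "granularity": "seconds"},
    )
except CollectionInvalid:
    pass  # collection already exists

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("airline-ops-demo", "airplane-telemetry-sub")

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    doc = json.loads(message.data)
    # The time series timeField must be a date, so convert the epoch timestamp.
    doc["ts"] = datetime.fromtimestamp(doc["ts"], tz=timezone.utc)
    db["telemetry"].insert_one(doc)
    message.ack()

# Blocks and processes telemetry messages as they arrive.
streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
streaming_pull.result()
```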

October 15, 2024
Applied
