MongoDB Applied
Fusing MongoDB and Databricks to Deliver AI-Augmented Search
With customers' attention increasingly dispersed across channels, platforms, and devices, competition in the retail industry is relentless. The customer's search experience on your storefront is the cornerstone of capitalizing on your Zero Moment of Truth, the point in the buying cycle where the consumer's impression of a brand or product is formed. Imagine a customer, Sarah, eager to buy a new pair of hiking boots. Instead of wandering aimlessly through pages and pages of search results, she expects to find her ideal pair easily. The smoother her search, the more likely she is to buy. Yet achieving this seamless experience isn't a walk in the park for retailers. Enter the dynamic duo of MongoDB and Databricks. By equipping their teams with this powerful tech stack, retailers can harness the might of real-time in-app analytics. This not only streamlines the search process but also infuses AI and advanced search functionality into e-commerce applications. The result? An app that not only meets Sarah's current expectations but anticipates her future needs. In this blog, we'll walk through the main reasons to implement an AI-augmented search solution by integrating both platforms. Let's embark!

A solid foundation for your data model

For an e-commerce site built around the principles of an Event-Driven and MACH Architecture, the data layer needs to ingest and transform data from a number of different sources. Heterogeneous data, such as the product catalog, user behavior on the e-commerce front end, comments and ratings, search keywords, and customer lifecycle segmentation, is all necessary to personalize search results in real time. This increases the need for a flexible model such as MongoDB's documents, and for a platform that can easily take in data from a number of different sources, whether APIs, CSV files, or Kafka topics through the MongoDB Kafka Connector.
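To make the flexibility concrete, here is a minimal sketch of what a product document carrying this heterogeneous data might look like. All field names and values are illustrative, not from a real catalog:

```python
# Hypothetical product document: catalog, inventory, ratings, and
# search-behavior data can live side by side in one flexible document.
product = {
    "sku": "HB-2041",
    "name": "Trailblazer Hiking Boots",
    "category": ["footwear", "outdoor"],
    "price": {"amount": 129.99, "currency": "USD"},
    "stock": {"warehouse_eu": 42, "warehouse_us": 17},
    "ratings": {"average": 4.6, "count": 311},
    "searchKeywords": ["hiking boots", "waterproof", "trail"],
    "relevanceScore": None,  # to be filled in later by the AI pipeline
}

# With a driver such as pymongo this document is persisted as-is, e.g.:
# from pymongo import MongoClient
# MongoClient(ATLAS_URI)["shop"]["products"].insert_one(product)
```

Because documents in the same collection can vary in shape, new sources (ratings, clickstream, segmentation) can be merged in without upfront schema migrations.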
MongoDB's translytical capabilities, combining transactional (OLTP) and analytical (OLAP) workloads, offer real-time data processing and analysis, enabling you to simplify your workloads while ensuring timely responsiveness and cost-effectiveness. Now that the data platform is servicing the operational needs of the application, what about adding in AI? Combining MongoDB with Databricks, using the MongoDB Spark Connector, allows you to easily train your models on your operational data from MongoDB and to trigger them to run in real time, augmenting your application as the customer is using it.

Centralization of heterogeneous data in a robust yet flexible operational data layer

The foundation of an effective e-commerce data layer is a solid yet flexible operational data platform. With one in place, orchestrating ML models to run at specific timeframes or in response to different events, along with the crucial data transformation, metadata enrichment, and data featurization they require, becomes a simple, automated task for optimizing search result pages and delivering a frictionless purchasing process. Check out this blog for a tutorial on achieving near real-time ingestion using the Kafka Connector with MongoDB Atlas, and data processing with Databricks Spark User Defined Functions.

Adding relevance to your search engine results pages

To achieve optimal product positioning on the Search Engine Results Page (SERP) after a user performs a query, retailers are challenged with creating a business score for their products' relevance. This score incorporates various factors such as stock levels, competitor prices, and price elasticity of demand. These business scores are complex real-time analyses calibrated against many factors, making them a perfect use case for AI.
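As a sketch of the training side, reading operational data from Atlas into Databricks with the MongoDB Spark Connector might be configured as below. The URI, database, and collection names are placeholders, not a real deployment:

```python
# Hypothetical reader configuration for the MongoDB Spark Connector.
mongo_read_options = {
    "connection.uri": "mongodb+srv://user:pass@cluster.example.mongodb.net",
    "database": "shop",
    "collection": "products",
}

# Inside a Databricks notebook this would typically be used as:
# df = (spark.read.format("mongodb")
#       .options(**mongo_read_options)
#       .load())
# ...after which df feeds feature engineering and model training.
```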
Adding AI-generated relevance to your SERPs can accurately predict and display the search results most relevant to users' queries, leading to higher engagement and increased click-through rates, while also helping businesses optimize their content based on the operational context of their markets. The ingestion into MongoDB Atlas's document-based model lays the groundwork for this challenge, and by leveraging the MongoDB Apache Spark Streaming Connector, companies can persist their data into Databricks, taking advantage of its capabilities for data cleansing and complex data transformations, making it the ideal framework for delivering batch training and inference models.

Figure: The full architecture integrating MongoDB Atlas and Databricks for an e-commerce store, real-time analytics, and search.

MongoDB App Services act as the mortar of our solution, overlaying the intelligence layer in an event-driven way, making it not only real-time but also cost-effective, and rendering both your applications and business processes nimble. Make sure to check out this GitHub repository to understand in depth how this is achieved.

Data freshness

Once the business score can be calculated comes the challenge of delivering it through the search feature of your application. With MongoDB Atlas's native workload isolation, operational data is continuously available on dedicated analytics nodes deployed in the same distributed cluster, and exposed to analysts within milliseconds of being stored in the database. But data freshness is not only important for analytics use cases: by combining operational data with the analytics layer, retailers power in-app analytics and build great user experiences across customer touch points.
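To illustrate the idea, here is a toy business score blending stock level, price competitiveness, and demand elasticity. The weights and inputs are invented for illustration and are not taken from the referenced repository:

```python
def business_score(stock_level, competitor_price, our_price, elasticity):
    """Toy relevance score: in-stock, competitively priced, inelastic
    products rank higher. Weights are illustrative only."""
    stock_factor = min(stock_level / 100.0, 1.0)           # saturate at 100 units
    price_factor = min(competitor_price / our_price, 2.0)  # cheaper than rivals -> > 1
    demand_factor = 1.0 / (1.0 + abs(elasticity))          # inelastic demand -> closer to 1
    return round(0.4 * stock_factor + 0.4 * price_factor + 0.2 * demand_factor, 3)
```

In practice this calculation would be learned and served by the Databricks-trained models rather than hand-coded, but the inputs and the output shape are the same.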
Considering MongoDB Atlas Search's advanced features such as faceted search, autocomplete, and spell correction, retailers can rest assured of a more intuitive and user-friendly search experience, not only for their customers but for their developers, as bundling all of these functionalities into the same platform minimizes the tax of operational complexity.

App-driven analytics is a competitive advantage over traditional warehouse analytics

Additionally, the search functionality is optimized for performance, enabling businesses to handle high search query volumes without compromising user experience. The business score generated from the AI models trained and deployed with Databricks provides the central point that determines where in the SERPs any specific product appears, making your search engine relevance-fueled and securing the delivery of a high-quality user experience.

Conclusion

Search is a key part of the buying process for any customer. Showing customers exactly what they are looking for without them investing too much time in the browsing stage reduces friction in the buying process, but as we've seen, it might not be so easy technically. Empower your teams with the right tech stack to take advantage of the power of real-time in-app analytics with MongoDB and Databricks. It's the simplest way to build AI and search capabilities into your e-commerce app and to respond to current and future market expectations. Check out the video below and this GitHub repository for all the code needed to integrate MongoDB and Databricks and deliver a real-time machine-learning solution for AI-augmented search.
Why Queryable Encryption Matters to Developers and IT Decision Makers
Enterprises face new challenges in protecting data as modern applications constantly change requirements. There are new technologies, advances in cryptography, regulatory constraints, and architectural complexities. The threat landscape and attack techniques are also changing, making it harder for developers to be experts in data protection. Client-side field level encryption, sometimes referred to as end-to-end encryption, provides another layer of security that enables enterprises to protect sensitive data. Although client-side encryption fulfills many modern requirements, architects and developers face challenges in implementing these solutions to protect their data efficiently, for several reasons:

- Multiple cryptographic tools to choose from — identifying the relevant libraries, selecting the appropriate encryption algorithms, configuring the selected algorithms, and correctly setting up the API for interaction are some of the challenges around tools.
- Encryption key management challenges — how and where to store the encryption keys, how to manage access, and how to manage the key lifecycle, such as rotation and revocation.
- Customizing applications — developers might have to write custom code to encrypt, decrypt, and query the data, requiring widespread application changes.

With Queryable Encryption now generally available, MongoDB helps customers protect data throughout its lifecycle — data is encrypted at the client side and remains encrypted in transit, at rest, and in use, while in memory, in logs, and in backups. Also, MongoDB is the only database provider that allows customers to run rich queries on encrypted data, just as they can on unencrypted data. This is a huge advantage for customers, as they can query and secure their data confidently.

Why does Queryable Encryption matter to IT decision-makers and developers?
Here are a few reasons:

- Security teams within enterprises are tasked with protecting their customers' sensitive data — financial records, personal data, medical records, and transaction data. Queryable Encryption provides a high level of security: by encrypting sensitive fields on the client side, the data remains encrypted while in transit, at rest, and in use, and is only ever decrypted back at the client.
- With Queryable Encryption, customers can run expressive queries on encrypted data using an industry-first fast, encrypted search algorithm. This allows the server to process and retrieve matching documents without understanding the data or why a document should be returned. Queryable Encryption was designed by pioneers of encrypted search with decades of research and experience in cryptography, and uses NIST-standard cryptographic primitives such as AES-256, SHA2, and HMACs.
- Queryable Encryption allows a faster and easier development cycle — developers can easily encrypt sensitive data without making changes to their application code by using the language-specific drivers provided by MongoDB. No crypto experience is required, and it's intuitive and easy for developers to set up and use. Developers need not be cryptography experts to encrypt, format, and transmit the data, and they don't have to figure out which algorithms or encryption options to use to implement a secure encryption solution. MongoDB has built a comprehensive encryption solution, including key management.
- Queryable Encryption helps enterprises meet strict data privacy requirements such as HIPAA, GDPR, CCPA, PCI, and more, using strong data protection techniques. It offers customer-managed and controlled keys. The MongoDB driver handles all cryptographic operations and communication with the customer-provisioned key provider. Queryable Encryption supports AWS KMS, Google Cloud KMS, Azure Key Vault, and KMIP-compliant key providers.
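As a rough sketch of how fields are marked for Queryable Encryption, the configuration passed when creating an encrypted collection follows a shape like the one below. The collection and field names are hypothetical; check the driver documentation for the exact helper to use:

```python
# Hypothetical encrypted-fields configuration for a patients collection.
# A keyId of None asks the driver's collection-creation helper to create
# a data key automatically via the configured key provider.
encrypted_fields = {
    "fields": [
        {
            "path": "ssn",
            "bsonType": "string",
            "keyId": None,
            "queries": {"queryType": "equality"},  # queryable while encrypted
        },
        {
            "path": "medicalRecords",
            "bsonType": "array",
            "keyId": None,  # encrypted at the client, but not queryable
        },
    ]
}
```

Everything listed under `fields` is encrypted client-side; only paths that declare a `queries` entry can be searched while encrypted.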
MongoDB also provides APIs for key rotation and key migration that customers can leverage to make key management seamless.

** Equality query type is supported in 7.0 GA
* With automatic encryption enabled

For more information on Queryable Encryption, refer to the following resources:

- Queryable Encryption documentation
- Queryable Encryption FAQ
- Download drivers
- Queryable Encryption Datasheet
How MongoDB and Alibaba Cloud are Powering the Era of Autonomous Driving
The emergence of autonomous driving technologies is transforming how automotive manufacturers operate, with data taking center stage in this transformation. Manufacturers are now not only creators of physical products but also stewards of vast amounts of product and customer data. As vehicles become connected vehicles, automotive manufacturers are compelled to transform their business models into software-first organizations. The data generated by connected vehicles is used to create better driver assistance systems and paves the way for autonomous driving applications. The journey toward autonomous vehicles is not just about building reliable vehicles but about harnessing the power of connected vehicle data to create a new era of mobility that seamlessly integrates cutting-edge software with vehicle hardware. The ultimate goal of autonomous vehicle makers is to produce cars that are safer than human-driven vehicles. Since 2010, investors have poured over 200 billion dollars into autonomous vehicle technology. Even with this large amount of investment, it is very challenging to create fully autonomous vehicles that can drive more safely than humans. Some experts estimate that the technology to achieve level 5 autonomy is about 80% developed, but the last 20% will be extremely hard to achieve and will take a long time to perfect. Unusual events such as extreme weather, wildlife crossings, and highway construction are still enigmas for many automotive companies to solve. The answer to these challenges is not straightforward. AI-based image and object recognition still has a long way to go in dealing with uncertainties on the road. However, one thing is certain: automotive manufacturers need to make use of the data captured by radar, LiDAR, camera systems, and the vehicle's entire telemetry system in order to train their AI models better. A modern vehicle is a data powerhouse.
It constantly gathers and processes information from onboard sensors and cameras. The Big Data generated as a result presents a formidable challenge, requiring robust storage and analysis capabilities. Additionally, this time series data needs to be analyzed in real time, and decisions have to be made instantaneously in order to guarantee safe navigation. Furthermore, ensuring data privacy and security is another hurdle to cross, since self-driving vehicles need to be shielded from cyber attacks; a successful attack could cause life-threatening events. The development of high-definition (HD) maps that help the vehicle ‘see’ what is on the road also poses technical challenges. Such maps are developed using a combination of different data sources such as Global Navigation Satellite Systems (GNSS), radar, IMUs, cameras, and LiDAR. Any error in any one of these systems accumulates and ultimately impacts the accuracy of navigation. A data platform is required between the data source (the vehicle's systems) and the AI platform to accommodate and consolidate this diverse information while keeping it secure. The data platform should be able to preprocess this data and add additional context to it before it is used to train or run AI modules such as object detection, semantic segmentation, and path planning. MongoDB can play a significant role in addressing the above-mentioned data-related challenges posed by autonomous driving. The document model is an excellent way to accommodate diverse data types such as sensor readings, telematics, maps, and model results. New fields can be added to documents at run time, enabling developers to easily add context to the raw telemetry data. MongoDB's ability to handle large volumes of unstructured data makes it suitable for the constant influx of vehicle-generated information.
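A hedged sketch of such a telemetry document (field names and references are invented) shows how context can be attached at run time without a schema migration:

```python
# Hypothetical per-frame telemetry document; values are illustrative.
reading = {
    "vehicleId": "VIN-1HGCM82633A",
    "ts": "2024-05-01T09:30:00Z",
    "sensors": {
        "lidar": {"pointCloudRef": "oss://bucket/frames/000123.bin"},
        "radar": {"objects": 4},
        "camera": {"frameRef": "oss://bucket/cam/000123.jpg"},
    },
    "speedKmh": 62.5,
}

# Context can be added later as a new field, no schema change required:
reading["weather"] = {"condition": "rain", "visibilityM": 800}
```

Large binary payloads (point clouds, frames) stay in object storage and are referenced from the document, while the queryable metadata and context live in MongoDB.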
MongoDB is not only an excellent choice for data storage but also provides comprehensive data pre-processing capabilities through its aggregation framework. Its support for time series window functions allows data scientists to produce calculations over a sorted set of documents. Time series collections also dramatically reduce storage costs: columnar compression significantly reduces the data's overall footprint on disk and improves read performance. MongoDB offers robust security features such as role-based access control, encryption at rest and in transit, comprehensive auditing, field-level redaction, and client-side field-level encryption, which can help shield sensitive data from potential cyber threats while ensuring compliance with data protection regulations. For the challenges of effectively storing and querying HD maps, MongoDB's geospatial features aid in querying location-based data and in combining map information with telemetry data, fulfilling the continuous update and accuracy requirements of mapping. Furthermore, MongoDB's horizontal scaling, or sharding, allows for the seamless expansion of storage and processing capabilities as the volume of data grows. This scalability is essential for handling the data streams generated by fleets of self-driving vehicles. During the research and development of autonomous driving projects, scalable infrastructure is required to quickly and steadily collect and process massive amounts of data; in such projects, data is generated at the terabyte level every day. To meet these needs, Alibaba Cloud provides a solution that integrates data collection, transmission, storage, and computing. In this solution, the data collected daily by sensors can be simulated and collected using Alibaba Cloud Lightning Cube and sent to the Object Storage Service (OSS).
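The window-function pre-processing mentioned above can be sketched as an aggregation pipeline. The collection, field names, and 30-second window are all hypothetical:

```python
# Hypothetical pipeline: 30-second moving average speed per vehicle,
# computed over a time series collection of telemetry documents.
moving_avg_speed = [
    {
        "$setWindowFields": {
            "partitionBy": "$vehicleId",
            "sortBy": {"ts": 1},
            "output": {
                "avgSpeedKmh": {
                    "$avg": "$speedKmh",
                    "window": {"range": [-30, 0], "unit": "second"},
                }
            },
        }
    }
]

# A matching time series collection could be created with, e.g.:
# db.create_collection("telemetry",
#     timeseries={"timeField": "ts", "metaField": "vehicleId",
#                 "granularity": "seconds"})
```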
Context is added to this data using a translator, and then this contextualized information can be pushed to MongoDB to train models. MongoDB and Alibaba Cloud recently announced a four-year extension to their strategic global partnership, which has seen significant growth since being announced in 2019. Through this partnership, automotive manufacturers can easily set up and use MongoDB as a service, ApsaraDB for MongoDB, from Alibaba Cloud's data centers globally.

Figure 1: Data collection and model training data link with MongoDB on Alibaba Cloud.

When the vehicle is on the road, the telemetry data is captured through an MQTT gateway, relayed through Kafka, and then pushed into MongoDB for storage and archiving. This data can be used for various applications such as real-time status updates for the engine and battery, accident analysis, and regulatory reporting.

Figure 2: Mass production vehicles data link with MongoDB on Alibaba Cloud.

For a company looking to build autonomous driving assistance systems, Alibaba Cloud and ApsaraDB for MongoDB are excellent technology partners to have. ApsaraDB for MongoDB can handle terabytes of diverse sensor data from cars on a daily basis, data which doesn't conform to a fixed format. MongoDB provides reliable and highly available storage for this heterogeneous data, enabling companies to rapidly expand their systems within minutes and saving time when processing and integrating autonomous driving data. By leveraging Alibaba Cloud's ApsaraDB for MongoDB, R&D teams can focus on innovation rather than worrying about data storage and scalability, contributing to faster innovation in the field of autonomous driving. In summary, MongoDB's flexibility, versatility, scalability, real-time capabilities, and strong security framework make it well suited to address the multifaceted data requirements and challenges that autonomous driving presents.
By efficiently managing and analyzing the Big Data generated, MongoDB and Alibaba Cloud are paving the way toward reliable and safe self-driving technology. To learn more about MongoDB's role in the automotive industry, please visit our manufacturing and automotive webpage.
Building AI with MongoDB: Unlocking Value from Multimodal Data
One of the most powerful capabilities of AI is its ability to learn, interpret, and create from input data of any shape and modality. This could be anything from structured records stored in a database to unstructured text, computer code, video, images, and audio streams. Vector embeddings are one of the key AI enablers in this space. Encoding our data as vector embeddings dramatically expands the ability to work with this multimodal data. We've gone from depending on data scientists training highly specialized models just a few years ago to developers today building general-purpose apps incorporating NLP and computer vision. The beauty of vector embeddings is that data that is unstructured, and therefore completely opaque to a computer, can now have its meaning and structure inferred and represented via these embeddings. Using a vector store such as Atlas Vector Search means we can search and compute over unstructured and multimodal data in the same way we've always been able to with structured business data. Now we can search it using natural language rather than specialized query languages. Considering that 80%+ of the data that enterprises create every day is unstructured, we start to see how vector search combined with LLMs and generative AI opens up new use cases and revenue streams. In this latest round-up of companies building AI with MongoDB, we feature three companies that are doing just that.

The future of business data: Unlocking the hidden potential of unstructured data

In today's data-driven world, businesses are always searching for ways to extract meaningful insights from the vast amounts of information at their disposal. From improving customer experiences to enhancing employee productivity, the ability to leverage data enables companies to make more informed and strategic decisions. However, most of this valuable data is trapped in complex formats, making it difficult to access and analyze. That's where Unstructured.io comes in.
Imagine an innovative tool that can take all of your unstructured data – be it a PDF report, a colorful presentation, or even an image – and transform it into an easily accessible format. This is exactly what Unstructured.io does. It delves deep, pulls out the crucial data, and presents it in a simple, universally understood JSON format. This makes your data ready to be transformed, stored, and searched in powerful databases like MongoDB Atlas Vector Search. What does this mean for your business? It's simple. By automating the data extraction process, you can quickly derive actionable insights, offering enhanced value to your customers and improving operational efficiency. Unstructured also offers an upcoming image-to-text model. This provides even more flexibility for users to ingest and process nearly any file containing natural language data. And keep an eye out for notable upgrades in table extraction – yet another step in ensuring you get the most from your data. Unstructured.io isn't just a tool for tech experts. It's for any business aiming to understand its customers better, seeking to innovate, and looking to stay ahead in a competitive landscape. Unstructured's widespread usage is a testament to its value, with over 1.5 million downloads and adoption by thousands of enterprises and government organizations. Brian Raymond, the founder and CEO of Unstructured.io, captures this synergy, saying, “As the world’s most widely used natural language ingestion and preprocessing platform, partnering with MongoDB was a natural choice for us. This collaboration allows for even faster development of intelligent applications. Together, we're paving the way businesses harness their data.” MongoDB and Unstructured.io are bridging the gap between data and insights, ensuring businesses are well-equipped to navigate the challenges of the digital age.
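Once extracted chunks are embedded and stored, querying them can be sketched with an Atlas Vector Search aggregation stage like the following. The index name, field names, and the toy three-dimensional vector are placeholders (real embeddings have hundreds or thousands of dimensions):

```python
# Embedding of the user's question (toy size for illustration).
query_vector = [0.12, -0.04, 0.33]

# Hypothetical $vectorSearch pipeline over chunks produced by an
# ingestion pipeline; run with collection.aggregate(semantic_search).
semantic_search = [
    {
        "$vectorSearch": {
            "index": "chunk_embeddings",   # Atlas Search index name
            "path": "embedding",           # field holding the stored vector
            "queryVector": query_vector,
            "numCandidates": 100,          # breadth of the approximate search
            "limit": 5,                    # top results returned
        }
    },
    {"$project": {"text": 1, "source": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
]
```

The same query shape works whether the chunks came from PDFs, presentations, or images, because everything has been normalized to documents with an embedding field.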
Whether you're a seasoned entrepreneur or just starting out, it's time to harness the untapped potential of your unstructured data. Visit Unstructured.io to get started with any of their open-source libraries, or join Unstructured's community Slack and explore how to seamlessly use your data in conjunction with large language models.

Making sense of complex contracts with entity extraction and analysis

Catylex is a revolutionary contract analytics solution for any business that needs to extract and optimize contract data. The company's best-in-class contract AI automatically recognizes thousands of legal and business concepts out of the box, making it easy to get started and quickly generate value. Catylex's AI models transform wordy, opaque documents into detailed insights revealing the rights, obligations, risks, and commitments associated with the business, its suppliers, and its customers. The insights generated can be used to accelerate contract review and to feed operational and risk data into core business systems (CLMs, ERPs, etc.) and teams. Documents are processed using Catylex's proprietary extraction pipeline, which uses a combination of machine learning/NLP techniques (custom Named Entity Recognition, Text Classification) and domain-expert augmentation to parse documents into an easy-to-query ontology. This eliminates the need for end users to annotate data or train any custom models. The application is very intuitive and provides easy-to-use controls to quality-check the system-extracted data, search and query using a combination of text and concepts, and generate visualizations across portfolios. You can try all of this for free by signing up for the “Essentials” version of Catylex. Catylex leverages a suite of applications and features from the MongoDB Atlas developer data platform.
It uses the MongoDB Atlas database to store documents and extracted metadata thanks to its flexible data model and easy-to-scale options, and it uses Atlas Search to provide end users with easy-to-use and efficient text search capabilities. Features like highlighting within Atlas Search add a lot of value and enhance the user experience. Atlas Triggers are used to handle change streams and efficiently relay information to various parts of the Catylex application, making it event-driven and scalable. Catylex is actively evaluating Atlas Vector Search. Bringing together vector search alongside keyword search and the database in a single, fully synchronized, and flexible storage layer, accessed by a single API, will simplify development and eliminate technology sprawl. Being part of the MongoDB AI Innovators Program gives Catylex's engineers direct access to the product management team at MongoDB, helping them share feedback and receive the latest product updates and best practices. The provision of Atlas credits reduces the cost of experimenting with new features, and co-marketing initiatives help build visibility and awareness of the company's offerings.

Harness generative AI with observed and dark data for customer 360

Dataworkz enables enterprises to harness the power of LLMs with their own proprietary data for customer applications. The company's products empower businesses to effortlessly develop and implement Retrieval-Augmented Generation (RAG) applications using proprietary data, utilizing either public LLM APIs or privately hosted open-source foundation models. The emergence of hallucinations presents a notable obstacle to the widespread adoption of gen AI within enterprises. Dataworkz streamlines the implementation of RAG applications, enabling gen AI to reference its origins and consequently enhancing traceability.
As a result, users can easily use conversational natural language to produce high-quality, LLM-ready, customer 360 views powering chatbots, question-answering systems, and summarization services. Dataworkz provides connectors for a vast array of customer data sources. These include back-office SaaS applications such as CRM, marketing automation, and finance systems. In addition, leading relational and NoSQL databases, cloud object stores, data warehouses, and data lakehouses are all supported. Dataflows, aka composable AI-enabled workflows, are sets of steps that users combine and arrange to perform any sort of data transformation – from creating vector embeddings to complex JSON transformations. Users can describe data wrangling tasks in natural language, have LLMs orchestrate the processing of data in any modality, and merge it into a “golden” 360-degree customer view. MongoDB Atlas is used to store the source document chunks for this customer 360-degree view, and Atlas Vector Search is used to index and query the associated vector embeddings. The generation of outputs produced by the customer's chosen LLM is augmented with similarity search and retrieval powered by Atlas. Public LLMs such as OpenAI and Cohere or privately hosted LLMs such as Databricks Dolly are also available. The integrated experience of the MongoDB Atlas database and Atlas Vector Search simplifies developer workflows, and Dataworkz has the freedom and flexibility to meet its customers wherever they run their business with multi-cloud support. For Dataworkz, access to Atlas credits and the MongoDB partner ecosystem were key drivers for becoming part of the AI Innovators Program.

What's next?

If you are building AI-enabled apps on MongoDB, sign up for our AI Innovators Program. We've had applicants from all industries building for a huge diversity of new use cases.
To get a flavor, take a look at earlier blog posts in this series:

- Building AI with MongoDB: First Qualifiers includes AI at the network edge for computer vision and augmented reality; risk modeling for public safety; and predictive maintenance paired with Question-Answering systems for maritime operators.
- Building AI with MongoDB: Compliance to Copilots features AI in healthcare along with intelligent assistants that help product managers specify better products and help sales teams compose emails that convert 2x higher.

Finally, check out our MongoDB for Artificial Intelligence resources page for the latest best practices that get you started in turning your idea into AI-driven reality.
A Powerful Platform for Parents and Educators
When I created the first versions of OWNA, I started with a target customer: my wife. When my children were entering childcare, my wife and I realized we had little visibility into what was happening during the day. When I arrived to pick up my child, I often forgot to ask for the stats of the day – things like whether they had eaten, if they had napped, and the number of nappy changes. My wife would ask me, and I wouldn't have a clue, because I'd forgotten to look at the paper-based report that detailed all of this. Starting from that foundation, I asked lots of questions and learned that childcare centers face many challenges. The problem wasn't a lack of intent on the part of the staff at the childcare center. They simply lacked the tools to do this in an effective way that didn't get in the way of the work they were doing. That led me to pivot from a parent-centric view to a broader one. Having started the initial development on MongoDB's document database, I was able to scale and iterate because I had a platform that could grow and be easily adapted. OWNA started as a tool for one childcare center and has now evolved to cover the full gamut of services that childcare centers offer. From that single center, OWNA is now used in over 2,500 childcare centers across Australia, and we have created localized versions for North America and Europe.

How to create an app that meets challenging compliance requirements and offers flexibility to meet diverse needs

When I started this journey, I looked at how information was recorded and managed at my local childcare center. Almost everything was on paper. Parents want to be able to easily access the information educators are recording, and the educators and the centers themselves need to store that data and make sure they meet compliance obligations.
Paper-based records are costly to store and difficult to search, and centers are subject to regulatory obligations to maintain records. With childcare centers moving toward electronic systems, we also solved another problem: the sprawl of disjointed applications centers used. We learned that there was a lot of switching between apps and copying data to ensure information was synchronized across applications. OWNA is a one-stop shop for childcare centers. It enables them to record and share everything from meals and nappy changes, manage staff and rosters, capture documents, images, and video, and support back-office operations with comprehensive Customer Relationship Management (CRM) and payment platforms. By listening carefully to the needs of educators and parents, we developed OWNA to meet the requirements of both groups.

MongoDB Atlas enabled OWNA to scale and adapt to new customer needs

MongoDB has been foundational to OWNA's success. We needed a database that was easy to set up, used few system resources, and didn't get in the way as we added features. MongoDB met those needs with flexible data structures without compromising performance. One of the key benefits of building on the MongoDB foundation is the ability to adapt the database to meet new customer needs. For example, when it came to recording when children ate, teachers initially recorded a simple yes or no in a field. However, we were able to change that field type, on the fly, into a field that allowed educators to enter how much of a meal was eaten. That change was important to parents and gave educators the ability to communicate more clearly with parents and carers. As the app's popularity grew, we wanted to ensure OWNA was secure, scalable, and resilient. While MongoDB's self-managed database was a great platform to start our journey with OWNA on, as we grew we needed something to enable the business to scale and free up even more developer time.
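The on-the-fly field evolution described above can be sketched as a small migration. The field and document shapes here are illustrative, not OWNA's actual schema:

```python
def upgrade_meal_record(doc):
    """Hypothetical migration: turn a yes/no 'ate' flag into a richer
    sub-document recording how much of the meal was eaten."""
    ate = doc.pop("ate", None)
    doc["meal"] = {
        "eaten": bool(ate),
        "portion": "all" if ate else "none",  # educators can later refine this
    }
    return doc
```

Because documents with the old and new shapes can coexist in the same collection, a change like this can be rolled out gradually with no downtime and no schema migration step.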
It was at this point that we started looking at MongoDB Atlas, as the managed service meant almost all of the operational and management burden was either completely removed or reduced to a few clicks. Moving to Atlas gave us the power not only to scale the application to more clients but also to increase our developer productivity, which meant we could focus our efforts on building an even better app. We could devote resources to development and customer support rather than managing the database. This shift enabled OWNA to scale more effectively, and the superior business continuity, with increased uptime and better resilience, had a direct positive impact on our customers too. MongoDB Atlas lets us take advantage of multiple cloud providers. In our case, we use Microsoft Azure and Google Cloud Platform, depending on the region or service we’re looking for.

MongoDB Atlas enables global growth and expansion of OWNA's services

The platform we’ve built on is now powering our next wave of innovation and development. For example, we’re launching the Family Marketplace – an online store for parents and educators. They’ll be able to order supplies such as nappies, stationery, craft supplies, and other essentials directly from OWNA. MongoDB Atlas will be the foundation, and we'll use MongoDB Atlas Search so that users can find products and receive recommendations, making it easy for educators and parents to find the items they need. Using MongoDB Atlas Search eliminates the need for OWNA to run a separate search system alongside the database. This simplifies the architecture and helps developers focus on value rather than managing data integration and syncing. The entire process will be handled within OWNA. Goods will be delivered directly to the center. For parents, this eliminates squeezing trips to shops between drop-offs, pick-ups, and work. The story for us doesn’t stop with OWNA. We’re also creating two new apps that are built on MongoDB.
ERLY is a workforce management tool that enables small businesses to manage recruitment, rosters, payroll, and other key activities. And, by listening to educators who use OWNA, we learned that there was a desire for an app where qualified childcare workers could offer their services as babysitters. That led to the development of Nurture – a service that connects parents to babysitters. MongoDB’s tools let us develop apps with less code. The apps we create are easy to maintain, and we can develop new features faster than with other platforms. The development and growth of OWNA has, from the first moment, been powered by MongoDB. The ability to quickly develop apps and features, easily maintain the apps, and deploy them either on-premises, using hybrid infrastructure, or wholly in the cloud has enabled OWNA to grow and expand globally.

Kheang Ly is the founder & CTO of OWNA, overseeing the entire OWNA operation and building the best and most innovative platform. Learn more about OWNA.
Building AI with MongoDB: How VISO TRUST is Transforming Cyber Risk Intelligence
Since announcing MongoDB Atlas Vector Search preview availability back in June, we’ve seen rapid adoption from developers building a wide range of AI-enabled apps. Today we're going to talk to one of these customers. VISO TRUST puts reliable, comprehensive, actionable vendor security information directly in the hands of decision-makers who need to make informed risk assessments. The company uses a combination of state-of-the-art models from OpenAI, Hugging Face, Anthropic, Google, and AWS, augmented by vector search and retrieval from MongoDB Atlas. We sat down with Pierce Lamb, Senior Software Engineer on the Data and Machine Learning team at VISO TRUST, to learn more.

Tell us a little bit about your company. What are you trying to accomplish, and how does that benefit your customers or society more broadly?

VISO TRUST is an AI-powered third-party cyber risk and trust platform that enables any company to access actionable vendor security information in minutes. VISO TRUST delivers the fast and accurate intelligence needed to make informed cybersecurity risk decisions at scale for companies at any maturity level. Our commitment to innovation means that we are constantly looking for ways to optimize business value for our customers. VISO TRUST ensures that complex business-to-business (B2B) transactions adequately protect the confidentiality, integrity, and availability of trusted information. VISO TRUST’s mission is to become the largest global provider of cyber risk intelligence and the intermediary for business transactions. Through the use of VISO TRUST, customers will reduce their threat surface in B2B transactions with vendors and thereby reduce their overall risk posture and potential security incidents like breaches, malicious injections, and more. Today VISO TRUST has many great enterprise customers like Instacart, Gusto, and Upwork, and they all say the same thing: 90% less work, 80% reduction in time to assess risk, and near 100% vendor adoption.
Because it’s the only approach that can deliver accurate results at scale, for the first time customers are able to gain complete visibility into their entire third-party populations and take control of their third-party risk.

Describe what your application does and what role AI plays in it

The VISO TRUST Platform uses patented, proprietary machine learning and a team of highly qualified third-party risk professionals to automate this process at scale. Simply put, VISO TRUST automates vendor due diligence and reduces third-party risk at scale. And security teams can stop chasing vendors, reading documents, or analyzing spreadsheets.

Figure 1: VISO TRUST is the only SaaS third-party cyber risk management platform that delivers the rapid security intelligence needed for modern companies to make critical risk decisions early in the procurement process

The VISO TRUST Platform easily engages third parties, saving everyone time and resources. In a 5-minute web-based session, third parties are prompted to upload relevant artifacts of the security program that already exists, and our supervised AI – which we call Artifact Intelligence – does the rest. Security artifacts that enter VISO’s Artifact Intelligence pipeline interact with AI/ML in three primary ways. First, VISO deploys discriminator models that produce high-confidence predictions about features of the artifact. For example, one model performs artifact classification, another detects organizations inside the artifact, another predicts which pages are likely to contain security controls, and more. Our modules reference a comprehensive set of over 25 security frameworks and use document heuristics and natural language processing to analyze any written material and extract all relevant control information.
Second, artifacts have text content parsed out of them in the form of sentences, paragraphs, headers, table rows, and more; these text blobs are embedded and stored in MongoDB Atlas to become part of our dense retrieval system. This dense retrieval system performs retrieval-augmented generation (RAG) using MongoDB features like Atlas Vector Search to provide ranked context to large language model (LLM) prompts. Third, we use RAG results to seed LLM prompts and chain together their outputs to produce extremely accurate factual information about the artifact in the pipeline. This information provides instant intelligence to customers that previously took weeks to produce. VISO TRUST’s risk model analyzes your risk and delivers a complete assessment that provides everything you need to know to make qualified risk decisions about the relationship. In addition, the platform continuously monitors and reassesses third-party vendors to ensure compliance.

What specific AI/ML techniques, algorithms, or models are utilized in your application?

For our discriminator models, we research state-of-the-art pre-trained models (typically narrowed to those contained in Hugging Face’s transformers package) and fine-tune these models using our dataset. For our dense retrieval system, we use MongoDB Atlas Vector Search, which internally uses the Hierarchical Navigable Small World (HNSW) algorithm to retrieve embeddings similar to the embedded text content. We have plans to perform re-ranking of these results as well. For our LLM system, we have experimented with GPT-3.5-turbo, GPT-4, Claude 1 & 2, Bard, Vertex, and Bedrock. We blend a variety of these based on our customers' accuracy, latency, and security needs.

Can you describe other AI technologies used in your application stack?

Some of the other frameworks we use are Hugging Face transformers, evaluate, accelerate, and datasets, as well as PyTorch, WandB, and Amazon SageMaker.
We have a library for ML experiments (fine-tuning) that is custom-built, a library for workflow orchestration that is custom-built, and all of our prompt engineering is custom-built.

Why did you choose MongoDB as part of your application stack? Which MongoDB features are you using, and where are you running MongoDB?

The VISO TRUST Platform relies on effective solutions and tools like MongoDB's distinctive attributes to fulfill specific objectives. MongoDB supports our platform's mechanism to engage third parties efficiently, employing both AI and human oversight to automate the assessment of security artifacts at scale. The fundamental value proposition of MongoDB – a robust document database – is why we originally chose it. It was originally deployed as a storage/retrieval mechanism for all the factual information our Artifact Intelligence pipeline produces about artifacts. While it still performs this function today, it has now become our “vector/metadata database.” MongoDB executes fast ranking of large quantities of embedded text blobs for us, while Atlas provides us with all the ease of use of a cloud-ready database. We use both the Atlas search index visualization and the query profiler visualization daily. Even just the basic display of a few documents in collections often saves time. Finally, when we recently backfilled embeddings across one of our MongoDB deployments, Atlas would automatically provision more disk space for large indexes without us needing to be around, which was incredibly helpful.

What are the benefits you've achieved by using MongoDB?

I would say there are two primary benefits that have greatly helped us with respect to MongoDB and Atlas. First, MongoDB was already a place where we were storing metadata about artifacts in our system; with the introduction of Atlas Vector Search, we now have a comprehensive vector/metadata database – one that’s been battle-tested over a decade – that solves our dense retrieval needs.
There’s no need to deploy a new database that we have to manage and learn. Our vectors and artifact metadata can be stored right next to each other. Second, Atlas has been helpful in making all the painful parts of database management easy. Creating indexes, provisioning capacity, alerting on slow queries, visualizing data, and much more have saved us time and allowed us to focus on more important things.

What are your future plans for new applications, and how does MongoDB fit into them?

Retrieval-augmented generation is going to continue to be a first-class feature of our application. In this regard, the evolution of Atlas Vector Search and its ecosystem in MongoDB will be highly relevant to us. MongoDB has become the database our ML team uses, so as our ML footprint expands, our use of MongoDB will expand.

Getting started

Thanks so much to Pierce for sharing details on VISO TRUST’s AI-powered applications and experiences with MongoDB. The best way to get started with Atlas Vector Search is to head over to the product page. There you will find tutorials, documentation, and whitepapers, along with the ability to sign up for MongoDB Atlas. You’ll just be a few clicks away from spinning up your own vector search engine where you can experiment with the power of vector embeddings and RAG. We’d love to see what you build, and we are eager for any feedback that will make the product even better in the future!
The Challenges and Opportunities of Processing Streaming Data
Let’s consider a fictitious bank that has a credit card offering for its customers. Transactional data might land in its database from various sources, such as a REST API call from a web application or a serverless function call made by a cash machine. Regardless of how the data was written to the database, the database performed its job and made the data available for querying by the end user or application. The mechanics are database-specific, but the end goal of all databases is the same: once data is in a database, the bank can query it and obtain business value from it. In the beginning, this architecture worked well, but over time customer usage grew and the bank found it difficult to manage the volume of transactions. The company decides to do what many customers in this scenario do and adopts an event streaming platform like Apache Kafka to queue this event data. Kafka provides a highly scalable event streaming platform capable of managing large data volumes without putting debilitating pressure on traditional databases. With this new design, the bank could now scale, supporting more customers and product offerings. Life was great until some customers started complaining about unrecognized transactions occurring on their cards. Customers were refusing to pay for these, and the bank was starting to spend lots of resources figuring out how to manage these fraudulent charges. After all, by the time the data gets written into the database and batch loaded into the systems that can process it, the user's credit card has already been charged, perhaps a few times over. However, hope is not lost. The bank realized that if it could query the transactional event data as it flows into the database, it might be able to compare the data with the user's historical spending, as well as geolocation information, to make a real-time determination of whether the transaction is suspicious and warrants further confirmation by the customer.
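This kind of in-flight check can be pictured with a toy rule: compare each incoming transaction against the cardholder's historical spending and home location before the charge settles. The thresholds and field names below are illustrative assumptions, not any real bank's fraud logic, and a production system would use far richer signals:

```python
# Toy sketch of an in-flight fraud check on a streaming transaction.
# `history` stands in for the aggregated historical data the bank
# would keep per cardholder; all names and thresholds are assumptions.

def is_suspicious(txn: dict, history: dict) -> bool:
    """Flag a transaction for customer confirmation before it settles."""
    too_large = txn["amount"] > 3 * history["avg_amount"]       # spending spike
    far_away = txn["country"] != history["home_country"]        # unusual location
    return too_large or far_away

history = {"avg_amount": 40.0, "home_country": "US"}

assert not is_suspicious({"amount": 55.0, "country": "US"}, history)
assert is_suspicious({"amount": 900.0, "country": "US"}, history)  # amount spike
assert is_suspicious({"amount": 25.0, "country": "BR"}, history)   # unusual location
```

The point is when the check runs, not how clever it is: evaluated on the stream, even a simple rule like this can hold a charge for confirmation instead of discovering the fraud after the nightly batch load.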
This ability to continuously query the stream of data is what stream processing is all about. From a developer's perspective, building applications that work with streaming data is challenging. They need to consider the following:

Different serialization formats: The data that arrives in the stream may use different serialization formats such as JSON, Avro, Protobuf, or even binary.

Different schemas: Data originating from a variety of sources may contain slightly different schemas. A field like CustomerID could be customerId from one source and CustID in another, and a third source might not use the field at all.

Late-arriving data: The data itself could arrive late due to network latency issues, or arrive completely out of order.

Operational complexity: Developers need to react to application state changes, like failed connections to data sources, and work out how to efficiently scale the application to meet the demands of the business.

Security: In larger enterprises, the developer usually doesn’t have access to production data. This makes troubleshooting and building queries from this data difficult.

Stream processing can help address these challenges and enable real-time use cases, such as fraud detection, hyper-personalization, and predictive maintenance, that are otherwise difficult or extremely costly to build. While many stream processing solutions exist, the flexibility of the document model and the power of the aggregation framework are naturally well suited to help developers with the challenges found in complex event data.

Discover MongoDB Atlas Stream Processing

Check out the MongoDB Atlas Stream Processing announcement blog post. Request private preview access to Atlas Stream Processing: learn more about Atlas Stream Processing and request access to participate in the private preview once it opens to developers. New to MongoDB? Get started for free today by signing up for MongoDB Atlas.
Building AI with MongoDB: From Compliance to Copilots
There has been a lot of recent reporting on the desire to regulate AI. But very little has been made of how AI itself can assist with regulatory compliance. In our latest round-up of qualifiers for the MongoDB AI Innovators Program, we feature a company that is doing just that in one of the world’s most heavily regulated industries. Helping comply with regulations is just one way AI can assist us. We hear a lot about copilots coaching developers to write higher-quality code faster. But this isn’t the only domain where AI-powered copilots can shine. To round out this blog post, we provide two additional examples – a copilot for product managers that helps them define better specifications, and a copilot for sales teams that helps them better engage customers. We launched the MongoDB AI Innovators Program back in June this year to help companies like these “build the next big thing” in AI. Whether a freshly minted start-up or an established enterprise, you can benefit from the program, so go ahead and sign up. In the meantime, let's explore how innovators are using MongoDB for use cases ranging from compliance to copilots.

AI-powered compliance for real-time healthcare data

Inovaare transforms complex compliance processes by designing configurable AI-driven automation solutions. These solutions help healthcare organizations collect real-time data across internal and external departments, creating one compliance management system. Founded 10 years ago and now with 250 employees, Inovaare offers a comprehensive suite of HIPAA-compliant software solutions that enables healthcare organizations across the Americas to efficiently meet their unique business and regulatory requirements. They can sustain audit readiness, reduce non-compliance risks, and lower overall operating costs. Inovaare uses classic and generative AI models to power a range of services.
Custom models are built with PyTorch, while LLMs are built with transformers from Hugging Face and developed and orchestrated with LangChain. MongoDB Atlas powers the models’ underlying data layer. Models are used for document classification along with information extraction and enrichment. Healthcare professionals can work with this data in multiple ways, including semantic search and the company’s question-answering chatbot. A standalone vector database was originally used to store and retrieve each document’s vector embeddings as part of in-context model prompting. Now Inovaare has migrated to Atlas Vector Search. This migration helps the company’s developers build faster through tight vector integration with the transactional, analytical, and full-text search data services provided by the MongoDB Atlas platform.

Next-generation healthcare compliance platform from Inovaare: the platform provides AI-powered health plan solutions with continuous monitoring, regulatory reporting, and business intelligence.

Inovaare also uses AI agents to orchestrate complex workflows across multiple healthcare business processes, with data collected from each process stored in the MongoDB Atlas database. Business users can visualize the latest state of healthcare data, with natural language questions translated by LLMs and sent to Atlas Charts for dashboarding. Inovaare selected MongoDB because its flexible document data model enables the company's developers to store and query data of any structure. This, coupled with Atlas’ HIPAA compliance, end-to-end data encryption, and the freedom to run on any cloud – supporting almost any application workload – helps the company innovate and release with higher velocity and at lower cost than stitching together an assortment of disparate databases and search engines. Going forward, Inovaare plans to expand into other regions and compliance use cases.
As part of MongoDB’s AI Innovators Program, the company’s engineers get to work with MongoDB specialists at every stage of their journey.

The AI copilot for product managers

The ultimate goal of any venture is to create and deliver meaningful value while achieving product-market fit. Ventecon's AI Copilot supports product managers in their mission to craft market-leading products and solutions that contribute to a better future for all. Hundreds of bots currently crawl the Internet, identifying and processing over 1,000,000 pieces of content every day. This content includes details on product offerings, features, user stories, reviews, scenarios, acceptance criteria, and issues, drawn from market research data from target industries. Processed data is stored in MongoDB. Here it is used by Ventecon’s proprietary NLP models to assist product managers in generating and refining product specifications directly within an AI-powered virtual space. Patrick Beckedorf, co-founder of Ventecon, says “Product data is highly context-specific, and so we have to pre-train foundation models with specific product management goals, fine-tune with contextual product data, include context over time, and keep it up to date. In doing so, every product manager gets a digital, highly contextualized expert buddy.” Currently, vector embeddings from the product data stored in MongoDB are indexed and queried in a standalone vector database. As Beckedorf says, the engineering team is now exploring a more integrated approach. “The complexity of keeping vector embeddings synchronized across both source and vector databases, coupled with the overhead of running the vector store, ties up engineering resources and may affect indexing and search performance. A solid architecture therefore provides opportunities to process and provide new knowledge very fast, i.e. in Retrieval-Augmented Generation (RAG), while bottlenecks in the architecture may introduce risks, especially at scale.
This is why we are evaluating Atlas Vector Search to bring source data and vectors together in a single data layer. We can use Atlas Triggers to call our embedding models as soon as new data is inserted into the MongoDB database. That means we can have those embeddings back in MongoDB and available for querying almost immediately.” For Beckedorf, the collaboration with data pioneers and the co-creation opportunities with MongoDB are the most valuable aspects of the AI Innovators Program.

AI sales email coaching: 2x reply rates in half the time

Lavender is an AI sales email coach. It assists users in real time to write better emails faster. Sales teams who use Lavender report that they’re able to write emails in less time and receive twice as many replies. The tool uses generative AI to help compose emails. It personalizes introductions for each recipient and scores each email as it is being written, identifying anything that hurts the chances of a reply. Response rates are tracked so that teams can monitor progress and continuously improve performance using data-backed insights. OpenAI’s GPT LLMs, along with ChatGPT, collaboratively generate email copy with the user. The output is then analyzed and scored through a complex set of business logic layers built by Lavender’s data science team, yielding industry-leading, high-quality emails. Together, the custom and generative models help write subject lines, remove jargon and fix grammar, simplify unwieldy sentences, and optimize formatting for mobile devices. They can also retrieve the recipient’s (and their company’s) latest publicly posted information to help personalize and enrich outreach. MongoDB Atlas running on Google Cloud backs the platform. Lavender’s engineers selected MongoDB because of the flexibility of its document data model. They can add fields on demand, without lengthy schema migrations, and can store data of any structure.
This includes structured data, such as user profiles and response tracking metrics, through to semi-structured and unstructured email copy and associated ML-generated scores. The team is now exploring Atlas Vector Search to further augment LLM outputs by retrieving similar emails that have performed well. Storing, syncing, and querying vector embeddings right alongside application data will help the company’s engineers build new features faster while reducing technology sprawl.

What's next?

We have more places left in our AI Innovators Program, but they are filling up fast, so sign up directly on the program’s web page. We are accepting applications from a diverse range of AI use cases. To get a flavor of that diversity, take a look at our blog post announcing the first program qualifiers who are building AI with MongoDB. You’ll see use cases that take AI to the network edge for computer vision and Augmented Reality (AR), risk modeling for public safety, and predictive maintenance paired with question-answering systems for maritime operators. Also, check out our MongoDB for Artificial Intelligence resources page for the latest best practices to get you started in turning your idea into AI-driven reality.
Powerful Generative AI Innovation Accelerates Discovery of New Molecules
Since 2018, MongoDB and Google Cloud have collaborated to revolutionize the way companies interact with their data, providing an unrivaled experience in Google Cloud regions around the world through a strategic partnership. By delivering MongoDB's popular Atlas developer data platform and deep integrations with Google's data cloud to customers, the two companies are empowering businesses to create applications at scale with unprecedented data richness, all available through the Google Cloud Marketplace. This strategic partnership is bearing fruit. In the chemical industry, for example, users are now combining AI and data mining techniques using MongoDB Atlas with Google Cloud's foundation models to accelerate the discovery of new molecules and make the process more environmentally friendly.

The next big step in generative AI

Developers can use the powerful capabilities of MongoDB Atlas Vector Search and Google Cloud foundation models to quickly and easily build applications with AI-powered features that enable highly personalized and engaging end-user experiences. Vertex AI provides the text embedding API to generate embeddings from customer data stored in MongoDB Atlas. Atlas Vector Search is a fully managed service that simplifies the process of effectively indexing this high-dimensional embedding data within MongoDB and performing fast vector similarity searches. This, combined with Google’s PaLM, can be used to create advanced functionality like semantic search, classification, outlier detection, AI-powered chatbots, and text summarization, enabling developers to quickly build and scale next-generation applications. With Atlas Vector Search, developers can build intelligent applications powered by generative AI over any type of data. MongoDB Atlas Vector Search combined with Google Cloud foundation models integrates the operational database, vector search, and LLMs into a single, unified, and fully managed platform.
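The embed-then-rank idea behind semantic search can be pictured with a toy brute-force sketch. In production, the Vertex AI text embedding API would produce the vectors and Atlas Vector Search would do the indexed ranking; here the tiny 3-dimensional "embeddings" are made up for illustration, and cosine similarity is computed by hand:

```python
import math

# Toy semantic search: rank stored "document embeddings" by cosine
# similarity to a "query embedding". Real embeddings have hundreds of
# dimensions; these 3-d vectors are invented purely for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

documents = {
    "access control policy": [0.9, 0.1, 0.0],
    "encryption at rest":    [0.1, 0.9, 0.1],
    "office lunch menu":     [0.0, 0.1, 0.9],
}
query_vec = [0.8, 0.2, 0.0]  # pretend embedding of "who can access data?"

ranked = sorted(documents, key=lambda d: cosine(query_vec, documents[d]), reverse=True)
assert ranked[0] == "access control policy"
```

A managed vector index replaces the brute-force `sorted` scan with an approximate-nearest-neighbor lookup, which is what makes the same idea workable over millions of documents.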
You can find out more about using Vector Search with Google Cloud foundation models from our information hub.

Accelerating the discovery of new molecules with generative AI

Every day brings new innovations in generative AI, and Vector Search is no exception. Developers are now using Google Cloud foundation models and MongoDB Atlas Vector Search to bring inventive applications to growing industries. In one example, MongoDB Partner Exafluence is utilizing AI and data mining techniques with MongoDB Atlas Vector Search and foundation models from Google Cloud to help joint customer Anupam Rasayan discover new molecules. India-based Anupam Rasayan is one of the leading companies engaged in the custom synthesis and manufacturing of specialty chemicals. The new platform, called Exf ChemXpert, includes configurable components for a wide variety of applications in the chemical industry, such as property prediction to help design new molecules, chemical reaction optimization to make developing molecules more environmentally friendly, and novel drug discovery to develop new treatments in the pharmaceutical industry.

The home page for Exf ChemXpert, a one-stop platform that's accelerating the discovery of new molecules

According to Anand Desai, Managing Director of Anupam Rasayan, this powerful new integration shows great promise. "In the world of chemistry, LLMs are potential game-changers for day-to-day product research and optimization of reaction mechanisms and operations," Desai said. "It can speed up new product and process innovations, reduce usage factors and costs, and push R&D to new heights beyond conventional methods. This transformation driven by new generative AI-powered tools will bring in a shift from conventional perspectives and benefit the chemical industry at large." In the Retrosynthesis Planner, the user asks a question about how to synthesize the chemical acrylamide.
Next steps

Generative AI represents a significant opportunity for developers to create new applications and experiences and to add real business value for customers. For more about building applications on MongoDB Atlas with Google Cloud foundation models, including demos where you can see generative AI in action with MongoDB Atlas on Google Cloud, visit this special information hub. To get started running MongoDB on Google Cloud, visit the Google Cloud Marketplace.
How to Enhance Inventory Management with Real-Time Data Strategies
In the competitive retail landscape, having the right stock in the right place at the right time is crucial. However, the retail industry faces significant challenges in achieving this goal. In 2022, unsold stock in the US surged by a staggering $78 billion, reaching approximately $740 billion, a shocking 12 percent increase. Without a single view of inventory, retailers struggle to compete with new market disruptors offering customers omnichannel experiences. Retailers who get stock management right can move to distributed supply chains, leveraging stock across online and in-store platforms to distribute inventory quickly and react to shifting buying patterns. With effective access to the data, retailers speed up workforce efficiency and enable automation. In this blog, we will explore how inventory management affects customer experiences, how effective stock management enables accurate demand forecasting, and how it improves workforce productivity.

Building a single view of inventory to enhance customer experience

Modern retail consumers expect seamless omnichannel experiences, like the ability to view product availability online and pick the product up at a nearby store the next day. They will gravitate toward retailers that prioritize their need for convenience and speed. The difficulty in delivering these features often stems from the lack of a centralized inventory hub, i.e. operating with separate inventories for online and in-store. Combining data from diverse sources, including vendor solutions, RDBMS databases, and files, becomes a complex task that hampers the ability to achieve an accurate real-time view of stock availability. It also extends the time to market for new features, requiring redundant and customized development efforts across different channels. This lack of adaptability impacts the retailer's ability to offer customer-centric features, putting them at a disadvantage compared to their competitors.
To track inventory in real time and improve visibility and consistency across multiple channels and locations, MongoDB’s document data model is a powerful choice. Using the document model, data types can be combined easily, making it more flexible for handling diverse product data. Its intuitive design enables developers to iterate on the data model at the same pace as the rest of the code base, without downtime for schema changes. This agility accelerates the implementation of new features and functionalities that can be built on top of a single view of inventory, like real-time stock availability and buying online to pick up in-store the next day.

Figure 1: Enabling buy online and pick up in-store through single-view inventory

By leveraging a single view of inventory, retailers can accelerate the development of superior customer experiences, securing a competitive edge in the retail industry.

Effective stock management with real-time analytics

Now that the retailer can see and understand inventory levels across their organization in one place, they can begin to manage stock more effectively. This enables retailers to move to a more complex distributed supply chain and activate the use of real-time analytics or AI. In a traditional retailer without a centralized inventory management system, mixing stock between channels was too difficult in a segmented data landscape, leading to waste through dead stock in some stores while other stores or online channels had insufficient supply of the same item. With a single view of inventory, items can be moved around in a way that makes sense for the business. Online orders destined for in-store pick-up might be packed using in-store items. Dead stock on a shelf might be made available online. Stores can move stock between themselves in an intelligent manner. The added complexity does come with more complex decision-making.
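As a sketch, the single-view inventory record described above might combine online and per-store stock for one SKU in a single document. The field names and values are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical single-view inventory document: one record per SKU,
# with availability for every channel embedded. All names are invented
# for illustration.
item = {
    "sku": "TSHIRT-RED-M",
    "name": "Red T-Shirt (M)",
    "stock": [
        {"location": "online",   "quantity": 120},
        {"location": "store-01", "quantity": 8},
        {"location": "store-02", "quantity": 0},  # dead-stock candidate
    ],
}

# Total availability across all channels, as a buy-online /
# pick-up-in-store check might compute it.
total = sum(s["quantity"] for s in item["stock"])
assert total == 128
```

Because every channel lives in one document, questions like "can this order be fulfilled anywhere?" become a single read instead of a reconciliation across separate online and in-store systems.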
It's vital to be able to ask difficult questions of the inventory management system and get answers in real time. Rather than move data to a separate analytical platform and get answers a day later, retailers want to run real-time analysis and make important stock allocation decisions immediately. The next step is demand forecasting and bringing intelligence into stock allocation. This is where a translytical data platform comes in. Its distributed architecture means analytical workloads can run on a dedicated real-time analytics node, eliminating the need for additional systems such as separate analytics platforms and the lag associated with transferring data between them. The aggregation framework, MongoDB's advanced processing pipeline, can then be used to ask complex analytical questions and return results to the user in real time. For example, retailers can easily see which products are the most popular or the most likely to run out of stock soon, or understand whether a product rapidly selling out in one store is a trend or tied to a specific event like a sports game. This insight guides smart decisions on redistributing products to get them in front of the customers most likely to buy.

Figure 2: Inventory real-time analytics

This architecture can also be leveraged to feed AI or machine learning models. As the supply chain grows more complex, retailers increasingly turn to cutting-edge technology for further insight. Demand forecasting is a great use case for AI, given the vast number of possible factors and outcomes. With MongoDB, retailers integrate AI systems with access to real-time data, enhancing their accuracy and responsiveness. This synergy enables businesses to streamline their supply chains.

Boost workforce efficiency through an event-driven solution

A successful inventory management strategy also contributes to improving workforce efficiency.
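As a sketch of the kind of question the aggregation framework can answer, the pipeline below flags products at risk of stocking out, assuming the single-view document shape described earlier. Collection and field names are assumptions; with a driver such as pymongo you would run it via `db.inventory.aggregate(low_stock_pipeline)`:

```python
# Hedged sketch of an aggregation pipeline over single-view inventory
# documents that embed a per-location "stock" array. Field names are
# illustrative assumptions.
LOW_STOCK_THRESHOLD = 5

low_stock_pipeline = [
    # Unwind the per-location stock array so each location is a row.
    {"$unwind": "$stock"},
    # Keep only locations below the threshold.
    {"$match": {"stock.quantity": {"$lt": LOW_STOCK_THRESHOLD}}},
    # Regroup per SKU, counting how many locations are running low
    # and how many units remain across those locations.
    {"$group": {
        "_id": "$sku",
        "lowLocations": {"$sum": 1},
        "remaining": {"$sum": "$stock.quantity"},
    }},
    # Most urgent (fewest remaining units) first.
    {"$sort": {"remaining": 1}},
]

# Each stage is a single-key document naming the operator it applies.
print([name for stage in low_stock_pipeline for name in stage])
# ['$unwind', '$match', '$group', '$sort']
```

Because the pipeline runs where the operational data lives, the answer reflects the current state of stock rather than yesterday's export.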
The lack of real-time updates leads to inefficient inventory tracking procedures that produce errors, such as excess or unavailable goods, and hinder customer orders, frustrating staff and customers alike. As the business grows and sales volume increases, the ability to process large amounts of real-time data becomes increasingly important. A future-proof, scalable, and flexible architecture supporting the tools that empower your workforce can make the difference when retailers face a peak in demand or decide to expand the business. The central question retailers face is, "How can businesses enhance workforce efficiency in their inventory operations?" The key lies in using event-driven architectures for inventory systems. MongoDB is a great fit for this approach, offering features like Change Streams, Triggers, and the Kafka Connector. Take, for example, the scenario in Figure 3: a customer purchases a t-shirt in-store. The point-of-sale device instantly updates the product stock. If stock runs low, the change is pushed through Change Streams to the store manager app to alert the store manager. To automate the re-ordering process, MongoDB Triggers can invoke a function that performs complex actions in response to the event, like automatically reordering products.

Figure 3: Event-driven architecture for inventory management

Today, when an influencer mentions a particular item, it can fly off the shelves at an unforeseen pace. Thanks to the automation enabled by event-driven architectures, such situations become opportunities rather than challenges. As soon as the item unexpectedly goes out of stock, the system triggers an automatic reorder, ensuring that your shelves are replenished in real time. This rapid response eliminates the need for manual intervention, freeing up your store manager to focus on higher-value activities.
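The decision logic behind that Figure 3 flow can be sketched as follows. In production the events would come from a MongoDB change stream (e.g. iterating over `db.inventory.watch()`); here the reorder check is tested against a hand-built event shaped like a change stream document, and the field names inside `updatedFields` are assumptions:

```python
# Hedged sketch of low-stock alerting logic driven by change events.
# The event shape mirrors MongoDB change stream documents; the stock
# field naming is an illustrative assumption.
LOW_STOCK_THRESHOLD = 5

def needs_reorder(change_event, threshold=LOW_STOCK_THRESHOLD):
    """Return True when an update drops a quantity below the threshold."""
    if change_event.get("operationType") != "update":
        return False
    updated = change_event.get("updateDescription", {}).get("updatedFields", {})
    # Any updated quantity field falling below the threshold triggers a reorder.
    return any(
        field.endswith("quantity") and value < threshold
        for field, value in updated.items()
    )

# A point-of-sale purchase reduces one store's quantity to 2 units.
sale_event = {
    "operationType": "update",
    "documentKey": {"_id": "HB-2041"},
    "updateDescription": {"updatedFields": {"stock.1.quantity": 2}},
}
print(needs_reorder(sale_event))  # True: 2 is below the threshold of 5
```

A Trigger or change stream consumer calling a function like this is what turns an unexpected sell-out into an automatic reorder instead of a manual task.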
Instead of spending hours every day reordering items, they can dive into more engaging tasks, like interacting with customers, providing personalized recommendations, and exploring innovative stock decisions. This isn't just a theoretical advantage. A prime example comes from MongoDB's work with 7-Eleven. By implementing a custom inventory management app, 7-Eleven streamlined its operations across 10,000 stores in the U.S. and Canada. With event-driven functionality, 7-Eleven store employees can now seamlessly manage transactions, sales, and inventory through mobile devices, eliminating the need for manual updates and improving overall workforce efficiency.

Closing the loop for a future-proof inventory management strategy

Effective inventory management strategies are vital in the evolving retail landscape. By providing a consistent single view of inventory, retailers can enhance customer experiences and gain a competitive edge. With efficient stock management capabilities, they can optimize inventory levels, reducing costs and improving profitability. And by embracing event-driven solutions, retailers can boost workforce efficiency, enabling data-driven decision-making and streamlining processes through automation.
Empowering Automotive Developers for the Road Ahead
MongoDB 7.0 is here, and companies across industries are benefiting from being early adopters of cutting-edge data platform technology. Let's take a closer look at the automotive industry specifically, and how many of MongoDB's new features and capabilities can revolutionize the way automotive developers build, iterate, and scale their applications. In the fast-changing automotive landscape, development teams face the challenge of delivering compelling user experiences faster and smarter than ever before. MongoDB's developer data platform becomes a vital tool for developers striving to innovate quickly and efficiently, supporting a wide range of application use cases while streamlining development and ensuring optimal performance.

MongoDB Atlas Stream Processing

MongoDB Atlas Stream Processing, coming soon in private preview, will be a game-changing advantage for the automotive industry, offering real-time data insights and rapid responses to critical events. As vehicles generate an ever-increasing stream of sensor data, this capability enables automotive developers to process, analyze, and act upon data in real time. Manufacturers and fleet management companies can monitor vehicle health, track performance, and optimize maintenance schedules on the fly, while proactive safety measures and anomaly detection ensure the utmost safety for drivers and passengers. Moreover, MongoDB Atlas Stream Processing enables developers to unlock the potential of connected car applications, where real-time data processing is imperative for intelligent navigation, personalized infotainment services, and efficient route planning.

MongoDB Atlas Vector Search

MongoDB Atlas Vector Search, currently in public preview, holds immense potential for revolutionizing the automotive industry.
By utilizing vector representations of unstructured data such as audio, images, and text, MongoDB Atlas enables developers to store, index, and query data based on similarity in high-dimensional vector spaces, alongside operational data. For the automotive industry, this unlocks a world of possibilities in data analysis, anomaly detection, and predictive maintenance. In fact, as mentioned in the MongoDB.local Chicago keynote, a top 10 auto manufacturer leveraged Vector Search to enable engine diagnostics based on engine audio. Watch the video below to learn more. Atlas Vector Search empowers automotive developers to create smarter, data-driven applications that deliver more relevant and accurate insights, ultimately enhancing the driving experience and safety for all. It also allows manufacturers to query and qualify possible equipment and product failure causes and get AI-generated recommendations on how to adjust operational parameters and extend the life of their equipment and products. The automotive industry thrives on innovation and efficiency, and Atlas Vector Search opens new avenues for optimizing vehicle performance, predicting maintenance needs, and enhancing overall user experiences on the road.

MongoDB Relational Migrator

In the ever-evolving automotive industry, legacy relational databases often pose challenges in scalability, flexibility, and performance. Relational databases are prevalent in manufacturing and can hinder innovation due to rigid data models and limited scalability. MongoDB Relational Migrator addresses these pain points by assisting with several critical steps on the path to modernization for automotive developers. By migrating data from common relational databases to MongoDB, automotive companies can break free from the limitations of legacy systems and embrace the full potential of a NoSQL database.
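To illustrate the similarity math underpinning vector search, here is a small sketch using cosine similarity. The three-dimensional "engine audio" embeddings are made-up toy values (real embeddings typically have hundreds of dimensions and come from a trained model), and this is not Atlas's internal implementation:

```python
# Toy illustration of similarity search: embed items as vectors, then
# rank them by cosine similarity to a query vector. Values are assumed.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings of reference engine audio clips.
healthy_engine = [0.9, 0.1, 0.2]
knocking_engine = [0.1, 0.9, 0.3]
query_clip = [0.85, 0.15, 0.25]  # a new recording to diagnose

# The query sits much closer to the healthy profile than the faulty one.
print(cosine_similarity(query_clip, healthy_engine) >
      cosine_similarity(query_clip, knocking_engine))  # True
```

In Atlas, the embeddings would be stored in document fields and indexed so that nearest-neighbor queries like this run server-side alongside operational data.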
This migration process streamlines data transfer, offers valuable data modeling recommendations, and empowers developers to refactor applications quickly and efficiently. Embracing MongoDB's flexible document data model optimizes performance, scales applications effortlessly, and unlocks the potential for real-time analytics, enabling the industry to stay ahead in the race for innovation. MongoDB Relational Migrator becomes a catalyst for transformative change in the automotive sector, enabling faster and more efficient data processing for mission-critical applications and paving the way for sophisticated AI-driven solutions. As automotive companies embrace data-driven insights and strive to deliver unparalleled user experiences, MongoDB Relational Migrator empowers the industry to leverage the full potential of NoSQL databases, letting automotive applications zoom ahead in the fast lane of innovation. MongoDB 7.0 promises to be a game-changer for developers across industries, empowering them to build innovative, scalable, and secure applications that drive the future. With the power of MongoDB, developers can accelerate their journey toward automotive innovation and build the vehicles and experiences of tomorrow. Watch the full MongoDB.local lineup to learn more.
Understanding the Costs of Serverless Architecture: Will it Save You Money?
As the digital landscape evolves, developers are constantly on the lookout for innovative ways to optimize their applications and deliver seamless user experiences. One approach that has gained popularity over the years is serverless architecture. By abstracting away server management and scaling concerns, serverless promises increased development efficiency, reduced operational overhead, and potential cost savings. However, before diving headfirst into this paradigm shift, it's crucial to understand the tradeoffs and costs associated with serverless architecture to know whether it's the right fit for your use case and budget requirements.

What is serverless architecture?

Let's first briefly review what serverless architecture entails. In traditional setups, developers manage servers, infrastructure provisioning, and scaling. By contrast, serverless architecture allows developers to focus solely on the business logic of their applications without worrying about the underlying infrastructure; the service provider handles server provisioning and scales dynamically based on the application's demand. A variety of technologies and services now fit the serverless model, including function-as-a-service (FaaS), API gateways, object storage, and even databases.

Understanding the cost model of serverless

When it comes to pricing, serverless solutions follow a usage-based model where you "only pay for what you use." Instead of fixed monthly fees for maintaining servers, you pay only for the actual computing resources consumed while executing your code. The primary cost factors vary slightly by service, but they typically meter on some form of the following:

Compute resources: The compute needed to execute and service your application workload.
Memory or storage allocation: The amount of memory allocated or overall data size being stored.
Data transfer: The data transferred in and out.
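The three metered dimensions above can be sketched as a simple cost estimate. The rates below are made-up placeholders for illustration, not any provider's actual pricing:

```python
# Hedged sketch of a usage-based serverless bill. All rates are assumed
# placeholder values, not real provider pricing.
RATE_COMPUTE_PER_MILLION_OPS = 0.20  # $ per million operations (assumed)
RATE_STORAGE_PER_GB_MONTH = 0.25     # $ per GB-month stored (assumed)
RATE_TRANSFER_PER_GB = 0.10          # $ per GB transferred (assumed)

def monthly_serverless_cost(million_ops, storage_gb, transfer_gb):
    """Estimate one month's bill from the three metered dimensions."""
    return (million_ops * RATE_COMPUTE_PER_MILLION_OPS
            + storage_gb * RATE_STORAGE_PER_GB_MONTH
            + transfer_gb * RATE_TRANSFER_PER_GB)

# 50M operations, 20 GB stored, 100 GB transferred:
print(round(monthly_serverless_cost(50, 20, 100), 2))  # 25.0
```

The key property this captures is that the bill scales with usage: a quiet month costs nearly nothing, while a busy month costs proportionally more.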
Cost comparison: Serverless vs. provisioned infrastructure

To determine whether serverless will save you money, you must evaluate your application's specific requirements and usage patterns. Serverless architecture can be cost-effective in certain scenarios, but it is not the optimal choice for every use case. With traditional provisioned infrastructure, you generally face upfront costs even before there is any traffic to your application, which means you will likely have much more capacity than you need. The same cycle repeats as your application grows and requires more resources: you scale up to a server that provides much more than you actually need. Serverless, on the other hand, removes the upfront cost and the risk of over-provisioning, since it simply scales as needed and you pay only for what you use. However, not all applications scale linearly, so whether your application is new or established, it's important to think through your usage patterns and requirements before going down this path. Here's a breakdown of cost considerations depending on your application's requirements and traffic patterns:

Low and variable workloads: If your application experiences irregular traffic patterns or low user demand, serverless can be highly cost-effective. You won't pay for idle server time, as the service provider automatically scales down to zero when there's no traffic.
High burst traffic: Serverless excels at handling sudden spikes in traffic. Provisioned infrastructure may require overprovisioning to handle peak loads, incurring unnecessary costs during normal usage.
Predictable workloads: For steady, predictable workloads, provisioned infrastructure with reserved instance capacity might be more cost-effective than serverless.
Short-lived tasks: For tasks that execute quickly and don't require significant resources, serverless can be more cost-efficient. Provisioned servers might incur higher costs due to minimum capacity or billing requirements.
Long-running tasks: If your application frequently executes tasks that run for extended periods, serverless may end up being more expensive in the long run. In these scenarios, provisioned infrastructure may be the more cost-effective option.

Optimizing costs in serverless architecture

Because serverless solutions are charged based on usage, proper optimization matters not only for performance but also for keeping costs as low as possible. Make sure you follow implementation best practices so the service runs smoothly and scales as efficiently as possible. What that means depends on the type of serverless service you are using. With a function-as-a-service platform like AWS Lambda, it may mean allocating the right amount of memory for your function or controlling the invocation frequency to minimize invocations. With a serverless database like MongoDB Atlas, it may mean modeling your data and structuring your queries to minimize the amount of data read from or written to the database. Regardless of the service, familiarize yourself with any best practices before jumping in.

Choosing the right solution for your needs

Serverless architecture offers developers a powerful way to streamline development and focus on building applications without worrying about infrastructure management, providing benefits far beyond cost savings alone. For certain use cases with varying workloads and short-lived tasks, serverless can indeed save you money.
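The tradeoff across the scenarios above boils down to a break-even comparison between a flat provisioned fee and a usage-proportional serverless bill. The prices in this sketch are illustrative assumptions, not real provider rates:

```python
# Hedged sketch of the serverless vs. provisioned break-even. The flat fee
# and per-operation rate are assumed placeholder values.
PROVISIONED_MONTHLY = 60.0         # $ flat fee for a provisioned tier (assumed)
SERVERLESS_PER_MILLION_OPS = 0.30  # $ per million operations (assumed)

def cheaper_option(million_ops_per_month):
    """Pick the lower-cost deployment for a given monthly traffic level."""
    serverless_cost = million_ops_per_month * SERVERLESS_PER_MILLION_OPS
    return "serverless" if serverless_cost < PROVISIONED_MONTHLY else "provisioned"

# Low or spiky traffic favors serverless; steady heavy traffic favors provisioned.
print(cheaper_option(50))   # serverless  ($15 vs $60 flat)
print(cheaper_option(400))  # provisioned ($120 vs $60 flat)
```

Under these assumed rates the break-even sits at 200 million operations per month; your own break-even depends entirely on your provider's pricing and traffic shape.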
However, it's crucial to assess your application's specific requirements and usage patterns to determine if serverless is the right fit for your needs. By understanding the cost model, comparing it with provisioned infrastructure, and implementing the proper cost optimization strategies, you can make an informed decision that aligns with your development goals and budget.

Get started with MongoDB Atlas

MongoDB Atlas gives developers flexibility, with both serverless and provisioned database deployments available to address your workload requirements, regardless of your app's traffic patterns or budget constraints. Try Serverless on MongoDB Atlas today.