Artificial Intelligence

Building AI-powered Apps with MongoDB

How Tavily Uses MongoDB to Enhance Agentic Workflows

As AI agents grow in popularity and are used in increasingly mission-critical ways, preventing hallucinations and giving agents up-to-date context is more important than ever. Context can come from many sources—prompts, documents, proprietary internal databases, and the internet itself. Among these sources, the internet stands out as uniquely valuable, a best-in-class resource for humans and LLMs alike due to its massive scale and constant updates. But how can large language models (LLMs) access the latest and greatest information from the internet? Enter Tavily, one of the companies at the heart of this effort. Tavily provides an easy way to connect the web to LLMs, giving them the answers and context they need to be even more useful. MongoDB had the opportunity to sit down with Rotem Weiss, CEO of Tavily, and Eyal Ben Barouch, Tavily’s Head of Data and AI, to talk about the company’s history, how Tavily uses MongoDB, and the future of agentic workflows.

Tavily’s origins

Tavily began in 2023 with a simple but powerful idea. "We started with an open source project called GPT Researcher," Weiss said. "It did something pretty simple—go to the web, do some research, get content, and write a report." That simplicity struck a chord. The project exploded, getting over 20,000 GitHub stars in under two years, signaling to the team that they had tapped into something developers desperately needed.

The viral success revealed a fundamental gap in how AI systems access information. "So many use cases today require real-time search, whether it's from the web or from your users," Weiss noted. "And that is basically RAG (retrieval-augmented generation)." "Developers are slowly realizing not everything is semantic, and that vector search alone cannot be the only solution for RAG," Weiss said. Indeed, for certain use cases, vector stores benefit from further context.
This insight, buttressed by breakthrough research around CRAG (Corrective RAG), pointed toward a future where systems automatically turn to the web to search when they lack sufficient information.

Solving the real-time knowledge problem

Consider the gap between static training data and our dynamic reality. Questions like "What is the weather today?" or "What was the score of the game last night?" require an injection of real-time information to accurately answer. Tavily's system fills this gap by providing AI agents with fresh, accurate data from the web, exactly when they need it. The challenge Tavily addresses goes beyond information retrieval. “Even if your model ‘knows’ the answer, it still needs to be sent in the right direction with grounded results—using Tavily makes your answers more robust,” Weiss explained.

The new internet graph

Weiss envisions a fundamental shift in how we think about the architecture of the web. "If you think about the new internet, it’s a fundamentally different thing. The internet used to be between people—you would send emails, you would search websites, etc. Now we have new players, the AI agents, who act as new nodes on the internet graph." These new nodes change everything. As they improve, AI agents can perform many of the same actions as humans, but with different needs and expectations. "Agents want different things than people want," Weiss explained. "They want answers; they don't need fancy UIs and a regular browser experience. They need a quick, scalable system to give them answers in real time. That's what Tavily gives you."

The company's focus remains deliberately narrow and deep. "We always want to stick to the infrastructure layer compared to our competitors, since you don't know where the industry is going," Weiss said. "If we focus on optimizing the latency, the accuracy, the scalability, that's what is going to win, and that's what we're focused on."

Figure 1. The road to insightful responses for users with TavilyHybridClient.

MongoDB: The foundation for speed and scale

To build their infrastructure, Tavily needed a database that could meet their ambitious performance requirements. For Weiss, the choice was both practical and personal. "MongoDB is the first database I ever used as a professional in my previous company," he said. "That's how I started, and I fell in love with MongoDB. It's amazing how flexible it is—it's so easy to implement everything." The document model, the foundation upon which MongoDB is built, allowed Tavily to build and scale an enterprise-grade solution quickly.

But familiarity alone didn't drive the decision. MongoDB Atlas had the performance characteristics Tavily required. "Latency is one of the things that we always optimize for, and MongoDB delivers excellent price performance," Tavily’s Ben Barouch explained. "The performance is much more similar to a hot cache than a cold cache. It's almost like it's in memory!"

The managed service aspect proved equally crucial. "MongoDB Atlas also saves a lot of engineering time," Weiss noted. In a fast-moving startup environment, MongoDB Atlas enabled Weiss to focus on building Tavily and not worry about the underlying data infrastructure. "Today, companies need to move extremely fast, and at very lean startups, you need to only focus on what you are building. MongoDB allows Tavily to focus on what matters most, our customers and our business."

Three pillars of success

The Tavily team highlighted three specific MongoDB Atlas characteristics that have become essential to their operations:

Vector search: Perhaps most importantly for the AI era, MongoDB's vector search capabilities allow it to be "the memory for agents." As Weiss put it, "The only place where a company can have an edge is their proprietary data. Every company can access the best models, every company can search the web, every company can have good agent orchestration. The only differentiation is utilizing your internal, proprietary data and injecting it in the fastest and most efficient way to the prompt." MongoDB, first with Atlas Vector Search and now with Hybrid Search, has effective ways of giving agents performant context, setting them apart from those built with other technologies.

Autoscaling: "Our system is built for a very fast-moving company, and we need to scale in a second," Weiss continued. "We don't need to waste time each week making changes that are done automatically by MongoDB Atlas."

Monitoring: "We have other systems where we need to do our own monitoring with other cloud providers, and it's a lot of work that MongoDB Atlas takes care of for us," Weiss explained. "MongoDB has great visibility."

Betting on proven innovation

Tavily has been impressed with the way MongoDB has kept a finger on the pulse of the evolving AI landscape and added features accordingly. “I believed that MongoDB would be up to date quickly, and I was right," Weiss said. "MongoDB quickly thought about vector search, about other features that I needed, and got them in the product. Not having to bolt on a separate vector database and having those capabilities natively in Atlas is a game changer for us."

Ben Barouch emphasized the strategic value of MongoDB’s entire ecosystem, including the community built around the database: "When everyone's offering the same solutions, they become the baseline, and then the things that MongoDB excels at, things like reliability and scalability, are really amplified. The community, especially, is great; MongoDB has excellent developer relations, so learning and using MongoDB is very easy."

The partnership between MongoDB and Tavily extends beyond technology to trust. "In this crazy market, where you have new tools every two hours and things are constantly changing, you want to make sure that you're choosing companies you trust to handle things correctly and fast," Weiss said. "I want a vendor where if I have feedback, I'm not afraid to say it, and they will listen."

Looking ahead: The multi-agent future

As Tavily continues building the infrastructure for AI agents to search the web, Weiss sees the next evolution already taking shape. "The future is going to be thinking about combining these one, two, three, four agents into a workflow that makes sense for specific use cases and specific companies. That will be the new developer experience."

This vision of orchestrated AI workflows represents just the beginning. With MongoDB Atlas providing the scalable, reliable foundation they need, Tavily is positioning itself at the center of a fundamental shift in how information flows through our digital world. The internet welcomed people first, then connected them in revolutionary ways. Now, as AI agents join the network, companies like Tavily are building the infrastructure to ensure this next chapter of digital evolution is both powerful and accessible. With MongoDB as their foundation, they're not just adapting to the future—they're building it.

Interested in building with MongoDB Atlas yourself? Try it today! Use Tavily for working memory in this MongoDB tutorial. Explore Tavily’s Crawl to RAG example.

August 5, 2025
Artificial Intelligence

Automotive Document Intelligence with MongoDB Atlas Search

Picture two scenarios happening simultaneously across the automotive industry: In a service bay, a technician searches frantically through multiple systems for the correct procedure to address an unfamiliar warning code. They need safety warnings, torque specifications, and part numbers—immediately. Instead, they’re lost in hundreds of PDF pages, risking safety violations and extending repair times. Meanwhile, a customer sits at home, trying to understand a dashboard warning light. They search their owner’s manual PDF, scroll through forums, and eventually call the dealership—waiting on hold just to ask a simple question about whether they can drive safely to their appointment.

Both scenarios represent massive inefficiencies in how automotive documentation is stored, accessed, and delivered. With technician shortages costing shops over $60,000 monthly per unfilled position, and 67% of customers preferring self-service options, the industry faces a critical gap between information availability and accessibility. We prototyped a solution that shows how you can transform static automotive manuals into intelligent, searchable knowledge bases using MongoDB Atlas. By combining flexible document storage with semantic search capabilities, you can create platforms that serve both technicians seeking repair procedures and customers looking for quick answers.

Building intelligent documentation systems

Automotive technical documentation presents unique challenges. Most existing systems have fixed, unchangeable data formats designed primarily for compliance rather than usability. These systems often vary across locations, lack integration with user profiles, and don’t support rapid data access. Organizations need to build custom ingestion pipelines that can process diverse documentation formats and create intelligent, searchable content. Success requires linking each interaction to user identity and storing information that supports immediate, personalized engagement.
MongoDB’s flexible document model enables developers to create highly enriched documentation chunks that go far beyond simple text storage. Each document can contain the original content alongside extensive metadata, including source references, safety classifications, procedural hierarchies, user permissions, version control, and contextual relationships. As your organizational needs evolve, you can add new fields and metadata structures without schema migrations or downtime, enabling documentation systems to adapt to changing business needs.

An alternative—or complementary—approach is using contextualized chunk embedding models like voyage-context-3. Instead of relying on manual metadata or context augmentation, this model generates vector embeddings that inherently capture full-document context for each chunk. It leads to higher retrieval accuracy, reduces sensitivity to chunking strategy, and simplifies the pipeline with no downstream changes. Whether you choose a metadata-rich approach, an embedding-first strategy, or both, MongoDB supports it all.

Figure 1. Document processing pipeline.

This flexibility proves essential when organizations have multiple documentation sources in different formats. Custom processing pipelines can normalize content from various systems while preserving the unique metadata and relationships that make each source valuable. MongoDB’s document structure naturally accommodates this complexity, storing structured technical specifications alongside unstructured procedural text and user interaction history—all queryable through a single interface.

Using a unified search that understands context

MongoDB Atlas provides three complementary search capabilities that work together to deliver intelligent responses:

MongoDB Atlas Search handles precise queries like part numbers and error codes. Technicians searching for a specific part number instantly find relevant diagnostic procedures, while customers typing “coolant warning light” get clear explanations.

MongoDB Atlas Vector Search understands intent and context. A customer asking “Why is my engine making a clicking noise?” finds relevant content even without using technical terminology. This approach enables semantic understanding of automotive diagnostic information, allowing queries to match meaning rather than exact keywords.

Hybrid search with $rankFusion combines both approaches, ensuring users find information whether they use technical terms or natural language:

```
{
  $rankFusion: {
    input: {
      pipelines: {
        textSearch: { $search: ... },
        vectorSearch: { $vectorSearch: ... }
      }
    },
    combination: {
      weights: {
        textSearch: 1,
        vectorSearch: 1
      }
    }
  }
}
```

Setting up a scalable architecture for dual-purpose knowledge delivery

The same MongoDB knowledge base serves both technicians and customers through tailored interfaces. Technicians access detailed procedures with safety warnings, technical specifications, and shop management system integration, while customers receive plain-language explanations, severity assessments, and service scheduling integration.

Figure 2. MongoDB Atlas servicing both the technician interface and the customer portal.

Custom-built processing pipelines can transform thousands of manual pages across multiple languages. MongoDB Atlas deployments can handle billions of documents while maintaining subsecond query performance. MongoDB Atlas Search and MongoDB Atlas Vector Search work together across this rich metadata, ensuring that whether users search for an error code or “Why won’t my car start?,” the system uses all available context to return relevant results quickly.
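The $rankFusion stage above can be sketched from Python as well. In this sketch, the index names ("manual_text", "manual_vectors"), the field paths ("content", "embedding"), and the query vector are hypothetical placeholders, not values from the article; with a live Atlas cluster you would pass the resulting pipeline to `aggregate()`:

```python
# Build a $rankFusion hybrid search pipeline as plain Python dicts.
# Index and field names below are illustrative placeholders.

def build_hybrid_pipeline(query_text, query_vector, limit=10):
    """Combine a full-text $search pipeline and a $vectorSearch pipeline
    with equal weights via $rankFusion."""
    text_pipeline = [
        {"$search": {
            "index": "manual_text",
            "text": {"query": query_text, "path": "content"},
        }},
        {"$limit": limit},
    ]
    vector_pipeline = [
        {"$vectorSearch": {
            "index": "manual_vectors",
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": limit * 10,
            "limit": limit,
        }},
    ]
    return [
        {"$rankFusion": {
            "input": {"pipelines": {
                "textSearch": text_pipeline,
                "vectorSearch": vector_pipeline,
            }},
            "combination": {"weights": {"textSearch": 1, "vectorSearch": 1}},
        }},
        {"$limit": limit},
    ]

pipeline = build_hybrid_pipeline("coolant warning light", [0.1, 0.2, 0.3])
# With a live cluster this would run as:
#   db.manual_chunks.aggregate(pipeline)
```

Adjusting the two weights is how you bias results toward exact-match queries (part numbers, error codes) or natural-language questions.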
Having a real-world impact

When organizations replace static manuals with an AI-ready documentation platform, the upside reveals itself almost immediately: Customers find answers faster and adopt apps more readily, technicians spend less time hunting for information and more time generating revenue, and compliance teams rest easier knowing that critical warnings and audit trails live right inside every workflow.

Iron Mountain’s new InSight Digital Experience Platform (DXP), built on MongoDB Atlas and MongoDB Atlas Vector Search, is a great example of these benefits in action. By turning mountains of unstructured physical and digital content into searchable, structured data, Iron Mountain gives its customers powerful semantic search, context-aware recommendations, and AI-driven workflow automation—all while meeting strict regulatory requirements. Whether a user is looking for the latest repair bulletin, a decades-old loan document, or a region-specific compliance record, InSight DXP surfaces the right information instantly and tailors the guidance to each user’s expertise level.

Transform your technical documentation today

The automotive industry faces a clear inflection point. With McKinsey projecting $80 billion in automotive software market value by 2030 and technician shortages reaching crisis levels, organizations that transform their documentation systems from a cost center into a competitive advantage will capture disproportionate value. Ready to revolutionize how your organization manages technical knowledge? Explore our automotive solutions and get started with MongoDB Atlas Vector Search today. Visit the MongoDB AI Learning Hub to learn more about building AI applications with MongoDB.

August 4, 2025
Artificial Intelligence

Fine-tune MongoDB Deployments with AppMap’s AI Tools and Diagrams

In a rapidly changing landscape, organizations that adapt for growth, efficiency, and competitiveness will be best positioned to succeed. Central to this effort is the continuous fine-tuning and troubleshooting of existing deployments, enabling companies to deliver high-performance applications that meet their business requirements. Yet navigating application components often leads to long development cycles and high costs. Developers spend valuable time deciphering various programming languages, frameworks, and infrastructures to optimize their systems. They may have to work with complicated, intertwined code, which makes updates difficult. Moreover, older architectures increase information overload, with no institutional memory to explain current workloads.

To help organizations overcome these challenges, AppMap partnered with MongoDB to fine-tune MongoDB Atlas deployments and achieve optimal performance, enabling developers to build more modern and efficient applications. The AppMap solution empowers developers with AI-driven insights and interactive diagrams that clarify application behavior, decode complex application architectures, and streamline troubleshooting. This integration delivers personalized recommendations for query optimization, proper indexing, and better database interactions. Complementing these capabilities, MongoDB Atlas offers the flexibility, performance, and security essential for building resilient applications and advancing AI-powered experiences.

AppMap’s technology stack

Founded in 2020 by CEO Elizabeth Lawler, AppMap empowers developers to visualize, understand, and optimize application behavior. By analyzing applications in action, AppMap delivers precise insights into interactions and performance dynamics, recording APIs, functions, and service behaviors. This information is then presented as interactive diagrams, as shown in Figure 1, which can be easily searched and navigated to streamline the development process.

Figure 1. Interactive diagram for a MongoDB query.

AppMap also features Navie, an AI assistant, shown in Figure 2. Navie offers customers advanced code architecture analysis and customized recommendations, derived from capturing application behavior at runtime. This rich data empowers Navie to deliver smarter suggestions, assisting teams in debugging complex issues, asking contextual questions about unfamiliar code, and making more informed code changes.

Figure 2. The AppMap Navie AI assistant.

With these tools, AppMap improves the quality of the code running with MongoDB, helping developers better understand the flow of their apps.

Using AppMap in a MongoDB application

Imagine that your team has developed a new e-commerce application running on MongoDB. You're unfamiliar with how this application operates, so you'd like to gain insights into its behavior. In this scenario, you decide to analyze your application using AppMap by executing the node package with your standard run command:

```
npx appmap-node npm run dev
```

With this command, you use your application just like you normally would. But now, every time your app communicates through an API, AppMap creates records. These records are used to generate diagrams that help you see and understand how your application works. You can look at these diagrams to get more insight into your app's behavior and how it interacts with the MongoDB database.

Figure 3. Interaction diagram for an e-commerce application.

Next, you can use the Navie AI assistant to receive tailored insights and suggestions for your application. For instance, you can ask Navie to identify the MongoDB commands your application uses and to provide advice on optimizing query performance. Navie will identify the workflow of your application and may propose strategies to refine database queries, such as reindexing for improved efficiency or adjusting aggregation framework parameters.

Figure 4. Insights provided by the Navie AI assistant.

With this framework established, you can seamlessly interact with your MongoDB application, gain insights into its usage, enhance its performance, and achieve quicker time to market.

Enhancing MongoDB apps with AppMap

Troubleshooting and optimizing your MongoDB applications can be challenging due to the complexity of the related microservices that run your services. AppMap facilitates this process by providing in-depth insights into application behavior through an AI-powered assistant, helping developers better understand their code. With faster root cause analysis and deeper code understanding, businesses can boost developer productivity, improve application performance, and enhance customer satisfaction. These benefits ultimately lead to greater agility and a stronger competitive position in the market.

Enhance your development experience with MongoDB Atlas and AppMap. To learn more about how to fine-tune apps with MongoDB, check out the best practices guide for MongoDB performance and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving partner ecosystem.

July 30, 2025
Artificial Intelligence

Introducing voyage-context-3: Focused Chunk-Level Details with Global Document Context

Note to readers: voyage-context-3 is currently available through the Voyage AI API directly. For access, sign up for Voyage AI.

TL;DR: We’re excited to introduce voyage-context-3, a contextualized chunk embedding model that produces vectors for chunks that capture the full document context without any manual metadata and context augmentation, leading to higher retrieval accuracy than with or without augmentation. It’s also simpler, faster, and cheaper, and is a drop-in replacement for standard embeddings without downstream workflow changes, also reducing chunking strategy sensitivity. On chunk-level and document-level retrieval tasks, voyage-context-3 outperforms OpenAI-v3-large by 14.24% and 12.56%, Cohere-v4 by 7.89% and 5.64%, Jina-v3 late chunking by 23.66% and 6.76%, and contextual retrieval by 20.54% and 2.40%, respectively. It also supports multiple dimensions and multiple quantization options, enabled by Matryoshka learning and quantization-aware training, saving vector database costs while maintaining retrieval accuracy. For example, voyage-context-3 (binary, 512) outperforms OpenAI-v3-large (float, 3072) by 0.73% while reducing vector database storage costs by 99.48%—virtually the same performance at 0.5% of the cost.

We’re excited to introduce voyage-context-3, a novel contextualized chunk embedding model, where each chunk embedding encodes not only the chunk's own content but also contextual information from the full document. voyage-context-3 provides a seamless drop-in replacement for standard, context-agnostic embedding models used in existing retrieval-augmented generation (RAG) pipelines, while offering improved retrieval quality through its ability to capture relevant contextual information.
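As a concrete illustration of the chunking step that the rest of this post assumes, here is a minimal sketch. The naive sentence-packing splitter is ours, not Voyage's; the commented-out `contextualized_embed` call reflects our reading of the Voyage AI Python client for voyage-context-3 and should be checked against the official docs before use:

```python
# Prepare chunks from a document for a contextualized chunk embedding call.
# The splitter below is a deliberately simple illustration: it greedily
# packs sentences into chunks of roughly max_chars characters (a single
# long sentence may exceed the limit).

def split_into_chunks(document: str, max_chars: int = 200) -> list[str]:
    sentences = [s.strip() + "." for s in document.split(".") if s.strip()]
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = (current + " " + sentence).strip()
    if current:
        chunks.append(current)
    return chunks

doc = (
    "This Agreement is made between Client VoyageAI and the Service Provider. "
    "All data transmissions between the Client and the Service Provider's "
    "infrastructure shall utilize AES-256 encryption in GCM mode."
)
chunks = split_into_chunks(doc, max_chars=120)

# With the real client, all of a document's chunks are embedded in one call,
# so each chunk vector can absorb document-level context (method name is an
# assumption -- verify against the Voyage AI documentation):
# import voyageai
# result = voyageai.Client().contextualized_embed(
#     inputs=[chunks], model="voyage-context-3", input_type="document")
```

The point of passing all chunks together, rather than one at a time, is exactly what the post describes: the model sees the whole document and can inject global context (such as the Client's name) into each chunk's vector.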
Compared to both context-agnostic models with isolated chunking (e.g., OpenAI-v3-large, Cohere-v4) and existing methods that add context and metadata to chunks, including overlapping chunks and attaching metadata, voyage-context-3 delivers significant gains in retrieval performance while simplifying the tech stack. On chunk-level retrieval (retrieving the most relevant chunk) and document-level retrieval (retrieving the document containing the most relevant chunk), voyage-context-3 outperforms on average:

OpenAI-v3-large and Cohere-v4 by 14.24% and 12.56%, and 7.89% and 5.64%, respectively.

Context augmentation methods Jina-v3 late chunking¹ and contextual retrieval² by 23.66% and 6.76%, and 20.54% and 2.40%, respectively.

voyage-3-large by 7.96% and 2.70%, respectively.

Chunking challenges in RAG

Focused detail vs. global context. Chunking—breaking large documents into smaller segments, or chunks—is a common and often necessary step in RAG systems. Originally, chunking was primarily driven by models’ limited context windows (which have lately been significantly extended by, e.g., Voyage’s models). More importantly, it allows the embeddings to contain precise, fine-grained information about the corresponding passages and, as a result, allows the search system to pinpoint precisely relevant passages. However, this focus can come at the expense of broader context. Finally, without chunking, users must pass complete documents to downstream large language models (LLMs), driving up costs as many tokens may be irrelevant to the query.

For instance, if a 50-page legal document is vectorized into a single embedding, detailed information—such as the sentence “All data transmissions between the Client and the Service Provider’s infrastructure shall utilize AES-256 encryption in GCM mode”—is likely to be buried or lost in the aggregate. By chunking the document into paragraphs and vectorizing each one separately, the resulting embeddings can better capture localized details like “AES-256 encryption.” However, such a paragraph may not contain global context—such as the Client’s name—which is necessary to answer queries like “What encryption methods does Client VoyageAI want to use?” Ideally, we want both focused detail and global context—without tradeoffs. Common workarounds—such as chunk overlaps, context summaries using LLMs (e.g., Anthropic’s contextual retrieval), or metadata augmentation—can introduce extra steps into an already complex AI application pipeline. These steps often require further experimentation to tune, resulting in increased development time and serving cost overhead.

Introducing contextualized chunk embeddings

We’re excited to introduce contextualized chunk embeddings that capture both focused detail and global context. Our model processes the entire document in a single pass and generates a distinct embedding for each chunk. Each vector encodes not only the specific information within its chunk but also coarse-grained, document-level context, enabling richer and more semantically aware retrieval. The key is that the neural network sees all the chunks at the same time and decides intelligently what global information from other chunks should be injected into the individual chunk embeddings.

Full-document automatic context awareness: Contextualized chunk embeddings capture the full context of the document without requiring the user to manually or explicitly provide contextual information. This leads to improved retrieval performance compared to isolated chunk embeddings, while remaining simpler, faster, and cheaper than other context-augmentation methods.

Seamless drop-in replacement and storage cost parity: voyage-context-3 is a seamless drop-in replacement for standard, context-agnostic embedding models used in existing search systems, RAG pipelines, and agentic systems.
It accepts the same input chunks and produces vectors with identical output dimensions and quantization—now enriched with document-level context for better retrieval performance. In contrast to ColBERT, which introduces an extensive number of vectors and storage costs, voyage-context-3 generates the same number of vectors and is fully compatible with any existing vector database.

Less sensitive to chunking strategy: While chunking strategy still influences RAG system behavior—and the optimal approach depends on data and downstream tasks—our contextualized chunk embeddings are empirically shown to reduce the system's sensitivity to these strategies, because the model intelligently supplements overly short chunks with global context. Contextualized chunk embeddings outperform manual or LLM-based contextualization because neural networks are trained to capture context intelligently from large datasets, surpassing the limitations of ad hoc efforts. voyage-context-3 was trained using both document-level and chunk-level relevance labels, along with a dual objective that teaches the model to preserve chunk-level granularity while incorporating global context.

| Approach | Context Preservation | Engineering Complexity | Retrieval Accuracy |
| Standard Embeddings (e.g., OpenAI-v3-large) | None | Low | Moderate |
| Metadata Augmentation & Contextual Retrieval (e.g., Jina-v3 late chunking) | Partial | High | Moderate-High |
| Contextualized Chunk Embeddings (e.g., voyage-context-3) | Full, Principled | Low | Highest |

Evaluation details

Chunk-level and document-level retrieval: For a given query, chunk-level retrieval returns the most relevant chunks, while document-level retrieval returns the documents containing those chunks. The figure below illustrates both retrieval levels across chunks from n documents. The most relevant chunk, often referred to as the “golden chunk,” is bolded and shown in green.
Its corresponding parent document is shown in blue.

Datasets: We evaluate on 93 domain-specific retrieval datasets, spanning nine domains: web reviews, law, medical, long documents, technical documentation, code, finance, conversations, and multilingual, all listed in this spreadsheet. Every dataset contains a set of queries and a set of documents. Each document consists of an ordered sequence of chunks, which we created via a reasonable chunking strategy. As usual, every query has a number of relevant documents with a potential score indicating the degree of relevance; we call these document-level relevance labels, and they are used for the evaluation of document-level retrieval. Moreover, each query also has a list of most relevant chunks with relevance scores, curated in various ways, including labeling by LLMs. These are referred to as chunk-level relevance labels and are used for chunk-level retrieval evaluation. We also include proprietary real-world datasets, such as technical documentation and documents containing header metadata. Finally, we assess voyage-context-3 across different embedding dimensions and various quantization options, on standard single-embedding retrieval evaluation, using the same datasets as in our previous retrieval-quality-versus-storage-cost analysis.

Models: We evaluate voyage-context-3 alongside several alternatives, including OpenAI-v3-large (text-embedding-3-large), Cohere-v4 (embed-v4.0), Jina-v3 late chunking (jina-embeddings-v3), contextual retrieval, voyage-3.5, and voyage-3-large.

Metrics: Given a query, we retrieve the top 10 documents based on cosine similarities and report the normalized discounted cumulative gain (NDCG@10), a standard metric for retrieval quality and a variant of recall.

Results

All the evaluation results are available in this spreadsheet, and we analyze the data below.

Domain-specific quality.
The bar charts below show the average retrieval quality of voyage-context-3 with full-precision 2048-dimensional embeddings for each domain. In the following chunk-level retrieval chart, we can see that voyage-context-3 outperforms all other models across all domains. As noted earlier, for chunk-level retrieval, voyage-context-3 outperforms on average OpenAI-v3-large, Cohere-v4, Jina-v3 late chunking, and contextual retrieval by 14.24%, 7.89%, 23.66%, and 20.54%, respectively.

voyage-context-3 also outperforms all other models across all domains in document-level retrieval, as shown in the corresponding chart below. On average, voyage-context-3 outperforms OpenAI-v3-large, Cohere-v4, Jina-v3 late chunking, and contextual retrieval by 12.56%, 5.64%, 6.76%, and 2.40%, respectively.

Real-world datasets. voyage-context-3 performs strongly on our proprietary real-world technical documentation and in-house datasets, outperforming all other models. The bar chart below shows chunk-level retrieval results; document-level retrieval results are provided in the evaluation spreadsheet.

Chunking sensitivity. Compared to standard, context-agnostic embeddings, voyage-context-3 is less sensitive to variations in chunk size and delivers stronger performance with smaller chunks. For example, on document-level retrieval, voyage-context-3 shows only a 2.06% variance, compared to 4.34% for voyage-3-large, and outperforms voyage-3-large by 6.63% when using 64-token chunks.

Context metadata. We also evaluate performance when context metadata is prepended to chunks. Even with metadata prepended to chunks embedded by voyage-3-large, voyage-context-3 outperforms it by up to 5.53%, demonstrating better retrieval performance without the extra work and resources required to prepend metadata.

Matryoshka embeddings and quantization.
voyage-context-3 supports 2048-, 1024-, 512-, and 256-dimensional embeddings enabled by Matryoshka learning, along with multiple embedding quantization options—including 32-bit floating point, signed and unsigned 8-bit integer, and binary precision—while minimizing quality loss. To clarify in relation to the previous figures, the chart below illustrates single-embedding retrieval on documents. Compared with OpenAI-v3-large (float, 3072), voyage-context-3 (int8, 2048) reduces vector database costs by 83% with 8.60% better retrieval quality. Further, comparing OpenAI-v3-large (float, 3072) with voyage-context-3 (binary, 512), vector database costs are reduced by 99.48% with 0.73% better retrieval quality; that’s virtually the same retrieval performance at 0.5% of the cost. Try voyage-context-3 voyage-context-3 is available today! The first 200 million tokens are free. Get started with this quickstart tutorial. You can swap voyage-context-3 into any existing RAG pipeline without requiring any downstream changes. Contextualized chunk embeddings are especially effective for: Long, unstructured documents such as white papers, legal contracts, and research reports. Cross-chunk reasoning, where queries require information that spans multiple sections. High-sensitivity retrieval tasks—such as in finance, medical, or legal domains—where missing context can lead to costly errors. To learn more about building AI applications with MongoDB, visit the MongoDB AI Learning Hub. 1 Jina. “Late Chunking in Long-Context Embedding Models.” August 22, 2024. 2 Anthropic. “Introducing Contextual Retrieval.” September 19, 2024.
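The cost comparisons above follow directly from the storage arithmetic. A minimal sketch (pure Python; the helper name and numbers are illustrative, and truncating a Matryoshka vector to a shorter prefix is summarized in a comment rather than performed) that reproduces the 83% and 99.48% figures:

```python
def storage_bytes(dims: int, precision: str) -> float:
    """Bytes needed to store one embedding at a given precision."""
    bits = {"float32": 32, "int8": 8, "binary": 1}[precision]
    return dims * bits / 8

# Baseline: OpenAI-v3-large stores 3072-dimensional float32 vectors.
baseline = storage_bytes(3072, "float32")   # 12288 bytes per vector

# Matryoshka embeddings can be truncated to a prefix of the full vector
# (e.g., the first 512 of 2048 dimensions) and then quantized.
int8_2048 = storage_bytes(2048, "int8")     # 2048 bytes per vector
binary_512 = storage_bytes(512, "binary")   # 64 bytes per vector

savings_int8 = 1 - int8_2048 / baseline     # ~83%
savings_binary = 1 - binary_512 / baseline  # ~99.48%
print(f"int8/2048 vs float/3072:  {savings_int8:.2%} smaller")
print(f"binary/512 vs float/3072: {savings_binary:.2%} smaller")
```

Since vector database cost scales roughly with bytes stored, the storage reduction maps directly onto the cost reduction quoted in the text.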

July 23, 2025
Artificial Intelligence

Revolutionizing Inventory Classification with Generative AI

In today's volatile geopolitical environment, the global automotive industry faces compounding disruptions that require a fundamental rethink of data and operations strategy. After decades of low import taxes, the return of tariffs as a tool of economic negotiations has led the global automotive industry to delay model-year transitions and disrupt traditional production and release cycles. As of June 2025, only 3% of US automotive inventory comprises next-model-year vehicles —less than half the number seen at this time in previous years. This severe decline in new-model availability, compounded by a 12.2% year-over-year drop in overall inventory, is pressuring consumer pricing and challenging traditional dealer inventory management. In this environment of constrained supply, better tools are urgently needed to classify and control vehicle, spare part, and raw material inventories for both dealers and manufacturers. Traditionally, dealerships and automakers have relied on ABC analysis to segment and control inventory by value. This widely used method classifies items into Category A, B, or C. For example, Category A items typically represent just 20% of stock but drive 80% of sales, while Category C items might comprise half the inventory yet contribute only 5% to the bottom line. This approach effectively helps prioritize resource allocation and promotional efforts. Figure 1. ABC analysis for inventory classification. While ABC analysis is known for its ease of use, it has been criticized for its focus on dollar usage. For example, not all Category C items are necessarily low-priority, as some may be next-model-year units arriving early or aging stock affected by shifting consumer preferences. Other criteria—such as lead-time, commonality, obsolescence, durability, inventory cost, and order size requirements—have also been recognized as critical for inventory classification. 
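The ABC split described above can be computed by ranking items on annual dollar usage and cutting on cumulative share. A minimal sketch, assuming the common 80%/95% cumulative cutoffs (the item names and dollar values are made up for illustration):

```python
def abc_classify(items, a_cut=0.80, b_cut=0.95):
    """Classify inventory items by cumulative share of dollar usage.

    items: list of (name, annual_dollar_usage) tuples.
    Returns {name: "A" | "B" | "C"}.
    """
    ranked = sorted(items, key=lambda kv: kv[1], reverse=True)
    total = sum(value for _, value in ranked)
    classes, running = {}, 0.0
    for name, value in ranked:
        running += value
        share = running / total  # cumulative share of total dollar usage
        classes[name] = "A" if share <= a_cut else "B" if share <= b_cut else "C"
    return classes

# Hypothetical dealer inventory: a few vehicles and spare parts.
inventory = [("sedan-X", 800_000), ("suv-Y", 450_000),
             ("brake-pads", 90_000), ("wiper-blades", 15_000),
             ("floor-mats", 5_000)]
print(abc_classify(inventory))
```

This is exactly the single-criterion, dollar-usage view the article critiques: a next-model-year unit with thin sales history would land in Category C here regardless of its strategic value.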
A multi-criteria inventory classification (MCIC) methodology, therefore, adds additional criteria to dollar usage. MCIC can be achieved with methods like statistical clustering or unsupervised machine learning techniques. Yet, a significant blind spot remains: the vast amount of unstructured data that organizations must deal with; unstructured data accounts for an estimated 80% of the world's total. Traditional ABC analysis—and even MCIC—often overlook the growing influence of insights gleaned from unstructured sources like customer sentiment and product reviews on digital channels. But now, valuable intelligence from reviews, social media posts, and dealer feedback can be vectorized and transformed into actionable features using large language models (LLMs). For instance, analyzing product reviews can yield qualitative metrics like the probability of recommending or repurchasing a product, or insights into customer expectations vs. the reality of ownership. This textual analysis can also reveal customers' product perspectives, directly informing future demand. By integrating these signals into inventory classification models, businesses can gain a deeper understanding of true product value and demand elasticity. This fusion of structured and unstructured data represents a crucial shift from reactive inventory management to predictive and customer-centric decision-making. In this blog post, we propose a novel methodology to convert unstructured data into powerful feature sets for augmenting inventory classification models. Figure 2. Transforming unstructured data into features for machine learning models. How MongoDB enables AI-driven inventory classification So, how does MongoDB empower the next generation of AI-driven inventory classification? It all comes down to four crucial steps, and MongoDB provides the robust technology and features to support every single one. Figure 3. Methodology and requirements for gen AI-powered inventory classification. 
Step 1: Create and store vector embeddings from unstructured data MongoDB Atlas enables modern vector search workflows. Unstructured data like product reviews, supplier notes, or customer support transcripts can be vectorized via embedding models (such as Voyage AI models) and ingested into MongoDB Atlas, where the embeddings are stored next to the original text chunks. This data then becomes searchable using MongoDB Atlas Vector Search, which allows you to run native semantic search queries directly inside the database. Unlike solutions that require separate databases for structured and vector data, MongoDB stores them side by side using the flexible document model, enabling unified access via one API. This reduces system complexity, technical debt, and infrastructure footprint—and allows for low-latency semantic searches. Figure 4. Product reviews can be stored as vector embeddings in MongoDB Atlas. Step 2: Design and store evaluation criteria In a gen AI-powered inventory classification system, evaluation criteria are no longer a set of static rules stored in a spreadsheet. Instead, the criteria are dynamic and data-backed, and are generated via an AI agent using structured and unstructured data—and enriched by domain experts using business objectives and constraints. As shown in Figure 5, the criteria for features like “Product Durability” can be defined based on relevant unstructured data stored in MongoDB (product reviews, audit reports) as well as structured data like inventory turnover and sales history. Such criteria are not just instructions or rules, but knowledge objects with structure and semantic depth. The AI agent uses tools such as generate_criteria and embed_criteria, iterating over each product in the inventory. It leverages the LLM to create the criteria definition and uses an embedding model (e.g., voyage-3-large) to generate embeddings of each definition. MongoDB Atlas is uniquely suited to store these dynamic criteria.
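A dynamic criteria object of this kind might be stored as a document like the following. This is a hedged sketch: the field names, product family, and 1024-dimensional placeholder vector are illustrative assumptions, not a schema from the solution:

```python
# Illustrative criteria document for the "Product Durability" feature.
# In a real pipeline the `embedding` field would hold the vector returned
# by an embedding model such as voyage-3-large (assumed here).
durability_criteria = {
    "feature_name": "Product Durability",
    "product_family": "brake-systems",   # hypothetical product family
    "definition": (
        "Score 1-10 based on failure mentions in reviews, "
        "warranty-claim rates, and audit findings."
    ),
    "data_sources": ["product_reviews", "audit_reports", "sales_history"],
    "embedding": [0.0] * 1024,           # placeholder for the real vector
    "version": 2,
}

# With a pymongo connection (assumed), the agent could persist it with:
#   db.criteria.insert_one(durability_criteria)
print(durability_criteria["feature_name"])
```

Because the document model is schemaless, criteria for different car models or parts can carry different fields without any migration.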
Each rule is modeled as a flexible JSON document containing the name of the feature, the criteria definition, the data sources used, and the embeddings. Since there are different types of products (different car models/makes and different car parts), the documents can evolve over time without requiring schema migrations and be queried and retrieved by the AI agent in real time. MongoDB Atlas provides all the necessary tools for this design—a flexible document model database, vector search, and full-text search—that can be leveraged by the AI agent to create the criteria. Figure 5. Unstructured and structured data are used by the AI agent to create criteria for feature generation. Step 3: Create an agentic application to perform transformation based on the criteria In the third step, we have another AI agent that operates over products, criteria, and unstructured data to generate enriched feature sets. This agent iterates over every product and uses MongoDB Atlas Vector Search to find relevant customer reviews to apply the criteria to and calculate a numerical feature score. The new features are added to the original features JSON document in MongoDB. In Figure 6, the agent has created “durability” and “criticality” features from the product reviews. MongoDB Atlas is the ideal foundation for this agentic architecture. Again, it provides the agent the tools it needs for features to evolve, adding new dimensions without requiring schema redesign. This results in an adaptive classification dataset that contains both structured and unstructured data. Figure 6. An AI agent enriches product features with vectorized review data to generate new features. Step 4: Rerun the inventory classification model with new features added As a final step, the inventory classification domain experts can assign or balance weights across existing and new features, choose a classification technique, and rerun inventory classification to find new inventory classes.
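The review lookup at the heart of Step 3 can be sketched as an Atlas Vector Search aggregation. The $vectorSearch stage below follows the documented Atlas syntax, but the index, field, and collection names are illustrative assumptions:

```python
def build_review_search(query_vector, product_id, limit=5):
    """Aggregation pipeline: find the reviews most relevant to a criteria
    definition for one product, keeping only the text and match score."""
    return [
        {
            "$vectorSearch": {
                "index": "reviews_vector_index",  # assumed index name
                "path": "embedding",              # assumed vector field
                "queryVector": query_vector,
                "numCandidates": limit * 20,      # oversample for recall
                "limit": limit,
                "filter": {"product_id": product_id},
            }
        },
        {
            "$project": {
                "_id": 0,
                "review_text": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

# The query vector would be the embedded criteria definition (placeholder here).
pipeline = build_review_search([0.1] * 1024, product_id="sedan-X")
# Executed with pymongo (assumed): db.reviews.aggregate(pipeline)
print(pipeline[0]["$vectorSearch"]["limit"])
```

The agent would feed the returned review snippets to the LLM, which applies the criteria definition and emits the numerical feature score written back onto the product document.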
Figure 7 shows the process where generative AI features are used in the existing inventory classification algorithm. Figure 7. Domain experts can rerun classification after balancing weights. Figure 8 shows the solution in action. The customer satisfaction score is created by an LLM using the vectorized customer-reviews collection and is then used in the inventory classification model with a new weight of 0.2. Figure 8. Inventory classification using generative AI. Driving smarter inventory decisions As the automotive industry navigates slowing sales and uneven inventory, traditional inventory classification techniques also need to evolve. Though such techniques provide a solid foundation, they fall short in the face of geopolitical uncertainty, tariff-driven supply shifts, and fast-evolving consumer expectations. By combining structured sales and consumption data with unstructured insights, and enabling agentic AI using MongoDB, the automotive industry can enable a new era of inventory intelligence where products are dynamically classified based on all available data—both structured and unstructured. Clone the GitHub repository if you are interested in trying out this solution yourself. To learn more about MongoDB’s role in the manufacturing industry, please visit our manufacturing and automotive webpage.

July 16, 2025
Artificial Intelligence

Build an AI-Ready Data Foundation with MongoDB Atlas on Azure

It’s time for a database reality check. While conversations around AI usually focus on its immense potential, these advancements are also bringing developers face to face with an immediate challenge: Their organizations’ data infrastructure isn’t ready for AI. Many developers now find themselves trying to build tomorrow’s applications on yesterday’s foundations. But what if your database could shift from bottleneck to breakthrough? Is your database holding you back? Traditional databases were built for structured data in a pre-AI world—they’re simply not designed to handle today’s need for flexible, real-time data processing. Rigid schemas force developers to spend time managing database structure instead of building features, while separate systems for operational data and analytics create costly delays and complexity. Your data architecture might be holding you back if: Your developers spend more time wrestling with data than innovating. AI implementation feels like forcing a square peg into a round hole. Real-time analytics are anything but real-time. Go from theory to practice: Examples of modern data architecture at work Now is the time to rethink your data foundation by moving from rigid to flexible schemas that adapt as applications evolve. Across industries, leading organizations are unifying operational and analytical structures to eliminate costly synchronization processes. Most importantly, they’re embracing databases that speak developers’ language. In the retail sector , business demands include dynamic pricing that responds to market conditions in real-time. Using MongoDB Atlas with Azure OpenAI from Microsoft Azure, retailers are implementing sophisticated pricing engines that analyze customer behavior and market conditions, enabling data-driven decisions at scale. 
In the healthcare sector , organizations can connect MongoDB Atlas to Microsoft Fabric for advanced imaging analysis and results management, streamlining the flow of critical diagnostic information while maintaining security and compliance. More specifically, when digital collaboration platform Mural faced a 1,700% surge in users, MongoDB Atlas on Azure handled its unstructured application data. The results aligned optimally with modern data principles: Mural’s small infrastructure team maintained performance during massive growth, while other engineers were able to focus on innovation rather than database management. As noted by Mural’s Director of DevOps, Guido Vilariño, this approach enabled Mural’s team to “build faster, ship faster, and ultimately provide more expeditious value to customers.” This is exactly what happens when your database becomes a catalyst rather than an obstacle. Shift from “database as storage” to “database as enabler” Modern databases do more than store information—they actively participate in application intelligence. When your database becomes a strategic asset rather than just a record-keeping necessity, development teams can focus on innovation instead of infrastructure management. What becomes possible when data and AI truly connect? Intelligent applications can combine operational data with Azure AI services. Vector search capabilities can enhance AI-driven features with contextual data. Applications can handle unpredictable workloads through automated scaling. Seamless integration occurs between data processing and AI model deployment. Take the path to a modern data architecture The deep integration between MongoDB Atlas and Microsoft’s Intelligent Data Platform eliminates complex middleware, so organizations can streamline their data architecture while maintaining enterprise-grade security. 
The platform unifies operational data, analytics, and AI capabilities—enabling developers to build modern applications without switching between multiple tools or managing separate systems. This unified approach means security and compliance aren’t bolt-on features—they’re core capabilities. From Microsoft Entra ID integration for access control to Azure Key Vault for data protection, the platform provides comprehensive security while simplifying the development experience. As your applications scale, the infrastructure scales with you, handling everything from routine workloads to unexpected traffic spikes without adding operational complexity. Make your first move Starting your modernization journey doesn’t require a complete infrastructure overhaul or the disruption of existing operations. You can follow a gradual migration path that prioritizes business continuity and addresses specific challenges. The key is having clear steps for moving from legacy to modern architecture. Make decisions that simplify rather than complicate: Choose platforms that reduce complexity rather than add to it. Focus on developer experience and productivity. Prioritize solutions that scale with your needs. For example, you can begin with a focused proof of concept that addresses a specific challenge—perhaps an AI feature that’s been difficult to implement or a data bottleneck that’s slowing development. Making small wins in these areas demonstrates value quickly and builds momentum for broader adoption. As you expand your implementation, focus on measurable results that matter to your organization. Tracking these metrics—whether they’re developer productivity, application performance, or new capabilities—helps justify further investment and refine your approach. Avoid these common pitfalls As you undertake your modernization journey, avoid these pitfalls: Attempting to modernize everything simultaneously: This often leads to project paralysis. 
Instead, prioritize applications based on business impact and technical feasibility. Creating new data silos: In your modernization efforts, the goal must be integration and simplification. Adding complexity: remember that while simplicity scales, complexity compounds. Each decision should move you toward a more streamlined architecture, not a more convoluted one. The path to a modern, AI-ready data architecture is an evolution, not a revolution. Each step builds on the last, creating a foundation that supports not just today’s applications but also tomorrow’s innovations. Take the next step: Ready to modernize your data architecture for AI? Explore these capabilities further by watching the webinar “ Enhance Developer Agility and AI-Readiness with MongoDB Atlas on Azure .” Then get started on your modernization journey! Visit the MongoDB AI Learning Hub to learn more about building AI applications with MongoDB.

July 8, 2025
Artificial Intelligence

Unified Commerce for Retail Innovation with MongoDB Atlas

Unified commerce is often touted as a transformative concept, yet it addresses a long-standing retail challenge: disparate data sources and siloed systems. It’s less of a revolutionary concept and more of a necessary shift to make long-standing problems more manageable. Doing so provides a complete business overview—and enables personalized customer experiences—by breaking down silos and ensuring consistent interactions across online, in-store, and mobile channels. Real-time data analysis enables targeted content and recommendations. Unified commerce boosts operating efficiency by connecting systems and automating processes, reducing manual work, errors, and costs, while improving customer satisfaction. Positive customer experience results in repeat customers, improving revenue and reducing the cost of customer acquisition. MongoDB Atlas offers a robust foundation for unified commerce, addressing critical challenges within the retail sector and providing capabilities that enhance customer experience, optimize operations, and foster business growth. Figure 1. Customer touchpoints in the retail ecosystem. Retail businesses are shifting to a customer-centric and data-driven approach by unifying the customer journey for a seamless, personalized experience that builds loyalty and growth. While retail has long relied on omnichannel strategies with stores, websites, apps, and social media, these often involve separate systems, causing fragmented experiences and inefficiencies. Unified commerce, integrating physical and digital retail via a unified data platform, is a necessary evolution for retailers facing challenges with diverse platforms and data silos. Cloud-based data architectures, AI, and event-driven processing can overcome these hurdles, enabling enhanced customer engagement, optimized operations, and revenue growth. This integration delivers a frictionless customer experience crucial in today's digital marketplace. Figure 2.
Enabling a customer-centric approach with unified commerce. MongoDB Atlas for unified commerce MongoDB Atlas provides a strong foundation for unified commerce, addressing key challenges in the retail sector and offering capabilities that enhance customer experience, optimize operations, and drive business growth. MongoDB's flexible document model allows retailers to consolidate varied data, eliminating data silos. This provides consistent, real-time information across all channels for enhanced customer experiences and better decision-making. In MongoDB, diverse data can be stored without rigid schemas, enabling quick adaptation to changing needs and faster integration of siloed physical and digital systems. Figure 3. Unified customer 360 using MongoDB. Real-world adoption: Lidl, part of the Schwarz Group, implemented an automatic stock reordering application for branches and warehouses, addressing complex data and high volumes to improve supply chain efficiency through real-time data synchronization. Real-time data synchronization for enhanced CX In retail, real-time processing of customer interactions is crucial. MongoDB's Change Streams and event-driven architecture allow retailers to capture and react to customer behavior instantly. This enables personalized experiences like dynamic pricing, instant order updates, and tailored recommendations, fostering customer loyalty and driving conversions. Figure 4. Real-time data in the operational data layer for enhanced customer experiences. Atlas change streams and triggers enable real-time data synchronization across retail channels, ensuring consistent inventory information and preventing overselling on both physical and e-commerce platforms. Real-world adoption: CarGurus uses MongoDB Atlas to manage vast amounts of real-time data across its platform and support seamless, personalized user experiences both online and in person.
The flexible document model helps them handle the diverse data structures required for their automotive marketplace. Scalability & high-traffic retail MongoDB Atlas's cloud-native architecture provides automatic horizontal scaling, enabling retailers to manage demand fluctuations like seasonal spikes and product expansions without impacting performance, which is crucial for scaling unified commerce. MongoDB Atlas's auto-scaling and multi-cloud features allow retailers to handle traffic spikes during peak periods (holidays, flash sales) without downtime or performance issues. The platform automatically adjusts resources based on demand, ensuring responsiveness and availability, which is vital for positive customer experiences and maximizing sales. Figure 5. Highly scalable MongoDB Atlas for high-traffic retail. Real-world adoption: Commercetools modernized its composable commerce platform using MongoDB Atlas and MACH architecture and achieved exceptional throughput for Black Friday. This demonstrates Atlas's ability to handle high-volume retail events through its scalability features. AI and analytics integration MongoDB Atlas enables retailers to gain actionable insights from unified commerce data by integrating with AI and analytics tools. This facilitates personalized shopping, predictive inventory, and targeted marketing across online and offline channels through data-driven decisions. Personalization is a key driver of customer engagement and conversion in the retail industry. MongoDB Atlas Search, with its full-text and vector search capabilities, enables retailers to deliver intelligent product recommendations, visual search experiences, and AI-powered assistants. By leveraging these advanced search and AI capabilities, retailers can help customers find the products they're looking for quickly and easily, provide personalized recommendations based on their interests and preferences, and create a more intuitive and enjoyable shopping experience.
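The change-stream-based synchronization described above can be sketched as follows. The collection and field names are illustrative assumptions; only the pipeline construction and the event handler run here, with the actual watch loop shown in a comment:

```python
# Server-side filter: only react to updates that change stock levels.
watch_pipeline = [
    {
        "$match": {
            "operationType": "update",
            "updateDescription.updatedFields.stock_level": {"$exists": True},
        }
    }
]

def on_stock_change(event):
    """Fan a stock update out to storefront, app, and POS (stubbed)."""
    sku = event["documentKey"]["_id"]
    stock = event["updateDescription"]["updatedFields"]["stock_level"]
    return {"sku": sku, "stock": stock, "channels": ["web", "app", "pos"]}

# Against an Atlas cluster with pymongo (assumed), this would run as:
#   with db.inventory.watch(watch_pipeline) as stream:
#       for event in stream:
#           on_stock_change(event)

# Simulated event, shaped like a real change stream update document:
sample = {
    "operationType": "update",
    "documentKey": {"_id": "sku-123"},
    "updateDescription": {"updatedFields": {"stock_level": 7}},
}
print(on_stock_change(sample))
```

Filtering inside the change stream keeps irrelevant writes off the wire, which matters at holiday-peak write volumes.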
Real-world adoption: L'Oréal improved customer experiences through personalized, inclusive, and responsible beauty across several apps. Retailers on MongoDB Atlas can leverage its unstructured data capabilities, vector search, and AI integrations to create real-time, AI-driven applications. Seamless data integration Atlas offers ETL/CDC connectors and APIs to consolidate diverse retail data into a unified operational layer. This single source of truth combines inventory, customer, transaction, and digital data from legacy systems, enabling consistent omnichannel experiences and eliminating data silos that hinder unified commerce. Figure 6. MongoDB Atlas for unified commerce. Real-world adoption: MongoDB helps global retailers, like Adeo , unify cross-channel data into an operational layer for easy synchronization across online and physical platforms, enabling better customer experiences. Advanced search capabilities MongoDB Atlas provides built-in text and vector search capabilities, enabling retailers to create advanced search experiences for enhanced product discovery and personalization across online and physical channels. Figure 7. Integrated search capabilities in MongoDB. Real-world adoption: MongoDB's data platform with integrated search enables retailers to improve customer experience and unify commerce. Customers like Albertsons use this for both customer-facing and back-office operations. Composable architecture with data mesh principles MongoDB supports a composable architecture that aligns with data mesh principles, enabling retailers to build decentralized, scalable, and self-service data infrastructure. Using a domain-driven design approach, different teams within the organization can manage their own data products (e.g., customers, orders, inventory) as independent services. This approach promotes agility, scalability, and data ownership, allowing teams to innovate and iterate quickly while maintaining data integrity and governance. Figure 7. 
MongoDB Atlas enables domain-driven design for the retail enterprise data foundation. Global distribution For international retailers using unified commerce, Atlas provides low-latency global data access, ensuring fast performance and data sovereignty compliance across multiple markets. MongoDB Atlas enables retailers to distribute data globally across AWS, Google Cloud, and Azure regions as needed, building distributed and multi-cloud architectures for low-latency customer access worldwide. Figure 8. Serving always-on, globally distributed, write-everywhere apps with MongoDB Atlas global clusters. Use cases: How unified commerce transforms retail Unified commerce streamlines the retail experience by integrating diverse channels into a cohesive system. This approach facilitates customer interactions across online and physical stores, enabling features such as real-time inventory checks, personalized recommendations based on purchase history regardless of the transaction location, and frictionless return processes. The objective is to create a seamless and efficient shopping journey through interconnected and collaborative functionalities using a modern data platform that enables the creation of such a data estate. Always-stocked shelves and knowing what's where: With real-time inventory, retailers can offer online ordering with delivery or pickup and provide stock estimates, while store staff use real-time inventory to help customers and place orders, minimizing out-of-stocks. Treating customers as individuals is a key aspect of retail. Retail enterprises need a unified view of customer data to offer personalized recommendations, offers, and content, and to offer dynamic pricing based on loyalty and market factors. Engaging customers on their preferred channels with consistent messaging and superior service builds lasting relationships.
Seamless order orchestration is crucial, providing flexible fulfillment options (delivery, BOPIS, curbside, direct shipping) and keeping customers informed with real-time updates. Optimizing inventory across stores and warehouses ensures speedy, accurate fulfillment. Along with fulfillment, frictionless returns are vital, offering in-store returns for online purchases, efficient tracking, and immediate refunds. In the digital space, intelligent search and discovery are essential. Advanced search, image-based search, and AI chatbots simplify product discovery and support, boosting conversion rates and brand engagement. Leading retailers leverage MongoDB Atlas for these capabilities, powering AI recommendations, real-time inventory, and seamless omnichannel customer journeys to improve efficiency and satisfaction. The future of unified commerce To remain competitive, retailers should adopt flexible, cloud-based systems. MongoDB Atlas facilitates this transition, enabling unified commerce through real-time data, AI search, and scalable microservices for enhanced customer experiences and innovation. Visit our retail solutions page to learn more about how MongoDB Atlas can accelerate Unified Commerce.

June 26, 2025
Artificial Intelligence

Intellect Design Accelerates Modernization by 200% with MongoDB and Gen AI

It’s difficult to overstate the importance of modernization in the age of AI. Because organizations everywhere rely on software to connect with customers and run their businesses, how well they manage the AI-driven shift in what software does—from handling predefined tasks and following rules, to being a dynamic, problem-solving partner—will determine whether or not they succeed. Companies that want to stay ahead must evolve quickly. But this demands speed and flexibility, and most tech stacks weren’t designed for the continuous adaptation that AI requires. Which is where MongoDB comes in: we provide organizations a structured, proven approach to modernizing critical applications, reducing risk, and eliminating technical debt. Our approach to modernization has already led to successful, speedy, cost-effective migrations—and efficiency gains—for the likes of Bendigo Bank and Lombard Odier. So, I’m delighted to share the story of Intellect Design, one of the world’s largest enterprise fintech companies, which recently completed a project modernizing critical components of its Wealth Management platform using MongoDB and gen AI tools. The company, which works with large enterprises around the world, offers a range of banking and insurance technology products. Intellect’s project with MongoDB led to improved performance and reduced development cycle times, and its platform is now better positioned to onboard clients, provide richer customer insights, and unlock more gen AI use cases across the firm. Alongside those immediate benefits, the modernization effort is the first step in Intellect Design's long-term vision to have its entire application suite seamlessly integrated into a single AI service the company has built on MongoDB: Purple Fabric. This would create a powerful system of engagement for Intellect's customers, but it is only possible once these key services have all been modernized.
"This partnership with MongoDB has transformed how we approach legacy systems, turning bottlenecks into opportunities for rapid innovation. With this project, we’ve not only modernized our Wealth Management platform, but have unlocked the ability to deliver cutting-edge AI-driven services to clients faster than ever before," said Deepak Dastrala, Chief Technology Officer at Intellect Design. Legacy systems block scaling and innovation Intellect Design’s Wealth Management platform is used by some of the world's largest financial institutions to power key services—including portfolio management, systematic investment plans, customer onboarding, and know-your-customer (KYC) processes—while also providing analytics to help relationship managers deliver personalized investment insights. However, as Intellect’s business grew in size and complexity, the platform’s reliance on relational databases and a monolithic architecture caused significant bottlenecks. Key business logic was locked in hundreds of SQL stored procedures, leading to batch processing delays of up to eight hours and limiting scalability as transaction volumes grew. The rigid architecture also hindered innovation and blocked integration with other systems, such as treasury and insurance platforms, reducing efficiency and preventing the delivery of unified financial services. In the past, modernizing such mission-critical legacy systems was seen as almost impossible—it was too expensive, too slow, and too risky. Traditional approaches relied on multi-year consulting engagements with minimal innovation, often replacing old architecture with equally outdated alternatives. Without modern tools capable of handling emerging workloads like AI, efforts were resource-heavy and prone to stalling, leaving businesses unable to evolve beyond incremental changes.
MongoDB’s modernization methodology broke through these challenges with a structured approach, combining an agentic AI platform with modern database capabilities, all enabled by a team of experienced engineers.

MongoDB demonstrates AI-driven scalability with Purple Fabric

Before modernizing its Wealth Management platform, Intellect Design had already experienced the transformative power of a modern document database: the company began working with MongoDB in 2019, and its enterprise AI platform Purple Fabric is built on MongoDB Atlas. Purple Fabric processes vast amounts of structured and unstructured enterprise data to enable actionable compliance insights and risk predictions—both of which are critical for customers managing assets across geographies. An example of this is IntellectAI’s work with one of the largest sovereign wealth funds in the world, which manages over $1.5 trillion across 9,000 companies. By taking advantage of MongoDB Atlas's flexibility, advanced vector search capabilities, and multimodal data processing, Purple Fabric delivers over 90% accuracy in ESG compliance analyses, scaling operations to analyze data from over 8,000 companies—something legacy systems simply couldn’t achieve. This success demonstrated MongoDB’s ability to handle complex AI workloads and was instrumental in Intellect Design’s decision to adopt MongoDB for the modernization of its Wealth Management platform.

Overhauling mission-critical components

In February 2025, Intellect Design kicked off a project with MongoDB to modernize mission-critical functionalities within its Wealth Management platform. Areas like customer onboarding, transactions, and batch processing all faced legacy bottlenecks—including slow batch processing times and resource-intensive analytics.
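Under the hood, the Atlas Vector Search queries that power a platform like Purple Fabric are ordinary aggregation pipelines built around the $vectorSearch stage. The sketch below shows the shape of such a pipeline; the collection, index, and field names are hypothetical illustrations, not Intellect's actual schema.

```python
# Sketch of an Atlas Vector Search aggregation pipeline of the kind a
# compliance-insight system might run. The index name, collection, and
# field names are hypothetical; executing the pipeline requires a MongoDB
# Atlas cluster with a vector search index defined on the collection.

def build_esg_search_pipeline(query_vector, limit=5):
    """Build a $vectorSearch aggregation pipeline for semantic retrieval."""
    return [
        {
            "$vectorSearch": {
                "index": "esg_reports_vector_index",  # hypothetical index
                "path": "embedding",                  # field holding vectors
                "queryVector": query_vector,
                "numCandidates": limit * 20,          # oversample for recall
                "limit": limit,
            }
        },
        # Return only what the application needs, plus the similarity score.
        {"$project": {"company": 1, "summary": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_esg_search_pipeline([0.1, 0.2, 0.3])
# With a live cluster: results = db.esg_reports.aggregate(pipeline)
```

The pipeline is plain data, so it can be built and unit-tested without a database connection and then passed to any MongoDB driver's aggregate call.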
With MongoDB’s foundry approach to modernization—in which repeatable processes are used—and AI-driven automation and expert engineering, Intellect Design successfully overhauled these key components within just three months, unlocking new efficiency and scalability across its operations. Unlike traditional professional services or large language model (LLM) code conversion, which focus solely on rewriting code, MongoDB’s approach enables full-stack modernization, reengineering both application logic and data architecture to deliver faster, smarter, and more scalable systems.

Through this approach, Intellect Design decoupled business logic from SQL stored procedures, enabling faster updates, reduced operational complexity, and seamless integration with advanced AI tools. Batch-heavy workflows were optimized using frameworks like LMAX Disruptor to handle high-volume transactional data loads, and MongoDB’s robust architecture supported predictive analytics capabilities to pave the way for richer, faster customer experiences.

The modernization project delivered measurable improvements across performance, scalability, and adaptability:

- Onboarding workflow times were reduced by 85%, so clients can now access critical portfolio insights faster than ever, speeding their decision-making and investment outcomes.
- Transaction processing times improved significantly, preparing the platform to accommodate large-scale operations for new clients without delays.
- Development transformation cycles were completed up to 200% faster, demonstrating the efficiency of automating traditionally resource-intensive workflows.

This progress gives Intellect Design newfound freedom to connect its Wealth platform to broader systems, deliver cross-functional insights, and compete effectively in the AI era.
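One way to picture the decoupling of business logic from stored procedures: a calculation that once required SQL joins across normalized tables becomes plain application code over a document that embeds its related data. The sketch below is a minimal, hypothetical illustration; the schema and valuation rule are invented for the example, not Intellect's.

```python
# Hypothetical illustration of moving stored-procedure logic into
# application code over a document model. A portfolio document embeds its
# holdings, so a valuation that once joined several SQL tables becomes a
# single in-process function that is easy to test and version-control.

def portfolio_value(portfolio: dict, prices: dict) -> float:
    """Value each holding at its latest price; unknown symbols count as 0."""
    return sum(h["units"] * prices.get(h["symbol"], 0.0)
               for h in portfolio["holdings"])

portfolio = {
    "_id": "client-42",
    "holdings": [
        {"symbol": "AAA", "units": 10},
        {"symbol": "BBB", "units": 5},
    ],
}
prices = {"AAA": 101.5, "BBB": 40.0}
print(portfolio_value(portfolio, prices))  # 1215.0
```

Because the logic now lives in the application layer, it can be updated, code-reviewed, and deployed like any other code rather than migrated through database change scripts.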
Speeding insights, improving analytics, and unlocking AI

While Intellect Design’s initial project with MongoDB focused on modernizing critical components, the company is now looking to extend its efforts to other essential functionalities within the Wealth platform. Key modules like reporting, analytics workflows, and ad-hoc data insight generation are next in line for modernization, with the goal of improving runtime efficiency for real-world use cases like machine learning-powered customer suggestions and enterprise-grade reporting. Additionally, Intellect Design plans to apply MongoDB’s approach to modernization across other business units, including its capital markets/custody and insurance platforms, creating unified systems that enable seamless data exchange and AI-driven insights across its portfolio.

By breaking free from legacy constraints, Intellect Design is unlocking faster insights, smarter analytics, and advanced AI capabilities for its customers. MongoDB’s modernization approach, tools, and team are the engine powering this transformation, preparing businesses like Intellect Design to thrive in an AI-driven future. As industries continue to evolve, MongoDB is committed to helping enterprises build the adaptive technologies needed to lead—and define—the next era of innovation.

To learn more about how MongoDB helps customers modernize without friction—using AI to transform complex, outdated systems into scalable, modern systems up to ten times faster than traditional methods—visit MongoDB Application Modernization. Visit the Purple Fabric page for more on how Intellect Design’s Purple Fabric delivers secure, decision-grade intelligence with measurable business impact. For more about modernization and transformation at MongoDB, follow Vinod Bagal on LinkedIn.

June 26, 2025
Artificial Intelligence

MongoDB and deepset Pave the Way for Effortless AI App Creation

Building robust AI-powered applications has often been a complex, resource-intensive process. It typically demands deep technical and domain expertise, significant development effort, and a long time to value. For IT decision-makers, the goal is clear: enable AI innovation that achieves real business outcomes without compromising scalability, flexibility, or performance, and without creating bottlenecks for development teams serving business teams and customers. Solutions from deepset and MongoDB empower organizations to overcome these challenges, enabling faster development, unlocking AI's potential, and ensuring the scalability and resilience required by modern businesses.

Breaking barriers in AI development: The real-time data challenge

For many industries, real-time data access is critical to unlocking insights and delivering exceptional customer experiences. AI-driven applications rely on seamless retrieval and processing of structured and unstructured data to fuel smarter decision-making, automate workflows, and improve user interactions. For example, in customer service platforms, instant access to relevant data ensures fast and accurate responses to user queries, improving satisfaction and efficiency. Healthcare applications require immediate access to patient records to enable personalized treatment plans that enhance patient outcomes. Similarly, financial systems rely on real-time analysis of market trends and borrower profiles to make smarter investment and credit decisions and stay competitive in dynamic environments.

However, businesses often face challenges when scaling AI applications. These include inconsistent data retrieval, where organizations struggle to efficiently query and access data across vast pools of information, and complex query resolution, which involves interpreting multi-layered queries to retrieve the most relevant insights and provide smart recommendations.
Data security concerns also pose obstacles, as businesses must ensure sensitive information remains protected while maintaining compliance with regulatory standards. Lastly, AI production-readiness is critical, requiring organizations to ensure their AI applications are properly configured and thoroughly tested to support mission-critical decisions and workflows with accuracy, speed, and adaptability to rapid changes in the AI ecosystem or world events. Addressing these challenges is vital for businesses looking to unlock the full potential of AI-powered innovations and maintain a competitive edge.

Transformative solution: deepset RAG expertise meets MongoDB Atlas Vector Search

We’re excited to announce a new partnership between deepset and MongoDB. By integrating deepset’s expertise in retrieval-augmented generation (RAG) and intelligent agents with MongoDB Atlas, developers can now more easily build advanced AI-powered applications that deliver fast, accurate insights from large and complex datasets.

"We're thrilled to partner with MongoDB and build out an integrated end-to-end GenAI solution to speed up the time to value of customers' AI efforts and help solve their complex use cases to deliver key business outcomes."

Mark Ghannam, Head of Partnerships, deepset

What sets deepset apart is its production-ready product and documentation, its flexibility for solving complex use cases, and its library of ready-to-use templates, which allow businesses to quickly deploy common RAG and agent functionalities, reducing the time and effort required for development. For teams needing customization, Haystack provides a modular, object-oriented design that supports drag-and-drop components, utilizing both standard integrations and custom components. This makes it highly accessible, enabling developers to configure workflows according to their specific application needs without requiring extensive coding knowledge.
On top of Haystack, deepset’s AI Platform makes the prototype-to-production process of building AI applications even faster and more efficient. It extends Haystack’s building-block approach to AI application development with a visual design interface, qualitative user testing, side-by-side configuration/large language model (LLM) testing, integrated debugging, and hallucination scoring, in addition to expert service assistance and support. The platform’s Studio Edition is free for developers to try.

Through seamless integration with MongoDB Atlas Vector Search, deepset equips developers to easily incorporate advanced RAG and agent capabilities into their compound AI applications, a process known as LLM orchestration. Key features enable several transformative possibilities across industries:

- Intelligent chatbots allow businesses to deliver precise and context-aware customer interactions, significantly enhancing call center efficiency.
- Automated content tagging optimizes and streamlines workflows in content management systems, enabling faster categorization and discovery of information.
- Tailored educational, research, and media platforms personalize learning materials, research, and media content based on user questions and preferences, improving engagement and effectiveness while adhering to institution and brand guidelines.
- Industry-specific planning systems and workflow automations simplify complex processes, such as lending due diligence.

By leveraging the deepset framework alongside MongoDB Atlas Vector Search, developers gain a powerful toolkit to optimize the performance, scalability, and user experience of their applications. This collaboration provides tangible benefits across industries like customer service, content management, financial services, education, defense, healthcare, media, and law—all while keeping complexity to a minimum.
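The RAG orchestration described above reduces, at its core, to three steps: embed a query, retrieve the most similar documents, and assemble them into a prompt for an LLM. Here is a dependency-free sketch of that retrieval loop; the toy two-dimensional embeddings and helper names are purely illustrative and are not deepset's Haystack API.

```python
import math

# Toy sketch of the retrieval step in a RAG pipeline: rank documents by
# cosine similarity to the query embedding, then build an LLM prompt from
# the top hits. A real system would use a learned embedding model and a
# vector store such as MongoDB Atlas Vector Search.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, docs, k=2):
    """docs: list of (text, embedding) pairs. Return the k most similar texts."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, contexts):
    joined = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{joined}\nQuestion: {question}"

docs = [
    ("Refunds are processed within 5 days.", [0.9, 0.1]),
    ("Our office is in Berlin.", [0.1, 0.9]),
    ("Refund requests need an order ID.", [0.8, 0.3]),
]
contexts = retrieve([1.0, 0.2], docs, k=2)
print(build_prompt("How do refunds work?", contexts))
```

Frameworks like Haystack wrap each of these steps (embedder, retriever, prompt builder, generator) as swappable pipeline components, which is what makes the drag-and-drop configuration described above possible.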
Data security and compliance: A foundational priority

As organizations adopt advanced AI technologies, protecting sensitive data is paramount, and MongoDB Atlas and deepset offer robust protections to safeguard data integrity. MongoDB and deepset provide industry-standard security measures such as encryption, access controls, and auditing, along with compliance certifications like ISO 27001, SOC 2, and CSA STAR. These measures help ensure that sensitive data is handled with care and that client information remains secure, supporting businesses in meeting their regulatory obligations across different sectors. Incorporating MongoDB Atlas into AI solutions allows enterprises using deepset's RAG and agent capabilities to confidently manage and protect data, ensuring compliance and reliability while maintaining operational excellence.

Shaping the future of AI-powered innovation

The partnership between MongoDB and deepset is more than a collaboration—it's a driving force for innovation. By merging cutting-edge language processing capabilities with the robust, scalable infrastructure of MongoDB Atlas, this alliance is empowering organizations to create tomorrow's AI applications, today. Whether it’s intelligent chatbots, personalized platforms, or complex workflow automations, MongoDB and deepset are paving the way for businesses to unlock new levels of efficiency and insight.

At the core of this partnership is deepset’s advanced RAG and agent technology, which enables efficient language processing and precise query resolution—essential components for developing sophisticated AI solutions. Complementing this is MongoDB’s reliable cloud database technology, providing unmatched scalability, fault tolerance, and the ability to effortlessly craft robust applications. The seamless integration of these technologies offers developers a powerful toolkit to create applications that prioritize fast time to value, innovation, and precision.
MongoDB’s infrastructure ensures security, reliability, and efficiency, freeing developers to focus their efforts on enhancing application functionality without worrying about foundational stability. Through this strategic alliance, MongoDB and deepset are empowering developers to push the boundaries of intelligent application development. Together, they are delivering solutions that are not only highly responsive and innovative but also expertly balanced across security, reliability, and efficiency—meeting the demands of today’s dynamic markets with confidence.

Jumpstart your journey

Dive into deepset's comprehensive guide on RAG integration with MongoDB Atlas. Then get started with deepset Studio Edition (free) to start building. Transform your data experience and redefine the way you interact with information today! Learn more about MongoDB and deepset's partnership through our partner ecosystem page.

June 24, 2025
Artificial Intelligence

PointHealth AI: Scaling Precision Medicine for Millions

For years, the healthcare industry has grappled with a persistent, frustrating challenge: the absence of a unified, precise approach to patient treatment. Patients often endure "trial-and-error prescribing," leading to delayed recovery and a system bogged down by inefficiency. The core problem lies in scaling precision medicine—making advanced, individualized care accessible to millions of people. This was the big obstacle that Rachel Gollub, CTO and co-founder of the VC-backed startup PointHealth AI, set out to overcome. With a vision to integrate precision medicine into mainstream healthcare, Gollub and her team are transforming how care is delivered, a mission significantly bolstered by their pivotal partnership with MongoDB.

Uncovering the gaps in healthcare treatment decisions

Over a decade working within the insurance industry, Gollub and her co-founder, Joe Waggoner, observed a frustrating reality: persistent gaps in how treatment decisions were made. This wasn't just about inefficiency; it directly impacted patients, who often experienced "trial-and-error prescribing" that delayed their recovery. As Gollub states, they witnessed "the frustrating gaps in treatment decision-making." It motivated them to seek a better solution.

The fundamental challenge they faced was scaling precision medicine: how could something so powerful be made accessible to millions rather than just a select few hundred? The biggest obstacle wasn't solely the technology itself; it was seamlessly integrating that technology into existing healthcare workflows.

How PointHealth AI eliminates treatment guesswork

PointHealth AI's approach centers on a proprietary AI reinforcement learning model. This system analyzes a range of data, including similar patient cases, detailed medical histories, drug interactions, and pharmacogenomic insights. When a physician enters a diagnosis into their health record system, PointHealth AI generates a comprehensive patient report.
This report offers tailored treatments, actionable insights, and clinical considerations, all designed to guide decision-making. Gollub explains the company’s mission: "to integrate precision medicine into mainstream healthcare, ensuring every diagnosis leads to the right treatment from the start." Its focus is on "eliminating guesswork and optimizing care from the very first prescription." The objective is "to deliver personalized, data-driven treatment recommendations."

Its strategy for implementation involves direct partnerships with insurance companies and employers. By embedding its technology directly into these healthcare workflows, PointHealth AI aims to ensure widespread accessibility across the entire system. It’s also collaborating with health systems, electronic health record (EHR) companies, and other insurers.

The natural choice: Why PointHealth AI chose MongoDB Atlas

A significant enabler of this progress has been PointHealth AI's partnership with MongoDB. Gollub's prior experience with both self-hosted and managed MongoDB provided confidence in its performance and reliability, making MongoDB Atlas a "natural choice" when selecting a data platform for PointHealth AI. It offered the features the team was looking for, including vector search, text search, and managed scalability. The provision of Atlas credits also swayed the decision.

PointHealth AI had specific requirements for its data platform: it needed "high security, HIPAA compliance, auto-scaling, fast throughput, and powerful search capabilities." That MongoDB Atlas provided these features within a single, managed solution was a major advantage. MongoDB Atlas ensures seamless backups and uptime through its managed database infrastructure, and its vector and text search capabilities are critical for effectively training AI models. The scaling experience has been "seamless," according to Gollub, and the MongoDB team has offered "invaluable guidance in architecting a scalable system."
This support has enabled PointHealth AI to optimize for performance while remaining on budget. Gollub emphasizes that "HIPAA compliance, scalability, expert support, and advisory sessions have all played critical roles in shaping our infrastructure."

The MongoDB for Startups program has proven impactful. The "free technical advisor sessions provided a clear roadmap for our database architecture," while the Atlas credits offered flexibility, allowing the team to "fine-tune our approach without financial strain." Furthermore, the "invaluable expert recommendations and troubleshooting support from the MongoDB advisor team" have been a vital resource. Gollub extends a "huge thank you to the MongoDB Atlas team for their support in building and scaling our system, and handling such an unusual use case."

From pilots to Series A: PointHealth AI's next steps

Looking forward, PointHealth AI has an ambitious roadmap for the current year. Its focus includes launching pilot installations and expanding partnerships with insurance and EHR companies. It’s also dedicated to refining its AI model to support a wider range of health conditions beyond depression. The overarching goal is to bring "precision-driven treatment recommendations to physicians and patients." The aim, Gollub said, is to "launch successful pilots, acquire new customers, and complete our Series A round."

As Gollub states, "Precision medicine isn’t the future—it’s now." The team possesses the technology to deliver targeted treatment options, aiming to ensure patients receive the correct care from the outset. Their vision is to shape a healthcare system where personalized treatments are the standard.

Visit PointHealth AI to learn more about how this innovative startup is making advanced, individualized care accessible to millions. Join the MongoDB for Startups program to start building faster and scaling further with MongoDB!

June 11, 2025
Artificial Intelligence

Enhancing AI Observability with MongoDB and Langtrace

Building high-performance AI applications isn’t just about choosing the right models—it’s also about understanding how they behave in real-world scenarios. Langtrace offers the tools necessary to gain deep insights into AI performance, ensuring efficiency, accuracy, and scalability.

San Francisco-based Langtrace AI was founded in 2024 with a mission of providing cutting-edge observability solutions for AI-driven applications. While still in its early stages, Langtrace AI has rapidly gained traction in the developer community, positioning itself as a key player in AI monitoring and optimization. Its open-source approach fosters collaboration, enabling organizations of all sizes to benefit from advanced tracing and evaluation capabilities.

The company’s flagship product, Langtrace AI, is an open-source observability tool designed for building applications and AI agents that leverage large language models (LLMs). Langtrace AI enables developers to collect and analyze traces and metrics, optimizing performance and accuracy. Built on OpenTelemetry standards, Langtrace AI offers real-time tracing, evaluations, and metrics for popular LLMs, frameworks, and vector databases, with integration support for both TypeScript and Python.

Beyond its core observability tools, Langtrace AI is continuously evolving to address the challenges of AI scalability and efficiency. By leveraging OpenTelemetry, the company ensures seamless interoperability with various observability vendors. Its strategic partnership with MongoDB enables enhanced database performance tracking and optimization, ensuring that AI applications remain efficient even under high computational loads.

Langtrace AI's technology stack

Langtrace AI is built on a streamlined—yet powerful—technology stack, designed for efficiency and scalability. Its SDK integrates OpenTelemetry libraries, ensuring tracing without disruptions.
On the backend, MongoDB works with the rest of the tech stack to manage metadata and trace storage effectively. On the client side, Next.js powers the interface, utilizing cloud-deployed API functions to deliver robust performance and scalability.

Figure 1. How Langtrace AI uses MongoDB Atlas to power AI traceability and feedback loops

"We have been a MongoDB customer for the last three years and have primarily used MongoDB as our metadata store. Given our longstanding confidence in MongoDB's capabilities, we were thrilled to see the launch of MongoDB Atlas Vector Search and quickly integrated it into our feedback system, which is a RAG (retrieval-augmented generation) architecture that powers real-time feedback and insights from our users. Eventually, we added native support to trace MongoDB Atlas Vector Search to not only trace our feedback system but also to make it natively available to all MongoDB Atlas Vector Search customers by partnering officially with MongoDB."

Karthik Kalyanaraman, Co-founder and CTO, Langtrace AI

Use cases and impact

The integration of Langtrace AI with MongoDB has proven transformative for developers using MongoDB Atlas Vector Search. As highlighted in Langtrace AI's MongoDB partnership announcement, the collaboration equips users with the tools needed to monitor and optimize AI applications, enhancing performance by tracking query efficiency, identifying bottlenecks, and improving model accuracy. The partnership enhances observability within the MongoDB ecosystem, facilitating faster, more reliable application development.

Integrating MongoDB Atlas with advanced observability tools like Langtrace AI offers a powerful approach to monitoring and optimizing AI-driven applications. By tracing every stage of the vector search process—from embedding generation to query execution—MongoDB Atlas provides deep insights that allow developers to fine-tune performance and ensure smooth, efficient system operations.
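Conceptually, the tracing described here wraps each operation (an embedding call, a vector search, an LLM request) in a timed span that records a name, a duration, and a status. The following is a stdlib-only sketch of that idea; it is not Langtrace's actual SDK, which emits standard OpenTelemetry spans rather than appending to a list.

```python
import functools
import time

# Minimal stand-in for span-based tracing: record the name, duration, and
# status of each wrapped call in an in-memory list. A real observability
# SDK would export OpenTelemetry spans to a collector instead.

SPANS = []

def traced(name):
    """Decorator that records a span for every call to the wrapped function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "ok"
            try:
                return fn(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                SPANS.append({
                    "name": name,
                    "duration_ms": (time.perf_counter() - start) * 1000,
                    "status": status,
                })
        return wrapper
    return decorator

@traced("vector_search")
def fake_vector_search(query):
    # Stand-in for a call to a vector store such as Atlas Vector Search.
    return [f"doc matching {query}"]

fake_vector_search("esg risk")
print(SPANS[0]["name"], SPANS[0]["status"])  # vector_search ok
```

Because the span is recorded in a `finally` block, failures are captured too, which is what lets a tracing tool surface slow or erroring stages of a pipeline.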
To explore how Langtrace AI integrates with MongoDB Atlas for real-time tracing and optimization of vector search operations, check out this insightful blog by Langtrace AI, where they walk through the process in detail.

Opportunities for growth and the evolving AI ecosystem

Looking ahead, Langtrace AI is excited about the prospect of expanding its collaboration with MongoDB. As developers craft sophisticated AI agents using MongoDB Atlas, the partnership aims to equip them with the advanced tools necessary to fully leverage these powerful database solutions. Together, both companies support developers in navigating increasingly complex AI workflows efficiently.

As the AI landscape shifts toward non-deterministic systems with real-time decision-making, the demand for advanced observability and developer tools intensifies. MongoDB is pivotal in this transformation, providing solutions that optimize AI-driven applications and ensure seamless development as the ecosystem evolves.

Explore further

Interested in learning more about the Langtrace AI and MongoDB partnership?

- Discover the enriching capabilities Langtrace AI brings to developers within the MongoDB ecosystem.
- Learn about tracing MongoDB Atlas Vector Search with Langtrace AI to improve AI model performance.
- Access comprehensive documentation for integrating Langtrace AI with MongoDB Atlas.

Start enhancing your AI applications today and experience the power of optimized observability. To learn more about building AI-powered apps with MongoDB, check out our AI Learning Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

June 9, 2025
Artificial Intelligence

Navigating the AI Revolution: The Importance of Adaptation

In 1999, Steve Ballmer gave a famous speech in which he said that the “key to industry transformation, the key to success is developers developers developers developers developers developers developers, developers developers developers developers developers developers developers! Yes!” A similar mantra applies when discussing how to succeed with AI: adaptation, adaptation, adaptation!

Artificial intelligence has already begun to transform how we work and live, and the changes AI is bringing to the world will only accelerate. Businesses rely ever more heavily on software to run and execute their strategies. So, to keep up with competitors, their processes and products must deliver what end-users increasingly expect: speed, ease of use, personalization—and, of course, AI features. Delivering all of these things (and doing so well) requires having the right tech stack and software foundation in place, and then successfully executing.

To better understand the challenges organizations adopting AI face, MongoDB and Capgemini recently worked with the research organization TDWI to assess the state of AI readiness across industries.

The road ahead

Based on a survey “representing a diverse mix of industries and company sizes,” TDWI’s “The State of Data and Operational Readiness for AI” contains many interesting findings. One I found particularly compelling is the percentage of companies with AI apps in production: businesses largely recognize the potential AI holds, but only 11% of survey respondents indicated that they had AI applications in production. Still only 11%!

“We’re well past the days of exploring whether AI is relevant. Now, every organization sees the value. The question is no longer ‘if’ but ‘how fast and how effectively’ they can scale it.”
Mark Oost, VP, AI and Generative AI Group Offer Leader, Capgemini

There’s clearly work to be done; data readiness challenges highlighted in the report include managing diverse data types, ensuring accessibility, and providing sufficient compute power. Only 39% of companies surveyed manage newer data formats, and only 41% feel they have enough compute.

The report also shows how much AI has changed the very definition of software, and how software is developed and managed. Specifically, AI applications continuously adapt; they learn and respond to end-user behavior in real time, and they can autonomously make decisions and execute tasks. All of this depends on having a solid, flexible software foundation. Because the agility and adaptability of software are intrinsically linked to the data infrastructure upon which it's built, rigid legacy systems cannot keep pace with the demands of AI-driven change. So modern database solutions (like, ahem, MongoDB)—built with change in mind—are an essential part of a successful AI technology stack.

Keeping up with change

The tech stack can be said to comprise three layers: at the “top,” the interface or user experience layer; then the business logic layer; and a data foundation at the bottom. With AI, the same layers are there, but they’ve evolved: unlike traditional software applications, AI applications are dynamic. Because AI-enriched software can reason and learn, the demands placed on the stack have changed. For example, AI-powered experiences include natural language interfaces, augmented reality, and experiences that anticipate user needs by learning from other interactions (and from data). In contrast, traditional software is largely static: it requires inputs or events to execute tasks, and its logic is limited by predefined rules.
A database underpinning AI software must, therefore, be flexible and adaptable, able to handle all types of data; it must enable high-quality data retrieval; it must respond instantly to new information; and it has to deliver the core requirements of all data solutions: security, resilience, scalability, and performance. To take action and generate trustworthy, reliable responses, AI-powered software needs access to up-to-date, context-rich data. Without the right data foundation in place, even the most robust AI strategy will fail.

Figure 1. The frequency of change across eras of technology.

Keeping up with AI can be head-spinning, both because of the many players in the space (the number of AI startups has jumped sharply since 2022, when ChatGPT was first released¹), and because of the accelerating pace of AI capabilities. Organizations that want to stay ahead must evolve faster than ever. As the figure above dramatically illustrates, this sort of adaptability is essential for survival.

Execution, execution, execution

But AI success requires more than just the right technology: expert execution is critical. Put another way, the difference between success and failure when adapting to any paradigm shift isn’t just having the right tools; it’s knowing how to wield those tools. So, while others experiment, MongoDB has been delivering real-world successes, helping organizations modernize their architectures for the AI era and building AI applications with speed and confidence.

For example, MongoDB teamed up with the Swiss bank Lombard Odier to modernize its banking tech systems. We worked with the bank to create customizable generative AI tooling, including scripts and prompts tailored for the bank’s unique tech stack, which accelerated its modernization by automating integration testing and code generation for seamless deployment.
And, after Victoria’s Secret transformed its database architecture with MongoDB Atlas, the company used MongoDB Atlas Vector Search to power an AI-powered visual search system that makes targeted recommendations and helps customers find products.

Another way MongoDB helps organizations succeed with AI is by offering access to both technology partners and professional services expertise. For example, MongoDB has integrations with companies across the AI landscape—including leading tech companies (AWS, Google Cloud, Microsoft), system integrators (Capgemini), and innovators like Anthropic, LangChain, and Together AI.

Adapt (or else)

In the AI era, what organizations need to do is abundantly clear: modernize and adapt, or risk being left behind. Just look at the history of smartphones, which have had an outsized impact on business and communication. In its Q4 2007 report (which came out a few months after the first iPhone’s release), Apple reported revenue of $6.22 billion, of which iPhone sales comprised less than 2%²; in Q1 2025, the company reported revenue of $124.3 billion, of which 56% was iPhone sales.³ The mobile application market is now estimated to be in the hundreds of billions of dollars, and there are more smartphones than there are people in the world.⁴ The rise of smartphones has also led to a huge increase in the number of people globally who use the internet.⁵

However, saying “you need to adapt!” is much easier said than done. TDWI’s research, therefore, is both important and useful—it offers companies a roadmap for the future and helps them answer their most pressing questions as they confront the rise of AI.

Click here to read the full TDWI report. To learn more about how MongoDB can help you create transformative, AI-powered experiences, check out MongoDB for Artificial Intelligence.

P.S. ICYMI, here’s Steve Ballmer’s famous “developers!” speech.
1. https://ourworldindata.org/grapher/newly-funded-artificial-intelligence-companies
2. https://www.apple.com/newsroom/2007/10/22Apple-Reports-Fourth-Quarter-Results/
3. https://www.apple.com/newsroom/pdfs/fy2025-q1/FY25_Q1_Consolidated_Financial_Statements.pdf
4. https://www.weforum.org/stories/2023/04/charted-there-are-more-phones-than-people-in-the-world/
5. https://ourworldindata.org/grapher/number-of-internet-users

June 4, 2025