MongoDB Blog
Announcements, updates, news, and more
MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era
Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our global .local series. Over the past year, we’ve connected with tens of thousands of partners and customers in 20 cities worldwide. But it’s especially meaningful to be in New York—where MongoDB was founded and where we are still headquartered.

During the event, we introduced new capabilities that advance MongoDB’s position as the world’s leading modern database. With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. We also shared more about our Voyage AI embedding models and rerankers, which bring state-of-the-art accuracy and efficiency to building trustworthy, reliable AI applications. And with Search and Vector Search now in public preview for both MongoDB Community Edition and Enterprise Server, we are putting powerful retrieval capabilities directly into customers’ environments—wherever they prefer to run.

I am particularly excited about the launch of the MongoDB Application Modernization Platform, or AMP. Enterprises everywhere are grappling with the massive costs of legacy systems that cannot support the demands of AI. AMP is not a simple “lift-and-shift.” It is a repeatable, end-to-end platform that combines AI-powered tooling, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises moving from old systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are accelerating by an order of magnitude.

Figure 1. MongoDB.local NYC keynote. Watch the full keynote on YouTube.

Becoming the world’s most popular modern database

When I reflect on MongoDB’s journey, I’m struck by how far we’ve come. When I joined just over a decade ago, we had only a few thousand customers.
Today, MongoDB serves nearly 60,000 organizations across every industry and vertical, including more than 70% of the Fortune 500 and cutting-edge AI-native startups. Yet the reason behind our growth remains the same. Relational databases built in the 1970s were never designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who had lived those limitations first-hand while building DoubleClick, set out to create something better: a database designed for the realities of the modern world. The document model was born.

Based on JSON, the document model is intuitive, flexible, and powerful. It allows developers to represent complex, interdependent, and constantly changing data in a natural way. And, as we enter the era of AI, those same qualities—adaptability, scalability, and security—are more critical than ever. The database a company chooses will be one of the most strategic decisions determining the success of its AI initiatives.

Generative AI applications have already begun delivering productivity gains, writing code, drafting documents, and answering questions. But the real transformation lies ahead with agentic AI—applications that perceive, decide, and act. These intelligent agents don’t just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that allows agents to perceive context, the facts that allow them to decide intelligently, and the state that enables them to act coherently. This is why a company’s data is its most valuable asset. Large language models (LLMs) may generate responses, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not only about reasoning—it is about context, memory, and the power of your data.

The ideal database for transformative AI

So what does the ideal database for agentic AI look like?
It must reflect today’s complexity and tomorrow’s change. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings—not just exact matching but meaning and intent. It must bridge private data and LLMs with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to power mission-critical applications at a global scale. This is precisely what MongoDB delivers. We don’t simply check the boxes on this list—we define them.

We’re only just getting started

That’s why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always fueled this company. As our customers continue to innovate, I know MongoDB is in the perfect position to help them succeed in the AI era. We can’t wait to see what you build next. To see more announcements and for the latest product updates, visit our What’s New page. And head to the MongoDB.local hub to see where we’ll be next.
Celebrating Excellence: MongoDB Global Partner Awards 2025
In a world being reshaped by AI and rapid technological change, one thing is clear: our partners are powering the future with MongoDB. Together, we help customers modernize legacy systems, solve challenges from security to budget constraints, and build the next wave of AI-powered applications. That’s why we’re proud to announce the annual MongoDB Global Partner Awards — celebrating partners who led the way in 2025. From pioneering AI and modernization to advancing public sector innovation to building bold go-to-market collaborations, these partners set the standard for excellence. Their leadership doesn’t just move the needle — it redefines what’s possible.

Global Cloud Partner of the Year: Microsoft

We are proud to recognize Microsoft for exceptional year-over-year growth as MongoDB’s Global Cloud Partner of the Year. Together, MongoDB and Microsoft have delivered strong momentum across industries such as healthcare, telecommunications, and financial services, helping organizations build great applications that deliver exceptional customer experiences. Microsoft’s deep commitment to collaboration, customer success, and cloud leadership makes it an indispensable part of MongoDB’s partner ecosystem. The strength of the partnership continues to grow; in fact, MongoDB was recently selected as a Microsoft partner for a “Unify your data solution play,” which enables customers to benefit from the joint integrations and go-to-market (GTM) resources between MongoDB Atlas on Azure and native Microsoft services.

Global AI Cloud Partner of the Year: Amazon Web Services (AWS)

AWS has been a driving force in helping customers unlock the full potential of AI with MongoDB, highlighted by our work with Novo Nordisk, who leveraged Amazon Bedrock and MongoDB Atlas to build an AI solution that cut one of their most time-intensive workflows from 12 weeks to 10 minutes.
The work with Novo Nordisk is just one example of many that showcases the power of our partnership to create business differentiation for customers in the gen AI era. MongoDB was also a generative AI Competency launch partner for AWS, further tightening our collaboration in AI. From breakthrough generative AI use cases and beyond, our partnership empowers organizations to move faster, innovate more boldly, and transform with confidence. Together, AWS and MongoDB are shaping what’s possible in the AI era.

Global Cloud GTM Partner of the Year: Google Cloud

Google Cloud is being honored for accelerating new business through impactful joint GTM initiatives. MongoDB's partnership with Google Cloud has set the standard for meaningful collaboration—driving new business and delivering impact across some of the world’s most complex global enterprises. The joint Google Cloud and MongoDB Sales Development Representative program has been the cornerstone of this success, ensuring early-stage talent gets the opportunity to work with the largest organisations in the world whilst learning a sales playbook that will serve them well for the rest of their careers. Google Cloud continues to be a driving force in MongoDB’s global growth thanks to its joint commitment to innovative GTM strategies.

Global Systems Integrator Partner of the Year: Accenture

Accenture has demonstrated exceptional commitment as a Global SI Partner, establishing a dedicated center of excellence for MongoDB within its software engineering service line. Together, MongoDB and Accenture have delivered transformative customer outcomes across industries, from payment modernization for a leading bank to data transformation for a major manufacturer. Meanwhile, closer collaboration with Accenture’s BFSI business unit has continued to fuel global customer success.
By combining MongoDB’s modern database platform with Accenture’s deep industry expertise, our partnership continues to help customers modernize, unlock data-driven insights, and accelerate digital transformation at enterprise scale.

Global Public Sector Partner of the Year: Accenture Federal Services

Accenture Federal Services has played a pivotal role in advancing MongoDB’s presence in the public sector. Thanks to its scale, expertise, and focus on customer outcomes, it has driven remarkable year-over-year growth and has supported critical government missions in coordination with MongoDB. MongoDB and Accenture Federal Services are helping government agencies meet their efficiency goals by modernizing legacy applications, seamlessly consolidating platforms, and streamlining architectures, all while reducing costs. We are excited to have Accenture Federal Services as a key sponsor of our inaugural MongoDB Public Sector Summit in January 2026.

Global Tech Partner of the Year: Confluent

Confluent—the data streaming platform built by the co-creators of Apache Kafka®—continues to be a strategic partner, with more than 550 joint customer deployments delivering impact across industries worldwide. Over the past year, MongoDB and Confluent have strengthened global go-to-market (GTM) alignment, focusing on accelerating co-sell engagement across EMEA and APAC. Together, MongoDB and Confluent have delivered gen AI quickstarts, no-code streaming demos, and co-authored agentic AI thought leadership to help customers accelerate innovation with data in motion and build event-driven AI applications. Our partnership is anchored in strong field collaboration, with ongoing co-sponsored AI workshops and hands-on developer events.
A standout highlight of our GTM collaboration was a joint gen AI Developer Day with Confluent and LangChain, where AI leaders engaged 80+ developers to showcase how our combined platforms enable cost-effective, explainable, and personalized multi-agent systems.

Global ISV Partner of the Year: BigID

BigID has remained a standout ISV partner for MongoDB, consistently delivering strong results for customers across financial services, insurance, and healthcare. Together, we have launched impactful joint GTM initiatives, from customer events to tailored incentive programs that have accelerated growth opportunities. BigID continues to be recognized as a leader in data security, privacy, and AI data management, and thanks to our close global alignment, is further strengthening MongoDB’s position as a trusted partner for organizations operating in highly regulated industries.

Global AI Tech Partner of the Year: LangChain

MongoDB’s partnership with LangChain has unlocked powerful new integrations that make it easier for developers to build retrieval-augmented generation (RAG) applications and intelligent agents on MongoDB. From hybrid search and parent document retrievers to short- and long-term memory capabilities, these joint solutions are helping developers push the boundaries of what’s possible with AI. Through joint workshops, webinars, and hands-on training, we have equipped developers with the tools and knowledge to adopt these capabilities at scale. Momentum continues to build rapidly, and adoption of both the LangChain/MongoDB and LangGraph/MongoDB packages continues to grow, highlighting the strength of our collaboration and the thriving developer ecosystem that MongoDB and LangChain are enabling together.

Global AI SI Partner of the Year: Pureinsights

Pureinsights accelerates intelligent search and AI application development with its powerful Discovery Platform.
A standout capability is its integration with Voyage AI by MongoDB, delivering advanced embeddings, multimodal embedding, and result reranking, earning recognition for its strong proof point track record and differentiated value in enterprise-grade use cases. With a focus on implementing generative AI, vector search, and RAG use cases, Pureinsights continues to empower clients to innovate quickly, reliably, and at scale.

Global Modernization Partner of the Year: gravity9

gravity9 has established itself as a trusted MongoDB partner by delivering consistent impact through modernization and jumpstart projects across industries and geographies, powered by AI. As a strategic implementation partner, gravity9 specializes in designing and delivering cloud-native, scalable solutions that help organizations modernize legacy systems, adopt new technologies, accelerate time-to-value, and prepare for the AI era. By combining deep technical expertise with an agile delivery model, gravity9 enables customers to unlock transformation opportunities, whether moving workloads to the cloud, building new AI experiences, or optimizing existing infrastructure. gravity9’s close collaboration with MongoDB’s Professional Services teams has generated consistently high customer ratings, demonstrating the quality and reliability of their work.

Global Impact Partner of the Year: IBM

IBM is being recognized with the Impact Partner of the Year award for its strategic contributions across a variety of large, industry-leading clients. IBM has played a critical role in securing large contracts with several multinational financial institutions and is investing more in expanding the partnership globally. The partnership continues to grow, including with Atlas and watsonx.ai, and increasing numbers of differentiated projects on IBM Z or LinuxONE infrastructure. IBM is a trusted vendor for large enterprises, and is a strategic partner to over 25% of MongoDB's largest customers.
Global Cloud - Certified DBaaS Partner of the Year: Alibaba

Alibaba Cloud has established itself as a strategic MongoDB partner by driving innovation with ApsaraDB for MongoDB and utilizing AI to help organizations build modern applications. With a strong focus on key verticals such as gaming, automotive, retail, and fintech, Alibaba Cloud is enabling enterprises to modernize faster and unlock new opportunities across industries. By combining cutting-edge data solutions with a bold global expansion strategy, Alibaba Cloud empowers customers worldwide to accelerate transformation, whether scaling digital platforms, delivering new customer experiences, or optimizing mission-critical workloads.

Looking ahead

Congratulations to all of the 2025 Global Partner Award winners! Their commitment to innovation, collaboration, and customer success has—and will have—a lasting impact on organizations worldwide. These awards not only recognize the past year’s achievements, but also underscore MongoDB’s vision for what we, alongside our partners, will build together in the future. To learn more about the MongoDB Partner Program, please visit our partners page.
The Future of AI Software Development is Agentic
Today in New York, our flagship MongoDB.local event is bringing together thousands of developers and tech leaders to discuss the future of building with MongoDB. Among the many exciting innovations and product announcements shared during the event, one theme has stood out: empowering developers to reliably build with AI and create AI solutions at scale on MongoDB. This post will explore how these advancements are set to accelerate developer productivity in the AI era.

Ship faster with the MongoDB MCP Server

Software development is rapidly evolving with AI tools powered by large language models (LLMs). From AI-driven editors like VS Code with GitHub Copilot and Windsurf, to terminal-based coding agents like Claude Code, these tools are transforming how developers work. While these tools already bring tremendous productivity gains, coding agents are still limited by the context they have. Since databases hold the core of most application-related data, access to configuration details, schemas, and sample data from databases is essential for generating accurate code and optimized queries. With Anthropic’s introduction of the Model Context Protocol (MCP) in November 2024, a new way emerged to connect AI agents with data sources and services. Database connection and interaction quickly became one of the most popular use cases for MCP in agentic coding.

Today, we’re excited to announce the general availability (GA) of the MongoDB MCP Server, giving AI assistants and agents access to the context they need to explore, manage, and generate better code with MongoDB. Building on our public preview used by thousands of developers, the GA release introduces key capabilities to strengthen production readiness:

- Enterprise-grade authentication (OIDC, LDAP, Kerberos) and proxy connectivity.
- Self-hosted remote deployment support, enabling shared deployments across teams, streamlined setup, and centralized configuration.
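Connecting an MCP-aware editor to the server is a matter of pointing it at the binary. The sketch below shows one plausible client configuration; the `mongodb-mcp-server` package name, flag names, and file location (for example, `.vscode/mcp.json` in VS Code) are assumptions here, so treat this as illustrative and check the Get Started guide for the exact, current options.

```json
{
  "servers": {
    "MongoDB": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server",
        "--connectionString",
        "mongodb://localhost:27017"
      ]
    }
  }
}
```

With a configuration like this in place, the assistant can list collections, inspect schemas, and run queries through the server instead of guessing at your data model.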
Note that we recommend following security best practices, such as implementing authentication for remote deployments. Accessible as a bundle with the MongoDB for VS Code extension, it delivers a complete experience: visually explore your database with the extension or interact with the same connection through your AI assistant, all without switching context.

Figure 1. Overview of the MongoDB MCP Server.

Meeting developers where they are with n8n and CrewAI integrations

AI is transforming how developers build with MongoDB, not just in coding workflows, but also in creating AI applications and agents. From retrieval-augmented generation (RAG) to powering agent memory, these systems demand a database that can handle diverse data types—such as unstructured text (e.g., messages, code, documents), vectors, and graphs—all while supporting comprehensive retrieval mechanisms at scale, like vector and hybrid search. MongoDB delivers this in a single, unified platform: the flexible document model supports the varied data agents need to store, while advanced, natively integrated search capabilities eliminate the need for separate vector databases. With Voyage AI by MongoDB providing state-of-the-art embedding models and rerankers, developers get a complete foundation for building intelligent agents without added infrastructure complexity.

As part of our commitment to making MongoDB as easy to use as possible, we’re excited to announce new integrations with n8n and CrewAI. n8n has emerged as one of the most popular platforms for building AI solutions, thanks to its visual interface and out-of-the-box components that make it simple and accessible to create reliable AI workflows. This integration adds official support for MongoDB Atlas Vector Search, enabling developers to build RAG and agentic RAG systems through a flexible, visual interface. It also introduces an agent chat memory node for n8n agents, allowing conversations to persist by storing message history in MongoDB.
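The pattern behind such a chat memory node is simple: each conversational turn is stored as a document keyed by session, and the agent reloads a session's turns in order before responding. The sketch below illustrates that pattern with plain Python; the collection layout and field names (`sessionId`, `role`, `content`, `ts`) are assumptions for illustration, not the n8n node's actual schema.

```python
# Hedged sketch of agent chat memory persisted in MongoDB: one document per
# conversational turn, keyed by session so history can be reloaded in order.
from datetime import datetime, timezone

def memory_doc(session_id, role, content):
    """Build one chat turn as a document (field names are illustrative)."""
    return {
        "sessionId": session_id,   # groups turns into a conversation
        "role": role,              # "user", "assistant", "tool", ...
        "content": content,
        "ts": datetime.now(timezone.utc),  # ordering key
    }

def history_query(session_id):
    """Filter and sort spec to reload a session's turns oldest-first."""
    return {"sessionId": session_id}, [("ts", 1)]

# Against a live deployment (not run here), this would look like:
# coll.insert_one(memory_doc("s1", "user", "What is RAG?"))
# filt, order = history_query("s1")
# turns = list(coll.find(filt).sort(order))
```

An index on `{sessionId: 1, ts: 1}` keeps the reload cheap as histories grow.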
Figure 2. Example workflow with n8n and MongoDB powering an AI agent.

Meanwhile, CrewAI—a fast-growing open-source framework for building and orchestrating AI agents—makes multi-agent collaboration more accessible to developers. As AI agents take on increasingly complex and productive workflows such as online research, report writing, and enterprise document analysis, multiple specialized agents need to interact and delegate tasks to each other effectively. CrewAI provides an easy and approachable way to build such multi-agent systems. Our official integration adds support for MongoDB Atlas Vector Search, empowering developers to build agents that leverage RAG at scale. Learn how to implement agentic RAG with MongoDB Atlas and CrewAI.

The future is agentic

AI is fundamentally reshaping the entire software development lifecycle, including for developers building with MongoDB. New technology like the MongoDB MCP Server is paving the way for database-aware agentic coding, representing the future of software development. At the same time, we’re committed to meeting developers where they are: integrating our capabilities into their favorite frameworks and tools so they can benefit from MongoDB’s reliability and scalability to build AI apps and agents with ease. Start building your applications with the MongoDB MCP Server today by following the Get Started guide. Visit the AI Learning Hub to learn more about building AI applications with MongoDB.
MongoDB Queryable Encryption Expands Search Power
Today, MongoDB is expanding the power of Queryable Encryption by introducing support for prefix, suffix, and substring queries. Now in public preview, these capabilities extend the technology beyond equality and range queries, unlocking broader use cases for secure, expressive search on encrypted data. Developed by the MongoDB Cryptography Research Group, Queryable Encryption is a groundbreaking, industry-first in-use encryption technology. It enables customers to encrypt sensitive application data, store it in encrypted form in the MongoDB database, and perform expressive queries directly on that encrypted data. This release provides organizations with the tools to perform flexible text searches on encrypted data, such as matching partial names, keywords, or identifiers, without ever exposing the underlying information. This helps strengthen data protection, simplify compliance, and remove the need for complex workarounds such as external search indexes, all without any changes to the application code.

With support for prefix, suffix, and substring queries, Queryable Encryption enables organizations to protect sensitive data throughout its lifecycle: at rest, in transit, and in use. As a result, teams can build secure, privacy-preserving applications without compromising functionality or performance. Queryable Encryption is available at no additional cost in MongoDB Atlas, Enterprise Advanced, and Community Edition.

Encryption: Securing data across its lifecycle

Many organizations must store and search sensitive data, such as personally identifiable information (PII) like names, Social Security numbers, or medical details, to power their applications. Implementing this securely presents real challenges. Encrypting data at rest and in transit is widely adopted and table stakes. However, encrypting data while it is actively being used, known as encryption in use, has historically been much harder to realize.
The dilemma is that traditional encryption makes data unreadable, preventing databases from running queries without first decrypting it. For instance, a healthcare provider may need to find all patients with diagnoses that include the word “diabetes.” However, without decrypting the medical records, the database cannot search for that term. To work around this, many organizations either leave sensitive fields unencrypted or use complex and less secure workarounds, such as building separate search indexes. Both approaches add operational overhead and increase the risk of unauthorized access. They also make it harder to comply with regulations like the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI-DSS), or the General Data Protection Regulation (GDPR), where violations can carry significant fines. To fully protect sensitive data and meet compliance requirements, organizations need the ability to encrypt data in use, in transit, and at rest without compromising operational efficiency.

Building secure applications with fewer tradeoffs

MongoDB Queryable Encryption solves this quandary. It protects sensitive data while eliminating the tradeoff between security and development velocity. Organizations can encrypt sensitive data, such as personally identifiable information (PII) or protected health information (PHI), while still running queries directly on that data without exposing it to the database server. With support for prefix, suffix, and substring queries (in public preview), Queryable Encryption enables MongoDB applications to encrypt sensitive fields such as names, email addresses, notes, and ID numbers while still performing native partial-match searches on encrypted data. This eliminates the impasse between protecting sensitive information and enabling essential application functionality.
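To make the shape of these partial-match queries concrete, the sketch below builds the two pieces involved: an encrypted-fields map that opts a string field into prefix search, and a find filter using the preview query operators. This is a hedged illustration only: the `prefixPreview` query type, its parameters, and the `$encStrStartsWith` operator follow the public preview and may change before general availability, and the field names are invented for the example.

```python
# Hedged sketch of Queryable Encryption partial-match query shapes
# (MongoDB 8.2 public preview; names subject to change before GA).

def encrypted_fields_map(key_id):
    """Encrypted-fields map enabling prefix queries on an encrypted string."""
    return {
        "fields": [
            {
                "path": "lastName",          # illustrative field
                "bsonType": "string",
                "keyId": key_id,             # data encryption key reference
                # Preview query type opting this field into prefix search;
                # length bounds constrain the searchable prefix sizes.
                "queries": [{
                    "queryType": "prefixPreview",
                    "strMinQueryLength": 2,
                    "strMaxQueryLength": 10,
                    "caseSensitive": True,
                    "diacriticSensitive": True,
                }],
            },
        ]
    }

def prefix_filter(prefix):
    """Find filter matching documents whose encrypted lastName starts with prefix."""
    return {"$expr": {"$encStrStartsWith": {
        "input": "$lastName",
        "prefix": prefix,
    }}}

# Against a live, QE-configured client (not run here):
# coll.find(prefix_filter("Smi"))   # e.g. retrieve users by last-name prefix
```

Suffix and substring lookups would follow the same pattern with the corresponding suffix/substring query types and operators.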
For business leaders, Queryable Encryption strengthens data protection, supports compliance requirements, and reduces the risk of data exposure. This helps safeguard reputation, avoid costly fines, and eliminate the need for complex third-party solutions. For developers, advanced encrypted search is built directly into MongoDB’s query language. This eliminates the need for code changes, external indexes, or client-side workarounds while simplifying architectures and reducing overhead.

Some examples of what organizations can now achieve:

- PII search for compliance and usability: Regulations such as GDPR and HIPAA mandate strict privacy of personal information. With prefix queries, teams can retrieve users by last name or email prefix while ensuring the underlying data remains encrypted. This makes compliance easier without reducing search functionality.
- Keyword filtering in support workflows: Customer service notes often contain sensitive details in free-text fields. With substring query support, teams can search encrypted notes for specific keywords, e.g. “refund,” “escalation,” or “urgent,” without exposing the contents of those notes.
- Secure ID validation: Identity workflows often rely on partial identifiers such as the last digits of a Social Security Number in the U.S., a National Insurance Number in the UK, or an Aadhaar Number in India. Suffix queries enable these lookups on encrypted fields without revealing full values, reducing the risk of data leaks in regulated environments.
- Case management for public agencies: Case numbers and reference IDs in public sector applications often follow structured formats. Agencies can now securely retrieve records using a prefix query on region- or office-based prefixes, e.g. “NYC-” or “EUR-”, without exposing sensitive case metadata.

Note: This functionality is in public preview.
Therefore, MongoDB recommends that these new Queryable Encryption features not be used for production workloads until they are generally available in 2026. MongoDB wants to build and improve Queryable Encryption with customer needs and use cases in mind. As general availability approaches, customers are encouraged to contact their account team or share feedback through the MongoDB Feedback Engine.

Robust data protection at every stage

With Queryable Encryption, MongoDB offers unmatched protection for sensitive data throughout its entire lifecycle: in transit, at rest, and in use. With the addition of prefix, suffix, and substring query support, Queryable Encryption meets even more of the demands of modern applications, unlocking new use cases. To learn more about Queryable Encryption and how it works, explore the features documentation page. To get started using Queryable Encryption, read the Quick Start Guide.
Supercharge Self-Managed Apps With Search and Vector Search Capabilities
MongoDB is excited to announce the public preview of search and vector search capabilities for use with MongoDB Community Edition and MongoDB Enterprise Server. These new capabilities empower developers to prototype, iterate, and build sophisticated, AI-powered applications directly in self-managed environments with robust search functionality.

Versatility is one of the reasons why developers love MongoDB: it can run anywhere, from local setups where many developers kickstart their MongoDB journey, to the largest enterprise data centers when it is time to scale, to MongoDB’s fully managed cloud service, MongoDB Atlas. Regardless of where development takes place, MongoDB effortlessly integrates with any developer's workflow. MongoDB Community Edition is the free, source-available version of MongoDB that millions of developers use to learn, test, and grow their skills. MongoDB Enterprise Server is the commercial version of MongoDB’s core database. It offers additional enterprise-grade features for companies that prefer to self-manage their deployments on-premises or in public, private, or hybrid cloud environments. With native search and vector search capabilities now available for use with Community Edition and Enterprise Server, MongoDB aims to deliver a simpler and more consistent experience for building great applications wherever they are deployed.

What are search and vector search?

Similar to the offerings in MongoDB Atlas, MongoDB Community Edition and MongoDB Enterprise Server now support two distinct yet complementary search capabilities:

- Full-text search is an embedded capability that delivers a seamless, scalable experience for building relevance-based app features.
- Vector search enables developers to build intelligent applications powered by semantic search and generative AI using native, full-featured vector database capabilities.

There are no functional limitations on the core search aggregation stages in this public preview.
Therefore, $search, $searchMeta, and $vectorSearch are all supported with functional parity to what is available in Atlas, excluding features in a preview state. For more information, check out the search and vector search documentation pages.

Solving developer challenges with integrated search

Historically, integrating advanced search features into self-managed applications often required bolting on external search engines or vector databases to MongoDB. This approach created friction at every stage for developers and organizations, leading to:

- Architectural complexity: Managing and synchronizing data across multiple, disparate systems added layers of complexity, demanded additional skills, and complicated development workflows.
- Operational overhead: Handling separate provisioning, security, upgrades, and monitoring for each system placed a heavy load on DevOps teams.
- Decreased developer productivity: Developers were forced to learn and use different query APIs and languages for both the database and the search engine, resulting in frequent context switching, steeper learning curves, and slower release cycles.
- Consistency challenges: Aligning the primary database with separate search or vector indexes risked producing out-of-sync results. Despite claims of transactional guarantees and data consistency, these indexes were only eventually consistent, leading to incomplete results in rapidly changing environments.

With search and vector search now integrated into MongoDB Community Edition and MongoDB Enterprise Server, these trade-offs disappear. Developers can now create powerful search capabilities using MongoDB's familiar query framework, removing the synchronization burden and the need to manage multiple single-purpose systems. This release simplifies data architecture, reduces operational overhead, and accelerates application development.
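Because these are ordinary aggregation stages, self-managed deployments use the same pipelines as Atlas. The sketch below builds a $vectorSearch pipeline and a $search pipeline as plain data; the index names (`vector_index`, `default`) and field paths (`embedding`, `text`) are assumptions for illustration, so adjust them to your own index definitions.

```python
# Hedged sketch: Atlas-parity aggregation pipelines for a self-managed
# deployment. Index names and field paths are illustrative assumptions.

def vector_search_pipeline(query_vector, limit=5):
    """Approximate-nearest-neighbor retrieval over an embedding field."""
    return [
        {"$vectorSearch": {
            "index": "vector_index",      # assumed vector index name
            "path": "embedding",          # assumed vector field
            "queryVector": query_vector,  # embedding of the user query
            "numCandidates": limit * 20,  # oversample candidates for recall
            "limit": limit,
        }},
        # Surface the similarity score alongside the document text.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

def text_search_pipeline(term, limit=5):
    """Relevance-ranked full-text retrieval via $search."""
    return [
        {"$search": {
            "index": "default",           # assumed search index name
            "text": {"query": term, "path": "text"},
        }},
        {"$limit": limit},
    ]

# Against a live deployment (not run here):
# from pymongo import MongoClient
# coll = MongoClient("mongodb://localhost:27017").mydb.docs
# hits = list(coll.aggregate(vector_search_pipeline([0.1] * 1024)))
```

The same pipelines run unchanged against Atlas, which is the point of the parity claim: code written locally against Community Edition moves to a managed cluster without rewriting queries.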
With these capabilities, developers can harness sophisticated out-of-the-box capabilities to build a variety of powerful applications. Potential use cases include:

Keyword/full-text search
- Autocomplete and fuzzy search: Create real-time suggestions and correct spelling errors as users type, improving the search experience.
- Search faceting: Apply quick filtering options in applications like e-commerce, so users can narrow down search results based on categories, price ranges, and more.
- Internal search tools: Build search tools for internal use or for applications with sensitive data that require on-premises deployment.

Vector search
- AI-powered semantic search: Implement semantic search and recommendation systems to provide more relevant results than traditional keyword matching.
- Retrieval-augmented generation (RAG): Use search to retrieve factual data from a knowledge base to bring accurate, context-aware data into large language model (LLM) applications.
- AI agents: Create agents that utilize tools to collect context, communicate with external systems, and execute actions.

Hybrid search
- Hybrid search: Combine keyword and vector search techniques.

Data processing
- Text analysis: Perform text analysis directly in the MongoDB database.

MongoDB offers native integrations with frameworks such as LangChain, LangGraph, and LlamaIndex. This streamlines workflows, accelerates development, and embeds RAG or agentic features directly into applications. To learn more about other AI frameworks supported by MongoDB, check out this documentation.

MongoDB’s partners and champions are already experiencing the benefits from utilizing search and vector search across a wider range of environments:

“We’re thrilled that MongoDB search and vector search are now accessible in the already popular MongoDB Community Edition.
Now our customers can leverage MongoDB and LangChain in either deployment mode and in their preferred environment to build cutting-edge LLM applications.”—Harrison Chase, CEO, LangChain. “MongoDB has helped Clarifresh build awesome software, and I’ve always been impressed with its rock-solid foundations. With search and vector search capabilities now available in MongoDB Community Edition, we gain the confidence of accessible source code, the flexibility to deploy anywhere, and the promise of community-driven extensibility. It’s an exciting milestone that reaffirms MongoDB’s commitment to developers.”—Luke Thompson, MongoDB Champion, Clarifresh. “We’re excited about the next iteration of search experiences in MongoDB Community Edition. Our customers want the highest flexibility to be able to run their search and gen AI-enabled applications, and bringing this functionality to Community unlocks a whole new way to build and test anywhere.”—Jerry Liu, CEO, LlamaIndex. “Participating in the Private Preview of Full-text and Vector Search for MongoDB Community has been an exciting opportunity. Having $search, $searchMeta, and $vectorSearch directly in Community Edition brings the same powerful capabilities we use in Atlas—without additional systems or integrations. Even in early preview, it’s already streamlining workflows and producing faster, more relevant results.”—Michael Höller, MongoDB Champion, akazia Consulting. Accessing the public preview The public preview is available for free and is intended for testing, evaluation, and feedback purposes only. Search and Vector Search with MongoDB Community Edition. The new capabilities are compatible with MongoDB version 8.2+ and operate on a separate binary, mongot, which interacts with the standard mongod database binary. To get started, ensure that a MongoDB Community Server cluster is running using one of the following three methods:
- Download MongoDB Community Server version 8.2 from the MongoDB Downloads page.
As of public preview, this feature is available for self-managed deployments on supported Linux distributions and architectures for MongoDB Community Edition version 8.2+.
- Download the mongot binary from the MongoDB Downloads page.
- Pull the container image for Community Server 8.2 from a public Docker Hub repository.
- Coming soon: Deploy using the MongoDB Controllers for Kubernetes Operator (Search support for Community Server is planned for version 1.5+).
Search and Vector Search for use with MongoDB Enterprise Server. The new capabilities are deployed as self-managed search nodes in a customer's Kubernetes environment. These connect seamlessly to any MongoDB Enterprise Server cluster, residing inside or outside Kubernetes itself. To get started, ensure that:
- A MongoDB Enterprise Server cluster is running: version 8.0.10+ (for MongoDB Controllers for Kubernetes Operator 1.4) or version 8.2+ (for MongoDB Controllers for Kubernetes Operator 1.5+).
- A Kubernetes environment is available.
- The MongoDB Controllers for Kubernetes Operator is installed in the Kubernetes cluster. Find installation instructions here.
Comprehensive documentation for setup for MongoDB Community Edition and MongoDB Enterprise Server is also available. What's next? During the public preview, MongoDB will deliver additional updates and roadmap features based on customer feedback. After the public preview, these search and vector search capabilities are anticipated to be generally available for use with on-premises deployments. For Community Edition, these capabilities will be available at no additional cost as part of the Server Side Public License (SSPL). For MongoDB Enterprise Server, these capabilities will be included in a new paid subscription offering that will launch in the future. Pricing and packaging details for the subscription will be available closer to launch.
For developers seeking a fully managed experience in the cloud, MongoDB Atlas offers a production-ready version of these capabilities today. MongoDB would love to hear feedback! Suggest new features or vote on existing ideas at feedback.mongodb.com. This input is critical for shaping the future of this product. Users can contact their MongoDB account team to provide more comprehensive feedback. Check out MongoDB’s documentation to learn how to get started with Search and Vector Search in MongoDB Community Edition and MongoDB Enterprise Server. 1 MongoDB can be deployed as a fully managed multi-cloud service across all major public cloud providers, in private clouds, on-premises, and in hybrid environments.
Unlock AI With MongoDB and LTIMindtree’s BlueVerse Foundry
Many enterprises are eager to capitalize on gen AI to transform operations and stay competitive, but most remain stuck in proofs of concept that never scale. The problem isn’t ambition. It’s architecture. Rigid legacy systems, brittle pipelines, and fragmented data make it hard to move from idea to impact. That’s why LTIMindtree partnered with MongoDB to create BlueVerse Foundry : a no-code, full-stack AI platform powered by MongoDB Atlas , built to help enterprises quickly go from prototype to production without compromising governance, performance, or flexibility. The power of MongoDB: Data without limits At the heart of this platform is MongoDB Atlas, a multi-cloud database that redefines how enterprises manage and use data for AI. Unlike traditional relational databases, MongoDB’s document model adapts naturally to complex, evolving data, without the friction of rigid schemas or heavy extract, transform, and load pipelines. For AI workloads that rely on diverse formats like vector embeddings, images, or audio, MongoDB is purpose built. Its real-time data capabilities eliminate delays and enable continuous learning and querying. Search is another differentiator. With MongoDB Atlas Search and Atlas Vector Search , MongoDB enables enterprises to combine semantic and keyword queries for highly accurate, context-aware results. GraphRAG adds another layer, connecting relationships in data through retrieval-augmented generation (RAG) to reveal deeper insights. Features like semantic caching ensure performance remains high even under pressure, while built-in support for both public and private cloud deployments makes it easy to scale. Together, these capabilities turn MongoDB from a data store into an AI acceleration engine, supporting everything from retrieval to real-time interaction to full-stack observability. The challenge: Building with limitations Traditional systems were never designed for the kind of data modern AI requires. 
As enterprises embrace gen AI models that integrate structured and unstructured data, legacy infrastructure shows its cracks. Real-time processing becomes cumbersome, multiple environments create redundancy, and rising computing needs inflate costs. Building AI solutions often demands complex coding, meticulous model training, and extensive infrastructure planning, resulting in a delayed time to market. Add to that the imperative of producing responsible AI, and the challenge becomes even steeper. Models must not only perform but also be accurate, unbiased, and aligned with ethical standards. Enterprises are left juggling AI economics, data security, lineage tracking, and governance, all while trying to deliver tangible business value. This is precisely why a flexible, scalable, and AI-ready data foundation like MongoDB is critical. Its ability to handle diverse data types and provide real-time access directly addresses the limitations of traditional systems when it comes to gen AI. The solution: A smarter way to scale AI With BlueVerse Foundry and MongoDB Atlas, enterprises get the best of both worlds: LTIMindtree’s rapid no-code orchestration and MongoDB’s flexible, scalable data layer. This joint solution eliminates common AI bottlenecks and accelerates deployment, without the need for complex infrastructure or custom code. BlueVerse Foundry’s modular, no-code architecture enables enterprises to quickly build, deploy, and scale AI agents and apps without getting bogged down by technical complexity. This is significantly amplified by MongoDB’s inherent scalability, schema flexibility, and native RAG capabilities, which were key reasons for LTIMindtree choosing MongoDB as the foundational data layer. With features like the no-code agent builder, agent marketplace, and business-process-automation blueprints, enterprises can create tailored solutions that are ready for production, all powered by MongoDB Atlas. 
A synergistic partnership: Smarter together The collaboration between MongoDB and LTIMindtree’s BlueVerse Foundry brings together powerful AI capabilities with a future-ready database backbone. This partnership highlights how MongoDB’s AI narrative and broader partner strategy focus on enabling enterprises to build intelligent applications faster and more efficiently. Together, they simplify deployment, enable seamless integration with existing systems, and create a platform that can scale effortlessly as enterprise needs evolve. What makes this partnership stand out is the ability to turn ideas into impact faster. With no-code tools, prebuilt agents, and MongoDB’s flexible data model, enterprises don’t need to wait months to see results. They can use their existing infrastructure, plug in seamlessly, and start delivering real-time AI-driven insights almost immediately. Governance, performance, and scalability aren’t afterthoughts; they’re built into every layer of this ecosystem. “We’re seeing a shift from experimentation to execution—enterprises are ready to scale gen AI, but they need the right data foundation,” said Haim Ribbi, Vice President of Global CSI, VAR and Tech Partner at MongoDB. “That’s where MongoDB Atlas fits in, and where an agentic platform like LTIMindtree’s BlueVerse Foundry uses it to its full potential for innovation.” Real-world impact: From data to differentiated experiences This joint solution is already delivering real-world impact. A leading streaming platform used LTIMindtree’s solution, powered by MongoDB, to personalize content recommendations in real time. With MongoDB handling the heavy lifting of diverse data management and live queries, the company saw a 30% rise in user engagement and a 20% improvement in retention. 
Central to this transformation is the platform’s content hub, which acts as a unified data catalog, organizing enterprise information so it’s accessible, secure, and ready to power next-generation AI solutions with MongoDB’s robust data management. Whether dealing with text, images, or audio, the platform seamlessly manages multimodal data, eliminating the need for separate systems or processes. For businesses looking to accelerate development, BlueVerse Foundry and Marketplace offer a no-code builder, prebuilt agents, and templates, enabling teams to go from concept to deployment in a fraction of the time compared to traditional methods. BlueVerse Foundry’s RAG pipelines simplify building smart applications, using MongoDB Atlas Search and MongoDB Atlas Vector Search for highly effective RAG. Advanced orchestration connects directly with AI models, enabling rapid experimentation and deployment. A globally acclaimed media company has been using BlueVerse Foundry to automate content tagging and digital asset management, cutting its discovery time by 40% and reducing overheads by 15%—clear evidence of gen AI’s bottom-line impact when implemented right. BlueVerse Foundry’s strength lies in combining speed and control. By providing everything from ready-to-use user-experience kits and over 25 plug-and-play microservices to token-based economic models, 100+ safe-listed large language models (LLMs), tools, agents, and full-stack observability, BlueVerse Foundry and Marketplace enable enterprises to move faster without losing sight of governance. Its support for voice interfaces, regional languages, Teams, mobile, and wearables like Meta AI Glasses ensures an omnichannel experience out of the box. Responsible AI: A built-in capability LTIMindtree doesn’t just build AI faster; it builds it responsibly. With built-in measures like LLM output evaluation, moderation, and audit trails, the platform ensures enterprises can trust the results their models generate.
This is further supported by MongoDB’s robust security features and data governance capabilities, ensuring a secure and ethical AI ecosystem. It’s not just about preventing hallucinations or bias; it’s about creating an ecosystem where quality, transparency, and ethics are fundamental, not optional. Scaling: Streamlined for the long term The platform’s libraries, app galleries, and FinOps tooling enable businesses to test, deploy, and expand with confidence. Powered by MongoDB Atlas’s inherent scalability and multi-cloud flexibility, BlueVerse Foundry is built for long-term AI success, not just early experimentation. Enterprise AI: From possibility to production The BlueVerse Foundry and Marketplace, powered by MongoDB, is more than a technological partnership; it’s a new standard for enterprise AI. It combines deep AI expertise with an agile data foundation, helping organizations escape the trap of endless proofs of concept and unlock meaningful value. For enterprises still unsure about gen AI’s return on investment, this solution offers a proven path forward, grounded in real-world success, scalability, and impact. The future of AI isn’t something to wait for. With LTIMindtree and MongoDB, it’s already here. Explore how LTIMindtree and MongoDB are transforming gen AI from a concept into an enterprise-ready reality. Learn more about building AI applications with MongoDB through the AI Learning Hub .
Circles Uses MongoDB to Fuel Jetpac’s Rapid Global Expansion
Founded in Singapore in 2014, Circles has grown to become a global telecommunication company revolutionizing the industry with its cutting-edge SaaS platform. Present in 14 countries, Circles empowers telco operators worldwide to launch innovative digital brands and refresh existing ones, accelerating their transformation into ‘techcos’. MongoDB Atlas is at the heart of Circles’ success, enabling one of Circles’ biggest product launches, Jetpac, in 2022. Kelvin Chua, Head of Markets and Circles' first employee, described Circles’ experience at the recent MongoDB.local Singapore in July 2025. During a Fireside session, Chua shared insights into Circles’ journey and his own close relationship with MongoDB. Here’s the full conversation. For the Fireside discussion with Kelvin Chua, skip to 23'40. Before we dive into your work with MongoDB, could you please introduce yourself? Sure. I am currently the Head of Markets for Circles, but I am also their first employee. I've been working in the telco space for more than 20 years, and I've been around the ecosystem of startups a lot, helping build and scale startups. Actually, the first time I used MongoDB was for a startup in the [Silicon] Valley. So you have a pretty long-standing relationship with MongoDB. Can you tell us a bit more about that? As I said, my relationship with MongoDB dates back to my start-up days, when MongoDB was still in its infancy. I chose MongoDB to handle about 5 million documents per hour. That was back in 2013. From there, I started looking at how MongoDB scales. Years later, I continued to leverage MongoDB to help build Circles in Singapore, but also for scaling the company globally across Pakistan, Mexico, and other regions. Figure 1. Kelvin Chua, Head of Markets, Circles, speaking at MongoDB.local Singapore in July 2025. How [did] Circles’ journey with MongoDB start? Circles was built on the Community Edition of MongoDB back in 2014.
At the time, our team was using Node.js, and I immediately knew that MongoDB and a NoSQL database model was the right choice to build and scale the business. As a fan of Node.js, it was very natural to feel the ease of using MongoDB. I feel like using Node.js for your workload and then using MongoDB for the backend creates the best tandem. As we transitioned to other development environments and languages, including Golang, MongoDB remained at the core of our database operations, mostly because of its flexibility, the ease of prototyping, and the scalability. We never really saw the need to change from MongoDB as our requirements for a document store are always fulfilled by it. More recently, Circles launched a very successful offering: Jetpac. … This was also built on MongoDB, but before we dive into this, can you share more about what sparked the idea for this product? As you know, the COVID-19 pandemic put a hold on all international travel. So when 2022 rolled in, we were expecting a boom in travel again as restrictions eased up. This is when we had the idea for Jetpac, which is basically a travel tech solution providing seamless roaming and innovative travel lifestyle products. You had a pretty challenging timeline to work against, though? Yes! We had a massive challenge because we had six weeks to build Jetpac right from zero. That included solutioning, strategizing, and team-building. Having had ten years of experience working with MongoDB, I knew we needed a NoSQL database, especially to keep track of what people are buying, how they are using the packs, and how to present usage to customers. So Jetpac was built on MongoDB Atlas in just six weeks, in time to launch before the end-of-year holiday period, when international travel was expected to resume. Since then, Jetpac has expanded really quickly: revenue grew 500% between January and November 2024, and it is now available in 200 countries. Jetpac is not just a Singapore product anymore—it’s a global brand.
Can you explain why you recently decided to move from MongoDB Community to Atlas for your operations in Singapore? Sure. After several years running and scaling the business on MongoDB Community Edition, we decided that it was time to move to MongoDB Atlas because we wanted to optimize efficiencies and reduce operational costs across Circles’ markets, as well as reduce and ease the regulatory compliance risk and burden posed by the Singapore telco industry. It all started when we decided to run an internal comparison to look into how many resources were spent on maintaining our database internally, versus moving to MongoDB Atlas. We realized that we were running very inefficient clusters—many clusters with only about 10% utilization per cluster. That cost goes to the cloud provider. We found that we could aggregate these clusters into MongoDB Atlas to improve utilization and save money. Another main benefit of moving to MongoDB Atlas was the flexibility and productivity that this offered our engineering and developer team. That is something we hear a lot from our customers, how MongoDB Atlas really helps empower their engineering team. How did that show up for you and your team? MongoDB Atlas amplifies everything very fast. It allows engineers to make mistakes in sandbox environments in a way where they don’t get constrained by wondering, ‘What can I break?’ If they make a mistake, they can spin up a new instance and move forward. That is really valuable. Also, with automation and workflows streamlined under MongoDB Atlas, we have really been able to expedite new projects. For example, I was in India a few weeks ago for one of our new Jetpac B2B projects. We were able to shortcut our process by about a week just because contractors could access MongoDB Atlas and select schemas immediately—no delays in consulting environments! What does the future hold for Circles? What are your priorities? What technologies are you investing in? 
I think AI is really a big priority for us. Actually, some of our users might already have noticed that our app is starting to evolve into a more AI-powered application— we provide predictions, automatic waivers, personalized special offers, and more. As a company, Circles plans to continue leveraging MongoDB as we scale these AI operations, particularly for retrieval-augmented generation (RAG) projects where we want to rely on MongoDB’s vector search capabilities . I love hearing that and am really looking forward to seeing all of these projects come to life! Thank you so much for being with us, Kelvin. Thank you. Learn more about MongoDB Atlas through the learning hub . Visit our product page to learn more about MongoDB Community Edition.
MongoDB Atlas Now Available in the Vercel Marketplace
We are pleased to announce that MongoDB Atlas is now available in the Vercel Marketplace. It’s now easier than ever to leverage MongoDB’s flexible document model, distributed architecture, and versatile built-in search capabilities from within the Vercel ecosystem. The combination of Vercel, the company that provides the tools and infrastructure for developers to build on the AI Cloud, and MongoDB, the world’s leading modern database, creates a supercharged offering that uniquely enables developers to rapidly build, scale, and adapt AI applications. In the words of Tom Occhino, Chief Product Officer, Vercel, “We’re excited to partner with MongoDB to bring Atlas into the Vercel Marketplace. By combining MongoDB’s flexible data platform with Vercel’s focus on developer experience, we’re giving our joint community a faster path to build and scale intelligent applications on the AI Cloud.” Andrew Davidson, SVP Products, MongoDB, added, “Vercel powers many of the best experiences on the web, with an exceptional focus on developer experience from open source to their AI Cloud. We are thrilled to be launching onto the Vercel Marketplace, supercharging our joint community with the power of MongoDB's flexible document model with integrated search and vector search.” The Vercel Marketplace is the best way for developers hosting web applications on Vercel to manage third-party dependencies. It’s easy to use—with a few clicks, developers can integrate tools for analytics, authentication, logging, testing, and more. Now that MongoDB is part of the Marketplace, you can simply follow the intuitive deployment process and let MongoDB Atlas persist data for web applications built on Vercel. Vercel: Build and deploy on the AI Cloud AI-powered applications and tools have fundamentally changed the landscape of web development.
Developers are increasingly tasked with building applications that can process unstructured data, adapt to changing requirements, and scale dynamically—all while maintaining the speed and reliability users expect. Traditional relational databases and hosting solutions can fall short in this new paradigm, creating friction that slows development and limits innovation. Vercel has carved for itself a key place in this new landscape. Originally known for creating and maintaining Next.js , one of the most popular frameworks for building web applications, Vercel has built on that early success to evolve far beyond its frontend origins. The Vercel Marketplace serves as a central hub where developers can discover and manage their third-party services. v0 by Vercel lets developers turn their ideas into interactive web apps, fast, with AI that generates production-grade code from natural language. And Vercel’s AI SDK provides a free, open-source library that gives developers the tools they need to build AI-powered products. The whole ecosystem is incredibly powerful. Anything you create with v0 can be deployed to Vercel. The Marketplace creates a frictionless experience for integrating disparate tools and services, including MongoDB Atlas, without leaving the Vercel ecosystem, further simplifying deployments. Clearly, Vercel’s scope has grown to be a one-stop shop for the creation and hosting of web applications, an “AI Cloud.” MongoDB enhances the Vercel experience MongoDB and Vercel share a commitment to the developer experience, freeing up developer time to focus on developing rather than getting bogged down with infrastructure concerns. MongoDB, with its flexible data model, distributed architecture, and versatile search functionality, acts as a natural complement to Vercel, a classic case of the whole being greater than the sum of its parts. 
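As a hedged illustration of that complement, a Vercel-hosted backend might read its Atlas connection string from an environment variable and model domain objects as documents. The variable name (MONGODB_URI), database name, and every field below are assumptions for the sketch, not part of the marketplace integration's documented contract.

```python
import os

def get_database(name="app"):
    """Return a PyMongo Database handle, or None if no URI is configured.
    MONGODB_URI is an assumed environment variable name."""
    uri = os.environ.get("MONGODB_URI")
    if uri is None:
        return None
    from pymongo import MongoClient  # requires the pymongo driver
    return MongoClient(uri)[name]

# The flexible document model: structured and unstructured data together,
# with no schema migration needed to add or nest fields later.
product = {
    "name": "Demo Headphones",
    "specs": {"battery_hours": 30, "noise_cancelling": True},
    "reviews": ["Great sound", "Comfortable fit"],  # free-form text
}

# With a configured URI, this would persist the document:
#   get_database().products.insert_one(product)
```

The design point is simply that the document mirrors the application's own object shape, so no translation layer sits between the Vercel function and the data it stores.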
MongoDB’s document model allows developers, both human and agentic, to model their domain intuitively and work with both structured and unstructured data, allowing for fast iteration and powerful abstractions. Via sharding and replica sets, MongoDB scales up to meet developer needs, offering an easy-to-use, performant, and scalable developer experience at the data layer to accompany Vercel's scalability at the application layer. MongoDB offers a myriad of ways to search your data (full-text, vector, even hybrid), meeting the requirements for virtually any AI use case. The Marketplace integration means developers can provision a MongoDB Atlas database directly from their Vercel dashboard, configure connections, and start building—all without context switching between different platforms or dealing with complex setup procedures and fractured billing. It’s never been easier to use MongoDB Atlas with Vercel. Looking ahead This integration marks a key milestone in the deepening partnership between MongoDB and Vercel. As both companies continue to grow in the AI space, developers can expect even more powerful tools and capabilities from the partnership. The combination of MongoDB Atlas and Vercel provides a strong foundation for developers who want to build the next generation of web and AI applications, simply and scalably. Get started today and experience how MongoDB Atlas and Vercel can supercharge your application development workflow. Interested in trying Vercel and MongoDB together?
- Take a look at our documentation.
- Install the integration from the Vercel Marketplace directly.
How MongoDB Helps Your Brand Thrive in the Age of AI
The Zero Moment of Truth (ZMOT) was coined by Google to describe the moment when a user researches a product online before buying—typically through search, reviews, or videos. In a world where AI agents are intermediating shopping decisions (such as through assistant bots, personal agents, or even procurement AIs), the traditional concept of ZMOT starts to break down, because:
- The “moment” is no longer directly human.
- The “truth” might be algorithmically filtered.
- The user delegates the decision process (partially or fully) to an agent.
For retailers, this isn't a minor trend—it’s a "change everything" moment. The traditional customer journey is being radically rewired. For decades, the battle was to win the top spot on a search engine results page. But what happens when the customer isn't a person searching, but is instead an AI agent executing a command like, "Buy me the best-value noise-canceling headphones"? If your brand isn't visible to that agent, you are, for all practical purposes, invisible. The brands that will win in this new landscape are the ones that can make their products and services discoverable and transactable not just by humans, but by AI. This shift presents a profound challenge that goes beyond marketing. Brands are ceding their direct relationship with the customer to an AI intermediary. Traditional strategies built for human psychology and search engine algorithms become obsolete when the shopper is an AI agent. The core challenges are therefore immense: How do you build trust with an algorithm? How do you communicate your brand's value in a machine-readable format? And most importantly, how do you ensure your product is the one an agent selects from a sea of competitors?
This article is meant to provide you with clarity on what the future of online shopping will look like, how your brand will be affected by this new paradigm and why the MongoDB document model is the best underlying tool for organizing and exposing your product catalog to this upcoming agentic ecommerce era. So, how might we rename or reframe ZMOT for this agent-mediated paradigm? To understand this shift, let's first clarify what we mean by 'agentic AI' and 'agents.' Agentic AI refers to artificial intelligence systems capable of acting autonomously to achieve specific goals on behalf of a user, often by interacting with various tools and services. An 'agent' in this context is the specific AI entity that performs these actions. For example, imagine telling your AI assistant, ' Book me a flight to London next month within a £500 budget, departing in the morning .' An AI agent would then autonomously search, compare, and potentially book the flight for you, acting as your personal delegate. Ever since reading the news of OpenAI naming Instacart’s CEO their new Head of Applications, I haven’t stopped thinking about what this will mean for the world of e-commerce and (yes, I’m a millennial) how the term “googling” came to be and became part of our zeitgeist in the early 2000s. The world of e-commerce is on the brink of a similar paradigmatic shift. For years, brands have poured resources into search engine optimization (SEO), battling for coveted spots on search engine results pages. But what if the search engine as we know it gets disrupted? What if, instead of searching, customers simply ask an AI to find and buy for them? This isn't a far-off futuristic fantasy. It's happening now. With the rise of powerful AI assistants like OpenAI's Improved Shopping Results from ChatGPT Search and the new Operator agent, we are entering a new era of "agentic commerce." 
This is the Agentic Moment of Truth (AMOT): the precise point at which an autonomous agent, acting on behalf of a user, synthesizes data, context, and intent to make or recommend a purchase decision. For retailers, this is a "change everything" moment. The traditional customer journey, from discovery to purchase, is being radically rewired. The brands that will win in this new landscape are the ones that can make their products and services discoverable and transactable not just by humans, but by AI agents. Figure 1. Evolution of the customer journey thanks to agentic AI. The new customer flow: From ZMOT to AMOT For over a decade, marketers have been obsessed with the ZMOT. But AI agents are collapsing the ZMOT. Instead of a human spending hours browsing websites, reading reviews, and comparing prices, an AI can do it in seconds. This new customer flow, driven by agents, looks something like this:
- The prompt: A user gives a natural language command to their AI assistant, like, "Find me the best noise-canceling headphones for under $200 with good battery life."
- The agent's work: The AI agent, like OpenAI's Operator, goes to work. It doesn't just crawl the web in the traditional sense. It interacts with various services and APIs to gather information, compare options, and make a recommendation.
- The transaction: Once the user approves the recommendation, the agent can complete the purchase, all without the user ever visiting a traditional e-commerce website.
This shift has profound implications for retailers. If your brand isn't "agent-friendly," you're essentially invisible in this new world of commerce. So, how do you make your brand discoverable and transactable by AI agents? The answer is to build a remote MCP server. But what exactly is an MCP server, and what are the operational challenges for an e-commerce business in deploying one?
An MCP (Model Context Protocol) server implements an open standard that allows AI models to connect to and interact with external tools and data sources. Think of it as a universal language for AI. In our context, think of it as a universal translator that enables AI agents to understand and use your product catalog, inventory, pricing, and even checkout functionalities. While this is suitable for internal agentic applications, how can you provide third-party online agents with real-time, up-to-date, and commercially strategic product data? This is where a remote MCP server, powered by technologies like MongoDB Atlas, becomes not just a nice-to-have, but a mission-critical component of your tech stack. However, creating and deploying such a server generates significant operational challenges for an e-commerce business. You need to manage complex, dynamic data structures for product information, rapidly adapt to new AI agent requirements, ensure your infrastructure can scale globally and reliably, and, critically, protect sensitive customer and product data. By creating your own remote MCP server, you can expose your product catalog, inventory, pricing, and even checkout functionality to AI agents in a structured, machine-readable format, and MongoDB Atlas directly addresses these operational hurdles:
- Superior architecture (the document model): E-commerce data is inherently varied and complex, with products having diverse attributes. The flexible document model of MongoDB Atlas allows you to store product information in a rich, nested structure that mirrors real-world objects.
- Innovate faster: With the agility of the document model and MongoDB Atlas's developer-friendly environment, your teams can respond to the dynamic needs of agentic commerce at an unprecedented pace. You can rapidly iterate on how your product data is exposed and consumed by AI agents, testing new features and optimizing agent interactions without time-consuming database migrations or refactoring.
This speed is crucial in a fast-evolving AI landscape. Build once, deploy everywhere: E-commerce demands low-latency access for agents and users across diverse geographic locations. MongoDB Atlas offers multi-cloud and multi-region deployment options, allowing you to deploy your remote MCP server and product catalog close to your agents and customers, wherever they are. This global distribution capability minimizes latency and ensures high availability, overcoming infrastructure management complexities and guaranteeing that your brand is always transactable. Built-in enterprise security: Exposing your valuable product catalog and transactional capabilities to AI agents requires robust security. MongoDB Atlas provides comprehensive, built-in enterprise-grade security features, including encryption at rest and in transit, network isolation, fine-grained access controls, and auditing. This ensures that your data is protected from unauthorized access and cyber threats, mitigating the significant security challenges associated with opening your systems to external AI interactions. Why retailers must act now The shift to agentic commerce is not a question of if, but when. The MCP Registry, a public directory for AI agents to discover MCP-compliant servers, is set to launch in the fall of 2025. This will be the "yellow pages" for AI agents, and if your brand isn't listed, you'll be left behind. Discover how MongoDB powers the future of retail and helps brands thrive in the age of AI. Learn more about MongoDB for Retail. Ready to boost your MongoDB skills? Visit the Atlas Learning Hub to get started.
MongoDB Engineering: Expanding Our Presence in Greater Toronto
Toronto has long been recognized as one of North America's fastest-growing tech hubs, boasting a diverse, world-class talent pool and a vibrant startup culture. As MongoDB continues to expand globally, Toronto stands out as a strategic location to drive engineering excellence, foster innovation, and cultivate a collaborative culture. We're currently hiring for three key product areas in the greater Toronto area: Identity and Access Management (IAM), Atlas Stream Processing, and Atlas Search. At MongoDB, our engineers are empowered to solve complex problems, take ownership of their work, and collaborate with world-class colleagues to build the future of data. As we scale our presence in Toronto, we aim to create an environment where local engineers can grow their careers, work on cutting-edge technology, and have a meaningful impact on the products that enable organizations around the globe to build the applications of today and tomorrow. Why Toronto? "Toronto is well known as one of the largest tech hubs in North America. We're constantly looking to attract the most talented engineers to work with, and are really excited about expanding into Toronto, which has previously been untapped," said Kevin Rosendahl, Director of Engineering for Atlas Search and Vector Search. The decision to invest in Toronto is strategic. According to Tim Sedgwick, Vice President of Engineering for Atlas Stream Processing and App Services, "It enables us to increase engineering capacity responsibly, access high-velocity teams, and establish an innovation hub that mirrors our company’s values. We’re building a long-term hub here, and we want top engineers shaping that foundation with us." Meet the teams Identity and Access Management (IAM) The IAM team at MongoDB is responsible for managing customer identities and access to MongoDB products.
"If we're doing our job well, we're making you safe and secure and not getting in your way," said Harry Wolff, Director of Engineering for Atlas IAM. "We own login, registration, SSO for other teams within MongoDB, and provide features like customer federation so large companies can securely log in with their own credentials." IAM is becoming a key differentiator in MongoDB's ability to land major enterprise customers. "We're going from a means to an end to a concrete dependency that unblocks major enterprise deals," Wolff said. "Our security bar is growing higher, and the work we do actively contributes to signing major contracts. I find that really exciting." The new team in Toronto will focus on building a new enterprise-grade information architecture. "Right now, one company could have 50-plus organizations in Atlas. We're building an umbrella layer to consolidate resources, configure access at scale, and give customers greater auditability and control," said Wolff. "We're building brand new functionality that enterprise customers are asking for." Wolff also emphasized career development and growth: "I joined MongoDB as a senior UI engineer, helped start the IAM team, and now I’m a Director. The company invests in its people, and this Toronto team will have the opportunity to grow alongside the product and make their mark." Atlas Stream Processing Atlas Stream Processing enables developers to continuously process streams of data using the MongoDB aggregation framework. It simplifies the creation of event-driven applications by eliminating the need for specialized infrastructure, allowing developers to stay within the MongoDB ecosystem. "Stream Processing is core to powering modern, event-driven applications and delivering value from streaming data," said Tim Sedgwick. "Our goal is to meet developers where they are and make it easy to build with MongoDB."
The product has been generally available for just over a year, and there is a lot of exciting work on the horizon. "Some of the things we're focusing on this year include new sources and sinks, like Kinesis and Apache Iceberg, user-defined functions (UDFs), and distributed processing," Sedgwick said. "We're still early in the product lifecycle, and we're constantly learning from customers to deliver immediate impact." The Stream Processing team is around 50 people, distributed across the U.S., and now expanding into Toronto. "It's a strategic growth lever for us. We're creating a long-term innovation hub here," Sedgwick said. "Toronto engineers will be shaping the foundation of this product." Career growth is deeply embedded in the team culture. "I was the founding lead engineer in Austin for Atlas App Services. That experience of helping grow a new engineering hub was invaluable in my career. Now I lead engineering for Stream Processing," Sedgwick shared. "Joining MongoDB in Toronto could be a similar launchpad for someone else's journey." Atlas Search Atlas Search and Atlas Vector Search provide developers with built-in, relevance-based retrieval capabilities in the MongoDB database. This eliminates the need to sync data with external search engines, allowing teams to focus on building their applications. "Search at MongoDB is a fascinating place to be right now," said Kevin Rosendahl. "We're providing cutting-edge capabilities that power AI applications, large-scale systems, and we're making those tools more accessible across all MongoDB deployments." The team is distributed across major U.S. tech hubs and is now expanding into Toronto. Rosendahl explained, "We look for engineers excited about collaborating on complex, large-scale systems. The goal is to make powerful tools simple and intuitive for developers." A major focus in the coming year is on integrating capabilities from Voyage AI, a recent MongoDB acquisition.
"We're bringing intelligent, AI-powered search out of the box," Rosendahl said. "And we're making sure these tools are available for developers everywhere, whether they use our managed service or deploy MongoDB on their own." Rosendahl’s own growth at MongoDB reflects the opportunities available: "I started as an individual contributor helping launch Atlas Search. I became a lead engineer, then a staff engineer on a research team, and now I'm a Director of Engineering. MongoDB has supported my career development every step of the way." "Our engineers prioritize working together to build the right thing. That creates a culture that values collaboration, communication, and low ego," he added. "We're always looking for the next generation of leaders." Why join MongoDB? Joining MongoDB's engineering organization means becoming part of a culture rooted in trust, innovation, and impact. Our engineers are encouraged to take initiative, pursue curiosity, and help shape the future of software development. "We always look for culture adds—people who make us better, not just the same," said Harry Wolff. "Diversity of perspective and opinions is important at MongoDB." Whether you're passionate about redefining access control, building intelligent data pipelines, or scaling AI-powered search, MongoDB offers the opportunity to work on industry-defining products alongside some of the most talented and driven people in tech. "MongoDB can be a defining moment in your career, through the unique set of challenges you’ll solve and the amazing people you’ll work with," said Sedgwick. "Above all, what makes MongoDB great is the people." Learn more about #LifeAtMongoDB and join us in building the future of data—become part of our talent community today. Visit our careers page to check out our open roles.
Building an Interactive Manhattan Guide with Chatbot Demo Builder
Question: Know any good spots? Response: Based on the context provided, I can recommend a few good pizza spots in Manhattan: Joe's Pizza (7 Carmine St) - This is described as "one of the most beloved slice joints in the city" with a "thin, crispy crust and always fresh" pizza, perfect for a quick bite. Prince Street Pizza (27 Prince St) - Known for its "thick, square pepperoni slice with those crispy-edged 'roni cups." $1 Pizza Slices - The context mentions that $1 pizza slices can still be found, especially around 6th Ave and 8th St in Greenwich Village. This is described as a "quintessential NYC food experience." That’s from a conversation with a chatbot we built in about 10 minutes. There was no coding, no database setup—just a PDF and the Chatbot Demo Builder tool. What made the experience so valuable wasn’t just getting a good response, but seeing exactly what was happening behind the scenes after submitting our question. 🤖 Meet Chatbot Demo Builder Chatbot Demo Builder is the newest addition to the Atlas Search Playground. This no-code tool gives users an intuitive way to explore MongoDB’s vector search capabilities by building interactive Q&A bots with sample data or their own uploads. Powered by MongoDB Atlas Vector Search and Voyage AI, the Builder automatically handles index creation, query generation, and embedding, removing the common setup barriers that often slow down RAG (retrieval-augmented generation) experimentation. What makes the tool particularly powerful is its accessibility. Like all tools in the Atlas Search Playground, it runs entirely in your browser without requiring a MongoDB Atlas account, cluster, or collection. This means you can test ideas, iterate quickly, and share prototypes with teammates and stakeholders, all without spinning up additional infrastructure. With MongoDB.local NYC coming up on September 17, we thought it was the perfect time to put the Chatbot Demo Builder through its paces.
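Under the hood, retrieval of this kind boils down to comparing the question's embedding against each chunk's embedding. As a rough illustration only, the sketch below uses toy 3-dimensional vectors in place of the high-dimensional embeddings a model like voyage-3-large produces, and a brute-force scan in place of Atlas Vector Search's index; the chunk names and vector values are invented for the example.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: the standard relevance score for embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" of three guide chunks (real ones have ~1,000 dimensions).
chunk_vectors = {
    "joes_pizza": [0.9, 0.1, 0.0],
    "prince_st":  [0.8, 0.3, 0.1],
    "subway_map": [0.0, 0.2, 0.9],
}
query = [0.85, 0.2, 0.05]  # stand-in embedding of "Know any good pizza spots?"

# Rank chunks by similarity to the query, most relevant first.
ranked = sorted(chunk_vectors, key=lambda k: cosine(query, chunk_vectors[k]),
                reverse=True)
```

The two pizza-related chunks land at the top of the ranking while the unrelated subway chunk falls to the bottom, which is exactly the behavior the Builder exposes in its scored-results view.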
So we decided to create something practical: a Q&A chatbot to help visitors explore Manhattan. 🧑💻 Building in the browser The entire process happened without leaving our browser. We started by uploading our Manhattan travel guide PDF to the Chatbot Demo Builder. Next, we configured how the chatbot would process our content using the Data Settings modal. For our chunking strategy, we chose recursive chunking with 500-token chunks and 50-token overlap. This preserves paragraph flow while ensuring important information isn’t split awkwardly across boundaries. For the embedding model, we selected voyage-3-large, which excels at general knowledge retrieval tasks. Once configured, we hit "Create Embeddings" and watched as the Builder processed our guide into a demo document collection containing metadata, chunked text, and vector embeddings. Figure 1. Data Settings modal used for chunking configuration and embedding model selection. 📍 Testing like tourists With embeddings generated, we started asking questions like curious visitors: "Where can I find a public restroom near Central Park?" "What are some good day trip ideas?" "What are some fun facts about New York City?" Each query highlighted the Builder's most powerful feature: complete transparency. When we asked about pizza, we could see the exact vector search query that ran, which chunks scored highest, and how the LLM prompt was constructed. This visibility turned experimentation from guesswork into informed iteration. We could understand not just what answers we got, but also why we got them and how to improve them. Figure 2. Vector search query and scored document results for the pizza recommendation question. 🧐 Optimizing for better results Fine-tuning our retrieval settings produced even better outputs. The Builder made these optimizations easily accessible and provided insight into exactly how they would affect results. 
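For intuition about the chunking step, here is a simplified sliding-window sketch in Python. A real recursive splitter backs off from paragraph to sentence to word boundaries and counts tokens with the embedding model's tokenizer; in this sketch, whitespace-separated words stand in for tokens, and the 500/50 numbers match the settings we chose in the Builder.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into chunks of up to chunk_size 'tokens' (here: words),
    with each chunk repeating the last `overlap` tokens of the previous one
    so information near a boundary appears in both chunks."""
    tokens = text.split()
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break  # this window already reached the end of the text
    return chunks

# A stand-in "guide" of 1,200 numbered words.
guide = " ".join(f"word{i}" for i in range(1200))
chunks = chunk_text(guide, chunk_size=500, overlap=50)
```

With 1,200 tokens this yields three chunks, and the last 50 tokens of each chunk reappear at the start of the next, which is the overlap that keeps sentences from being cut off at chunk boundaries.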
We started by modifying numCandidates, which controls how many potential matches the system initially examines before selecting the best results. The recommended setting is at least 20 times higher than the limit value, allowing more potential matches to be evaluated before selecting the best ones; this trades a bit of latency for significantly better recall. For even higher precision, the Builder offers an exact nearest neighbor (ENN) search, which calculates distances to all indexed embeddings. While computationally intensive, it guarantees finding the exact nearest neighbors when accuracy is key. Since our Manhattan guide only had 25 documents, we could afford to use ENN without worrying about performance impact. Figure 3. Retrieval settings panel for adjusting search parameters and enabling exact nearest neighbor search. 💡 Sharing and takeaways Once we were happy with the responses, it was easy to share our work. The Builder generated a snapshot link that let the entire team test the chatbot for themselves without any additional setup. In just a few steps, we transformed a static travel PDF into a conversational guide for exploring Manhattan. Along the way, we saw how decisions about chunking strategies, embeddings, and retrieval settings directly affect answer quality. We also gained visibility into what was happening behind the scenes, giving us the insights we needed to optimize these decisions. Figure 4. Output panel tabs displaying data source, vector index, search query, and prompt details By the end, we had a chatbot capable of providing helpful local insights about Manhattan, from day trip ideas to restaurant recommendations, all while giving us complete visibility into how it generated its answers. 🗽 Beyond the big apple Chatbot Demo Builder makes it easy to explore RAG techniques. 
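For readers curious what such a query looks like outside the Builder, here is a sketch of an Atlas `$vectorSearch` aggregation stage, written as Python dictionaries. The index name, field path, and query vector are placeholder assumptions; the `numCandidates` value follows the 20x-the-limit guideline mentioned above, and setting `exact` to true switches the stage to ENN search, in which case `numCandidates` is omitted.

```python
# Placeholder query embedding; in practice this is the output of an
# embedding model such as voyage-3-large (~1,000 dimensions).
query_vector = [0.12, -0.03, 0.45]

limit = 5  # number of results to return

# Approximate nearest neighbor (ANN) search stage.
ann_stage = {
    "$vectorSearch": {
        "index": "vector_index",   # assumed index name
        "path": "embedding",       # assumed field holding chunk embeddings
        "queryVector": query_vector,
        "limit": limit,
        # Guideline: examine at least 20x `limit` candidates for good recall,
        # trading a little latency for accuracy.
        "numCandidates": 20 * limit,
    }
}

# Exact nearest neighbor (ENN) search: compare against every indexed
# embedding, so no candidate pool is needed.
enn_stage = {
    "$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": query_vector,
        "limit": limit,
        "exact": True,
    }
}
```

Either stage would typically be the first element of a pipeline passed to `collection.aggregate(...)`, usually followed by a `$project` stage that surfaces the chunk text and its similarity score.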
Whether you're prototyping a customer support bot, building an internal knowledge assistant, or creating an interactive travel guide, Chatbot Demo Builder allows you to gain a clearer understanding of what works best for your AI use case. Ready to get started? Try the Chatbot Demo Builder in the Atlas Search Playground, or check out the official documentation to learn about the other tools available. And if you're joining us in New York for MongoDB.local, consider this your preview of what the city has to offer—courtesy of a chatbot that knows its way around Manhattan.
MongoDB and Hope AI: Craft Enterprise Code with AI
The world of software development is constantly evolving, and the demand for tools that streamline processes, increase efficiency, and enable developers to easily create robust applications continues to rise. In this context, MongoDB and Bit.dev have teamed up to bring a transformative integration to the table, combining MongoDB’s leading database platform with Bit Cloud’s AI-powered, component-based development platform, featuring the Hope AI agent. Bit Cloud is the platform that brings powerful AI and composability capabilities directly to developers through Hope AI. Designed to support smarter, faster development, Hope AI enables architecture planning with control, code generation, collaborative management, and production-ready output. This partnership showcases how Bit Cloud, with Hope AI, empowers developers to innovate efficiently without sacrificing control. Let’s explore the features of Hope AI and see how it transforms the development process. Figure 1. The release process. Going from concept to plan: AI-generated code architecture One of Hope AI’s standout features is its ability to create a code architecture based on user input. This isn’t just about diving straight into coding—Hope AI first provides developers with a clear, visual plan for implementation. Think of it as having an AI architect that listens to your idea, understands your goals, and crafts a tailored blueprint for your application. At this stage, Hope AI does not generate any code; this enables developers to focus on shaping the architecture to fit their unique requirements. Whether they need to make tweaks or add entirely new elements, this phase is highly customizable. Developers can approve the initial plan at their own pace, knowing that any changes are an integral part of the process. Generating code for new or existing applications Once the architecture is approved, Hope AI seamlessly transitions into code generation.
This integration is ideal for developers building new applications as well as those enhancing existing ones. For existing applications, adding features is simple—Hope AI can work off the current application and produce code that integrates directly into the existing framework. This versatility positions Hope AI as an innovative tool for projects in all stages of development, helping developers spend less time working around limitations and more time realizing their creative visions. While its current focus is on web application development, Hope AI plans to expand its capabilities to mobile app development in future iterations, making this partnership even more promising for the broader developer community. Gaining full control and ongoing flexibility A major concern with AI-generated code is the possibility of losing control over what’s created. MongoDB and Hope AI address this head-on—developers maintain complete control and can review every line of the AI-generated code. If edits are required, users can make changes directly within the generated code, ensuring the final product aligns precisely with their vision and requirements. Beyond initial creation, Hope AI remains an active participant throughout the development process. Need additional components or features later? The AI is always available to assist, making sure your code evolves alongside your project. Achieving collaboration at the core One of the most exciting features introduced by Hope AI is the ability to “snap” the code. This functionality enables developers to share and manage code with teammates, promoting seamless collaboration. Teams can work together on the project, implement updates, and review progress without any barriers. This collaborative aspect is crucial for modern development teams that thrive on interconnected workflows. Taking a privacy-first approach to code creation Another remarkable feature of Hope AI is its emphasis on privacy. 
The code generated by Hope AI is 100% private—no sharing for AI model training, no public access to your work. The data is protected, and only the designated project members can access the code. In today’s data-sensitive landscape, this level of privacy is critical. Developers and organizations can trust that their intellectual property is secure and that the AI agent isn’t repurposing their unique codebase for other uses. Integrating MongoDB: Configuring MongoDB Atlas credentials MongoDB plays a crucial role in this collaboration. Hope AI is designed to use MongoDB Community Edition by default, offering developers access to MongoDB’s fast, efficient, and reliable database. However, developers and organizations have the flexibility to choose MongoDB Atlas if they prefer a fully managed, customizable solution. Hope AI fully supports integration with MongoDB Atlas, allowing seamless configuration for those who opt for it. MongoDB Atlas opens up possibilities for advanced database management, including automatic scaling, global data replication, and powerful analytics features. With this level of customization, developers can create applications that meet even the most complex infrastructure demands. Promoting the future of AI-assisted development The collaboration between MongoDB and Hope AI signals more than just a technical partnership; it represents a vision for the future of development. By harnessing the power of AI to streamline coding and by empowering developers to maintain control over their projects, this integration creates an environment where creativity meets efficiency. As Hope AI expands into mobile app support and MongoDB continues to innovate on its platform capabilities, this partnership will likely set the stage for groundbreaking developments that appeal to large-scale organizations. 
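As a hedged illustration of the credential-configuration step, the snippet below assembles a standard `mongodb+srv` connection URI from environment variables. The variable names, default values, and hostname are assumptions for the example, not values Hope AI prescribes; URL-encoding the username and password matters because characters like `/` or `@` would otherwise corrupt the URI.

```python
import os
from urllib.parse import quote_plus

# Hypothetical credentials read from the environment; never hard-code them.
user = quote_plus(os.environ.get("ATLAS_USER", "app_user"))
password = quote_plus(os.environ.get("ATLAS_PASSWORD", "s3cret/pass"))
host = os.environ.get("ATLAS_HOST", "cluster0.example.mongodb.net")

# mongodb+srv URIs resolve the cluster's members via DNS SRV records,
# so only the cluster hostname is needed, not every node address.
uri = f"mongodb+srv://{user}:{password}@{host}/?retryWrites=true&w=majority"
```

The resulting `uri` is what you would hand to a driver, for example `MongoClient(uri)` in PyMongo, or paste into a tool's Atlas credentials field.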
Choosing MongoDB and Hope AI For IT decision-makers, the MongoDB-Hope AI partnership is a win for teams wanting to build faster without compromising quality or security. Here’s why this collaboration stands out: Rapid prototyping and scaling: The AI-powered architecture design and code generation significantly reduce project timelines while safeguarding customization. Secure code management: Privacy-first code generation is designed to prioritize data protection and support security best practices. Enhanced team collaboration: The Snap functionality ensures teamwork thrives, making Hope AI suitable for distributed teams. On-premises availability: The product extends its flexibility by offering an on-premises deployment option, catering to businesses that require hosting within their own infrastructures. Integrated MongoDB services: MongoDB Atlas credentials provide unparalleled database management flexibility while MongoDB Community Edition caters to developers just getting started. Building the future together The integration of MongoDB and Hope AI empowers developers everywhere. Whether you’re designing from scratch, enhancing an existing app, or scaling your team’s efforts across platforms, this partnership promises the tools and capabilities to bring your ideas to life with unprecedented efficiency and control. MongoDB and Hope AI are building the future—and developers are at the center of this exciting transformation. Ready to unlock the full potential of AI-powered development? Visit the MongoDB AI Learning Hub to learn how to begin building AI applications with MongoDB. Connect with Hope AI today to see how MongoDB and Hope AI can transform your ideas into reality.